From nginx-forum at nginx.us Tue Jan 1 00:51:21 2013 From: nginx-forum at nginx.us (justin) Date: Mon, 31 Dec 2012 19:51:21 -0500 Subject: Allow directive with variables Message-ID: <2f1054aca24886494db786dd3b27b4f1.NginxMailingListEnglish@forum.nginx.org> I am trying to use a variable with the `allow` directive, i.e. set $home_ip 1.2.3.4; location ^~ /apc/ { # Allow home allow $home_ip; deny all; include /etc/nginx/php.conf; } But I am getting an error. Does the `allow` directive allow variables? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234600,234600#msg-234600 From nginx-forum at nginx.us Tue Jan 1 14:12:37 2013 From: nginx-forum at nginx.us (oleksandr-shb) Date: Tue, 01 Jan 2013 09:12:37 -0500 Subject: Purge whole cache zone Message-ID: <19db3ae2f33e3678db49ac417426f741.NginxMailingListEnglish@forum.nginx.org> Hi, I am interested in whether it is possible to invalidate a whole cache zone. Currently it is possible to invalidate a single cache item using its cache key and the third-party module Cache Purge. What I need is to remove all cache items when hitting some location. So far the ideas are: 1. remove the cache folder, but then you need to restart nginx. 2. create several cache zones and use the one whose name is stored in memcache. So when the cache needs to be invalidated, the backend will update the cache_zone name record stored in memcache. It seems like a possible way, but I have no idea how to implement it.
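[Editorial aside on idea 1 above: the restart can usually be avoided by emptying the cache directory rather than deleting it, which is the approach Maxim recommends later in this thread. A hedged sketch, demonstrated on a temporary stand-in directory, since the real path (e.g. /data/nginx/proxy_cache2) is deployment-specific:]

```shell
# Sketch: clear a cache zone by emptying the cache directory's contents
# rather than removing the directory itself, so the directory nginx was
# configured with stays in place. Demonstrated on a temporary stand-in.
CACHE_DIR=$(mktemp -d)                  # stand-in for the real cache path
mkdir -p "$CACHE_DIR/0/ab" && touch "$CACHE_DIR/0/ab/cacheditem"
find "$CACHE_DIR" -mindepth 1 -delete   # removes contents, keeps the directory
```

[With the directory left in place, no restart should be needed, though entries still tracked in the keys_zone may reference the removed files until nginx notices they are gone.]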
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234605,234605#msg-234605 From nginx-forum at nginx.us Tue Jan 1 14:39:32 2013 From: nginx-forum at nginx.us (oleksandr-shb) Date: Tue, 01 Jan 2013 09:39:32 -0500 Subject: Purge whole cache zone In-Reply-To: <19db3ae2f33e3678db49ac417426f741.NginxMailingListEnglish@forum.nginx.org> References: <19db3ae2f33e3678db49ac417426f741.NginxMailingListEnglish@forum.nginx.org> Message-ID: <81889ddb36608cc896512f2dee38fe57.NginxMailingListEnglish@forum.nginx.org> Sorry, folks, my question is a duplicate of this thread: http://forum.nginx.org/read.php?2,30833,32763#msg-32763 (solution). So, the solution is to remove the cache folder and kill the nginx processes. Not killing the processes caused trouble with blank pages for me. # rm -rf /data/nginx/proxy_cache2 && killall -HUP nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234605,234606#msg-234606 From nginx-forum at nginx.us Tue Jan 1 15:06:16 2013 From: nginx-forum at nginx.us (gadh) Date: Tue, 01 Jan 2013 10:06:16 -0500 Subject: nginx crash only when using Chromium (in ubuntu) In-Reply-To: <6b2696652f0eea27abf34fe157600f36.NginxMailingListEnglish@forum.nginx.org> References: <6b2696652f0eea27abf34fe157600f36.NginxMailingListEnglish@forum.nginx.org> Message-ID: <03cc97142e1af0d17a932e8e67e07f05.NginxMailingListEnglish@forum.nginx.org> I think I found the source of the crash - I often hibernate my vbox (virtual machine) and also my ubuntu (the host machine), so it appears that the memory was corrupted. After rebooting only the vbox, all is normal now, no crash. The one thing I could not understand is why I got the crash only when using Chromium and not in other browsers?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234580,234607#msg-234607 From someukdeveloper at gmail.com Tue Jan 1 18:21:02 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Tue, 01 Jan 2013 18:21:02 +0000 Subject: Nginx FastCGI question Message-ID: <50E3290E.8040007@googlemail.com> Hi, When you run a FastCGI application behind Nginx, does Nginx pass all the HTTP request headers to the FastCGI server / app? Or do you need to explicitly pass them using fastcgi_param? If Nginx does pass them, does it pass them all or only a subset of the request headers? The FastCGI specification is not at all clear on whether the HTTP headers are passed or not. Also, along the same lines, I've read through the FastCGI specification and can find no information stating which entity is responsible for generating all the HTTP response headers. I would assume it would be my FastCGI app and that Nginx just forwards it on, but does Nginx add or modify any headers after I have sent my response from my FastCGI app back to Nginx? Thanks. From siefke_listen at web.de Tue Jan 1 18:33:05 2013 From: siefke_listen at web.de (Silvio Siefke) Date: Tue, 1 Jan 2013 19:33:05 +0100 Subject: Multilanguage Websites In-Reply-To: <20121231175518.GL18139@craic.sysops.org> References: <20121227162757.0614c719b67d5235ec41800f@web.de> <20121231010405.79f361636e8d27c43830a2cf@web.de> <20121231175014.8c37a3c1482045ec9857fca6@web.de> <20121231175518.GL18139@craic.sysops.org> Message-ID: <20130101193305.2d2648aac1e6868dba78bd76@web.de> On Mon, 31 Dec 2012 17:55:18 +0000 Francis Daly wrote: > If you mean "separate content for each of language1, language2, > language3; all available at separate urls", then you need no > special web server cleverness after the user has chosen to go to > http://language1.example.org/ or to http://www.example.org/language1/ > (depending on how you deploy it).
Yes, at the moment I have my webroot set up as htdocs/de -> default, htdocs/en, htdocs/fr, htdocs/ar. It only needs to be redirected. > All you need is for the index page on the "main" web site to offer a > series of links to each of the known separate language index pages. The best way is when it happens automatically. I find the language selection boxes unappealing. > Have a look at (for example) http://www.wikipedia.org/ or (as you've > previously linked to) http://www.justasysadmin.net/ The CMS or wiki systems use different languages; that's normal for the design and the use of the software. I never use these systems. > If you want to implement some cleverness on the index page to avoid the > user having to manually choose language, you must decide what you want > the choice to be based on. Whatever you do choose, it may be worth your > while making clear to the user why you chose that one, and what the user > can do to get to the language version they actually prefer. > > The HTTP Accept-Language header is probably a reasonable choice, if your > users know how to change it or to override it for your site. Then we must use these language selection boxes again. Web 2.0, let's rock. Regards Silvio From mdounin at mdounin.ru Tue Jan 1 20:29:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Jan 2013 00:29:40 +0400 Subject: Purge whole cache zone In-Reply-To: <81889ddb36608cc896512f2dee38fe57.NginxMailingListEnglish@forum.nginx.org> References: <19db3ae2f33e3678db49ac417426f741.NginxMailingListEnglish@forum.nginx.org> <81889ddb36608cc896512f2dee38fe57.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130101202939.GW40452@mdounin.ru> Hello! On Tue, Jan 01, 2013 at 09:39:32AM -0500, oleksandr-shb wrote: > Sorry, folks my question is a duplicate of this thread > http://forum.nginx.org/read.php?2,30833,32763#msg-32763 (solution). So, the > solution is to remove cache folder and kill nginx processes. Not killing > processes caused troubles with blank pages for me.
> # rm -rf /data/nginx/proxy_cache2 && killall -HUP nginx Correct solution would be to remove all items (subdirectories and/or files) within the cache folder, not the cache folder itself. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Tue Jan 1 20:45:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Jan 2013 00:45:20 +0400 Subject: Nginx FastCGI question In-Reply-To: <50E3290E.8040007@googlemail.com> References: <50E3290E.8040007@googlemail.com> Message-ID: <20130101204520.GX40452@mdounin.ru> Hello! On Tue, Jan 01, 2013 at 06:21:02PM +0000, Some Developer wrote: > Hi, > > When you run a FastCGI application behind Nginx does Nginx pass all > the HTTP request headers to the FastCGI server / app? Or do you need > to explicitly pass them using fastcgi_param? > > If Nginx does pass them, does it pass them all or only a subset of > the request headers? The FastCGI specification is not at all clear > on whether the HTTP headers are passed or not. All HTTP request headers are passed to a FastCGI application by default. You may modify/clear some by using the fastcgi_param directive. > Also in the same way I've read through the FastCGI specification and > can find no information in which states which entity is responsible > for generating all the HTTP response headers. I would assume it > would be my FastCGI app and Nginx just forwards it on but does Nginx > add or modify any headers after I have sent my response from my > FastCGI app back to Nginx? FastCGI relies on CGI here, so you should read RFC 3875 for basics, see http://tools.ietf.org/html/rfc3875#section-6. In short - you are responsible for headers like "Content-Type", but must not add headers like "Transfer-Encoding". Response from a FastCGI backend is treated by nginx more or less like any other response, so response headers will be modified if it's needed. 
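[To put Maxim's first point in config form: request headers reach the application as HTTP_* FastCGI parameters automatically, and fastcgi_param is only needed to add or override entries. A hedged sketch; the header name here is hypothetical, purely for illustration:]

```nginx
location ~ \.php$ {
    # The standard fastcgi_params file sets the CGI-style variables;
    # the client's request headers are passed as HTTP_* params automatically.
    include fastcgi_params;

    # Hypothetical example: clear a header so the application never
    # sees a client-supplied value for it.
    fastcgi_param HTTP_X_INTERNAL_AUTH "";

    fastcgi_pass 127.0.0.1:9000;
}
```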
-- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Tue Jan 1 21:31:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Jan 2013 01:31:09 +0400 Subject: Allow directive with variables In-Reply-To: <2f1054aca24886494db786dd3b27b4f1.NginxMailingListEnglish@forum.nginx.org> References: <2f1054aca24886494db786dd3b27b4f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130101213109.GZ40452@mdounin.ru> Hello! On Mon, Dec 31, 2012 at 07:51:21PM -0500, justin wrote: > I am trying to use a variable with the `allow` directive, i.e. > > set $home_ip 1.2.3.4; > > location ^~ /apc/ { > # Allow home > allow $home_ip; > > deny all; > > include /etc/nginx/php.conf; > } > > But I am getting an error. Does the `allow` directive allow variables? No, variables are not allowed within the "allow" directive parameters. If a parameter can contain variables, it's indicated in the directive's description. Note well: it's a bad idea to use variables to shorten configs, see http://nginx.org/en/docs/faq/variables_in_config.html.
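[Editorial aside: when the goal is only to keep one trusted address defined in a single place, a workaround sometimes used (a sketch, not from this thread; the flag name is made up) is the geo module, which maps the client address to a value at a point where variables are allowed:]

```nginx
# Hypothetical sketch: geo maps $remote_addr to a flag at http level;
# "allow" itself still never takes variables.
geo $deny_apc {
    default  1;
    1.2.3.4  0;   # the home IP from the original post
}

server {
    location ^~ /apc/ {
        # nginx treats "0" and "" as false, so only 1.2.3.4 gets through
        if ($deny_apc) {
            return 403;
        }
        include /etc/nginx/php.conf;
    }
}
```

[For a single address, though, the literal form `allow 1.2.3.4; deny all;` is simpler and matches the FAQ advice linked above.]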
-- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Jan 2 05:36:45 2013 From: nginx-forum at nginx.us (Ensiferous) Date: Wed, 02 Jan 2013 00:36:45 -0500 Subject: Nginx FastCGI question In-Reply-To: <50E3290E.8040007@googlemail.com> References: <50E3290E.8040007@googlemail.com> Message-ID: In addition to what Maxim said, some people do not realize that not all headers are created equal: something as inane as underscores versus dashes in a header name can break things, and nginx needs you to explicitly tell it that such headers are okay: http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234608,234616#msg-234616 From someukdeveloper at gmail.com Wed Jan 2 08:13:58 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Wed, 02 Jan 2013 08:13:58 +0000 Subject: Nginx FastCGI question In-Reply-To: <20130101204520.GX40452@mdounin.ru> References: <50E3290E.8040007@googlemail.com> <20130101204520.GX40452@mdounin.ru> Message-ID: <50E3EC46.10804@googlemail.com> On 01/01/13 20:45, Maxim Dounin wrote: > Hello! > > On Tue, Jan 01, 2013 at 06:21:02PM +0000, Some Developer wrote: > >> Hi, >> >> When you run a FastCGI application behind Nginx does Nginx pass all >> the HTTP request headers to the FastCGI server / app? Or do you need >> to explicitly pass them using fastcgi_param? >> >> If Nginx does pass them, does it pass them all or only a subset of >> the request headers? The FastCGI specification is not at all clear >> on whether the HTTP headers are passed or not. > > All HTTP request headers are passed to a FastCGI application by > default. You may modify/clear some by using the fastcgi_param > directive. > >> Also in the same way I've read through the FastCGI specification and >> can find no information in which states which entity is responsible >> for generating all the HTTP response headers.
I would assume it >> would be my FastCGI app and Nginx just forwards it on but does Nginx >> add or modify any headers after I have sent my response from my >> FastCGI app back to Nginx? > > FastCGI relies on CGI here, so you should read RFC 3875 for basics, > see http://tools.ietf.org/html/rfc3875#section-6. In short - you > are responsible for headers like "Content-Type", but must not add > headers like "Transfer-Encoding". > > Response from a FastCGI backend is treated by nginx more or less > like any other response, so response headers will be modified if > it's needed. > Thanks for that. The CGI RFC certainly seems more useful than the FastCGI specification. From someukdeveloper at gmail.com Wed Jan 2 08:14:33 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Wed, 02 Jan 2013 08:14:33 +0000 Subject: Nginx FastCGI question In-Reply-To: References: <50E3290E.8040007@googlemail.com> Message-ID: <50E3EC69.7020508@googlemail.com> On 02/01/13 05:36, Ensiferous wrote: > In addition to what Maxim said some people do not realize that not all > headers are created equally and that something as inane as underscores > versus dashes causes it to break RFC and nginx need you to explicitly tell > it that that's okay: > http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers I'll bear that in mind. Thanks. From tom at miramedia.co.uk Wed Jan 2 08:25:32 2013 From: tom at miramedia.co.uk (Tom Barrett) Date: Wed, 2 Jan 2013 08:25:32 +0000 Subject: Purge whole cache zone In-Reply-To: <20130101202939.GW40452@mdounin.ru> References: <19db3ae2f33e3678db49ac417426f741.NginxMailingListEnglish@forum.nginx.org> <81889ddb36608cc896512f2dee38fe57.NginxMailingListEnglish@forum.nginx.org> <20130101202939.GW40452@mdounin.ru> Message-ID: Is it possible to have each server{} block write to its own cache directory? Or otherwise identify cached pages by server{} block?
The aim is to make it easy to clear all cached pages for a specific site on a box hosting multiple sites. On 1 January 2013 20:29, Maxim Dounin wrote: > Hello! > > On Tue, Jan 01, 2013 at 09:39:32AM -0500, oleksandr-shb wrote: > > > Sorry, folks my question is a duplicate of this thread > > http://forum.nginx.org/read.php?2,30833,32763#msg-32763 (solution). So, > the > > solution is to remove cache folder and kill nginx processes. Not killing > > processes caused troubles with blank pages for me. > > # rm -rf /data/nginx/proxy_cache2 && killall -HUP nginx > > Correct solution would be to remove all items (subdirectories > and/or files) within the cache folder, not the cache folder > itself. > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Wed Jan 2 08:46:45 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 2 Jan 2013 12:46:45 +0400 Subject: Purge whole cache zone In-Reply-To: References: <19db3ae2f33e3678db49ac417426f741.NginxMailingListEnglish@forum.nginx.org> <81889ddb36608cc896512f2dee38fe57.NginxMailingListEnglish@forum.nginx.org> <20130101202939.GW40452@mdounin.ru> Message-ID: On Jan 2, 2013, at 12:25 , Tom Barrett wrote: > Is it possible to have each server{} block write to it's own cache directory? Or otherwise identify cached pages by server{} block? > > The aim is to make it easy to clear all cached pages for a specific site on a box hosting multiple sites. proxy_cache_path /path/to/cache/server1 keys_zone=SERVER1:10m; server { server_name server1.domain.com; proxy_cache SERVER1; ... } proxy_cache_path /path/to/cache/server2 keys_zone=SERVER2:10m; server { server_name server2.domain.com; proxy_cache SERVER2; ... 
} -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at wildgooses.com Wed Jan 2 11:18:21 2013 From: lists at wildgooses.com (Ed W) Date: Wed, 02 Jan 2013 11:18:21 +0000 Subject: Can't figure out secure PHP incantation for owncloud... Message-ID: <50E4177D.80103@wildgooses.com> Hi, I'm using php-fpm to run an "owncloud" install. The application in question is a bit icky in that it has PHP files spread across the filesystem, uploadable data files in the htdocs root, all PHP files default to writeable (erk) and there is extensive use of both path_info and parameters. I'm struggling to figure out a a secure implementation for the php execution which guarantees that the PHP files in question exist and aren't in the upload directory. So example (expected) php URLs would be: # with path_info and params (index|remote|public|status).php/some/long/path?possibly=withparams (public|remote|index).php?some=params # scattered around the filesystem in various (limited) subdirs (apps|search|core)/.*/.*\.php?some=params # needing default index file (grr) /?app=gallery&getfile=ajax%2Fthumbnail.php&filepath=blah # to return asset files (eek?) /remote.php?core.css /remote.php/core.css I'm struggling to figure out how to use try_files to ensure that the php file in question really exists, because it seems like using try_files changes the URI and removes the path_info part? (Also note that some asset files are returned by php scripts and we desire to match those urls and set various expiry/cache times on them.) 
At present I have: fastcgi2.conf is a copy of fastcgi.conf with one change: fastcgi_param REQUEST_URI $uri$is_args$args; nginx config: server { listen 443; server_name cloud.example.com; ssl on; ssl_certificate /etc/ssl/nginx/cloud.example.com.crt; ssl_certificate_key /etc/ssl/nginx/cloud.example.com.key; access_log /var/log/nginx/cloud.example.com.access_log main; error_log /var/log/nginx/cloud.example.com.error_log info; root /var/www/$server_name/htdocs; client_max_body_size 1200M; fastcgi_buffers 64 4K; index index.php; location ~ ^/(data|config|\.ht|db_structure\.xml|README) { deny all; } location / { rewrite ^/.well-known/host-meta /public.php?service=host-meta last; rewrite ^/.well-known/carddav /remote.php/carddav/ redirect; rewrite ^/.well-known/caldav /remote.php/caldav/ redirect; try_files $uri $uri/ index.php; } location ~ ^(?P.+\.php)(/|$) { fastcgi_split_path_info ^(.+\.php)(/.*)$; if (!-f $script_name) { #return 404; break; } include fastcgi2.conf; fastcgi_pass 127.0.0.1:9000; } location ~* ^.+.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ { expires 30d; access_log off; } } Can anyone please help me do better with the php section? In particular for some reason if I use "return 404" then the app breaks, seems like the URL paths get messed up (why?), however, leaving it as is, then missing files return a 403 response... Thanks for any help (I guess it can go on the wiki once thrashed out?) Ed W From friedrich.locke at gmail.com Wed Jan 2 13:53:20 2013 From: friedrich.locke at gmail.com (Friedrich Locke) Date: Wed, 2 Jan 2013 11:53:20 -0200 Subject: support for kerberos Message-ID: Hi folks, i wonder if nginx support for authentication via kerberos ticket or even fetching passwords from kerberos! Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From siefke_listen at web.de Wed Jan 2 14:26:14 2013 From: siefke_listen at web.de (Silvio Siefke) Date: Wed, 2 Jan 2013 15:26:14 +0100 Subject: Multilanguage Websites In-Reply-To: <20121231175518.GL18139@craic.sysops.org> References: <20121227162757.0614c719b67d5235ec41800f@web.de> <20121231010405.79f361636e8d27c43830a2cf@web.de> <20121231175014.8c37a3c1482045ec9857fca6@web.de> <20121231175518.GL18139@craic.sysops.org> Message-ID: <20130102152614.1c8fa30175fb8e55be15e58a@web.de> Hello, OK, I have made a redirect based on the Accept-Language header with PHP. It's running perfectly and was easy. What is the best way to set this up with nginx? A server for each site? Or are subfolders in the webroot the solution? And how can I best ensure that the search engines are also satisfied? Thanks for help. Greetings Silvio From francis at daoine.org Wed Jan 2 15:05:31 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 2 Jan 2013 15:05:31 +0000 Subject: Multilanguage Websites In-Reply-To: <20130102152614.1c8fa30175fb8e55be15e58a@web.de> References: <20121227162757.0614c719b67d5235ec41800f@web.de> <20121231010405.79f361636e8d27c43830a2cf@web.de> <20121231175014.8c37a3c1482045ec9857fca6@web.de> <20121231175518.GL18139@craic.sysops.org> <20130102152614.1c8fa30175fb8e55be15e58a@web.de> Message-ID: <20130102150531.GN18139@craic.sysops.org> On Wed, Jan 02, 2013 at 03:26:14PM +0100, Silvio Siefke wrote: Hi there, > ok i have make with PHP a Redirect from Accept Language. It's running > perfect and was easy. How do I make the best with nginx? For each site > a server? In the webroot subfolder is that the solution? How can I best > be used to ensure that the search engines are also satisfied. It's good that you got the nginx-related part working -- deciding to use external code for the redirect, and using fastcgi_pass or proxy_pass or whichever you chose. For the questions in this mail, you'll probably get better answers from a list which deals with that topic.
"better answers" really means "when one person gives bad advice, others are knowledgeable and interested enough to correct it". nginx.conf can handle each of http://language1.example.org/ and http://www.example.org/language1/ as easily as the other, so there's no "best" from an nginx perspective. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Jan 2 17:18:33 2013 From: nginx-forum at nginx.us (zuger) Date: Wed, 02 Jan 2013 12:18:33 -0500 Subject: SSL pass through Message-ID: Hello, I would like to use NGINX as a reverse proxy and pass https requests to a back-end server without having to install certificates on the NGINX reverse proxy, because the backend servers are already set up to handle https requests. What would the configuration look like for this purpose? Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234641,234641#msg-234641 From francis at daoine.org Wed Jan 2 17:43:03 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 2 Jan 2013 17:43:03 +0000 Subject: SSL pass through In-Reply-To: References: Message-ID: <20130102174303.GO18139@craic.sysops.org> On Wed, Jan 02, 2013 at 12:18:33PM -0500, zuger wrote: Hi there, > I would like to use NGINX as a reverse proxy and pass https requests to a > back-end server without having to install certificates on the NGINX reverse > proxy because the backend servers are already set up to handle https > requests. What you are describing sounds more like a tcp port forwarder than a reverse proxy to me. > How would the configuration look like for this purpose? Do not have "listen 443" or "ssl on" in the nginx.conf. Let your separate port forwarder listen on port 443 and tunnel the data straight to your back-end server. nginx.conf for http will be like pretty much any examples you can find.
f -- Francis Daly francis at daoine.org From friedrich.locke at gmail.com Wed Jan 2 17:50:53 2013 From: friedrich.locke at gmail.com (Friedrich Locke) Date: Wed, 2 Jan 2013 15:50:53 -0200 Subject: gssapi Message-ID: I am using kerberos to implement SSO in my network. Does nginx support gssapi authentication, or even fetching passwords from kerberos? Thanks for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Wed Jan 2 18:08:38 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 2 Jan 2013 18:08:38 +0000 Subject: gssapi In-Reply-To: References: Message-ID: On 2 January 2013 17:50, Friedrich Locke wrote: > Does nginx support gssapi http://bit.ly/134lRHh Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Wed Jan 2 21:14:49 2013 From: nginx-forum at nginx.us (zuger) Date: Wed, 02 Jan 2013 16:14:49 -0500 Subject: SSL pass through In-Reply-To: <20130102174303.GO18139@craic.sysops.org> References: <20130102174303.GO18139@craic.sysops.org> Message-ID: Thank you for the quick answer. I will be a little more precise. I would like to forward https requests to different backend servers based on the hostname (Host) header, e.g. https://machine1.domain.com should be forwarded to https://10.0.0.1 and https://machine2.domain.com to https://10.0.0.2. You mentioned something like a tcp port forwarder. Is this tcp port forwarding part of the NGINX configuration or something outside NGINX? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234641,234648#msg-234648 From supairish at gmail.com Wed Jan 2 21:16:44 2013 From: supairish at gmail.com (Chris Irish) Date: Wed, 2 Jan 2013 14:16:44 -0700 Subject: Prevent Chrome SSL Domain Mismatch Warning When Redirecting Message-ID: Hello, I have an SSL cert set up for a domain with no subdomain, i.e. mydomain.org.
And a server block setup to redirect all https 'www' subdomain requests to the non subdomain server block. This works fine in Safari, FF, etc. But Chrome gives me a certificate domain name mismatch warning ( The big red warning screen ) How can I prevent this? It's like Chrome checks the SSL cert name before even following the nginx redirect. Here's what I'm doing. Any help appreciated server { listen 443; server_name www.mydomain.org; return 301 $scheme://mydomain.org$request_uri; ssl on; ssl_certificate /etc/nginx/certs/new_sslchain.crt; ssl_certificate_key /etc/nginx/certs/azcharters-10-29-12.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; ssl_prefer_server_ciphers on; } server { listen 443; server_name mydomain.org; root /home/deploy/apps/myapp/current/public; passenger_enabled on; ssl on; ssl_certificate /etc/nginx/certs/new_sslchain.crt; ssl_certificate_key /etc/nginx/certs/azcharters-10-29-12.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; ssl_prefer_server_ciphers on; } -- Chris Irish Burst Software Rails Web Development e: supairish at gmail.com c: 623-523-2221 w: www.burstdev.com w: www.christopherirish.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Wed Jan 2 21:25:59 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 2 Jan 2013 21:25:59 +0000 Subject: SSL pass through In-Reply-To: References: <20130102174303.GO18139@craic.sysops.org> Message-ID: On 2 January 2013 21:14, zuger wrote: > Thank you for the quick answer. I will be a little more precise. > > I would like to forward https requests to different backend server based on > the hostname header, e.g. https://machine1.domain.com should be forwarded to > https://10.0.0.1 and https://machine2.domain.com to https://10.0.0.2. 
You can't do this HTTP-level routing inside nginx without allowing nginx to terminate the SSL connection, which would require the certificates to be available to nginx at startup/reload. Have a read of https://wiki.apache.org/httpd/NameBasedSSLVHosts for a decent discussion of the generic (HTTPd-agnostic) possibilities and problems. > You mentioned something like a tcp port forwarder. Is this tcp port > forwarding part of the NGINX configuration or something outside NGINX? I would personally use HAProxy in TCP mode for this purpose, however there's a non-trivial operational/PCI-DSS/code problem that crops up when you *don't* terminate your SSL at network edge: you lose visibility of the client's IP address at the point at which you *do* terminate the SSL. You lose this visibility regardless of any X-Forwarded-For headers you might use. The HAProxy "PROXY" protocol is a possible fix for this, but it's not yet available in a stable release of HAProxy. Basically, terminate your SSL at the edge. Or get people who understand your problem/app domain, SSL, and security to design a solution for you. Cheers, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From contact at jpluscplusm.com Wed Jan 2 21:33:57 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 2 Jan 2013 21:33:57 +0000 Subject: Prevent Chrome SSL Domain Mismatch Warning When Redirecting In-Reply-To: References: Message-ID: On 2 January 2013 21:16, Chris Irish wrote: > Hello, > I have a SSL cert setup for a domain with no subdomain, i.e. > mydomain.org. And a server block setup to redirect all https 'www' > subdomain requests to the non subdomain server block. This works fine in > Safari, FF, etc. But Chrome gives me a certificate domain name mismatch > warning ( The big red warning screen ) How can I prevent this? It's like > Chrome checks the SSL cert name before even following the nginx redirect. Of course it does. That's how SSL works. 
You're serving up the certificate for azcharters.org where browsers (it's not just Chrome!) are expecting one that identifies itself as belonging to www.azcharters.org. You need to serve up a certificate that matches www.azcharters.org in its Common Name (CN) or Subject Alternative Name (SAN), just for the redirect listener block. If you only have a single IP to serve both :443 listeners, by the way, you're out of luck with your current cert. You'd have to find an SSL vendor who'll sell you a single cert with (say) azcharters.org in the CN and www.azcharters.org in the SAN. This may be more expensive than you'd expect and - to be honest - I wouldn't bother. Regards, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Wed Jan 2 22:12:34 2013 From: nginx-forum at nginx.us (zuger) Date: Wed, 02 Jan 2013 17:12:34 -0500 Subject: SSL termination and HAProxy In-Reply-To: References: Message-ID: Thank you Jonathan. Your explanations were very helpful, and so was the link to "NameBasedSSLVHosts". I will now evaluate the two scenarios: terminate SSL in NGINX and forward http to the backend servers, or use HAProxy. Did I understand correctly that when I use HAProxy I do not have to terminate SSL at the HAProxy server? SSL will then be terminated at the backend servers? zuger Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234641,234653#msg-234653 From contact at jpluscplusm.com Wed Jan 2 22:29:16 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 2 Jan 2013 22:29:16 +0000 Subject: SSL termination and HAProxy In-Reply-To: References: Message-ID: On 2 January 2013 22:12, zuger wrote: > Thank you Jonathan. > > Your explanations were very helpful and the link to "NameBasedSSLVHosts" > also. Glad it helped, Zuger. > I will now evaluate the two scenarios. Teminate SSL in NGINX and forward > http to the backend servers or use HAProxy.
SSL termination at the edge (I suggest in nginx) will save you much grief, over time. I would only be considering passing SSL through to a back-end layer if I had to for specific security reasons, such as PCI-DSS compliance or because the machine at the network edge was untrusted somehow. Do note: with nginx you can proxy_pass to a *different* SSL FQDN, after having terminated the SSL connection. I.e. server { listen 443; server_name external-domain.com # ssl cert config options which I can't remember off the top of my head ... location / { proxy_pass https://my-internal-service-name-which-is-still-ssl-encrypted.internal.fqdn:443; } } This way, you unwrap the SSL for long enough to route it correctly, but then encrypt it again to ensure the communication between nginx and the backend service is secure. This still requires the cert/key for "external-domain.com" on the nginx server, however. Do be aware that this setup *won't* allow you to exclude the nginx machine from being part of your PCI-DSS CDE, I believe. (If that was meaningless to you, just ignore it!) Also be aware that, if your nginx machine is actually untrusted, this doesn't help. Any attacker who gets control of the box still gets access to your certs and can sniff any "SSL" traffic s/he likes. > Did I understood correctly that when I use HAProxy I do not have to > terminate SSL at HAProxy server? SSL will then be terminated at the backend > servers? [ NB: I'm only suggesting HAP as that's what I'd use in the scenario you painted. Other TCP-Level Load Balancers Are Available. ] HAProxy only learned to speak SSL in a recent-ish development version. If you need to use a stable release (1.4) then you *cannot* terminate SSL with it, and would have to pass the TCP connection through to something that owned the appropriate SSL certificates. 
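[Applying Jonathan's edge-termination advice to zuger's machine1/machine2 scenario from earlier in the thread might look like the following sketch. The certificate paths and names are assumptions, not from the thread; one certificate per external name:]

```nginx
# One server block per external name; nginx terminates SSL, routes by
# server_name, then re-encrypts to the corresponding backend.
server {
    listen 443;
    server_name machine1.domain.com;
    ssl on;
    ssl_certificate     /etc/nginx/certs/machine1.domain.com.crt;  # assumed path
    ssl_certificate_key /etc/nginx/certs/machine1.domain.com.key;  # assumed path

    location / {
        proxy_pass https://10.0.0.1;              # re-encrypt to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 443;
    server_name machine2.domain.com;
    ssl on;
    ssl_certificate     /etc/nginx/certs/machine2.domain.com.crt;  # assumed path
    ssl_certificate_key /etc/nginx/certs/machine2.domain.com.key;  # assumed path

    location / {
        proxy_pass https://10.0.0.2;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

[As Jonathan notes, this keeps the client IP visible to the backend via the X-Forwarded-For header, which a pure TCP pass-through cannot do.]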
HTH, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Thu Jan 3 07:31:33 2013 From: nginx-forum at nginx.us (PascalTurbo) Date: Thu, 03 Jan 2013 02:31:33 -0500 Subject: path_info in alias environment Message-ID: Hi There I need to get this working on nginx with php-fpm: example.com/studip/dispatch.php/admin/user/ The problem seems to be that /studip isn't a subfolder under root but an alias to /usr/local/studip/public/ Here's the configuration without the (non-working) path_info foo: server { listen 80; server_name example.com; root /var/www/example.com/htdocs; index index.php; # Here are a few other subfolders hosted # ... # ... # and now studip: location /studip { alias /usr/local/studip/public/; index index.php; location ~ /studip/(.*\.php)$ { fastcgi_pass unix:/var/www/sockets/studip.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$1; include fastcgi_params; } } } And the fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; fastcgi_buffers 8 16k; fastcgi_buffer_size 32k; I tried it for a sub-domain where root points to 
/usr/local/studip/public/ and got it working with these params: location / { try_files $uri $uri/ /index.php; } location ~ \.php { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_pass unix:/var/www/sockets/www.socket; fastcgi_index index.php; } But I have no idea how to port this to the subfolder setup. Any suggestions? Thanks a lot Pascal Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234659,234659#msg-234659 From steve at greengecko.co.nz Thu Jan 3 08:13:42 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 03 Jan 2013 21:13:42 +1300 Subject: path_info in alias environment In-Reply-To: References: Message-ID: <50E53DB6.3050107@greengecko.co.nz> Why not just use a symbolic link? 
Steve On 03/01/13 20:31, PascalTurbo wrote: > Hi There > > I need to get this working on nginx with php-fpm: > > example.com/studip/dispatch.php/admin/user/ > > The Problem seems to be, that /studip isn't a subfolder under root but a > alias to /usr/local/studip/public/ > > Here's the configuration without the (non working) path_info foo: > > server { > listen 80; > server_name example.com; > > root /var/www/example.com/htdocs; > index index.php > > # Here are a few other subfolders hosted > # ... > # ... > > # and now studip: > > location /studip { > alias /usr/local/studip/public/; > index index.php; > location ~ /studip/(.*\.php)$ { > fastcgi_pass unix:/var/www/sockets/studip.socket; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$1; > include fastcgi_params; > } > } > } > And the fastcgi_params: > > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_FILENAME $request_filename; > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT $document_root; > fastcgi_param SERVER_PROTOCOL $server_protocol; > > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; > > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > fastcgi_param HTTPS $https; > > # PHP only, required if PHP was built with --enable-force-cgi-redirect > fastcgi_param REDIRECT_STATUS 200; > > fastcgi_buffers 8 16k; > fastcgi_buffer_size 32k; > > > > I tried it for a sub-domain where root points to /usr/local/studip/public/ > and get it working with this params: > > location / { > try_files $uri $uri/ /index.php; > } > > 
location ~ \.php { > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT $document_root; > fastcgi_param SERVER_PROTOCOL $server_protocol; > > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx; > > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > fastcgi_pass unix:/var/www/sockets/www.socket; > fastcgi_index index.php; > } > > But I got no idea how to port this to subfolder. > > Any suggestions? > > Thanks allot > Pascal > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234659,234659#msg-234659 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Jan 3 11:53:19 2013 From: nginx-forum at nginx.us (PascalTurbo) Date: Thu, 03 Jan 2013 06:53:19 -0500 Subject: path_info in alias environment In-Reply-To: <50E53DB6.3050107@greengecko.co.nz> References: <50E53DB6.3050107@greengecko.co.nz> Message-ID: <2d7c1389ceb9fe0838e8a446b12bbcc7.NginxMailingListEnglish@forum.nginx.org> GreenGecko Wrote: ------------------------------------------------------- > Why not just use a symbolic link? Because it's an ugly solution. Isn't there a "correct" way? 
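An untested sketch of how the sub-folder variant might look. fastcgi_split_path_info operates on $fastcgi_script_name, which doesn't account for the alias, so this captures the script and path-info parts in the location regex instead; the socket and directory paths are taken from the thread, everything else is an assumption:

```nginx
location /studip {
    alias /usr/local/studip/public/;
    index index.php;

    # Capture the script name and the trailing PATH_INFO in the regex,
    # since the aliased filesystem path can't be rebuilt from
    # $fastcgi_script_name under an alias.
    location ~ ^/studip/(?<script>.+\.php)(?<pathinfo>/.*)?$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/www/sockets/studip.socket;
        # Set these after the include so they take precedence over
        # the values in fastcgi_params.
        fastcgi_param SCRIPT_FILENAME /usr/local/studip/public/$script;
        fastcgi_param PATH_INFO $pathinfo;
    }
}
```

Since duplicated fastcgi_param entries are all sent to the backend, it's worth verifying with a phpinfo() page which SCRIPT_FILENAME php-fpm actually ends up using.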
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234659,234662#msg-234662 From r at roze.lv Thu Jan 3 12:42:03 2013 From: r at roze.lv (Reinis Rozitis) Date: Thu, 3 Jan 2013 14:42:03 +0200 Subject: Prevent Chrome SSL Domain Mismatch Warning When Redirecting In-Reply-To: References: Message-ID: <0C80123BB5874454A0730CF71B7A9691@MasterPC> > You'd have to find an SSL vendor who'll sell you a single cert with (say) > azcharters.org in the CN and www.azcharters.org in the SAN. This may be > more expensive than you'd expect and - to be honest - I wouldn't bother. To avoid such issues, quite a few SSL vendors include the (www.) alternative name automatically (like Godaddy, Comodo and Geotrust for sure). rr From nginx-forum at nginx.us Thu Jan 3 13:49:05 2013 From: nginx-forum at nginx.us (pieter@lxnex.com) Date: Thu, 03 Jan 2013 08:49:05 -0500 Subject: Server won't start AND Nginx as reverse proxy Message-ID: <91c46b03d0ec7b4093603089f372fc7f.NginxMailingListEnglish@forum.nginx.org> Hi guys, I have two issues on which I cannot seem to find decent help: 1- See the configuration below. If the https://some.site.com site is down, Nginx won't start. I still want Nginx to start whether this site is down or not: server { listen 8000; location / { proxy_pass https://some.site.com; } } 2- We have set up Nginx as a reverse proxy server to send users to a few backend Swazoo web servers. If a particular Swazoo web server is currently busy handling a request, Nginx does not reverse proxy new incoming requests to one of the other Swazoo web servers and the site appears to be 'hanging'. Any help on this? Thanks! 
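On the first issue: nginx resolves a hostname used in proxy_pass when the configuration is loaded, so an unresolvable name prevents startup. A commonly used workaround (a sketch, untested against this setup) is to defer resolution to request time by putting the upstream in a variable and configuring a resolver; the resolver address below is a placeholder:

```nginx
server {
    listen 8000;

    # Resolve the name at request time instead of config-load time,
    # so nginx can start even while the name is unresolvable.
    resolver 8.8.8.8;

    location / {
        set $upstream https://some.site.com;
        proxy_pass $upstream;
    }
}
```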
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234664,234664#msg-234664 From nginx-forum at nginx.us Thu Jan 3 19:04:04 2013 From: nginx-forum at nginx.us (SupaIrish) Date: Thu, 03 Jan 2013 14:04:04 -0500 Subject: Prevent Chrome SSL Domain Mismatch Warning When Redirecting In-Reply-To: <0C80123BB5874454A0730CF71B7A9691@MasterPC> References: <0C80123BB5874454A0730CF71B7A9691@MasterPC> Message-ID: <786997ef627def0edbbe51cd8c289427.NginxMailingListEnglish@forum.nginx.org> Jonathan, Reinis, thank you both for your responses. That clarified things a lot! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234649,234666#msg-234666 From mdounin at mdounin.ru Fri Jan 4 03:47:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Jan 2013 07:47:37 +0400 Subject: nginx post response doesn't get cached In-Reply-To: References: Message-ID: <20130104034737.GC12313@mdounin.ru> Hello! On Sat, Dec 29, 2012 at 12:46:21PM -0500, nurettin wrote: > I'm using an old version of nginx (0.8) on centos as reverse proxy for > caching POST requests in front of two upstream servers. > The servers are built for receiving post requests and returning media, > sometimes 10 MB in size. > > When the responses are small, nginx caches work fine. When I get a 2 MB > response, nginx doesn't cache the POST response. > > I tried increasing proxy buffer size and busy buffer size but it had no > effect, how do I cache large POST responses in nginx? Normally responses for POST requests are not cached (even if response indicates it is cacheable) as there is no good generic way to construct a cache key. If you want nginx to cache responses to POST requests, you should instruct it to do so explicitly using the "proxy_cache_methods" directive, e.g. proxy_cache_methods GET HEAD POST; See http://nginx.org/r/proxy_cache_methods. 
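A minimal sketch of the directives involved, with a cache key built from the URI and request body as the original poster described; the zone name, sizes, and backend address are arbitrary placeholders:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=postcache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_cache postcache;
        # Explicitly allow responses to POST requests into the cache.
        proxy_cache_methods GET HEAD POST;
        # Key on URI plus body; only sensible for small POST payloads,
        # since the whole body becomes part of the key.
        proxy_cache_key "$request_uri|$request_body";
        proxy_cache_valid 200 10m;
        proxy_pass http://127.0.0.1:8080;
    }
}
```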
-- Maxim Dounin http://nginx.com/support.html From yaoweibin at gmail.com Fri Jan 4 05:18:05 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Fri, 4 Jan 2013 13:18:05 +0800 Subject: Dynamic set upstream servers weight? In-Reply-To: References: Message-ID: No, it can't. If you want to take them offline without a reload, you could use my upstream check module: https://github.com/yaoweibin/nginx_upstream_check_module 2012/12/27 howard chen > Hi > > Is it possible to change server weight or take them offline > without modifying the config and reload? > > We want to change server weight from a script that monitors backend load > average, is it possible? > > p.s. something like Feedbackd for LVS. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 4 06:53:39 2013 From: nginx-forum at nginx.us (nurettin) Date: Fri, 04 Jan 2013 01:53:39 -0500 Subject: nginx post response doesn't get cached In-Reply-To: <20130104034737.GC12313@mdounin.ru> References: <20130104034737.GC12313@mdounin.ru> Message-ID: <3c6ce786a373a6c0009060e2cf9d82b6.NginxMailingListEnglish@forum.nginx.org> Hi Maxim Dounin! The proxy already caches small post responses. My cache key is the request uri and body. I just don't know how to increase the buffer. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234567,234681#msg-234681 From mdounin at mdounin.ru Fri Jan 4 12:32:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Jan 2013 16:32:55 +0400 Subject: nginx post response doesn't get cached In-Reply-To: <3c6ce786a373a6c0009060e2cf9d82b6.NginxMailingListEnglish@forum.nginx.org> References: <20130104034737.GC12313@mdounin.ru> <3c6ce786a373a6c0009060e2cf9d82b6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130104123255.GG12313@mdounin.ru> Hello! On Fri, Jan 04, 2013 at 01:53:39AM -0500, nurettin wrote: > Hi Maxim Dounin! > > The proxy already caches small post responses. My cache key is request uri > and body. I just don't know how to increase the buffer. There are no buffers which influence request cacheability. Responses are either cached or not regardless of their size. I would recommend you to check if max_size= configured in proxy_cache_path (if any) is big enough to store responses you want to cache. If it is, you may want to produce debug log to investigate what goes on, see here for details http://nginx.org/en/docs/debugging_log.html You may also want to upgrade to make sure you are not hitting some old bug. The 0.8.x branch is way too old. -- Maxim Dounin http://nginx.com/support.html From anoopalias01 at gmail.com Fri Jan 4 13:16:05 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 4 Jan 2013 18:46:05 +0530 Subject: Wordpress in a subfolder Message-ID: Hi, I have a setup where there are 2 installations of wordpress .One under / and one under /wordpress . 
The configs are as below ############################## location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location /wordpress { try_files $uri $uri/ /wordpress/index.php?q=$uri&$args; } location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/opt/pifpm/fpmsockets/picdn.sock; fastcgi_index index.php; include fastcgi_params; } ###################### The problem is that URLs like http://picdn.com/2013/01/hello-world/ work, but not http://picdn.com/wordpress/2013/01/04/hello-world/ The second URL simply redirects to http://picdn.com/2013/01/hello-world/ Here is what the nginx log shows ============================================ 122.164.61.17 - - [04/Jan/2013:08:14:22 -0500] "GET /wordpress/2013/01/04/hello-world/ HTTP/1.1" 301 5 "http://picdn.com/wordpress/" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" 122.164.61.17 - - [04/Jan/2013:08:14:22 -0500] "GET /2013/01/hello-world/ HTTP/1.1" 200 3764 "http://picdn.com/wordpress/" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" =========================================== Any idea why this might be happening? Thanks in advance, -- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Fri Jan 4 13:25:43 2013 From: edho at myconan.net (Edho Arief) Date: Fri, 4 Jan 2013 20:25:43 +0700 Subject: Wordpress in a subfolder In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 8:16 PM, Anoop Alias wrote: > Hi, > > I have a setup where there are 2 installations of wordpress .One under / and > one under /wordpress . 
The configs are as below > > > ############################## > location / { > > try_files $uri $uri/ /index.php?q=$uri&$args; > > } > > location /wordpress { > > try_files $uri $uri/ /wordpress/index.php?q=$uri&$args; > > } > > location ~ \.php$ { > try_files $uri =404; > fastcgi_pass unix:/opt/pifpm/fpmsockets/picdn.sock; > fastcgi_index index.php; > include fastcgi_params; > } > > I think you forgot this line fastcgi_param SCRIPT_FILENAME $request_filename; From anoopalias01 at gmail.com Fri Jan 4 13:29:59 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 4 Jan 2013 18:59:59 +0530 Subject: Wordpress in a subfolder In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 6:55 PM, Edho Arief wrote: > On Fri, Jan 4, 2013 at 8:16 PM, Anoop Alias > wrote: > > Hi, > > > > I have a setup where there are 2 installations of wordpress .One under / > and > > one under /wordpress . The configs are as below > > > > > > ############################## > > location / { > > > > try_files $uri $uri/ /index.php?q=$uri&$args; > > > > } > > > > location /wordpress { > > > > try_files $uri $uri/ /wordpress/index.php?q=$uri&$args; > > > > } > > > > location ~ \.php$ { > > try_files $uri =404; > > fastcgi_pass unix:/opt/pifpm/fpmsockets/picdn.sock; > > fastcgi_index index.php; > > include fastcgi_params; > > } > > > > > > I think you forgot this line > > fastcgi_param SCRIPT_FILENAME $request_filename; > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > In my fastcgi_params include file I have ========== fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; ========== Will this not work? 
Thanks -- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Fri Jan 4 13:35:14 2013 From: edho at myconan.net (Edho Arief) Date: Fri, 4 Jan 2013 20:35:14 +0700 Subject: Wordpress in a subfolder In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 8:29 PM, Anoop Alias wrote: > > fastcgi_param DOCUMENT_ROOT $document_root; > > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > ========== > > Will this not work? > My last test a few years ago showed that $request_filename worked much better. May or may not matter now, though. Still worth a try, I'd say. From nginx-forum at nginx.us Fri Jan 4 13:38:53 2013 From: nginx-forum at nginx.us (brama) Date: Fri, 04 Jan 2013 08:38:53 -0500 Subject: http 500 errors are not cached but go to the backend Message-ID: <524a31c50d5acd084a94dce0b18f6d5f.NginxMailingListEnglish@forum.nginx.org> Hi, I've set up nginx (tested with 1.2.6 and 1.3.10) to cache all requests to our fastcgi backends. If a cache entry expires, stale entries will be served while nginx is updating the cache. I'm using the fastcgi_cache_lock feature to make sure only 1 request will be sent to the backend to update the cache. This works fine for everything but http 500 errors. If the backend produces a http 500 result, it will initially be cached for the fastcgi_cache_valid 500 time (1 minute in my example), but when that expires, ALL requests for that document will go to the backend, instead of coming out of the cache. Also, the cache entry is never updated anymore, until it is removed from the cache altogether after 10 minutes (the value of 'inactive' in fastcgi_cache_path). 
My cache config: # in http context fastcgi_cache_path /var/nginx/cache/content levels=1:2 keys_zone=content:24m inactive=10m max_size=512M; # in location context: fastcgi_cache content; # cache entire url, but only sig&exp query string parameters fastcgi_cache_key $scheme|$host|$uri?sig=$arg_sig&exp=$arg_exp; # Ignore caching headers from the response so that clients will not cache # the results; nginx, however, will cache everything fastcgi_ignore_headers Expires Cache-Control; fastcgi_cache_valid 500 1m; fastcgi_cache_valid any 1m; # start caching at the first request fastcgi_cache_min_uses 1; # serve stale cache entries if the fastcgi backend acts up, and also while # a new entry is being generated # XXX for some reason, if http_500 is included here, EACH hit will go to the # backend as soon as the initial cache entry becomes stale. Bug? fastcgi_cache_use_stale error timeout invalid_header updating http_500; # only allow 1 backend request to generate a cache entry at any time; all other # connections wait for it. fastcgi_cache_lock on; # wait max 20s for a cache entry to be generated fastcgi_cache_lock_timeout 20s; As an example, I made the fastcgi backend log each & every request it receives. I am hammering the server with ab -c100 -n500000 and this is what the backend logs (each request produces a http 500): # I start hammering here with 100 concurrent requests. 
Only 1 request is received at the backend: 2013-01-03 22:00:33 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js # Now, it's quiet for exactly 1 minute, and then ALL requests go to the backend: 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js 2013-01-03 22:01:34 content_server.dynamic.main:344 INFO CONTENT SERVER BACKEND REQUEST FOR: http://myserver.localhost/dynamic/http_500.js I've logged the cache status results in the logfile, and this is the contents of the access log at the time the cache entry expires at 22:01:34: "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 
+0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "STALE" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 
500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "STALE" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" 
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING" "192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT" "192.168.63.171" myserver.localhost - 
[03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "STALE"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:01:34 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"

If I remove http_500 from fastcgi_cache_use_stale, THEN only 1 entry will go to the backend if it expires, as desired. The access log then looks like this as soon as the cache entry expires:

"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:53 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:53 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:53 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:53 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:53 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "UPDATING"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "EXPIRED"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"
"192.168.63.171" myserver.localhost - [03/Jan/2013:22:14:54 +0000] "GET /dynamic/http_500.js HTTP/1.0" 500 31 "-" "ApacheBench/2.3" "HIT"

This seems counter-intuitive. If the http 500 document expires, nginx should serve the stale cache entry until it has updated the cache entry. However, with http_500 included in fastcgi_cache_use_stale, it never appears to update the cache entry at all. Bug? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234691,234691#msg-234691 From anoopalias01 at gmail.com Fri Jan 4 13:43:44 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 4 Jan 2013 19:13:44 +0530 Subject: Wordpress in a subfolder In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 7:05 PM, Edho Arief wrote:
> On Fri, Jan 4, 2013 at 8:29 PM, Anoop Alias wrote:
> >
> > fastcgi_param DOCUMENT_ROOT $document_root;
> > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> > ==========
> >
> > Will this not work?
>
> My last test few years ago showed $request_filename worked much
> better. May or may not matter now, though. Still worth a try, I'd say.
> > _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Edho, that worked. Thanks!

But I'm curious to know what's the difference between

=====
#fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $request_filename;
======

If I had set a "root /home/picdn/public_html/wordpress;" inside the location /wordpress block, would it have worked? What's the best route to take here, as I see many docs doing it differently.

Thanks,
-- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From edho at myconan.net Fri Jan 4 13:50:28 2013 From: edho at myconan.net (Edho Arief) Date: Fri, 4 Jan 2013 20:50:28 +0700 Subject: Wordpress in a subfolder In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 8:43 PM, Anoop Alias wrote:
> Edho, that worked. Thanks!
>
> But I'm curious to know what's the difference between
>
> =====
> #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param SCRIPT_FILENAME $request_filename;
> ======
>
My wild guess is that $fastcgi_script_name doesn't get passed correctly (or in an unexpected way, e.g. blank) when using try_files, causing it to fall back to the default /index.php.

From anoopalias01 at gmail.com Fri Jan 4 14:12:33 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 4 Jan 2013 19:42:33 +0530 Subject: Wordpress in a subfolder In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 7:20 PM, Edho Arief wrote:
> #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param SCRIPT_FILENAME $request_filename;
>
Actually, I guess it was a caching issue with my browser when I was using a different try_files line.

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $request_filename;

Both work now.
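[For readers with the same question, here is a sketch of the two variants side by side; paths and the try_files line are hypothetical, not taken from the thread. $request_filename is the file path nginx itself resolved from root/alias plus the (possibly try_files-rewritten) URI, which is why it also tends to behave with "alias", where concatenating $document_root$fastcgi_script_name can produce a wrong path.]

```nginx
# Sketch only; paths are illustrative.
location /wordpress {
    root /home/picdn/public_html;
    try_files $uri $uri/ /wordpress/index.php?$args;

    location ~ \.php$ {
        include fastcgi_params;
        # variant 1: root + URI-derived script name
        # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # variant 2: the path nginx already resolved for this request
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```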
Thanks, -- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Fri Jan 4 14:14:32 2013 From: edho at myconan.net (Edho Arief) Date: Fri, 4 Jan 2013 21:14:32 +0700 Subject: Wordpress in a subfolder In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 9:12 PM, Anoop Alias wrote:
> On Fri, Jan 4, 2013 at 7:20 PM, Edho Arief wrote:
>> #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>> fastcgi_param SCRIPT_FILENAME $request_filename;
>
> Actually i guess it was a caching issue with my browser when i was using a
> different try_files line
>
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param SCRIPT_FILENAME $request_filename;
>
> Both works now ..
>
Yeah, I don't see a difference with either setting. Though I'd say you forgot to actually reload the config after adding the subdirectory config, since I also got a redirect on the subdirectory URL. From kevin at my.walr.us Fri Jan 4 17:33:11 2013 From: kevin at my.walr.us (KT Walrus) Date: Fri, 4 Jan 2013 12:33:11 -0500 Subject: "first" load balancing Message-ID: The purpose of this message is to request a new feature be added to NGINX - a "first" load balancing algorithm and a MAXCONN setting for upstream servers. I am setting up a new site that will require multiple upstream servers and load balancing. I plan to use ip_hash load balancing on frontend servers to do simple load balancing to the backend servers. I would like to protect each backend server from becoming overloaded and possibly failing because of too many connections sent to it. I plan on using haproxy to "guard" each backend server by using haproxy's "first" load balancing and setting a MAXCONN for the localhost backend. If the maximum connection count for the server is reached, haproxy will send the request to the next server in the list.
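[The guard described above maps onto existing haproxy syntax roughly as follows - a minimal sketch assuming a haproxy version that supports "balance first"; all names, addresses and limits are hypothetical.]

```haproxy
# "balance first" fills servers in declaration order; maxconn caps each.
# Overflow from the local backend spills to the next server in the list.
frontend fe_www
    bind :80
    default_backend be_app

backend be_app
    balance first
    server local  127.0.0.1:8080 maxconn 100
    server spare1 10.0.0.2:8080  maxconn 100
    server spare2 10.0.0.3:8080  maxconn 100
```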
I also plan on using Amazon EC2 instances, and this setup will not only protect each backend server from becoming overloaded, it allows me to "spin up" new backend servers if the last backend server is approaching MAXCONN. Similarly, if the last two backends in my server farm are not receiving any requests, I can "spin down" the last server (and save me money!!!). Now, it seems to me I could remove haproxy from my server software stack if NGINX added support for "first" load balancing and a MAXCONN setting for each upstream server. I think this should be relatively easy to support, since I see NGINX already supports "least conn" load balancing. I guess this means that NGINX already knows how many connections it has open to an upstream server and could easily implement "first" and "max conn". Kevin From kevin at my.walr.us Fri Jan 4 20:18:53 2013 From: kevin at my.walr.us (KT Walrus) Date: Fri, 4 Jan 2013 15:18:53 -0500 Subject: dynamic upstream configuration Message-ID: I'm new to this mailing list, but have used nginx for simple websites for years. Now, I am planning a more complicated website which will require multiple upstream servers and more dynamic reconfiguration events. I see that there are several third party health check modules available, but I would like to use only "officially" supported nginx modules. I would like to configure/reconfigure my upstream servers dynamically using a health check module (or an upgraded upstream module). Rather than editing the nginx conf file and reloading for upstream reconfiguration, I'd like to do it through health checks to each upstream server. Initially, my nginx conf file would define all possible backend servers and mark them as DOWN:

upstream backend {
    ip_hash;
    server backend1.example.com:8000 DOWN;
    server backend1.example.com:8001 DOWN;
    server backend2.example.com:8000 DOWN;
    server backend2.example.com:8001 DOWN;
    . . .
    server backendN.example.com:8000 DOWN;
    server backendN.example.com:8001 DOWN;
}

Then, I would like to have nginx do a health check to each downed backend once every 5 minutes (or a configurable interval). If the health check times out, the backend stays down. If the health check returns "UP", the server is marked up. If the check returns "STARTING", the health check will be made again in a minute (or a configurable interval). If the check returns "STOPPING" or fails a configurable number of checks (like implemented now), the server is marked down. An UPed server should be checked frequently (every couple of seconds) for a status change, and a DOWNed server should be checked less frequently. As requested in my first post to this list, I also want the server to have a MAXCONN attribute and "first" load balancing. I would like the health checks to also be able to dynamically change the MAXCONN setting, which would be checked before sending new requests to the backend. This way, the backend can "throttle" itself, by only allowing a few connections on startup to warm up the server and gradually increasing the potential number of requests that it can handle. The load balancing algorithms would need to be adjusted to honor the dynamic MAXCONN attribute when deciding to send a request to the upstream server. I only need HTTP health checks supported, and the statuses UP, STARTING, STOPPING, and DOWN could be http status codes instead of text. I like using health checks in this manner since the backend server has input into when and how many requests it can serve. And, I don't have to have a backend server edit the load balancer's configuration file and reload nginx when I want to take the server offline for maintenance or to deploy a new version of the backend app (the new app can start listening on port 8001 for requests while the old app marks the port 8000 server DOWN and finishes handling current requests).
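[For reference, stock nginx already has a static version of this flag - spelled lowercase "down" in the actual syntax - though flipping it still means editing the file and reloading; the dynamic part is what is being requested. A sketch with hypothetical hostnames:]

```nginx
upstream backend {
    ip_hash;
    # "down" marks a peer as unavailable; with ip_hash it preserves
    # the current hashing of client addresses to the remaining servers.
    server backend1.example.com:8000 down;
    server backend1.example.com:8001 down;
    server backend2.example.com:8000 max_fails=3 fail_timeout=30s;
    server backend2.example.com:8001 max_fails=3 fail_timeout=30s;
}
```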
From info at pkern.at Fri Jan 4 22:18:50 2013 From: info at pkern.at (Patrik Kernstock) Date: Fri, 4 Jan 2013 23:18:50 +0100 Subject: Process a PHP5 file through a socket Message-ID: <004301cdeac9$79ae3b00$6d0ab100$@pkern.at> Hello, is it possible to process a PHP file directly through a socket, without nginx? Thanks for any ideas and help! Greets, Patschi From mdounin at mdounin.ru Fri Jan 4 22:31:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 5 Jan 2013 02:31:05 +0400 Subject: http 500 errors are not cached but go to the backend In-Reply-To: <524a31c50d5acd084a94dce0b18f6d5f.NginxMailingListEnglish@forum.nginx.org> References: <524a31c50d5acd084a94dce0b18f6d5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130104223104.GJ12313@mdounin.ru> Hello! On Fri, Jan 04, 2013 at 08:38:53AM -0500, brama wrote:
> I've set up nginx (tested with 1.2.6 and 1.3.10) to cache all requests to
> our fastcgi backends. If a cache entry expires, stale entries wil be served
> while nginx is updating the cache. I'm using the fastcgi_cache_lock feature
> to make sure only 1 request will be sent to the backend to update the
> cache.

Just a side note: the fastcgi_cache_lock directive doesn't affect updating of the cache, it only affects adding new items to the cache. To handle cache updating, "fastcgi_cache_use_stale updating" should be used (it's actually already in your config). [...]
> This seems counter-intuitive. If the http 500 document expires, nginx should
> serve the stale cache entry until it has updated the cache entry. However,
> with http_500 included in fastcgi_cache_use_stale, it never appears to
> update the cache entry at all. Bug?

The "fastcgi_cache_use_stale http_500;" in your config instructs nginx not to cache a 500 response but to return stale cached content instead. As soon as the original cached resource expires, nginx starts asking the backend for a new response, but since a 500 is returned, it serves the stale response to clients instead.
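[The interaction described above can be sketched as a configuration; this is not the poster's actual config - the cache zone name, socket path and times are hypothetical.]

```nginx
# Sketch of the interplay under discussion (names/paths hypothetical).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=fcgi:10m;

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php-fpm.sock;

    fastcgi_cache fcgi;
    fastcgi_cache_valid 200 500 10m;   # 500 responses get cached too
    fastcgi_cache_lock on;             # dedupes requests for NEW entries only

    # "updating": serve the stale entry while one request refreshes it.
    # "http_500": serve the stale entry instead of storing a fresh 500 --
    # which also means an already-cached 500 is never replaced by a newer
    # 500, the effect observed in this thread.
    fastcgi_cache_use_stale updating http_500;
}
```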
In your case the behaviour looks a bit confusing, as the "original cached resource" above is the same 500 response (cached, as fastcgi_cache_use_stale doesn't affect initial content caching), but the behaviour is as expected with your configuration - you've asked nginx to cache 500 responses and, at the same time, not to use 500 responses for cache updates. -- Maxim Dounin http://nginx.com/support.html From anoopalias01 at gmail.com Sat Jan 5 06:55:39 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 5 Jan 2013 12:25:39 +0530 Subject: For the nginx community Message-ID: Hello, Here is a sneak preview of cpXstack, the plugin for cPanel-powered servers to use the full potential of nginx + PHP-FPM: === http://youtu.be/UAmXOJIC93o === -- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Sat Jan 5 07:53:29 2013 From: agentzh at gmail.com (agentzh) Date: Fri, 4 Jan 2013 23:53:29 -0800 Subject: [ANN] ngx_openresty devel version 1.2.6.1 released In-Reply-To: References: Message-ID: Hello, folks! I am delighted to announce the new development version of ngx_openresty, 1.2.6.1: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (stable) release, 1.2.4.14:

* upgraded the Nginx core to 1.2.6.
* see for changes.
* upgraded LuaNginxModule to 0.7.13.
* bugfix: ngx.decode_args() might result in Lua string storage corruption. thanks Xu Jian for the report and Kindy Lin for the patch.
* bugfix: using a key with underscores in ngx.header.KEY resulted in Lua string storage corruption. thanks rkearsley for reporting this issue.
* bugfix: accessing ngx.var.VARIABLE allocated temporary memory buffers in the request memory pool, which could lead to an unnecessarily large memory footprint; now it allocates such buffers via the Lua GC.
* feature: automatically detect LuaJIT 2.0 on FreeBSD by default. thanks rkearsley for the patch.
* docs: explained why "local foo = require "foo"" is required for loading a Lua module. thanks rkearsley for asking.
* docs: fixed a typo in the code sample for tcpsock:receiveuntil(). thanks Yecheng Fu for the patch.
* docs: fixed a typo in the Lua code sample for ngx.re.gmatch (we forgot to add "do" there). thanks Guo Yin for reporting this issue.
* upgraded LuaRestyUploadLibrary to 0.06.
* optimize: use the pure lower-case form of the key "content-type" to index the headers table returned by ngx.req.get_headers() so as to avoid the overhead of calling the "__index" metamethod.
* upgraded SrcacheNginxModule to 0.17.
* bugfix: srcache_store would emit the misleading error message "srcache_store: skipped because response body truncated: N > 0" for HEAD requests (because a HEAD request's response never carries a body); now it just skips such responses silently. thanks Yang Jin for reporting this issue.
* bugfix: when relative paths were used in "--with-zlib=DIR", "--with-libatomic=DIR", "--with-md5=DIR", and "--with-sha1=DIR", the build system of Nginx could not find "DIR" at all. thanks LazyZhu for reporting this issue.

The HTML version of the change log with lots of helpful hyperlinks can be browsed here: http://openresty.org/#ChangeLog1002006 OpenResty (aka ngx_openresty) is a full-fledged web application server built by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster to ensure that all the components (including the Nginx core) play well together.
The latest test report can always be found here: http://qa.openresty.org Happy New Year! Best regards, -agentzh From nginx-forum at nginx.us Sat Jan 5 09:19:47 2013 From: nginx-forum at nginx.us (EricHarth) Date: Sat, 05 Jan 2013 04:19:47 -0500 Subject: Default server showing incorrectly for site homepage only Message-ID: <90b05375aac39b666d01f56540777b1a.NginxMailingListEnglish@forum.nginx.org> Hi All, I have a rather strange issue that appears to have only started in the last few days (no changes have been made in the last few days). I'll try and describe it as best I can, but please let me know if I need to post particular configs etc. I have a setup with an nginx load balancer in front of 2 backend nginx web servers running on CentOS 6.3. This site has been running fine up until now, but recently when visiting the home page it has been showing me the page I created as the default site on the webserver (a discreet notice that says the host header has not been recognised on the webserver). When you visit a sub-page, e.g. domain.com/about-us, it is fine, but visiting domain.com shows the default site. It appears that this problem can be solved by disabling the caching on the load balancer for this particular site. As soon as the cache is not used, the homepage shows correctly. The cache on the load balancer is held in a RAM drive mapped to /cache with a total available size of 256Mb. The cache configuration is as follows:

proxy_cache_path /cache levels=1:2 keys_zone=app-cache:30m max_size=126m inactive=10m;
proxy_temp_path /cache/tmp;
proxy_buffer_size 8192;
proxy_max_temp_file_size 1m;

I reduced the total size of the cache last night, as I have a feeling it was causing another problem I saw recently: when we came under extreme load, the cache filled to the max allowed size of 256Mb, which left no room on the mount point for the temp path to buffer files from the upstreams, and nginx started serving empty files to clients.
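[One way to read the failure mode described above: the cache and proxy_temp_path share one 256 MB tmpfs mount, and max_size is only enforced periodically by the cache manager process, so headroom for in-flight temp files has to be left explicitly. A sketch - the values are illustrative, not a recommendation:]

```nginx
# Keep max_size well under the mount size so the temp path (on the same
# mount) retains room for in-flight upstream responses; brief overshoot
# of max_size is possible between cache manager passes.
proxy_cache_path /cache levels=1:2 keys_zone=app-cache:30m
                 max_size=126m inactive=10m;
proxy_temp_path  /cache/tmp;
```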
I've looked in the error log for the particular site and found a number of instances of:

2013/01/03 05:47:24 [crit] 22889#0: *352983 pwrite() "/cache/tmp/0000028264" failed (28: No space left on device) while reading upstream, client: 121.58.173.7, server: www.domain.com, request: "GET /images/slider-plus.gif HTTP/1.1", upstream: "http://10.0.100.193:80/images/slider-plus.gif", host: "www.domain.com", referrer: "http://www.domain.com/example-page"

A df -h shows the /cache mount to be using only 1% of the available space at the moment, so I'm not sure why I'm still getting this error. The nginx version on the load balancer is 1.0.11 (built from source). The nginx version on the web server is 1.0.15 (installed from the epel package). I realise that nginx is in need of updating; I only realised this yesterday. If this is the problem, then I can bring forward the plans to update it, but it'd be useful to have confirmation of what this problem might be and how it can be solved. Please feel free to request any extra info needed to help diagnose this situation. Many thanks Eric Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234716,234716#msg-234716 From nginx-forum at nginx.us Sat Jan 5 15:30:46 2013 From: nginx-forum at nginx.us (Lekensteyn) Date: Sat, 05 Jan 2013 10:30:46 -0500 Subject: nginx + FollowSymLinks owner verification In-Reply-To: <20120426144308.GC31671@mdounin.ru> References: <20120426144308.GC31671@mdounin.ru> Message-ID: Maxim, I found that the disable_symlinks option does not work properly when the permissions are restrictive. Please see my observations on http://serverfault.com/q/463243/51929. In summary: ngx_file_info_wrapper() tries to open() a file if symlinks are disabled. That fails if nginx does not have read permissions for the said file.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,225152,234722#msg-234722 From kevin at my.walr.us Sat Jan 5 16:15:14 2013 From: kevin at my.walr.us (KT Walrus) Date: Sat, 5 Jan 2013 11:15:14 -0500 Subject: How to cap server load? Message-ID: <203006A2-6415-4637-8F89-6F6F3765AC24@my.walr.us> I really want to ensure that my web servers are not overloaded. Can I do this with nginx? That is, is there a variable I could test to decide whether nginx should send the request to the local PHP backend or to forward the request to other nginx servers in the server farm, based on the load of the PHP backend? Maybe a variable that contains how many concurrent requests to nginx are waiting for a response? From vbart at nginx.com Sat Jan 5 16:26:00 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 5 Jan 2013 20:26:00 +0400 Subject: nginx + FollowSymLinks owner verification In-Reply-To: References: <20120426144308.GC31671@mdounin.ru> Message-ID: <201301052026.00990.vbart@nginx.com> On Saturday 05 January 2013 19:30:46 Lekensteyn wrote: > Maxim, I found that the disable_symlinks option does not work properly when > the permissions are restrictive. Please see my observations on > http://serverfault.com/q/463243/51929. > > In summary: ngx_file_info_wrapper() tries to open() a file if symlinks are > disabled. That fails if nginx does not have read permissions for the said > file. > So, you found exactly what the documentation says: http://nginx.org/r/disable_symlinks wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Sat Jan 5 17:41:58 2013 From: nginx-forum at nginx.us (Lekensteyn) Date: Sat, 05 Jan 2013 12:41:58 -0500 Subject: nginx + FollowSymLinks owner verification In-Reply-To: <201301052026.00990.vbart@nginx.com> References: <201301052026.00990.vbart@nginx.com> Message-ID: <535bfed0d5c55969a92e1c03e057c7f9.NginxMailingListEnglish@forum.nginx.org> I consider it a feature if try_files and if can really check whether a file exists or not (rather than whether it is accessible). I have cooked up a patch [1] that implements this functionality. Please review; comments are welcome. Note: this patch changes behaviour. Previously, files which were not accessible were simply skipped. After applying this patch, files which exist but are not accessible are not skipped. Maybe an option can be added to try_files and if to toggle this behavior? Regards, Peter [1]: http://lekensteyn.nl/files/0001-Do-not-require-read-permissions-for-try_files-if.patch Posted at Nginx Forum: http://forum.nginx.org/read.php?2,225152,234726#msg-234726 From stef at scaleengine.com Sat Jan 5 18:20:37 2013 From: stef at scaleengine.com (Stefan Caunter) Date: Sat, 5 Jan 2013 13:20:37 -0500 Subject: How to cap server load? In-Reply-To: <203006A2-6415-4637-8F89-6F6F3765AC24@my.walr.us> References: <203006A2-6415-4637-8F89-6F6F3765AC24@my.walr.us> Message-ID: You need to test the response time of a sample PHP script. Mark the back end as down if it fails the response time threshold a certain number of times. After you back off, it should recover health if your algorithm is working. Remember, the database is likely to be the ultimate bottleneck for PHP performance. ---- Stefan Caunter https://www.scaleengine.com/ On Sat, Jan 5, 2013 at 11:15 AM, KT Walrus wrote: > I really want to ensure that my web servers are not overloaded. > > Can I do this with nginx?
> That is, is there a variable I could test to decide whether nginx should send the request to the local PHP backend or to forward the request to other nginx servers in the server farm, based on the load of the PHP backend? Maybe a variable that contains how many concurrent requests to nginx are waiting for a response? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From kevin at my.walr.us Sat Jan 5 18:32:17 2013 From: kevin at my.walr.us (KT Walrus) Date: Sat, 5 Jan 2013 13:32:17 -0500 Subject: How to cap server load? In-Reply-To: References: <203006A2-6415-4637-8F89-6F6F3765AC24@my.walr.us> Message-ID: <2BFCDE0C-C57A-4A3C-ABB6-9142405C2EAA@my.walr.us> Measuring actual response time is a good idea, but I'm just looking for a cruder limit: setting a maximum number of concurrent requests for the localhost server before requests are "bounced" to another backend. But, do you know how to do the "response time" limit within NGINX? Or, do I need to do this test with scripts outside NGINX, and have all load balancers that send requests to this backend do health checks (again outside NGINX) and edit the configuration file and reload NGINX? If so, I might as well put HAProxy in front of NGINX, which can do what I want. I was looking for a simple way within NGINX to see how many concurrent requests there are for the localhost backend. Just exploring my options... Kevin On Jan 5, 2013, at 1:20 PM, Stefan Caunter wrote: > You need to test the response time of a sample php script. Mark the > back end as down if it fails the response time threshold a certain > number of times. After you back off, it should recover health if your > algorithm is working. Remember, the database is likely to be the > ultimate performance issue with php performance.
> > > ---- > > Stefan Caunter > https://www.scaleengine.com/ > > > On Sat, Jan 5, 2013 at 11:15 AM, KT Walrus wrote: >> I really want to ensure that my web servers are not overloaded. >> >> Can I do this with nginx? >> >> That is, is there a variable I could test to decide whether nginx should send the request to the local PHP backend or to forward the request to other nginx servers in the server farm, based on the load of the PHP backend? Maybe a variable that contains how many concurrent requests to nginx are waiting for a response? >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From stef at scaleengine.com Sat Jan 5 23:35:09 2013 From: stef at scaleengine.com (Stefan Caunter) Date: Sat, 5 Jan 2013 18:35:09 -0500 Subject: How to cap server load? In-Reply-To: <2BFCDE0C-C57A-4A3C-ABB6-9142405C2EAA@my.walr.us> References: <203006A2-6415-4637-8F89-6F6F3765AC24@my.walr.us> <2BFCDE0C-C57A-4A3C-ABB6-9142405C2EAA@my.walr.us> Message-ID: There are a number of ways; varnish does this with its director back end definitions, gdnsd also lets you use health checks to manage server pools, and you have mentioned haproxy. The reason I mentioned database at the beginning, is that you ultimately have to "protect" the database with your system, by identifying which requests are hurting you most frequently, and looking for creative ways to reduce that pressure on the database. Unless you can provide relief and backend pool management with a protection system, you can make things worse with larger webservers and more backends. 
On Sat, Jan 5, 2013 at 1:32 PM, KT Walrus wrote: > Good idea of measuring actual response time, but I'm just looking to for a more crude limit of setting a maximum number of concurrent requests for the localhost server before requests are "bounced" to another backend. > > But, do you know how to do the "response time" limit within NGINX? Or, do I need to do this test with scripts outside NGINX and have all load balancers that send requests to this backend do health checks (again outside NGINX) and edit configuration file and reload NGINX? > > If so, I might as well put HAProxy in front of NGINX which can do what I want. > > I was looking for a simple way within NGINX to see how many concurrent requests there are for the localhost backend. > > Just exploring my options... > > Kevin > > On Jan 5, 2013, at 1:20 PM, Stefan Caunter wrote: > >> You need to test the response time of a sample php script. Mark the >> back end as down if it fails the response time threshold a certain >> number of times. After you back off, it should recover health if your >> algorithm is working. Remember, the database is likely to be the >> ultimate performance issue with php performance. >> >> >> ---- >> >> Stefan Caunter >> https://www.scaleengine.com/ >> >> >> On Sat, Jan 5, 2013 at 11:15 AM, KT Walrus wrote: >>> I really want to ensure that my web servers are not overloaded. >>> >>> Can I do this with nginx? >>> >>> That is, is there a variable I could test to decide whether nginx should send the request to the local PHP backend or to forward the request to other nginx servers in the server farm, based on the load of the PHP backend? Maybe a variable that contains how many concurrent requests to nginx are waiting for a response? 
>>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jan 7 01:03:05 2013 From: nginx-forum at nginx.us (anonymous-one) Date: Sun, 06 Jan 2013 20:03:05 -0500 Subject: Why does nginx sometimes send Connection: close to Connection: keep-alive requests? Message-ID: <91ee864aef7be96923709809f06820d2.NginxMailingListEnglish@forum.nginx.org> This may be a bit difficult to explain, but I will try my best. We have the following setup: [ BOX 1 : NGINX Frontend ] ---reverse-proxy---> [ BOX 2: NGINX Backend ---> PHP-FPM ] Upstream keepalives are enabled on BOX 1 as follows:

upstream backend {
    server 1.2.3.4;
    keepalive 512;
}

Keepalives are enabled on BOX 2 as follows:

keepalive_timeout 86400s;
keepalive_requests 10485760;

Yes, really high values... BOX 2 never sees any external traffic. It's all coming just from the front end (BOX 1). We have noticed that sometimes BOX 2 will return a Connection: close header, and leave the connection in TIME_WAIT state, EVEN THOUGH the request came with a Connection: keep-alive header. This would be correct behavior if BOX 2 wanted to close the connection... But why would it want to? We have sniffed this info via netstat AND ngrep. We are 100% sure BOX 2 sometimes sends back a Connection: close header, and the connection is left in a TIME_WAIT state. We run the ngrep utility and watch netstat -na | grep TIME_WAIT | grep BOX1IP | etc etc etc... As soon as a Connection: close is sent, the count of TIME_WAIT sockets increases.
So to summarize: in what situations would nginx dispatch a Connection: close to a client who makes a request with Connection: keep-alive? Worth noting: generally, the upstream keepalives work... There is a high number of reqs/sec happening. These Connection: close events happen rarely, but frequently enough to warrant this lengthy post ;) Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234746,234746#msg-234746 From info at pkern.at Mon Jan 7 07:05:21 2013 From: info at pkern.at (Patrik Kernstock) Date: Mon, 7 Jan 2013 08:05:21 +0100 Subject: Webserver crashes sometimes - don't know why Message-ID: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at> Hello, my nginx webserver crashes sometimes and I don't know why. It doesn't happen every day, and never at the same time as the crash before. I just sometimes get a message from my monitoring service that my HTTP server isn't working anymore. Then I restart nginx and everything works fine again - until the next mysterious crash. So how can I find out why it crashes? I use the latest development version of nginx. I have had the problem since 1.3.8 or 1.3.9 (I guess). Thanks for any help! :) Greets from Austria, Patrik / Patschi From luky-37 at hotmail.com Mon Jan 7 09:04:43 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 7 Jan 2013 10:04:43 +0100 Subject: Webserver crashes sometimes - don't know why In-Reply-To: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at> References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at> Message-ID: So you are actually running 1.3.10? What modules have you built in, and what's your exact configuration? What is the error logfile saying at crash time? You may need to run a debug build of nginx to track this down further [1]...
[1] http://nginx.org/en/docs/debugging_log.html ---------------------------------------- > From: info at pkern.at > To: nginx at nginx.org > Subject: Webserver crashes sometimes - don't know why > Date: Mon, 7 Jan 2013 08:05:21 +0100 > > Hello, > > my nginx webserver crashes sometimes and I don't know why. It's not on every > day and not at the same time as the crash before. I just get sometimes a > message from my monitoring service that my http server isn't working > anymore. Then I restart nginx and everything is working fine again - till > the next mysterious crash. > > So how I can find out why it crashes? I use the latest development version > of nginx. I have the problem since 1.3.8 or 1.3.9 (I guess). > > Thanks for any help! :) > > Greets from Austria, > Patrik / Patschi > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jan 7 09:16:54 2013 From: nginx-forum at nginx.us (cyberjar09) Date: Mon, 07 Jan 2013 04:16:54 -0500 Subject: http to https rewrite, non-standard port? In-Reply-To: <4CFCCD3E.5080408@jmsd.co.uk> References: <4CFCCD3E.5080408@jmsd.co.uk> Message-ID: <6cd84074e76c18c6e593161db27fd8e1.NginxMailingListEnglish@forum.nginx.org> I tried the above but Cant seem to get it working : please see below ...... include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; server { listen 9000; server_name iossapp1.com; if ($scheme = 'http') { rewrite ^ https://$server_name:9443$request_uri? 
permanent; } } ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; server { listen 9443 ssl; server_name iossapp1.com; error_page 497 https://$host:9443$request_uri; location / { proxy_pass http://127.0.0.1:9001; proxy_set_header Host $host; # proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } } Any help would be much appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,155978,234760#msg-234760 From info at pkern.at Mon Jan 7 09:18:26 2013 From: info at pkern.at (Patrik Kernstock) Date: Mon, 7 Jan 2013 10:18:26 +0100 Subject: AW: Webserver crashes sometimes - don't know why In-Reply-To: References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at> Message-ID: <004601cdecb7$f3790f60$da6b2e20$@pkern.at> Hi Lukas, No, I currently have the latest development version from svn of 01.01.2013 - 1.3.11 (Revision 5001). I only have one installed module: headers more. Do I have to add "debug" behind error.log? I currently have it without debug and I don't see any information about the crash. Thanks for your help :) Greets, Patschi -----Ursprüngliche Nachricht----- Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag von Lukas Tribus Gesendet: Montag, 07. Jänner 2013 10:05 An: nginx at nginx.org Betreff: RE: Webserver crashes sometimes - don't know why So you are actually running 1.3.10? What modules have you build in, and whats your exact configuration? What is the error logfile saying at crash time? You may need to run a debug build of nginx to track this down further [1]... [1] http://nginx.org/en/docs/debugging_log.html ---------------------------------------- > From: info at pkern.at > To: nginx at nginx.org > Subject: Webserver crashes sometimes - don't know why > Date: Mon, 7 Jan 2013 08:05:21 +0100 > > Hello, > > my nginx webserver crashes sometimes and I don't know why.
It's not on > every day and not at the same time as the crash before. I just get > sometimes a message from my monitoring service that my http server > isn't working anymore. Then I restart nginx and everything is working > fine again - till the next mysterious crash. > > So how I can find out why it crashes? I use the latest development > version of nginx. I have the problem since 1.3.8 or 1.3.9 (I guess). > > Thanks for any help! :) > > Greets from Austria, > Patrik / Patschi > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From luky-37 at hotmail.com Mon Jan 7 09:36:51 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 7 Jan 2013 10:36:51 +0100 Subject: AW: Webserver crashes sometimes - don't know why In-Reply-To: <004601cdecb7$f3790f60$da6b2e20$@pkern.at> References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at>, , <004601cdecb7$f3790f60$da6b2e20$@pkern.at> Message-ID: Well, actually you have to recompile the source code and configure it with --with-debug AND enable the debug command behind error.log (see link [1] at the bottom of the mail). Obviously this will log a huge amount of data, so make sure your filesystem can cope with the amount of data logged. [1] http://nginx.org/en/docs/debugging_log.html ---------------------------------------- > From: info at pkern.at > To: nginx at nginx.org > Subject: AW: Webserver crashes sometimes - don't know why > Date: Mon, 7 Jan 2013 10:18:26 +0100 > > Hi Lukas, > > No, I currently have the latest development version from svn of 01.01.2013 > - 1.3.11. (Revision 5001). > I only have one installed module: headers more > > Have I add "debug" behind error.log? I currently have it without debug and I > don't see any information about the crash.
> > Thanks for your help :) > > Greets, > Patschi > From nginx-forum at nginx.us Mon Jan 7 09:40:05 2013 From: nginx-forum at nginx.us (cyberjar09) Date: Mon, 07 Jan 2013 04:40:05 -0500 Subject: http to https rewrite, non-standard port? In-Reply-To: <6cd84074e76c18c6e593161db27fd8e1.NginxMailingListEnglish@forum.nginx.org> References: <4CFCCD3E.5080408@jmsd.co.uk> <6cd84074e76c18c6e593161db27fd8e1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <19725d325deffbd646a8931188db141d.NginxMailingListEnglish@forum.nginx.org> To add to the issue: the problem only surfaces when I POST a form to the server. Here are a few examples from the /var/log/nginx/access.log file: ================== 127.0.0.1 - - [07/Jan/2013:17:29:14 +0800] "GET /admin/logout HTTP/1.1" 302 0 "https://iossapp1.com:9443/admin/iosIndex" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" =============================== 127.0.0.1 - - [07/Jan/2013:17:32:45 +0800] "POST /admin/editStaff HTTP/1.1" 302 0 "https://iossapp1.com:9443/admin/editStaff?staff.id=612" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" 127.0.0.1 - - [07/Jan/2013:17:32:45 +0800] "GET /admin/staffIndex HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" ================================== 127.0.0.1 - - [07/Jan/2013:17:35:45 +0800] "POST /admin/authenticate HTTP/1.1" 302 0 "https://iossapp1.com:9443/admin/login" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" 127.0.0.1 - - [07/Jan/2013:17:35:45 +0800] "GET /admin/masterIndex HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" In contrast, a GET URL behaves correctly like so: 127.0.0.1 - - [07/Jan/2013:17:36:21 +0800] "GET /admin/masterIndex HTTP/1.1" 200 2324 "-"
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" I noticed that the numerical value after the response code (302, 304) is 0 when the redirect is going to be unsuccessful; in the cases where it works, the numerical value is non-zero. What does this number represent? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,155978,234763#msg-234763 From info at pkern.at Mon Jan 7 09:41:24 2013 From: info at pkern.at (Patrik Kernstock) Date: Mon, 7 Jan 2013 10:41:24 +0100 Subject: AW: AW: Webserver crashes sometimes - don't know why In-Reply-To: References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at>, , <004601cdecb7$f3790f60$da6b2e20$@pkern.at> Message-ID: <004701cdecbb$2911e310$7b35a930$@pkern.at> I already built nginx with the --with-debug flag. Well, I have 10GB for my /var/log/ partition. I hope that this is enough :) I don't know when the next crash will be, so I have to wait. Thanks for the help. If it crashes again I will report back. -----Ursprüngliche Nachricht----- Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag von Lukas Tribus Gesendet: Montag, 07. Jänner 2013 10:37 An: nginx at nginx.org Betreff: RE: AW: Webserver crashes sometimes - don't know why Well, actually you have to recompile the source-code and configure it with --with-debug AND enable the debug command behind error.log (see link [1] at the bottom of the mail). Obviously this will log a huge amounts of data, so make sure you are filesystem can cope with the amount of data logged. [1] http://nginx.org/en/docs/debugging_log.html ---------------------------------------- > From: info at pkern.at > To: nginx at nginx.org > Subject: AW: Webserver crashes sometimes - don't know why > Date: Mon, 7 Jan 2013 10:18:26 +0100 > > Hi Lukas, > > No, I currently have the latest development version from svn of > 01.01.2013 > - 1.3.11. (Revision 5001).
> I only have one installed module: headers more > > Have I add "debug" behind error.log? I currently have it without debug > and I don't see any information about the crash. > > Thanks for your help :) > > Greets, > Patschi > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From luky-37 at hotmail.com Mon Jan 7 09:54:35 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 7 Jan 2013 10:54:35 +0100 Subject: AW: AW: Webserver crashes sometimes - don't know why In-Reply-To: <004701cdecbb$2911e310$7b35a930$@pkern.at> References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at>, , , , <004601cdecb7$f3790f60$da6b2e20$@pkern.at>, , <004701cdecbb$2911e310$7b35a930$@pkern.at> Message-ID: Well, I would monitor the free space and the IO load carefully. If you have a lot of traffic this may become a problem. ---------------------------------------- > From: info at pkern.at > To: nginx at nginx.org > Subject: AW: AW: Webserver crashes sometimes - don't know why > Date: Mon, 7 Jan 2013 10:41:24 +0100 > > I already built nginx with the --with-debug flag. Well, I have 10GB for my > /var/log/ partition. I hope that this is enough :) > I don't know when the next crash will be, so I have to wait. Thanks for the > help. I it crashes again I will answer back. > > > -----Urspr?ngliche Nachricht----- > Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag von > Lukas Tribus > Gesendet: Montag, 07. J?nner 2013 10:37 > An: nginx at nginx.org > Betreff: RE: AW: Webserver crashes sometimes - don't know why > > > Well, actually you have to recompile the source-code and configure it with > --with-debug AND enable the debug command behind error.log (see link [1] at > the bottom of the mail). > > Obviously this will log a huge amounts of data, so make sure you are > filesystem can cope with the amount of data logged. 
> > > [1] http://nginx.org/en/docs/debugging_log.html > > From nginx-forum at nginx.us Mon Jan 7 10:01:30 2013 From: nginx-forum at nginx.us (cyberjar09) Date: Mon, 07 Jan 2013 05:01:30 -0500 Subject: http to https rewrite, non-standard port? In-Reply-To: <647f5665ee1a4ec1617375a480342ce0.NginxMailingListEnglish@forum.nginx.org> References: <20101206120059.GE42828@rambler-co.ru> <647f5665ee1a4ec1617375a480342ce0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6f0c67597b4e0eb63fe5499802bb3395.NginxMailingListEnglish@forum.nginx.org> I found the solution: My entry for proxy_set_header Host $host:$server_port; was originally missing the $server_port. Source: http://stackoverflow.com/questions/10168155/nginx-proxy-https-to-http-on-non-standard-port Now it works! =D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,155978,234767#msg-234767 From info at pkern.at Mon Jan 7 11:45:52 2013 From: info at pkern.at (Patrik Kernstock) Date: Mon, 7 Jan 2013 12:45:52 +0100 Subject: AW: AW: Webserver crashes sometimes - don't know why In-Reply-To: References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at>, , <004601cdecb7$f3790f60$da6b2e20$@pkern.at> Message-ID: <005601cdeccc$8bd97420$a38c5c60$@pkern.at> I just found something interesting in the "dmesg" log: [5294633.862284] __ratelimit: 20 callbacks suppressed [5294633.862288] nginx[20568]: segfault at aa ip 00007fdc5a44eb41 sp 00007fff0260a1a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294634.659735] nginx[20569]: segfault at aa ip 00007fdc5a44eb41 sp 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294634.818078] nginx[20571]: segfault at aa ip 00007fdc5a44eb41 sp 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294634.819429] nginx[20581]: segfault at aa ip 00007fdc5a44eb41 sp 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294634.920149] nginx[20567]: segfault at aa ip 00007fdc5a44eb41 sp 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294635.313816]
nginx[20589]: segfault at aa ip 00007fdc5a44eb41 sp 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294635.402682] nginx[20590]: segfault at aa ip 00007fdc5a44eb41 sp 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294682.926163] nginx[20596]: segfault at 4a ip 00000000004459df sp 00007fff0260a0f0 error 4 in nginx[400000+a3000] [5294685.155117] nginx[20595]: segfault at 4a ip 00000000004459df sp 00007fff0260a280 error 4 in nginx[400000+a3000] [5294686.158466] nginx[21276]: segfault at 4a ip 00000000004459df sp 00007fff0260a130 error 4 in nginx[400000+a3000] [5294688.683947] nginx[21313]: segfault at 1 ip 00007fdc5a44eb41 sp 00007fff0260a0c8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] [5294695.987059] nginx[21361]: segfault at 1193d ip 00007fdc5a44eb41 sp 00007fff0260a0c8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] Seems to be an error in libc... -----Ursprüngliche Nachricht----- Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag von Lukas Tribus Gesendet: Montag, 07. Jänner 2013 10:37 An: nginx at nginx.org Betreff: RE: AW: Webserver crashes sometimes - don't know why Well, actually you have to recompile the source-code and configure it with --with-debug AND enable the debug command behind error.log (see link [1] at the bottom of the mail). Obviously this will log a huge amounts of data, so make sure you are filesystem can cope with the amount of data logged. [1] http://nginx.org/en/docs/debugging_log.html ---------------------------------------- > From: info at pkern.at > To: nginx at nginx.org > Subject: AW: Webserver crashes sometimes - don't know why > Date: Mon, 7 Jan 2013 10:18:26 +0100 > > Hi Lukas, > > No, I currently have the latest development version from svn of > 01.01.2013 > - 1.3.11. (Revision 5001). > I only have one installed module: headers more > > Have I add "debug" behind error.log? I currently have it without debug > and I don't see any information about the crash.
> > Thanks for your help :) > > Greets, > Patschi > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From luky-37 at hotmail.com Mon Jan 7 22:49:41 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 7 Jan 2013 23:49:41 +0100 Subject: AW: AW: Webserver crashes sometimes - don't know why In-Reply-To: <005601cdeccc$8bd97420$a38c5c60$@pkern.at> References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at>, , , , <004601cdecb7$f3790f60$da6b2e20$@pkern.at>, , <005601cdeccc$8bd97420$a38c5c60$@pkern.at> Message-ID: I doubt the dmesg output is enough for the developers to track down the bug. Apart from the actual debug log (which is crucial), can you provide the output of "nginx -V" and your configuration (remove confidential information like IP addresses or domain names if you need to, but leave the rest of it intact)? If you post all this information, the developers should get a better picture of what happens here. Regards, Lukas From mdounin at mdounin.ru Tue Jan 8 03:41:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Jan 2013 07:41:55 +0400 Subject: Why does nginx sometimes send Connection: close to Connection: keep-alive requests? In-Reply-To: <91ee864aef7be96923709809f06820d2.NginxMailingListEnglish@forum.nginx.org> References: <91ee864aef7be96923709809f06820d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130108034155.GB68127@mdounin.ru> Hello! On Sun, Jan 06, 2013 at 08:03:05PM -0500, anonymous-one wrote: > This may be a bit difficult to explain but I will try my best: > > We have the following setup: > > [ BOX 1 : NGINX Frontend ] ---reverse-proxy---> [ BOX 2: NGINX Backend ---> > PHP-FPM ] > > Upstream keepalives are enabled on BOX 1 as follows: > > upstream backend{ > server 1.2.3.4; > keepalive 512; > } > > Keepalives are enabled on BOX 2 as follows: > > keepalive_timeout 86400s; > keepalive_requests 10485760; > > Yes, really high values...
BOX 2 never sees any external traffic. Its all > coming just from the front end (BOX1). > > We have noticed, sometimes BOX 2 will return a Connection: close header, and > leave the connection in TIME_WAIT state EVEN THO the request came with a > Connection: keep-alive header. This is correct behavior if BOX 2 wanted to > close the connection... But why would it want to? > > We have sniffed this info via netstat AND ngrep. > > We are 100% sure BOX 2 sometimes sends back a Connection: close header, and > the connection is left in a TIME_WAIT state. When we run the ngrep utility > and watch netstat -na | grep TIME_WAIT | grep BOX1IP | etc etc etc... As > soon as a connection:close is sent, the count of TIME_WAIT sockets > increases. > > So to summarize: > > In what situations would nginx dispatch a Connection: close to a client who > makes a request with Connection: keep-alive. > > Worth nothing: > > Generally, the upstream keep alives work... There is a high number of reqs / > sec happening. These connection: close events happen rarely, but frequently > enough to warrant this lengthy post ;) Even if you configure nginx to allow an infinite keepalive timeout and lots of requests per connection - it might still need to close a connection, e.g. while returning certain errors like 400 Bad Request (where the connection might be out of sync). There are also some browser workarounds which might disable keepalive in certain situations, see http://nginx.org/r/keepalive_disable. Note that in general keepalive should not be relied on as something required, it's just an optimization. If you depend on connections being kept alive forever and never closed - you've probably done something wrong.
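For reference, upstream keepalive also depends on how the frontend proxies the request: per the nginx documentation for the upstream keepalive directive, the proxy must speak HTTP/1.1 and must not forward the client's Connection header, or the upstream will close every connection regardless of its own keepalive settings. A minimal sketch (the "backend" name and address are taken from the configuration quoted above; everything else is illustrative):

```nginx
upstream backend {
    server 1.2.3.4;
    # Maximum number of idle keepalive connections cached per worker.
    keepalive 512;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # Both directives are required for upstream keepalive:
        # the default is HTTP/1.0, and a forwarded "Connection: close"
        # would make the upstream close each connection after one request.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```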
-- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Tue Jan 8 03:48:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Jan 2013 07:48:46 +0400 Subject: AW: Webserver crashes sometimes - don't know why In-Reply-To: <005601cdeccc$8bd97420$a38c5c60$@pkern.at> References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at> <004601cdecb7$f3790f60$da6b2e20$@pkern.at> <005601cdeccc$8bd97420$a38c5c60$@pkern.at> Message-ID: <20130108034846.GC68127@mdounin.ru> Hello! On Mon, Jan 07, 2013 at 12:45:52PM +0100, Patrik Kernstock wrote: > I just found something interest in "dmesg" log: > [5294633.862284] __ratelimit: 20 callbacks suppressed > [5294633.862288] nginx[20568]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a1a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.659735] nginx[20569]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.818078] nginx[20571]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.819429] nginx[20581]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.920149] nginx[20567]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294635.313816] nginx[20589]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294635.402682] nginx[20590]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294682.926163] nginx[20596]: segfault at 4a ip 00000000004459df sp > 00007fff0260a0f0 error 4 in nginx[400000+a3000] > [5294685.155117] nginx[20595]: segfault at 4a ip 00000000004459df sp > 00007fff0260a280 error 4 in nginx[400000+a3000] > [5294686.158466] nginx[21276]: segfault at 4a ip 00000000004459df sp > 00007fff0260a130 error 4 in nginx[400000+a3000] > [5294688.683947] 
nginx[21313]: segfault at 1 ip 00007fdc5a44eb41 sp > 00007fff0260a0c8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294695.987059] nginx[21361]: segfault at 1193d ip 00007fdc5a44eb41 sp > 00007fff0260a0c8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > > Seems to be a error in libc... It's highly unlikely to be an error in libc; segfaults in libc usually happen when libc functions are called with incorrect arguments. You need to obtain a coredump and provide a backtrace, see http://wiki.nginx.org/Debugging for details, in particular these two sections: http://wiki.nginx.org/Debugging#Core_dump http://wiki.nginx.org/Debugging#Asking_for_help Please note: it would be a good idea to make sure you are able to reproduce the problem without any 3rd party modules compiled in. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Tue Jan 8 10:19:29 2013 From: nginx-forum at nginx.us (brama) Date: Tue, 08 Jan 2013 05:19:29 -0500 Subject: http 500 errors are not cached but go to the backend In-Reply-To: <20130104223104.GJ12313@mdounin.ru> References: <20130104223104.GJ12313@mdounin.ru> Message-ID: Hi Maxim, > Just a side note: the fastcgi_cache_lock directive doesn't affect > update of the cache, it only affects adding new items to the > cache. To handle cache updating the "fastcgi_cache_use_stale > updating" should be used (it's actually already in your config). > Ok, that's good to know. > The > > fastcgi_cache_use_stale http_500; > > in your config instructs nginx to don't cache 500 response but > return stale cached content instead. As soon as original cached > resource expires - nginx starts to ask backend about new response, > but since 500 is returned it returns stale response to clients > instead.
> > In your case the behaviour looks a bit confusing as "original > cached resource" above is the same 500 response (cached as > fastcgi_cache_use_stale don't affect initial content caching), but > the behaviour is as expected with your configuration - when you > ask nginx to cache 500 responses and to don't use 500 responses > for cache update at the same time. > Ok, got it. That was indeed confusing. Thanks for your explanation. Bram Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234691,234782#msg-234782 From nginx-forum at nginx.us Tue Jan 8 11:57:08 2013 From: nginx-forum at nginx.us (nurettin) Date: Tue, 08 Jan 2013 06:57:08 -0500 Subject: nginx post response doesn't get cached In-Reply-To: <20130104123255.GG12313@mdounin.ru> References: <20130104123255.GG12313@mdounin.ru> Message-ID: Here's the related configuration: proxy_cache_path /var/www/cache levels=1:2 keys_zone=kendi-cache:1000m max_size=10000m; proxy_cache_key "$request_uri|$request_body"; When I send small requests, nginx works great. When I send a large post request (long uri) I get these logs: I have debug on error.log, here's the output: 2013/01/08 13:50:25 [error] 32765#0: *1 cache key too large, increase upstream buffer size 4096, client:... 2013/01/08 13:51:01 [warn] 32765#0: *1 an upstream response is buffered to a temporary file /var/www/cache/tmp/0000000001 while reading upstream, client:... 2013/01/08 13:51:20 [notice] 300#0: http file cache: /var/www/cache 0.000M, bsize: 4096 2013/01/08 13:51:20 [notice] 32764#0: signal 17 (SIGCHLD) received 2013/01/08 13:51:20 [notice] 32764#0: cache loader process 300 exited with code 0 2013/01/08 13:51:20 [notice] 32764#0: signal 29 (SIGIO) received I'm not sure what to do here. 
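Regarding the "cache key too large, increase upstream buffer size 4096" error above: with $request_body included in proxy_cache_key, the key grows with every POST body and can exceed the buffer that has to hold it. One possible mitigation (a sketch only; the 32k values are illustrative guesses, not tested figures, and the buffer still caps the maximum usable key size) is to enlarge the upstream buffers:

```nginx
# Illustrative values: the cache key ("$request_uri|$request_body" here)
# must fit into the buffer used for the upstream response header.
proxy_buffer_size 32k;
proxy_buffers 8 32k;
```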
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234567,234783#msg-234783 From nginx-list at puzzled.xs4all.nl Tue Jan 8 19:08:33 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 08 Jan 2013 20:08:33 +0100 Subject: Primary script unknown error - can't figure out how to fix Message-ID: <50EC6EB1.6020108@puzzled.xs4all.nl> Hi all, I'm new to nginx and trying to set up a piwik virtual host on an up-to-date CentOS 6.3 box with nginx 1.2.6, php-5.3.3 with php-fpm src from the 5.3.20 release. I'm seeing some "Primary script unknown" errors which I can't figure out with the info in the book, wiki or google. Below are my configs. Hopefully someone has a clue what I am doing wrong. # ls -l /usr/share/nginx drwxr-xr-x. 12 root root 4096 Jan 8 18:47 piwik # ls -l /usr/share/nginx/piwik total 96 -rw-r-----. 1 nginx nginx 676 Aug 12 16:05 composer.json drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 config drwxr-x---. 25 nginx nginx 4096 Nov 27 11:11 core -rw-r-----. 1 nginx nginx 822 Feb 14 2005 favicon.ico -rw-rw-r--. 1 nginx nginx 273 Nov 27 11:11 How to install Piwik.html -rw-r-----. 1 nginx nginx 1611 Mar 20 2012 index.php drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 js drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 lang -rw-r-----. 1 nginx nginx 6070 Feb 13 2012 LEGALNOTICE drwxr-x---. 21 nginx nginx 4096 Nov 27 11:11 libs drwxr-x---. 6 nginx nginx 4096 Nov 27 11:11 misc -rw-r-----. 1 nginx nginx 21548 Nov 1 10:02 piwik.js -rw-r-----. 1 nginx nginx 2967 Oct 23 11:22 piwik.php drwxr-x---. 46 nginx nginx 4096 Nov 27 11:11 plugins -rw-r-----. 1 nginx nginx 2640 Mar 6 2012 README drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 tests drwxr-x---. 3 nginx nginx 4096 Nov 27 11:11 themes drwxr-x---.
2 nginx nginx 4096 Nov 27 11:11 tmp # cat /etc/nginx/nginx.conf user nginx nginx; ## as recommended in the Nginx book, p98 worker_processes 2; worker_rlimit_nofile 1024; worker_priority -5; worker_cpu_affinity 01 10; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { ## as recommended in the Nginx book, p98 worker_connections 128; ## epoll is preferred on 2.6 Linux use epoll; ## Accept as many connections as possible multi_accept on; } http { ## http://nginx.org/en/docs/http/server_names.html server_names_hash_bucket_size 64; ## MIME types include /etc/nginx/mime.types; default_type application/octet-stream; ## FastCGI include /etc/nginx/fastcgi.conf; ## define the log format log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; ## Default log and error files access_log /var/log/nginx/access.log main; error_log /var/log/nginx/error.log; ## Use sendfile() syscall to speed up I/O operations and speed up ## static file serving sendfile on; ## Handling of IPs in proxied and load balancing situations set_real_ip_from 0.0.0.0/32; # all addresses get a real IP real_ip_header X-Forwarded-For; # ip forwarded from the load balancer/proxy ## Define a zone for limiting the number of simultaneous ## connections nginx accepts. 1m means 32000 simultaneous ## sessions. We need to define for each server the limit_conn ## value refering to this or other zones limit_conn_zone $binary_remote_addr zone=arbeit:10m; ## Timeouts client_body_timeout 60; client_header_timeout 60; keepalive_timeout 10 10; send_timeout 60; ## reset lingering timed out connections. Deflect DDoS reset_timedout_connection on; ## Body size client_max_body_size 10m; ## TCP options tcp_nodelay on; ## Optimization of socket handling when using sendfile tcp_nopush on; ## Compression. 
gzip on; gzip_buffers 16 8k; gzip_comp_level 1; gzip_http_version 1.1; gzip_min_length 10; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon application/vnd.ms-fontobject font/opentype application/x-font-ttf; gzip_vary on; gzip_proxied any; # Compression for all requests ## No need for regexps. See ## http://wiki.nginx.org/NginxHttpGzipModule#gzip_disable gzip_disable "msie6"; ## Serve already compressed files directly, bypassing on-the-fly ## compression gzip_static on; ## Hide the Nginx version number. server_tokens off; ## Use a SSL/TLS cache for SSL session resume. This needs to be ## here (in this context, for session resumption to work. See this ## thread on the Nginx mailing list: ## http://nginx.org/pipermail/nginx/2010-November/023736.html ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ## Enable clickjacking protection in modern browsers. Available in ## IE8 also. See ## https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_response_header add_header X-Frame-Options SAMEORIGIN; ## add Maxmind GeoIP databases ## http://dev.maxmind.com/geoip/geolite geoip_country /etc/nginx/GeoIP.dat; ##geoip_country /etc/nginx/GeoIPv6.dat; geoip_city /etc/nginx/GeoLiteCity.dat; ##geoip_city /etc/nginx/GeoLiteCityv6.dat; ## Include the upstream servers for PHP FastCGI handling config. include upstream_phpcgi.conf; ## FastCGI cache zone definition. include fastcgi_cache_zone.conf; ## Include the php-fpm status allowed hosts configuration block. ## Uncomment to enable if you're running php-fpm. include php_fpm_status_allowed_hosts.conf; ## Include all vhosts. include /etc/nginx/sites-enabled/*; } # cat /etc/nginx/fastcgi.conf ### fastcgi configuration. 
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_buffers 256 4k; fastcgi_intercept_errors on; ## allow 4 hrs - pass timeout responsibility to upstream fastcgi_read_timeout 14400; fastcgi_index index.php; # cat /etc/nginx/upstream_phpcgi.conf ### Upstream configuration for PHP FastCGI. ## Add as many servers as needed. Cf. http://wiki.nginx.org/HttpUpstreamModule. upstream phpcgi { ##server unix:/var/run/php-fpm.sock; server 127.0.0.1:9000; } # cat /etc/nginx/fastcgi_cache_zone.conf fastcgi_cache_path /var/lib/nginx/tmp/fastcgi levels=1:2 keys_zone=fcgicache:100k max_size=10M inactive=3h loader_threshold=2592000000 loader_sleep=1 loader_files=100000; # cat /etc/nginx/sites-available/piwik.conf ### Nginx configuration for Piwik. ### based on https://github.com/perusio/piwik-nginx server { listen :80; # IPv4 listen :80; # IPv6 server_name piwik.domain.com; # always redirect to the ssl version rewrite ^ https://$server_name$request_uri? permanent; } server { listen :80; # IPv4 listen :80; # IPv6 ## SSL config ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5:!RC4; ssl_prefer_server_ciphers on; # public server cert ssl_certificate /etc/pki/tls/certs/piwik.domain.com.crt; # private server key without pass ssl_certificate_key /etc/pki/tls/private/piwik.domain.com.key; # public CA cert to verify client certs ssl_client_certificate /etc/pki/tls/certs/My_CA.crt; ## verify client certs ssl_verify_client on; #ssl_verify_depth 1; limit_conn arbeit 32; server_name piwik.domain.com; ## Access and error log files. access_log /var/log/nginx/piwik.domain.com_access.log; error_log /var/log/nginx/piwik.domain.com_error.log; root /usr/share/nginx/piwik.domain.com; index index.php; ## Disallow any usage of piwik assets if referer is non valid. location ~* ^.+\.(?:css|gif|jpe?g|js|png|swf)$ { ## Defining the valid referers. 
valid_referers none blocked *.domain.com domain.com; if ($invalid_referer) { return 444; } expires max; ## No need to bleed constant updates. Send the all shebang in one ## fell swoop. tcp_nodelay off; ## Set the OS file cache. open_file_cache max=500 inactive=120s; open_file_cache_valid 45s; open_file_cache_min_uses 2; open_file_cache_errors off; } ## Support for favicon. Return a 204 (No Content) if the favicon ## doesn't exist. location = /favicon.ico { try_files /favicon.ico =204; } ## Try all locations and relay to index.php as a fallback. location / { try_files $uri /index.php?$query_string; } ## Relay all index.php requests to fastcgi. location = /index.php { fastcgi_pass 127.0.0.1:9001; ## FastCGI cache. ## cache ui for 5m (set the same interval of your crontab) include sites-available/fcgi_piwik_cache.conf; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## Relay all piwik.php requests to fastcgi. location = /piwik.php { fastcgi_pass 127.0.0.1:9001; include sites-available/fcgi_piwik_long_cache.conf; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## Any other attempt to access PHP files redirects to the root. location ~* ^.+\.php$ { return 302 /; } ## Redirect to the root if attempting to access a txt file. location ~* (?:DESIGN|(?:gpl|README|LICENSE)[^.]*|LEGALNOTICE)(?:\.txt)*$ { return 302 /; } ## Disallow access to several helper files. location ~* \.(?:bat|html?|git|ini|sh|svn[^.]*|txt|tpl|xml)$ { return 404; } ## No crawling of this site for bots that obey robots.txt. location = /robots.txt { return 200 "User-agent: *\nDisallow: /\n"; } ## Including the php-fpm status and ping pages config. ## Uncomment to enable if you're running php-fpm. include php_fpm_status_vhost.conf; } # server # cat /etc/nginx/fastcgi_params ### fastcgi parameters. 
fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # Maxmind GeoIP for Piwik # http://piwik.org/faq/how-to/#faq_166 fastcgi_param GEOIP_ADDR $remote_addr; fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; fastcgi_param GEOIP_REGION $geoip_region; fastcgi_param GEOIP_REGION_NAME $geoip_region_name; fastcgi_param GEOIP_CITY $geoip_city; fastcgi_param GEOIP_AREA_CODE $geoip_area_code; fastcgi_param GEOIP_LATITUDE $geoip_latitude; fastcgi_param GEOIP_LONGITUDE $geoip_longitude; fastcgi_param GEOIP_POSTAL_CODE $geoip_postal_code; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; # cat /etc/php-fpm.conf [global] pid = run/php-fpm.pid error_log = log/php-fpm.log syslog.facility = daemon log_level = notice emergency_restart_threshold = 10 emergency_restart_interval = 1 process_control_timeout = 10s daemonize = yes rlimit_files = 131072 rlimit_core = unlimited events.mechanism = epoll [piwik] user = nginx group = nginx listen = 127.0.0.1:9001 listen.owner = nginx listen.group = nginx listen.mode = 0666 listen.backlog = -1 listen.allowed_clients = 127.0.0.1 pm = dynamic pm.max_children = 10 pm.start_servers = 3 pm.min_spare_servers = 2 pm.max_spare_servers = 4 pm.status_path = /fpm-status ping.path = /ping access.log = 
/var/log/php-fpm-$pool.access.log slowlog = /var/log/php-fpm-$pool.slow.log request_slowlog_timeout = 5s request_terminate_timeout = 120s catch_workers_output = yes security.limit_extensions = .php env[HOSTNAME] = $HOSTNAME env[PATH] = /usr/local/bin:/usr/bin:/bin env[TMP] = /tmp env[TMPDIR] = /tmp env[TEMP] = /tmp The various errors I see are: 2013/01/08 19:31:00 [error] 638#0: *3 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: , server: piwik.domain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "piwik.domain.com" 2013/01/08 19:32:00 [error] 638#0: *15 FastCGI sent in stderr: "Primary script unknown", client: , server: piwik.domain.com, request: "GET / HTTP/1.1", host: "piwik.domain.com" Any advice or pointers to docs where I can find a solution are most appreciated. Thanks, Patrick From mdounin at mdounin.ru Tue Jan 8 19:31:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Jan 2013 23:31:54 +0400 Subject: nginx post response doesn't get cached In-Reply-To: References: <20130104123255.GG12313@mdounin.ru> Message-ID: <20130108193154.GH73378@mdounin.ru> Hello! On Tue, Jan 08, 2013 at 06:57:08AM -0500, nurettin wrote: > Here's the related configuration: > > proxy_cache_path /var/www/cache levels=1:2 keys_zone=kendi-cache:1000m > max_size=10000m; > proxy_cache_key "$request_uri|$request_body"; > > > When I send small requests, nginx works great. > When I send a large post request (long uri) I get these logs: > > I have debug on error.log, here's the output: > > 2013/01/08 13:50:25 [error] 32765#0: *1 cache key too large, increase > upstream buffer size 4096, client:... > 2013/01/08 13:51:01 [warn] 32765#0: *1 an upstream response is buffered to a > temporary file /var/www/cache/tmp/0000000001 while reading upstream, > client:... 
> 2013/01/08 13:51:20 [notice] 300#0: http file cache: /var/www/cache 0.000M, > bsize: 4096 > 2013/01/08 13:51:20 [notice] 32764#0: signal 17 (SIGCHLD) received > 2013/01/08 13:51:20 [notice] 32764#0: cache loader process 300 exited with > code 0 > 2013/01/08 13:51:20 [notice] 32764#0: signal 29 (SIGIO) received > > I'm not sure what to do here. Ah, ok, the error message logged suggests that you need to increase upstream buffer - it's used to store cache header and it's too small to hold your cache key with request body included. The "increase upstream buffer size" wording is indeed not very helpful, it was made more specific in nginx 1.1.0+. In case of proxy, you have to increase proxy_buffer_size, see http://nginx.org/r/proxy_buffer_size. -- Maxim Dounin http://nginx.com/support.html From steve at greengecko.co.nz Tue Jan 8 19:44:17 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 09 Jan 2013 08:44:17 +1300 Subject: Primary script unknown error - can't figure out how to fix In-Reply-To: <50EC6EB1.6020108@puzzled.xs4all.nl> References: <50EC6EB1.6020108@puzzled.xs4all.nl> Message-ID: <50EC7711.40800@greengecko.co.nz> At first glance I can't see anything listening on port 443. Cheers, Steve On 09/01/13 08:08, Patrick Lists wrote: > Hi all, > > I'm new to nginx and trying to setup a piwik virtual hosts on an > up-to-date CentOS 6.3 box with nginx 1.2.6, php-5.3.3 with php-fpm src > from the 5.3.20 release. I'm seeing some "Primary script unknown" > errors which I can't figure out with the info in the book, wiki or > google. Below are my configs. Hopefully someone has a clue what I am > doing wrong. > > # ls -l /usr/share/nginx > drwxr-xr-x. 12 root root 4096 Jan 8 18:47 piwik > > # ls -l /usr/share/nginx/piwik > total 96 > -rw-r-----. 1 nginx nginx 676 Aug 12 16:05 composer.json > drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 config > drwxr-x---. 25 nginx nginx 4096 Nov 27 11:11 core > -rw-r-----. 1 nginx nginx 822 Feb 14 2005 favicon.ico > -rw-rw-r--. 
1 nginx nginx 273 Nov 27 11:11 How to install Piwik.html > -rw-r-----. 1 nginx nginx 1611 Mar 20 2012 index.php > drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 js > drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 lang > -rw-r-----. 1 nginx nginx 6070 Feb 13 2012 LEGALNOTICE > drwxr-x---. 21 nginx nginx 4096 Nov 27 11:11 libs > drwxr-x---. 6 nginx nginx 4096 Nov 27 11:11 misc > -rw-r-----. 1 nginx nginx 21548 Nov 1 10:02 piwik.js > -rw-r-----. 1 nginx nginx 2967 Oct 23 11:22 piwik.php > drwxr-x---. 46 nginx nginx 4096 Nov 27 11:11 plugins > -rw-r-----. 1 nginx nginx 2640 Mar 6 2012 README > drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 tests > drwxr-x---. 3 nginx nginx 4096 Nov 27 11:11 themes > drwxr-x---. 2 nginx nginx 4096 Nov 27 11:11 tmp > > > # cat /etc/nginx/nginx.conf > > user nginx nginx; > > ## as recommended in the Nginx book, p98 > worker_processes 2; > worker_rlimit_nofile 1024; > worker_priority -5; > worker_cpu_affinity 01 10; > > error_log /var/log/nginx/error.log; > pid /var/run/nginx.pid; > > events { > ## as recommended in the Nginx book, p98 > worker_connections 128; > ## epoll is preferred on 2.6 Linux > use epoll; > ## Accept as many connections as possible > multi_accept on; > } > > http { > ## http://nginx.org/en/docs/http/server_names.html > server_names_hash_bucket_size 64; > > ## MIME types > include /etc/nginx/mime.types; > default_type application/octet-stream; > > ## FastCGI > include /etc/nginx/fastcgi.conf; > > ## define the log format > log_format main '$remote_addr - $remote_user [$time_local] > "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > ## Default log and error files > access_log /var/log/nginx/access.log main; > error_log /var/log/nginx/error.log; > > ## Use sendfile() syscall to speed up I/O operations and speed up > ## static file serving > sendfile on; > > ## Handling of IPs in proxied and load balancing situations > set_real_ip_from 0.0.0.0/32; # all addresses get a 
real IP > real_ip_header X-Forwarded-For; # ip forwarded from the load > balancer/proxy > > ## Define a zone for limiting the number of simultaneous > ## connections nginx accepts. 1m means 32000 simultaneous > ## sessions. We need to define for each server the limit_conn > ## value refering to this or other zones > limit_conn_zone $binary_remote_addr zone=arbeit:10m; > > ## Timeouts > client_body_timeout 60; > client_header_timeout 60; > keepalive_timeout 10 10; > send_timeout 60; > > ## reset lingering timed out connections. Deflect DDoS > reset_timedout_connection on; > > ## Body size > client_max_body_size 10m; > > ## TCP options > tcp_nodelay on; > ## Optimization of socket handling when using sendfile > tcp_nopush on; > > ## Compression. > gzip on; > gzip_buffers 16 8k; > gzip_comp_level 1; > gzip_http_version 1.1; > gzip_min_length 10; > gzip_types text/plain text/css application/x-javascript > text/xml application/xml application/xml+rss text/javascript > image/x-icon application/vnd.ms-fontobject font/opentype > application/x-font-ttf; > gzip_vary on; > gzip_proxied any; # Compression for all requests > ## No need for regexps. See > ## http://wiki.nginx.org/NginxHttpGzipModule#gzip_disable > gzip_disable "msie6"; > > ## Serve already compressed files directly, bypassing on-the-fly > ## compression > gzip_static on; > > ## Hide the Nginx version number. > server_tokens off; > > ## Use a SSL/TLS cache for SSL session resume. This needs to be > ## here (in this context, for session resumption to work. See this > ## thread on the Nginx mailing list: > ## http://nginx.org/pipermail/nginx/2010-November/023736.html > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > > ## Enable clickjacking protection in modern browsers. Available in > ## IE8 also. 
See > ## > https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_response_header > add_header X-Frame-Options SAMEORIGIN; > > ## add Maxmind GeoIP databases > ## http://dev.maxmind.com/geoip/geolite > geoip_country /etc/nginx/GeoIP.dat; > ##geoip_country /etc/nginx/GeoIPv6.dat; > geoip_city /etc/nginx/GeoLiteCity.dat; > ##geoip_city /etc/nginx/GeoLiteCityv6.dat; > > ## Include the upstream servers for PHP FastCGI handling config. > include upstream_phpcgi.conf; > > ## FastCGI cache zone definition. > include fastcgi_cache_zone.conf; > > ## Include the php-fpm status allowed hosts configuration block. > ## Uncomment to enable if you're running php-fpm. > include php_fpm_status_allowed_hosts.conf; > > ## Include all vhosts. > include /etc/nginx/sites-enabled/*; > > } > > > # cat /etc/nginx/fastcgi.conf > > ### fastcgi configuration. > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > fastcgi_buffers 256 4k; > fastcgi_intercept_errors on; > ## allow 4 hrs - pass timeout responsibility to upstream > fastcgi_read_timeout 14400; > fastcgi_index index.php; > > # cat /etc/nginx/upstream_phpcgi.conf > > ### Upstream configuration for PHP FastCGI. > > ## Add as many servers as needed. Cf. > http://wiki.nginx.org/HttpUpstreamModule. > upstream phpcgi { > ##server unix:/var/run/php-fpm.sock; > server 127.0.0.1:9000; > } > > > # cat /etc/nginx/fastcgi_cache_zone.conf > > fastcgi_cache_path /var/lib/nginx/tmp/fastcgi levels=1:2 > keys_zone=fcgicache:100k max_size=10M inactive=3h > loader_threshold=2592000000 loader_sleep=1 loader_files=100000; > > > # cat /etc/nginx/sites-available/piwik.conf > > ### Nginx configuration for Piwik. > ### based on https://github.com/perusio/piwik-nginx > > server { > listen :80; # IPv4 > listen :80; # IPv6 > > server_name piwik.domain.com; > > # always redirect to the ssl version > rewrite ^ https://$server_name$request_uri? 
permanent; > } > > server { > listen :80; # IPv4 > listen :80; # IPv6 > > ## SSL config > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers HIGH:!aNULL:!MD5:!RC4; > ssl_prefer_server_ciphers on; > > # public server cert > ssl_certificate /etc/pki/tls/certs/piwik.domain.com.crt; > # private server key without pass > ssl_certificate_key /etc/pki/tls/private/piwik.domain.com.key; > # public CA cert to verify client certs > ssl_client_certificate /etc/pki/tls/certs/My_CA.crt; > > ## verify client certs > ssl_verify_client on; > #ssl_verify_depth 1; > > limit_conn arbeit 32; > server_name piwik.domain.com; > > ## Access and error log files. > access_log /var/log/nginx/piwik.domain.com_access.log; > error_log /var/log/nginx/piwik.domain.com_error.log; > > root /usr/share/nginx/piwik.domain.com; > index index.php; > > ## Disallow any usage of piwik assets if referer is non valid. > location ~* ^.+\.(?:css|gif|jpe?g|js|png|swf)$ { > ## Defining the valid referers. > valid_referers none blocked *.domain.com domain.com; > if ($invalid_referer) { > return 444; > } > expires max; > ## No need to bleed constant updates. Send the all shebang in one > ## fell swoop. > tcp_nodelay off; > ## Set the OS file cache. > open_file_cache max=500 inactive=120s; > open_file_cache_valid 45s; > open_file_cache_min_uses 2; > open_file_cache_errors off; > } > > ## Support for favicon. Return a 204 (No Content) if the favicon > ## doesn't exist. > location = /favicon.ico { > try_files /favicon.ico =204; > } > > ## Try all locations and relay to index.php as a fallback. > location / { > try_files $uri /index.php?$query_string; > } > > ## Relay all index.php requests to fastcgi. > location = /index.php { > fastcgi_pass 127.0.0.1:9001; > ## FastCGI cache. 
> ## cache ui for 5m (set the same interval of your crontab) > include sites-available/fcgi_piwik_cache.conf; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > } > > ## Relay all piwik.php requests to fastcgi. > location = /piwik.php { > fastcgi_pass 127.0.0.1:9001; > include sites-available/fcgi_piwik_long_cache.conf; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > } > > ## Any other attempt to access PHP files redirects to the root. > location ~* ^.+\.php$ { > return 302 /; > } > > ## Redirect to the root if attempting to access a txt file. > location ~* > (?:DESIGN|(?:gpl|README|LICENSE)[^.]*|LEGALNOTICE)(?:\.txt)*$ { > return 302 /; > } > > ## Disallow access to several helper files. > location ~* \.(?:bat|html?|git|ini|sh|svn[^.]*|txt|tpl|xml)$ { > return 404; > } > > ## No crawling of this site for bots that obey robots.txt. > location = /robots.txt { > return 200 "User-agent: *\nDisallow: /\n"; > } > > ## Including the php-fpm status and ping pages config. > ## Uncomment to enable if you're running php-fpm. > include php_fpm_status_vhost.conf; > > } # server > > > # cat /etc/nginx/fastcgi_params > ### fastcgi parameters. 
> fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT $document_root; > fastcgi_param SERVER_PROTOCOL $server_protocol; > > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; > > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > # Maxmind GeoIP for Piwik > # http://piwik.org/faq/how-to/#faq_166 > fastcgi_param GEOIP_ADDR $remote_addr; > fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; > fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; > fastcgi_param GEOIP_REGION $geoip_region; > fastcgi_param GEOIP_REGION_NAME $geoip_region_name; > fastcgi_param GEOIP_CITY $geoip_city; > fastcgi_param GEOIP_AREA_CODE $geoip_area_code; > fastcgi_param GEOIP_LATITUDE $geoip_latitude; > fastcgi_param GEOIP_LONGITUDE $geoip_longitude; > fastcgi_param GEOIP_POSTAL_CODE $geoip_postal_code; > > # PHP only, required if PHP was built with --enable-force-cgi-redirect > fastcgi_param REDIRECT_STATUS 200; > > # cat /etc/php-fpm.conf > [global] > pid = run/php-fpm.pid > error_log = log/php-fpm.log > syslog.facility = daemon > log_level = notice > emergency_restart_threshold = 10 > emergency_restart_interval = 1 > process_control_timeout = 10s > daemonize = yes > rlimit_files = 131072 > rlimit_core = unlimited > events.mechanism = epoll > > [piwik] > > user = nginx > group = nginx > > listen = 127.0.0.1:9001 > > listen.owner = nginx > listen.group = nginx > listen.mode = 0666 > listen.backlog = -1 > > listen.allowed_clients = 127.0.0.1 > > pm = dynamic > pm.max_children = 10 > 
pm.start_servers = 3 > pm.min_spare_servers = 2 > pm.max_spare_servers = 4 > > pm.status_path = /fpm-status > ping.path = /ping > > access.log = /var/log/php-fpm-$pool.access.log > slowlog = /var/log/php-fpm-$pool.slow.log > > request_slowlog_timeout = 5s > request_terminate_timeout = 120s > > catch_workers_output = yes > > security.limit_extensions = .php > > env[HOSTNAME] = $HOSTNAME > env[PATH] = /usr/local/bin:/usr/bin:/bin > env[TMP] = /tmp > env[TMPDIR] = /tmp > env[TEMP] = /tmp > > > The various errors I see are: > > 2013/01/08 19:31:00 [error] 638#0: *3 FastCGI sent in stderr: "Primary > script unknown" while reading response header from upstream, client: > , server: piwik.domain.com, request: "GET / > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "piwik.domain.com" > > 2013/01/08 19:32:00 [error] 638#0: *15 FastCGI sent in stderr: > "Primary script unknown", client: , server: > piwik.domain.com, request: "GET / HTTP/1.1", host: "piwik.domain.com" > > > Any advice or pointers to docs where I can find a solution are most > appreciated. > > Thanks/???????, > Patrick > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Tue Jan 8 20:09:49 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 8 Jan 2013 20:09:49 +0000 Subject: Primary script unknown error - can't figure out how to fix In-Reply-To: <50EC6EB1.6020108@puzzled.xs4all.nl> References: <50EC6EB1.6020108@puzzled.xs4all.nl> Message-ID: <20130108200949.GA4332@craic.sysops.org> On Tue, Jan 08, 2013 at 08:08:33PM +0100, Patrick Lists wrote: Hi there, > I'm seeing some "Primary script unknown" errors That message from the fastcgi server usually means that the SCRIPT_FILENAME that it was given was not found as a file on its filesystem. Your filesystem has: > # ls -l /usr/share/nginx > drwxr-xr-x. 
12 root root 4096 Jan 8 18:47 piwik > > # ls -l /usr/share/nginx/piwik > -rw-r-----. 1 nginx nginx 1611 Mar 20 2012 index.php But your nginx config file has: > root /usr/share/nginx/piwik.domain.com; > location = /index.php { > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > } so SCRIPT_FILENAME will be /usr/share/nginx/piwik.domain.com/index.php Which does not exist, and so is not found. > Any advice or pointers to docs where I can find a solution are most > appreciated. Set "root" correctly in the nginx config. (Which is more or less the same as "set the directory name correctly in the filesystem".) f -- Francis Daly francis at daoine.org From nginx-list at puzzled.xs4all.nl Tue Jan 8 20:39:49 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 08 Jan 2013 21:39:49 +0100 Subject: Primary script unknown error - can't figure out how to fix In-Reply-To: <50EC7711.40800@greengecko.co.nz> References: <50EC6EB1.6020108@puzzled.xs4all.nl> <50EC7711.40800@greengecko.co.nz> Message-ID: <50EC8415.7020204@puzzled.xs4all.nl> On 01/08/2013 08:44 PM, Steve Holdoway wrote: > At first glance I can't see anything listening on port 443. Thanks Steve. It was a copy & paste error. Here's what it really should be: server { listen IP:80; # IPv4 listen IP:80; # IPv6 server_name piwik.mydomain.com; # always redirect to the ssl version rewrite ^ https://$server_name$request_uri? permanent; } server { listen IP:443 ssl; # IPv4 listen IP:443 ssl ; ## SSL config ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5:!RC4; ssl_prefer_server_ciphers on; ... The SSL part works fine. 
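[Editor's note: with the address redacted as "IP" it is easy to miss that the two listeners normally use different address forms. A minimal sketch of the redirect server with distinct IPv4 and IPv6 listeners, using documentation-prefix placeholder addresses rather than the real ones:

```nginx
server {
    listen 192.0.2.10:80;        # IPv4 address
    listen [2001:db8::10]:80;    # IPv6 address goes in square brackets
    server_name piwik.mydomain.com;
    rewrite ^ https://$server_name$request_uri? permanent;
}
```

When not binding to specific addresses, the common form is simply a `listen 80;` plus `listen [::]:80;` pair.]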
Regards, Patrick From nginx-list at puzzled.xs4all.nl Tue Jan 8 21:39:48 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 08 Jan 2013 22:39:48 +0100 Subject: Primary script unknown error - can't figure out how to fix In-Reply-To: <20130108200949.GA4332@craic.sysops.org> References: <50EC6EB1.6020108@puzzled.xs4all.nl> <20130108200949.GA4332@craic.sysops.org> Message-ID: <50EC9224.5070508@puzzled.xs4all.nl> On 01/08/2013 09:09 PM, Francis Daly wrote: [snip] > Set "root" correctly in the nginx config. (Which is more or less the > same as "set the directory name correctly in the filesystem".) Thanks Francis. Another sharp pair of eyes and mine not so much. The error was introduced when redacting the configs to protect the innocent. The root in piwik.conf points correctly to the actual location of the site in /usr/share/nginx. I also tried turning off SELinux but that did not make a difference. I can make it work when I chmod all files to 644 and all directories to 755. But I do not understand why that is. If nginx and php-fpm are running as nginx/nginx, shouldn't both be able to handle files & dirs owned by nginx/nginx with mode 640 and mode 750? Regards, Patrick From steve at greengecko.co.nz Tue Jan 8 21:44:42 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 09 Jan 2013 10:44:42 +1300 Subject: Primary script unknown error - can't figure out how to fix In-Reply-To: <50EC9224.5070508@puzzled.xs4all.nl> References: <50EC6EB1.6020108@puzzled.xs4all.nl> <20130108200949.GA4332@craic.sysops.org> <50EC9224.5070508@puzzled.xs4all.nl> Message-ID: <1357681482.14535.3229.camel@steve-new> In my experience, that's usually caused by access to parent dirs... i.e. /home is made unreadable by the nginx user when changed to 750 root:root. Cheers, Steve On Tue, 2013-01-08 at 22:39 +0100, Patrick Lists wrote: > On 01/08/2013 09:09 PM, Francis Daly wrote: > [snip] > > Set "root" correctly in the nginx config. 
(Which is more or less the > > same as "set the directory name correctly in the filesystem".) > > Thanks Francis. Another sharp pair of eyes and mine not so much. The > error was introduced when redacting the configs to protect the innocent. > The root in piwik.conf points correctly to the actual location of the > site in /usr/share/nginx. I also tried turning off SELinux but that did > not make a difference. > > I can make it work when I chmod all files to 644 and all directories to > 755. But I do not understand why that is. If nginx and php-fpm are > running as nginx/ningx shouldn't both be able to handle files & dirs > owned by nginx/nginx with mode 640 and mode 750? > > Regards, > Patrick > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From nginx-list at puzzled.xs4all.nl Tue Jan 8 21:56:11 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 08 Jan 2013 22:56:11 +0100 Subject: Primary script unknown error - can't figure out how to fix In-Reply-To: <1357681482.14535.3229.camel@steve-new> References: <50EC6EB1.6020108@puzzled.xs4all.nl> <20130108200949.GA4332@craic.sysops.org> <50EC9224.5070508@puzzled.xs4all.nl> <1357681482.14535.3229.camel@steve-new> Message-ID: <50EC95FB.4050008@puzzled.xs4all.nl> On 01/08/2013 10:44 PM, Steve Holdoway wrote: > In my experience, that's usually caused by access to parent dirs... > ie /home is made unreadable by nginx user when changed to 750 root:root. Gold star for you Steve :-) It works fine now. I guess that's what you get for staring at configs for way too long. Thank you very much! 
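[Editor's note: the parent-directory permissions Steve diagnosed can be checked mechanically. A small sketch, with the piwik path used only as an illustration, that walks up from the docroot and prints the mode and owner of every component; the nginx user needs the execute (x) bit on each of these directories to reach files below them:

```shell
# Print mode and owner:group for each parent directory of the docroot.
p=/usr/share/nginx/piwik
while [ "$p" != "/" ]; do
  stat -c '%a %U:%G %n' "$p" 2>/dev/null || echo "cannot stat $p"
  p=$(dirname "$p")
done
stat -c '%a %U:%G %n' /
```

On most Linux systems `namei -m /usr/share/nginx/piwik` (from util-linux) prints the same chain in one shot.]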
Regards, Patrick From steve at greengecko.co.nz Tue Jan 8 22:09:58 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 09 Jan 2013 11:09:58 +1300 Subject: Primary script unknown error - can't figure out how to fix In-Reply-To: <50EC95FB.4050008@puzzled.xs4all.nl> References: <50EC6EB1.6020108@puzzled.xs4all.nl> <20130108200949.GA4332@craic.sysops.org> <50EC9224.5070508@puzzled.xs4all.nl> <1357681482.14535.3229.camel@steve-new> <50EC95FB.4050008@puzzled.xs4all.nl> Message-ID: <1357682998.14535.3230.camel@steve-new> On Tue, 2013-01-08 at 22:56 +0100, Patrick Lists wrote: > On 01/08/2013 10:44 PM, Steve Holdoway wrote: > > In my experience, that's usually caused by access to parent dirs... > > ie /home is made unreadable by nginx user when changed to 750 root:root. > > Gold star for you Steve :-) It works fine now. I guess that's what you > get for staring at configs for way too long. Thank you very much! > > Regards, > Patrick Graag gedaan (: Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From lists at ruby-forum.com Tue Jan 8 23:13:17 2013 From: lists at ruby-forum.com (Jason R.) Date: Wed, 09 Jan 2013 00:13:17 +0100 Subject: try_files, POST, and redirecting requests to Passenger Message-ID: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> The application I'm working on (CMS) has a few interesting requirements: * Custom user domains * Very heavily page cached. * Any page can be POSTed to (page has a form on it) In Apache, this was easily handled. If GET, then look in the cache, then fall to Passenger. Otherwise, just go straight to Passenger. I have been unable to get nginx working for my needs and am wondering if anyone else has any insight into how to solve this problem. 
Basically what I want is the following (but can't because try_files can't be in an if): location / { if ($request_method ~* ^(GET|HEAD)$) { try_files /cache/$domain/$uri /cache/$domain/$uri.html /cache/$domain/$uri/index.html /maintenance.html @passenger; break; } try_files /maintenance.html @passenger; } location @passenger { passenger_enabled on; } I initially had the idea that try_files was more of a switch-statement, and tried to do something like: try_files @cache maintenance.html @passenger; then in @cache simply break if the request is not a GET, but that obviously only ever went to @passenger, because @cache wasn't a real file on the system. I've tried the error_page 405 = @passenger route, but that has a very severe problem in that it turns my POST request into a GET, and I was unable to find out how to stop that. Is there a way? I've also tried doing an internal redirect to a different location block, something like: location /post { internal; passenger_enabled on; } but then I end up with $uris that have /post/(uri i want) and don't know how to tell nginx to ignore /post when sending down to passenger. Would also appreciate ideas here if there are any. Any other suggestions? I'm almost to resorting to a number of if statements and really don't want to end up there. I am using nginx release: 1.2.6 on ubuntu 12.04 and Passenger Enterprise. Thanks Jason -- Posted via http://www.ruby-forum.com/. From contact at jpluscplusm.com Wed Jan 9 01:31:49 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 9 Jan 2013 01:31:49 +0000 Subject: try_files, POST, and redirecting requests to Passenger In-Reply-To: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> References: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> Message-ID: On 8 January 2013 23:13, Jason R. wrote: > In Apache, this was easily handled. If GET, then look in the cache, then > fall to Passenger. Otherwise, just go straight to Passenger. 
> > I have been unable to get nginx working for my needs and am wondering if > anyone else has any insight into how to solve this problem. > > Basically what I want is the following (but can't because try_files > can't be in an if): > > location / { > if ($request_method ~* ^(GET|HEAD)$) { > try_files /cache/$domain/$uri > /cache/$domain/$uri.html > /cache/$domain/$uri/index.html > /maintenance.html > @passenger; > break; > } > > try_files /maintenance.html @passenger; > } > > location @passenger { > passenger_enabled on; > } Does try_files accept a variable generated from a map based on the $request_method variable, perhaps? That's the way I usually avoid ifs. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From info at pkern.at Wed Jan 9 04:23:39 2013 From: info at pkern.at (Patrik Kernstock) Date: Wed, 9 Jan 2013 05:23:39 +0100 Subject: AW: AW: Webserver crashes sometimes - don't know why In-Reply-To: <20130108034846.GC68127@mdounin.ru> References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at> <004601cdecb7$f3790f60$da6b2e20$@pkern.at> <005601cdeccc$8bd97420$a38c5c60$@pkern.at> <20130108034846.GC68127@mdounin.ru> Message-ID: <00fc01cdee21$1a11d480$4e357d80$@pkern.at> Thanks for your help, but I don't really understand the part with "coredump" and "backtrace"... Thanks :) -----Ursprüngliche Nachricht----- Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag von Maxim Dounin Gesendet: Dienstag, 08. Jänner 2013 04:49 An: nginx at nginx.org Betreff: Re: AW: Webserver crashes sometimes - don't know why Hello! 
On Mon, Jan 07, 2013 at 12:45:52PM +0100, Patrik Kernstock wrote: > I just found something interest in "dmesg" log: > [5294633.862284] __ratelimit: 20 callbacks suppressed [5294633.862288] > nginx[20568]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a1a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.659735] nginx[20569]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.818078] nginx[20571]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.819429] nginx[20581]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294634.920149] nginx[20567]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294635.313816] nginx[20589]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294635.402682] nginx[20590]: segfault at aa ip 00007fdc5a44eb41 sp > 00007fff0260a0a8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294682.926163] nginx[20596]: segfault at 4a ip 00000000004459df sp > 00007fff0260a0f0 error 4 in nginx[400000+a3000] [5294685.155117] > nginx[20595]: segfault at 4a ip 00000000004459df sp > 00007fff0260a280 error 4 in nginx[400000+a3000] [5294686.158466] > nginx[21276]: segfault at 4a ip 00000000004459df sp > 00007fff0260a130 error 4 in nginx[400000+a3000] [5294688.683947] > nginx[21313]: segfault at 1 ip 00007fdc5a44eb41 sp > 00007fff0260a0c8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > [5294695.987059] nginx[21361]: segfault at 1193d ip 00007fdc5a44eb41 > sp > 00007fff0260a0c8 error 6 in libc-2.11.3.so[7fdc5a3cf000+159000] > > Seems to be a error in libc... It's highly unlikely to be an error in libc; segfaults in libc usually happen when libc functions are called with incorrect arguments. 
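[Editor's note: the core-dump setup referred to here amounts to two nginx directives plus gdb; the directory path below is illustrative:

```nginx
# In the main (top-level) context of nginx.conf: allow worker processes
# to write core files, and pick a directory writable by the worker user.
worker_rlimit_core  500m;
working_directory   /var/coredumps/;

# After the next crash, open the dump and collect a backtrace:
#   gdb /usr/sbin/nginx /var/coredumps/core.<pid>
#   (gdb) bt full
```
]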
You need to obtain a coredump and provide a backtrace, see http://wiki.nginx.org/Debugging for details, in particular these two sections: http://wiki.nginx.org/Debugging#Core_dump http://wiki.nginx.org/Debugging#Asking_for_help Please note: it would be a good idea to make sure you are able to reproduce the problem without any 3rd party modules compiled in. -- Maxim Dounin http://nginx.com/support.html _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jan 9 06:00:59 2013 From: nginx-forum at nginx.us (jimt79) Date: Wed, 09 Jan 2013 01:00:59 -0500 Subject: htaccess to nginx, real nightmare - Any advice ? Message-ID: Hello, I've been using Nginx for 3 weeks for wordpress and scripts like phplinkdirectory. Unfortunately I'm still a noob at converting htaccess rules to nginx, and sometimes a simple conversion is a real nightmare. I cannot solve this conversion from htaccess to nginx and I hope to find someone to point me in the right direction. htaccess code: RewriteEngine On RewriteBase / RewriteCond %{REQUEST_URI} ^(.+)\~s$ RewriteRule ^(.*) stats.php?u=$1 [L] RewriteCond %{REQUEST_URI} ^(.+)\~d$ RewriteRule ^(.*) delete_file.php?u=$1 [QSA,L] RewriteCond %{REQUEST_URI} ^(.+)\~i$ RewriteRule ^(.*) share_file.php?u=$1 [QSA,L] RewriteCond %{REQUEST_URI} ^(.+)\~f$ RewriteRule ^(.*) view_folder.php?f=$1 [QSA,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond $1 !\.html$ RewriteRule ^(.*) file_download.php?u=$1 [QSA,L] RewriteRule ^(.*).html$ $1.php [QSA,L] Nginx code (I'm using it on a subdomain; my server is centos 6.3 + virtualmin + nginx + fastcgi. 
Server is already configured and 100% working.): server { server_name download.mysite.com www.download.mysite.com; listen 173.245.7.94; root /home/mysite/domains/download.mysite.com/public_html; access_log /var/log/virtualmin/download.mysite.com_access_log; error_log /var/log/virtualmin/download.mysite.com_error_log; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME /home/mysite/domains/download.mysite.com/public_html$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT /home/mysite/domains/download.mysite.com/public_html; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; location ~ \.php$ { try_files $uri =404; fastcgi_pass localhost:9003; } location / { index index.html index.htm index.php; if (!-f $request_uri) { rewrite ^/([^/]+)~s /stats.php?u=$1 last;} if (!-f $request_uri) { rewrite ^/([^/]+)~d /delete_file.php?u=$1 last;} if (!-f $request_uri) { rewrite ^/([^/]+)~i /share_file.php?u=$1 last;} if (!-f $request_uri) { rewrite ^/([^/]+)~f /view_folder.php?u=$1 last;} if (!-f $request_filename) { rewrite ^/([^/]+) /file_download.php?u=$1 last;} #rewrite html to php rewrite ^(.*)\.html$ $1.php break; } I have other domains (wordpress, phplinkdirectory) and they work well with nginx. The only problem is converting this subdomain htaccess to nginx. The code seems correct (virtualmin accepts it and doesn't return errors) but in practice this code is completely incorrect. 
The rewrite is only half working: I'm getting a lot of loops, and a lot of PHP pages (the html-to-php rewrite) aren't processed (I see only the source, or the browser downloads my PHP page instead of rendering it). Any advice? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234820,234820#msg-234820 From nginx-forum at nginx.us Wed Jan 9 06:11:44 2013 From: nginx-forum at nginx.us (langthangko) Date: Wed, 09 Jan 2013 01:11:44 -0500 Subject: upstream fast_cgi no port error In-Reply-To: <4CF4164F.4010101@fouter.net> References: <4CF4164F.4010101@fouter.net> Message-ID: <43b894c9c66a6285ce8f7bdee1a23fa6.NginxMailingListEnglish@forum.nginx.org> I had the same error as yours. After that I found that my upstream config was not included, because I configure upstream in a separate file. Please check. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,153847,234821#msg-234821 From appa at perusio.net Wed Jan 9 08:40:59 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 09 Jan 2013 09:40:59 +0100 Subject: try_files, POST, and redirecting requests to Passenger In-Reply-To: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> References: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> Message-ID: <876236bvfo.wl%appa@perusio.net> On 9 Jan 2013 00h13 CET, lists at ruby-forum.com wrote: > The application I'm working on (CMS) has a few interesting > requirements: > > * Custom user domains > * Very heavily page cached. > * Any page can be POSTed to (page has a form on it) > > In Apache, this was easily handled. If GET, then look in the cache, > then fall to Passenger. Otherwise, just go straight to Passenger. > > I have been unable to get nginx working for my needs and am > wondering if anyone else has any insight into how to solve this > problem. 
> > Basically what I want is the following (but can't because try_files > can't be in an if): > > location / { > if ($request_method ~* ^(GET|HEAD)$) { > try_files /cache/$domain/$uri > /cache/$domain/$uri.html > /cache/$domain/$uri/index.html > /maintenance.html > @passenger; > break; > } > > try_files /maintenance.html @passenger; > } > > location @passenger { > passenger_enabled on; > } You're mixing different things. break is a rewrite phase directive, like if. So they're executed well before try_files and the content phase handlers. You should use the map directive. At the http level: map $request_method $idempotent { default 0; GET 1; HEAD 1; } then at the server level (vhost config): location / { error_page 418 = @idempotent; if ($idempotent) { return 418; } try_files /cache/$domain/$uri /cache/$domain/$uri.html /cache/$domain/$uri/index.html /maintenance.html @passenger; } location @idempotent { try_files /maintenance.html @passenger; } location @passenger { passenger_enabled on; } Try it. --- appa From nginx-forum at nginx.us Wed Jan 9 09:27:12 2013 From: nginx-forum at nginx.us (philipp) Date: Wed, 09 Jan 2013 04:27:12 -0500 Subject: OCSP_basic_verify() failed Message-ID: I tried nginx 1.3.10 with ocsp stapling... but I get this error: 2013/01/09 09:14:52 [error] 27663#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: ocsp.startssl.com my config looks like this server { listen [::]:443 ssl spdy; ssl on; ssl_certificate /etc/ssl/private/www.hellmi.de.pem; ssl_certificate_key /etc/ssl/private/www.hellmi.de.key; ## OCSP Stapling resolver 127.0.0.1; ssl_stapling on; ssl_stapling_verify on; server_name www.hellmi.de; ... 
} Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234832,234832#msg-234832 From mdounin at mdounin.ru Wed Jan 9 09:46:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Jan 2013 13:46:40 +0400 Subject: OCSP_basic_verify() failed In-Reply-To: References: Message-ID: <20130109094640.GA80623@mdounin.ru> Hello! On Wed, Jan 09, 2013 at 04:27:12AM -0500, philipp wrote: > I tried nginx 1.3.10 with ocsp stapling... but I get this error: > > 2013/01/09 09:14:52 [error] 27663#0: OCSP_basic_verify() failed (SSL: > error:27069065:OCSP routines:OCSP_basic_verify:certificate verify > error:Verify error:unable to get local issuer certificate) while requesting > certificate status, responder: ocsp.startssl.com > > my config looks lile this > > server { > listen [::]:443 ssl spdy; > > ssl on; > ssl_certificate /etc/ssl/private/www.hellmi.de.pem; > ssl_certificate_key /etc/ssl/private/www.hellmi.de.key; > > ## OCSP Stapling > resolver 127.0.0.1; > ssl_stapling on; > ssl_stapling_verify on; > > server_name www.hellmi.de; > > ... > } http://nginx.org/r/ssl_stapling_verify Quote: For verification to work, the certificate of the issuer of the server certificate, the root certificate, and all intermediate certificates should be configured as trusted using the ssl_trusted_certificate directive. -- Maxim Dounin http://nginx.com/support.html From andrew at nginx.com Wed Jan 9 10:01:15 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 9 Jan 2013 14:01:15 +0400 Subject: nginx deployments survey Message-ID: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com> Hello, We had a pretty packed 2012 in regards to nginx development. We've got a lot of feedback too from a variety of users. 
However, as we gear up for the new development cycle in 2013, we'd appreciate it a lot if you could spare 7-10 minutes of your time and take a look at our online survey: http://nginx-survey.questionpro.com/ This survey will help us greatly to define goals and adjust priorities in 2013 and to make nginx better. Many thanks in advance! -- Andrew Alexeev nginx From nginx-forum at nginx.us Wed Jan 9 10:02:11 2013 From: nginx-forum at nginx.us (philipp) Date: Wed, 09 Jan 2013 05:02:11 -0500 Subject: OCSP_basic_verify() failed In-Reply-To: <20130109094640.GA80623@mdounin.ru> References: <20130109094640.GA80623@mdounin.ru> Message-ID: <88158ead6b8940bbef2dd00b430b7927.NginxMailingListEnglish@forum.nginx.org> I have created a trust file both ways: cat www.hellmi.de.pem > www.hellmi.de.trust cat subca.pem >> www.hellmi.de.trust cat ca.pem >> www.hellmi.de.trust or cat subca.pem > www.hellmi.de.trust cat ca.pem >> www.hellmi.de.trust and configured it as ssl_trusted_certificate, but this did not help either. How do I create a trusted certificate file for a StartCom CA? The chain looks like this: StartCom Certification Authority (ca.pem) StartCom Class 1 Primary Intermediate Server CA (subca.pem) www.hellmi.de (www.hellmi.de.pem) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234832,234836#msg-234836 From lists at ruby-forum.com Wed Jan 9 14:22:32 2013 From: lists at ruby-forum.com (Jason R.) Date: Wed, 09 Jan 2013 15:22:32 +0100 Subject: try_files, POST, and redirecting requests to Passenger In-Reply-To: <876236bvfo.wl%appa@perusio.net> References: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> <876236bvfo.wl%appa@perusio.net> Message-ID: <6272766608b208a60a621ee2e3ed25fc@ruby-forum.com> "António P. P. 
Almeida" wrote in post #1091574: > On 9 Jan 2013 00h13 CET, lists at ruby-forum.com wrote: > > error_page 418 = @idempotent; > > if ($idempotent) { > return 418; > } > > location @idempotent { > try_files /maintenance.html @passenger; > } I never knew about the map directive, that's quite interesting. One question though, will this make sure that a POST that hits the error_page 418 stays a POST when it goes through the @idempotent location? Thanks for the info! Jason -- Posted via http://www.ruby-forum.com/. From appa at perusio.net Wed Jan 9 15:47:06 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 09 Jan 2013 16:47:06 +0100 Subject: try_files, POST, and redirecting requests to Passenger In-Reply-To: <6272766608b208a60a621ee2e3ed25fc@ruby-forum.com> References: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> <876236bvfo.wl%appa@perusio.net> <6272766608b208a60a621ee2e3ed25fc@ruby-forum.com> Message-ID: <87y5g29x51.wl%appa@perusio.net> On 9 Jan 2013 15h22 CET, lists at ruby-forum.com wrote: > "António P. P. Almeida" wrote in post #1091574: >> On 9 Jan 2013 00h13 CET, lists at ruby-forum.com wrote: >> >> error_page 418 = @idempotent; >> >> if ($idempotent) { >> return 418; >> } >> >> location @idempotent { >> try_files /maintenance.html @passenger; >> } > > I never knew about the map directive, that's quite interesting. One > question though, will this make sure that a POST that hits the > error_page 418 stays a POST when it goes through the @idempotent > location? Perhaps I misunderstood. The way it is configured above is such that the @idempotent location will only be used for GET and HEAD requests. All other requests are handled by the / location using the lengthy try_files. I thought that was your desired config. > Thanks for the info! You're welcome. 
--- appa From hnakamur at gmail.com Wed Jan 9 15:48:38 2013 From: hnakamur at gmail.com (Hiroaki Nakamura) Date: Thu, 10 Jan 2013 00:48:38 +0900 Subject: [PATCH] Use pcre-config to set ngx_feature_path and ngx_feature_libs Message-ID: Hi, there. Here is a patch for using pcre-config to set ngx_feature_path and ngx_feature_libs. I need this since I would like to use pcre 8.32 built from source and installed in /opt/lib64/libpcre.so.1. Could you review this patch? Thanks -- )Hiroaki Nakamura) hnakamur at gmail.com -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.pcre_conf.patch Type: application/octet-stream Size: 572 bytes Desc: not available URL: From mdounin at mdounin.ru Wed Jan 9 16:24:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Jan 2013 20:24:26 +0400 Subject: [PATCH] Use pcre-config to set ngx_feature_path and ngx_feature_libs In-Reply-To: References: Message-ID: <20130109162425.GD80623@mdounin.ru> Hello! On Thu, Jan 10, 2013 at 12:48:38AM +0900, Hiroaki Nakamura wrote: > Hi, there. > > Here is a patch for using pcre-config to set ngx_feature_path and > ngx_feature_libs. > I need this since I would like to use pcre 8.32 built from source and > installed in /opt/lib64/libpcre.so.1. > > Could you review this patch? > Thanks > > -- > )Hiroaki Nakamura) hnakamur at gmail.com > --- auto/lib/pcre/conf.orig 2012-03-28 01:44:52.000000000 +0900 > +++ auto/lib/pcre/conf 2013-01-09 16:55:48.375745628 +0900 > @@ -105,6 +105,17 @@ > > if [ $ngx_found = no ]; then > > + # pkgconfig > + > + ngx_feature="PCRE library in `pcre-config --prefix`/include" > + ngx_feature_path="`pcre-config --prefix`/include" > + ngx_feature_libs="`pcre-config --libs`" > + > + . auto/feature > + fi > + > + if [ $ngx_found = no ]; then > + > # FreeBSD port > > ngx_feature="PCRE library in /usr/local/" I don't think that "`pcre-config --prefix`/include" is a good approach. 
Have you considered just using ./configure --with-cc-opt="-I/path/to/include" --with-ld-opt="-L/path/to/lib" ... ? -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Jan 9 16:35:50 2013 From: nginx-forum at nginx.us (daveyfx) Date: Wed, 09 Jan 2013 11:35:50 -0500 Subject: Remove URI string? Message-ID: <5bd2d7ace8820feffdbe0a666711bf03.NginxMailingListEnglish@forum.nginx.org> Hello all - I would like to strip all request_uri strings for a group of server names and serve up an index page. Example: Client requests http://host1.domain.com/blah, nginx will direct client to http://host1.domain.com and serve the index page. Same scenario for host2 - I would like nginx to direct client to http://host2.domain.com and serve the same index page. Would the below work? server { server_name host1.domain.com host2.domain.com; rewrite ^ http://$server_name; root /path/to/document/root; index index.html; } Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234822,234822#msg-234822 From contact at jpluscplusm.com Wed Jan 9 16:43:36 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 9 Jan 2013 16:43:36 +0000 Subject: Remove URI string? In-Reply-To: <5bd2d7ace8820feffdbe0a666711bf03.NginxMailingListEnglish@forum.nginx.org> References: <5bd2d7ace8820feffdbe0a666711bf03.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 9 January 2013 16:35, daveyfx wrote: > Hello all - > > I would like to strip all request_uri strings for a group of server names > and serve up an index page. > Example: > > Client requests http://host1.domain.com/blah, nginx will direct client to > http://host1.domain.com and serve the index page. Same scenario for host2 - > I would like nginx to direct client to http://host2.domain.com and serve the > same index page. > > Would the below work? 
> > server { > server_name host1.domain.com host2.domain.com; > rewrite ^ http://$server_name; > root /path/to/document/root; > index index.html; > } Have you tried it? How about doing that? It looks ok to me, but running it on a test machine will probably tell you all you need to know ... Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From hnakamur at gmail.com Wed Jan 9 16:46:32 2013 From: hnakamur at gmail.com (Hiroaki Nakamura) Date: Thu, 10 Jan 2013 01:46:32 +0900 Subject: [PATCH] Use pcre-config to set ngx_feature_path and ngx_feature_libs In-Reply-To: <20130109162425.GD80623@mdounin.ru> References: <20130109162425.GD80623@mdounin.ru> Message-ID: Hi! In my case ./configure --with-pcre-jit --with-cc-opt="-I/opt/include" --with-ld-opt="-L/opt/lib64 -lpcre" does it! Now it is properly configured without my patch. I get the following output. Configuration summary + using system PCRE library Thank you very much for your help. 2013/1/10 Maxim Dounin : > Hello! > > On Thu, Jan 10, 2013 at 12:48:38AM +0900, Hiroaki Nakamura wrote: > >> Hi, there. >> >> Here is a patch for using pcre-config to set ngx_feature_path and >> ngx_feature_libs. >> I need this since I would like to use pcre 8.32 built from source and >> installed in /opt/lib64/libpcre.so.1. >> >> Could you review this patch? >> Thanks >> >> -- >> )Hiroaki Nakamura) hnakamur at gmail.com > >> --- auto/lib/pcre/conf.orig 2012-03-28 01:44:52.000000000 +0900 >> +++ auto/lib/pcre/conf 2013-01-09 16:55:48.375745628 +0900 >> @@ -105,6 +105,17 @@ >> >> if [ $ngx_found = no ]; then >> >> + # pkgconfig >> + >> + ngx_feature="PCRE library in `pcre-config --prefix`/include" >> + ngx_feature_path="`pcre-config --prefix`/include" >> + ngx_feature_libs="`pcre-config --libs`" >> + >> + . 
auto/feature >> + fi >> + >> + if [ $ngx_found = no ]; then >> + >> # FreeBSD port >> >> ngx_feature="PCRE library in /usr/local/" > > I don't think that "`pcre-config --prefix`/include" is a good > aproach. > > Have you considered just using > > ./configure --with-cc-opt="-I/path/to/include" --with-ld-opt="-L/path/to/lib" ... > > ? > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- )Hiroaki Nakamura) hnakamur at gmail.com From lists at ruby-forum.com Wed Jan 9 19:39:33 2013 From: lists at ruby-forum.com (Jason R.) Date: Wed, 09 Jan 2013 20:39:33 +0100 Subject: try_files, POST, and redirecting requests to Passenger In-Reply-To: <87y5g29x51.wl%appa@perusio.net> References: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> <876236bvfo.wl%appa@perusio.net> <6272766608b208a60a621ee2e3ed25fc@ruby-forum.com> <87y5g29x51.wl%appa@perusio.net> Message-ID: <0d6f6d0016a62c3e47de3e603ff9d0de@ruby-forum.com> "António P. P. Almeida" wrote in post #1091605: > On 9 Jan 2013 15h22 CET, lists at ruby-forum.com wrote: > >>> try_files /maintenance.html @passenger; >>> } >> >> I never knew about the map directive, that's quite interesting. One >> question though, will this make sure that a POST that hits the >> error_page 418 stays a POST when it goes through the @idempotent >> location? > > Perhaps I misunderstood. The way it is configured above is such that > the @idempotent location will only be used for GET and HEAD requests. > All other requests are handled by the / location using the lenghty > try_files. > > I thought that was your desired config. > >> Thanks for the info! > > You're welcome. > > --- appa Sorry if I misspoke then; what I want is the exact opposite, because POST should never hit a cache file. 
We ended up going a slightly different route in that we hack the cache file location according to the request type (we have $cache_host because there's some other processing we do, not relevant here): # Set cache_path to a non-existent directory so try_files fails if we cannot # serve the request from the cache. set $cache_path "no-cache"; set $cache_host $host; if ($request_method ~* ^(GET|HEAD)$) { set $cache_path "cache"; } try_files /$cache_path/$cache_host/$uri /$cache_path/$cache_host/$uri.html /$cache_path/$cache_host/$uri/index.html /maintenance.html @passenger; This way there's no possible way a valid file is found when POST-ing. This was the simplest solution we could come up with at this time. Thanks for everyone's help. Jason -- Posted via http://www.ruby-forum.com/. From luky-37 at hotmail.com Wed Jan 9 20:24:07 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 9 Jan 2013 21:24:07 +0100 Subject: AW: AW: Webserver crashes sometimes - don't know why In-Reply-To: <00fc01cdee21$1a11d480$4e357d80$@pkern.at> References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at>, , <004601cdecb7$f3790f60$da6b2e20$@pkern.at>, , <005601cdeccc$8bd97420$a38c5c60$@pkern.at>, <20130108034846.GC68127@mdounin.ru>, <00fc01cdee21$1a11d480$4e357d80$@pkern.at> Message-ID: > Thanks for your help, but I don't really understand the part with "coredump" > and "backtrace"... 1. Recompile nginx with CFLAGS="-g -O0" (for debugging symbols and without compiler optimization). You can just prepend it to your ./configure line. Before: ./configure --with-debug --with-ipv6 --with-http_flv_module --with-http_mp4_module After: CFLAGS="-g -O0" ./configure --with-debug --with-ipv6 --with-http_flv_module --with-http_mp4_module 2. compile nginx with "make" like you always do. 3. create a directory for the core files and make it writable by your workers. For example: mkdir /nginx-core-dumps/ && chmod a+w /nginx-core-dumps/ 4. add this to your nginx configuration: worker_rlimit_core 
500M; working_directory /nginx-core-dumps/; 5. (install and) start nginx and wait until it crashes. It should have created the core-dump in /nginx-core-dumps/. 6. (install and) start gdb: gdb nginx /nginx-core-dumps/nginx.core 7. within gdb, run the commands "bt" and "backtrace full", followed by a "quit". 8. Post the gdb output on this mailing list, the developers will analyze it then. I hope I didn't miss anything, but I think this should be it. Example at [1]. [1] http://pastebin.com/raw.php?i=NPjdQcVu From nginx-forum at nginx.us Thu Jan 10 04:35:30 2013 From: nginx-forum at nginx.us (daveyfx) Date: Wed, 09 Jan 2013 23:35:30 -0500 Subject: Remove URI string? In-Reply-To: References: Message-ID: <2d3a2bebb69b2820ebaadd2b49eb72f1.NginxMailingListEnglish@forum.nginx.org> I gave it a try and it did not work out. My access log repeated the following entry ~ 20 times. [09/Jan/2013:23:31:47 -0500] "GET / HTTP/1.1" 302 160 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20100101 Firefox/17.0" Firefox kindly informed me that "The page isn't redirecting properly". I'm guessing it's getting caught in an infinite redirect. Is there anything else I can try? Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234822,234868#msg-234868 From edho at myconan.net Thu Jan 10 04:37:44 2013 From: edho at myconan.net (Edho Arief) Date: Thu, 10 Jan 2013 11:37:44 +0700 Subject: Remove URI string? In-Reply-To: <2d3a2bebb69b2820ebaadd2b49eb72f1.NginxMailingListEnglish@forum.nginx.org> References: <2d3a2bebb69b2820ebaadd2b49eb72f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Thu, Jan 10, 2013 at 11:35 AM, daveyfx wrote: > I gave it a try and it did not work out. My access log repeated the > following entry ~ 20 times. 
> > [09/Jan/2013:23:31:47 -0500] "GET / HTTP/1.1" 302 160 "-" "Mozilla/5.0 > (Windows NT 6.1; WOW64; rv:17.0) Gecko/20100101 Firefox/17.0" > > Firefox kindly informed me that "The page isn't redirecting properly" I'm > guessing it's getting caught in an infinite redirect. > > Is there anything else I can try? > location = / { index index.html; } location / { return 302 /; } From vaibhavmallya at gmail.com Thu Jan 10 06:16:37 2013 From: vaibhavmallya at gmail.com (Vaibhav Mallya) Date: Wed, 9 Jan 2013 22:16:37 -0800 Subject: Two questions/feature requests based on first-time setup experience Message-ID: Hi all, I just got nginx/uwsgi up and running on Ubuntu Server 12.04. It works great without a great deal of configuration (which is awesome), but I did have two questions/comments: 1) AFAIK, /etc/init.d/nginx start|reload|stop|etc is expected to be the "right" way to start the server on Ubuntu, but out of the box, it doesn't let you pass in arguments. In my particular case, I wanted to do something like /etc/init.d/nginx start -c /path/to/my/special/file.conf You could imagine wanting to do similar things for -p and -g. Ergo, it would be really nice if nginx supported argument-passing to its init.d script. 2) Specifying the default error log at runtime - nginx now logs to /var/log/nginx/error.log as the "default" logging location. It would be nice if a default could be specified instead of this, e.g. nginx --default-error-log ~/custom-error-location/my-error-log.log In my particular case, I'm trying to detach nginx as much as possible from the global context and isolate everything within my local context only (project folder). If there's already a way to do what I specified above, let me know, and let me know if you have any input or questions, etc. Thanks all! 
Vaibhav Mallya @mallyvai From yaoweibin at gmail.com Thu Jan 10 06:18:26 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Thu, 10 Jan 2013 14:18:26 +0800 Subject: nginx deployments survey In-Reply-To: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com> References: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com> Message-ID: Done, I'm a very eager user for the new features. 2013/1/9 Andrew Alexeev > Hello, > > We had a pretty packed 2012 in regards to nginx development. We've got a > lot of feedback too from a variety of users. > > However as we gear up for the new development cycle in 2013, we'd > appreciate a lot if you could spare 7-10 minutes of your time and take a > look at our online survey: > > http://nginx-survey.questionpro.com/ > > This survey will help us greatly to define goals and adjust priorities in > 2013 and to make nginx better. > > Many thanks in advance! > > -- > Andrew Alexeev > nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 330.gif Type: image/gif Size: 96 bytes Desc: not available URL: From oyzc at yahoo.cn Thu Jan 10 08:46:56 2013 From: oyzc at yahoo.cn (=?ISO-8859-1?B?b3l6Yw==?=) Date: Thu, 10 Jan 2013 16:46:56 +0800 Subject: log module Message-ID: Hello: when I use the log module to write the access_log, there are three Set-Cookie lines in the response headers, but the log module only writes the first Set-Cookie of the response headers into the log. 
here is my log_format: log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" $upstream_addr ' '$http_cookie $request_time set_cookie=$sent_http_set_cookie'; -------------- next part -------------- An HTML attachment was scrubbed... URL: From delta.yeh at gmail.com Thu Jan 10 08:58:40 2013 From: delta.yeh at gmail.com (Delta Yeh) Date: Thu, 10 Jan 2013 16:58:40 +0800 Subject: nginx deployments survey In-Reply-To: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com> References: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com> Message-ID: Done! Thanks for the excellent work in the past 2012! 2013/1/9 Andrew Alexeev : > Hello, > > We had a pretty packed 2012 in regards to nginx development. We've got a lot of feedback too from a variety of users. > > However as we gear up for the new development cycle in 2013, we'd appreciate a lot if you could spare 7-10 minutes of your time and take a look at our online survey: > > http://nginx-survey.questionpro.com/ > > This survey will help us greatly to define goals and adjust priorities in 2013 and to make nginx better. > > Many thanks in advance! > > -- > Andrew Alexeev > nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From adrianhayter at gmail.com Thu Jan 10 11:20:58 2013 From: adrianhayter at gmail.com (Adrian Hayter) Date: Thu, 10 Jan 2013 11:20:58 +0000 Subject: Weird SSL Issue Message-ID: I use nginx to host multiple websites, and one of them has a valid SSL certificate. I've noticed recently (from early November 2012 according to Google Webmaster Tools), that if I make an SSL connection to one of the sites which does not have a valid SSL cert, I get the content of the site that does. 
That is, if example.com has the SSL cert and I host example2.com without one, if I go to https://example2.com I will get the homepage for example.com. This is despite the fact that the configuration file for example2.com doesn't have anything concerning SSL in it (not even listening on port 443), and the configuration file for example.com doesn't have anything concerning example2.com. If configuration files are needed, I can provide them. However, this was definitely not an issue before November. I suspect it started happening after I upgraded to the latest stable release of nginx. Any help is appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Thu Jan 10 11:37:16 2013 From: edho at myconan.net (Edho Arief) Date: Thu, 10 Jan 2013 18:37:16 +0700 Subject: Weird SSL Issue In-Reply-To: References: Message-ID: On Thu, Jan 10, 2013 at 6:20 PM, Adrian Hayter wrote: > I use nginx to host multiple websites, and one of them has a valid SSL > certificate. I've noticed recently (from early November 2012 according to > Google Webmaster Tools), that if I make an SSL connection to one of the > sites which does not have a valid SSL cert, I get the content of the site > that does. > > That is, is example.com has the SSL cert, and I host example2.com without, > if I go to https://example2.com I will get the homepage for example.com. > > This is despite the fact that the configuration file for example2.com > doesn't have anything concerning SSL in it (not even listening on port 443), > and the configuration file for example.com doesn't have anything concerning > example2.com. > Because there's something listening on port 443. When there's no matching server_name but there's something listening on that port, that block will handle the request. If you have a dedicated IP for the SSL host, set the IP. Otherwise, just create a default fallback server block for SSL and handle the redirect from there. 
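Such a fallback could be sketched roughly like this (the certificate paths and the redirect target are placeholders, not from this thread; a self-signed certificate is enough, since this block exists only to catch HTTPS requests whose Host matches no other server):

```nginx
# Hypothetical catch-all for HTTPS requests that match no other server block.
# The certificate paths below are placeholders.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/ssl/fallback.pem;
    ssl_certificate_key /etc/ssl/fallback.key;
    # Send stray HTTPS requests back to the plain-HTTP site.
    return 301 http://$host$request_uri;
}
```

Clients will still see a certificate warning for the mismatched hostname (unavoidable without a certificate per site), but they will no longer be served example.com's content.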
From adrianhayter at gmail.com Thu Jan 10 12:11:48 2013 From: adrianhayter at gmail.com (Adrian Hayter) Date: Thu, 10 Jan 2013 12:11:48 +0000 Subject: Weird SSL Issue In-Reply-To: References: Message-ID: Ok, so how do I prevent that? I only want the content of example.com to be sent when example.com is given as the host in the HTTP request. Can you give examples? -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Thu Jan 10 13:04:17 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Thu, 10 Jan 2013 14:04:17 +0100 Subject: try_files, POST, and redirecting requests to Passenger In-Reply-To: <0d6f6d0016a62c3e47de3e603ff9d0de@ruby-forum.com> References: <2379729bbfc0e374cb9abe0d4e3dc4a2@ruby-forum.com> <876236bvfo.wl%appa@perusio.net> <6272766608b208a60a621ee2e3ed25fc@ruby-forum.com> <87y5g29x51.wl%appa@perusio.net> <0d6f6d0016a62c3e47de3e603ff9d0de@ruby-forum.com> Message-ID: <87vcb59oku.wl%appa@perusio.net> On 9 Jan 2013 20h39 CET, lists at ruby-forum.com wrote: > Sorry if I mispoke then, what I want is the exact opposte because > POST should never hit a cache file. > > We ended up going a slightly different route in that we hack the > cache file location according to the request type (we have > $cache_host because there's some other processing we do, not > relevant here): > > # Set cache_path to a non-existant directory so try_files fails if > # we > cannot > # serve the request from the cache. > set $cache_path "no-cache"; > set $cache_host $host; > > if ($request_method ~* ^(GET|HEAD)$) { > set $cache_path "cache"; > } > > try_files > /$cache_path/$cache_host/$uri > /$cache_path/$cache_host/$uri.html > /$cache_path/$cache_host/$uri/index.html > /maintenance.html > @passenger; > > This way there's no possible way a valid file is found when > POST-ing. This was the simplest solution we could come up with at > this time. > > Thanks for everyone's help. It's even simpler then. 
No need for map. location / { error_page 418 = @post; if ($request_method = POST) { return 418; } try_files /cache/$domain/$uri /cache/$domain/$uri.html /cache/$domain/$uri/index.html /maintenance.html @passenger; } location @post { try_files /maintenance.html @passenger; } location @passenger { passenger_enabled on; } --- appa From mdounin at mdounin.ru Thu Jan 10 13:37:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Jan 2013 17:37:02 +0400 Subject: nginx-1.3.11 Message-ID: <20130110133701.GJ80623@mdounin.ru> Changes with nginx 1.3.11 10 Jan 2013 *) Bugfix: a segmentation fault might occur if logging was used; the bug had appeared in 1.3.10. *) Bugfix: the "proxy_pass" directive did not work with IP addresses without port specified; the bug had appeared in 1.3.10. *) Bugfix: a segmentation fault occurred on start or during reconfiguration if the "keepalive" directive was specified more than once in a single upstream block. *) Bugfix: parameter "default" of the "geo" directive did not set default value for IPv6 addresses. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Thu Jan 10 14:29:14 2013 From: nginx-forum at nginx.us (daveyfx) Date: Thu, 10 Jan 2013 09:29:14 -0500 Subject: Remove URI string? In-Reply-To: References: Message-ID: Edho Arief Wrote: ------------------------------------------------------- > On Thu, Jan 10, 2013 at 11:35 AM, daveyfx > wrote: > > I gave it a try and it did not work out. My access log repeated the > > following entry ~ 20 times. > > > > [09/Jan/2013:23:31:47 -0500] "GET / HTTP/1.1" 302 160 "-" > "Mozilla/5.0 > > (Windows NT 6.1; WOW64; rv:17.0) Gecko/20100101 Firefox/17.0" > > > > Firefox kindly informed me that "The page isn't redirecting > properly" I'm > > guessing it's getting caught in an infinite redirect. > > > > Is there anything else I can try? > > > > location = / { > index index.html; > } > location / { > return 302 /; > } > Thank you for the suggestion Edho. 
Unfortunately this still gave me a redirect loop. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234822,234906#msg-234906 From edho at myconan.net Thu Jan 10 14:42:39 2013 From: edho at myconan.net (Edho Arief) Date: Thu, 10 Jan 2013 21:42:39 +0700 Subject: Remove URI string? In-Reply-To: References: Message-ID: On Thu, Jan 10, 2013 at 9:29 PM, daveyfx wrote: > Edho Arief Wrote: > ------------------------------------------------------- >> On Thu, Jan 10, 2013 at 11:35 AM, daveyfx >> wrote: >> > I gave it a try and it did not work out. My access log repeated the >> > following entry ~ 20 times. >> > >> > [09/Jan/2013:23:31:47 -0500] "GET / HTTP/1.1" 302 160 "-" >> "Mozilla/5.0 >> > (Windows NT 6.1; WOW64; rv:17.0) Gecko/20100101 Firefox/17.0" >> > >> > Firefox kindly informed me that "The page isn't redirecting >> properly" I'm >> > guessing it's getting caught in an infinite redirect. >> > >> > Is there anything else I can try? >> > >> >> location = / { >> index index.html; >> } >> location / { >> return 302 /; >> } >> > > Thank you for the suggestion Edho. Unfortunately this still gave me a > redirect loop. > clear your browser's history. Actually, don't use browser for testing redirect. Use curl. And make sure you've removed all other rewrites. From max at blubolt.com Thu Jan 10 16:01:11 2013 From: max at blubolt.com (Maxwell Lamb) Date: Thu, 10 Jan 2013 16:01:11 +0000 Subject: ngx_slab_alloc() failed: no memory (not push, not key space) Message-ID: Hi Folks, As ever, thanks for the mindblowingly great webserver. Unfortunately, we've run into an odd little issue. We've recently started auto-scaling at AWS, and we add and remove upstream hosts from the config via a dinky shell script which checks for hosts in the autoscaling group via the AWS API, and just adds them to/removes them from the config appropriately. 
After modifying our upstream conf, we do service nginx configtest which if successful is followed by service nginx reload so far, so good. Our config is pretty straightforward, and looks a bit like upstream apache_v5 { server 10.0.1.81:80; server 10.0.1.78:80; server 10.0.1.218:80; server 10.0.1.96:80; server 10.0.1.71:80; server 10.0.1.97:80; server 10.0.1.237:80; server 10.0.1.224:80; server 10.0.1.66:80; server 10.0.1.51:80; server 10.0.1.21:80; fair; } This all works beautifully for a while, but after a number of reloads and config modifications (no particular number we've been able to establish), each reload results in ngx_slab_alloc() failed: no memory and no reload of the config, preventing us from adding or removing further upstream hosts without completely restarting nginx. There are no errors preceding this, and on each subsequent reload, it reports the same error. We thought this may have been related to proxy buffers, so we decreased both their size and number, but this has had no impact. Additionally, while there's no particular pattern in terms of an absolute number of reloads that we've been able to determine, the two nginx entry servers we're currently using both exhibit the same behaviour at the same time. We're running 1.0.14. We haven't tried with a more up-to-date version just yet, in the hope that this may be an obscure known issue. Any help/insight/suggestions would be much appreciated! Thanks, Max From francis at daoine.org Thu Jan 10 16:13:20 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 10 Jan 2013 16:13:20 +0000 Subject: Remove URI string? In-Reply-To: References: <2d3a2bebb69b2820ebaadd2b49eb72f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130110161320.GD4332@craic.sysops.org> On Thu, Jan 10, 2013 at 11:37:44AM +0700, Edho Arief wrote: > On Thu, Jan 10, 2013 at 11:35 AM, daveyfx wrote: Hi there, > > I gave it a try and it did not work out. My access log repeated the > > following entry ~ 20 times. 
As mentioned: curl -i http://server/something curl -i http://server/ are much friendlier for testing with. You'll see exactly what the server sends back. > location = / { > index index.html; > } > location / { > return 302 /; > } With that configuration, a request for / will lead to an internal rewrite to /index.html, which will then hit the second location and do an external redirect again. So either add a third "location = /index.html" to handle that, or avoid the internal rewrite by doing something like try_files /index.html =404 in the "location = /" block. f -- Francis Daly francis at daoine.org From hobbsjb at yahoo.com Thu Jan 10 19:59:06 2013 From: hobbsjb at yahoo.com (JB Hobbs) Date: Thu, 10 Jan 2013 11:59:06 -0800 (PST) Subject: Request time of 60s when denying SSL requests? Message-ID: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> I purposely use the following rule to deny both http and https requests made to the root or our nginx server: location = / { access_log /logs/nginx/forbidden.log main; deny all; } If you enter http://whatever.domaina234.com into a browser then nginx immediately returns the 403 page to the browser, as expected. This shows up in the log as this: "[10/Jan/2013:12:57:30 -0500]" "-" "400" "0" "80" "-" "-" "0.000" "-" "-" "-" where 0.000 is the $request_time. However, if you make the request using https, like this https://whatever.domaina234.com then nginx immediately displays a 408 page in the browser (why this instead of 403?). And the most troubling part is that nothing shows up in the logs until about 60 seconds later, and then shows like this: "[10/Jan/2013:12:59:20 -0500]" "-" "408" "0" "443" "-" "-" "59.999" "-" "-" "-" Sometimes the request_time is 59.999, sometimes it is 60.000. But it is always 60 seconds. This is troubling because it seems nginx is in a wait state of some sort for 60 seconds before finishing up with the request. I am concerned this is tying up resources of some kind. 
I am using nginx to front-end Tomcat, but my understanding is that with the "deny all" the processing should end there? And even if it was passing this on to Jetty, it would get a valid response back within a few ms. I am certain the above "location" rule is being triggered, because if I change "deny all" to "return 507;" (just to pick an arbitrary number) then the browser shows "507" as the error code. This seems odd to me. I don't know why nginx is following the rule I set up to deny the request, yet still seems to be "in process" in some way to account for the 60 seconds. And this only happens for HTTPS. So it looks like nginx handles it from the client perspective immediately, but then expects something else to happen during that 60 seconds. I don't think nginx is really doing any work on this during the 60 seconds. It doesn't show in top and the cpu is at 0% (doing this on a testing box). I tried forcing keep alive off in these situations but the results is still the 60 second "request time". Nginx is being used to front end a web service and in no case should someone make a request to the root like this. Therefore my goal is to immediately terminate any such request and minimize the amount of cpu resources being used to service such requests. Any ideas? Thank you so much in advance for any help you can provide! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 10 23:25:39 2013 From: nginx-forum at nginx.us (daveyfx) Date: Thu, 10 Jan 2013 18:25:39 -0500 Subject: Remove URI string? In-Reply-To: <20130110161320.GD4332@craic.sysops.org> References: <20130110161320.GD4332@craic.sysops.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Thu, Jan 10, 2013 at 11:37:44AM +0700, Edho Arief wrote: > > On Thu, Jan 10, 2013 at 11:35 AM, daveyfx > wrote: > > Hi there, > > > > I gave it a try and it did not work out. 
My access log repeated > the > > > following entry ~ 20 times. > > As mentioned: > > curl -i http://server/something > curl -i http://server/ > > are much friendlier for testing with. You'll see exactly what the > server > sends back. > > > location = / { > > index index.html; > > } > > location / { > > return 302 /; > > } > > With that configuration, a request for / will lead to an internal > rewrite to /index.html, which will then hit the second location and do > an external redirect again. > > So either add a third "location = /index.html" to handle that, or > avoid > the internal rewrite by doing something like > > try_files /index.html =404 > > in the "location = /" block. > > f To all - Thank you for your help with this. Francis put the puzzle together and the following is working out great for me. server { server_name host1.domain.com host2.domain.com ... ... (so on and so forth) root /path/to/document/root; location = / { try_files /index.html = 404; } location / { return 302 /; } } > -- > Francis Daly francis at daoine.org Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234822,234914#msg-234914 From kworthington at gmail.com Thu Jan 10 23:39:59 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 10 Jan 2013 18:39:59 -0500 Subject: [nginx-announce] nginx-1.3.11 In-Reply-To: <20130110133712.GK80623@mdounin.ru> References: <20130110133712.GK80623@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.11 For Windows http://goo.gl/8oZ3E (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. 
Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Thu, Jan 10, 2013 at 8:37 AM, Maxim Dounin wrote: > *) Bugfix: a segmentation fault might occur if logging was used; the bug > had appeared in 1.3.10. > > *) Bugfix: the "proxy_pass" directive did not work with IP addresses > without port specified; the bug had appeared in 1.3.10. > > *) Bugfix: a segmentation fault occurred on start or during > reconfiguration if the "keepalive" directive was specified more than > once in a single upstream block. > > *) Bugfix: parameter "default" of the "geo" directive did not set > default value for IPv6 addresses. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmiller at amfes.com Thu Jan 10 23:40:54 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Thu, 10 Jan 2013 15:40:54 -0800 Subject: Nginx, PHP, Wordpress, VirtualBox Message-ID: Dunno if anyone's running anything similar. I recently shifted to Nginx from Cherokee - and in so doing I set up a virtual server using VirtualBox to run it in. My primary use is for serving a pair of Wordpress sites. This is not (currently) a high-traffic server - but I do want it to run well regardless. My current configuration for the virtual hardware is 1 CPU and 1G RAM. Nginx (obviously) is installed, as is php-fpm. Mysql is running on the host - both host & guest are Ubuntu. Generally, of that 1G I see half in-use, a quarter cached, and a quarter free. So my first reaction is I don't THINK I'm starving the VM for RAM - but maybe I'm missing something. I generally don't see anything actively running except for php during a request - which hits 25% usage. Any suggestions for modifying my virtual or nginx config? Or do I need to focus on Wordpress caching?
-- Daniel From steve at greengecko.co.nz Thu Jan 10 23:49:28 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 11 Jan 2013 12:49:28 +1300 Subject: Nginx, PHP, Wordpress, VirtualBox In-Reply-To: References: Message-ID: <1357861768.5359.27.camel@steve-new> On Thu, 2013-01-10 at 15:40 -0800, Daniel L. Miller wrote: > Dunno if anyone's running anything similar. I recently shifted to Nginx > from Cherokee - and in so doing I set up a virtual server using > VirtualBox to run it in. My primary use is for serving a pair of > Wordpress sites. > > This is not (currently) a high-traffic server - but I do want it to run > well regardless. My current configuration for the virtual hardware is 1 > CPU and 1G RAM. Nginx (obviously) is installed, as is php-fpm. Mysql > is running on the host - both host & guest are Ubuntu. > > Generally, of that 1G I see half in-use, a quarter cached, and a quarter > free. So my first reaction is I don't THINK I'm starving the VM for RAM > - but maybe I'm missing something. > > I generally don't see anything actively running except for php during > a request - which hits 25% usage. > > Any suggestions for modifying my virtual or nginx config? Or do I need > to focus on Wordpress caching? You don't say what the problem is, but if it's performance, look at: 1. Host database config 2. PHP config - memory use 3. Add an opcode cacher - APC seems to work best on php-fpm 4. Nginx config - compression, expiry headers, fpm resources. TBH I feel that WP caching options are for those without the ability to do the job properly - ie cannot tune their servers. ...but it all sounds ok to me TBH. I run pure nginx servers on KVM VPSes with 128MB - and they only use half of that. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From dmiller at amfes.com Fri Jan 11 00:46:05 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Thu, 10 Jan 2013 16:46:05 -0800 Subject: Wordpress Multisite Subdomain Message-ID: I have Wordpress currently set up with subdomain multisite. I have multiple domains that are aliases for a primary domain. With the rules I have in place, going to one of the secondary domains results in a hard redirect to my primary. Example: Enter http://secondarydomain.com in the browser. Get redirected to http://primarydomain.com. Is this the only way I can have it work? My preference would be to have the server name remain the "secondary" name as entered - so it appears to be multiple discrete sites. -- Daniel From dmiller at amfes.com Fri Jan 11 01:04:18 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Thu, 10 Jan 2013 17:04:18 -0800 Subject: Nginx, PHP, Wordpress, VirtualBox In-Reply-To: <1357861768.5359.27.camel@steve-new> References: <50EF5186.9080604@amfes.com> <1357861768.5359.27.camel@steve-new> Message-ID: On 1/10/2013 3:49 PM, Steve Holdoway wrote: > On Thu, 2013-01-10 at 15:40 -0800, Daniel L. Miller wrote: >> Dunno if anyone's running anything similar. I recently shifted to Nginx >> from Cherokee - and in so doing I set up a virtual server using >> VirtualBox to run it in. My primary use is for serving a pair of >> Wordpress sites. >> >> This is not (currently) a high-traffic server - but I do want it to run >> well regardless. My current configuration for the virtual hardware is 1 >> CPU and 1G RAM. Nginx (obviously) is installed, as is php-fpm. Mysql >> is running on the host - both host & guest are Ubuntu. >> >> Generally, of that 1G I see half in-use, a quarter cached, and a quarter >> free. So my first reaction is I don't THINK I'm starving the VM for RAM >> - but maybe I'm missing something.
>> >> I generally don't see anything actively running except for php during >> a request - which hits 25% usage. >> >> Any suggestions for modifying my virtual or nginx config? Or do I need >> to focus on Wordpress caching? > You don't say what the problem is, but if it's performance, look at: > > 1. Host database config > 2. PHP config - memory use > 3. Add an opcode cacher - APC seems to work best on php-fpm > 4. Nginx config - compression, expiry headers, fpm resources. > > TBH I feel that WP caching options are for those without the ability to > do the job properly - ie cannot tune their servers. > > ...but it all sounds ok to me TBH. I run pure nginx servers on KVM VPSes > with 128MB - and they only use half of that. > LOL - you're right! I didn't mention what my problem might be! Yes, it was a performance concern. My site's rather small - basically a corporate vanity site - and I haven't been slashdotted yet...so I don't think it's a huge issue now... It just "felt" like it was running slow. I did just switch from xcache to apc - and also adjusted the apc settings to where they might do some good. I also just realized that many of my caching options get invalidated when I access the site as a logged-in admin. That all by itself makes a HUGE difference! -- Daniel From zjay1987 at gmail.com Fri Jan 11 07:21:20 2013 From: zjay1987 at gmail.com (li zJay) Date: Fri, 11 Jan 2013 15:21:20 +0800 Subject: Is it possible that nginx will not buffer the client body? Message-ID: Hello! Is it possible that nginx will not buffer the client body before handing the request to the upstream? We want to use nginx as a reverse proxy to upload very big files to the upstream, but the default behavior of nginx is to save the whole request to the local disk first before handing it to the upstream, which makes it impossible for the upstream to process the file on the fly while it is uploading, resulting in much higher request latency and server-side resource consumption. Thanks!
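Mainline nginx later gained this capability natively: from version 1.7.11 on, the proxy_request_buffering directive disables request-body buffering so uploads stream to the upstream as they arrive. A minimal sketch, with illustrative location and upstream names (not from the poster's setup):

```nginx
# Sketch: stream request bodies straight to the upstream instead of
# spooling them to disk first. Requires nginx 1.7.11 or later, where
# proxy_request_buffering was introduced.
location /upload {
    proxy_request_buffering off;   # hand the body to the upstream as it arrives
    proxy_http_version 1.1;        # allows chunked transfer to the upstream
    client_max_body_size 0;        # don't cap the upload size
    proxy_pass http://backend;     # "backend" is a placeholder upstream
}
```

With buffering off, the upstream must be able to consume slow uploads itself, since nginx no longer shields it from slow clients.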
-------------- next part -------------- An HTML attachment was scrubbed... URL: From peter at donka.hu Fri Jan 11 08:07:34 2013 From: peter at donka.hu (peter at donka.hu) Date: Fri, 11 Jan 2013 09:07:34 +0100 Subject: Multiple site with PHP-FPM home directory permission Message-ID: <7c22dba875a323f47cc2568f15703564@mail1.vhost.hu> Hi Guys! I have an nginx server with multiple virtual hosted sites. Every site runs under a unique user via PHP-FPM. That all works fine: I see the user variable on the phpinfo page, and it shows the right username. However, I have a little problem. Here is an example, and then I'll describe the problem. In the /var/www directory I have all the site webroots, like: domain.tld domain1.tld etc. Every folder has the rights of the connected php-fpm user as owner and group, so the domain.tld folder's user and group are domain.tld, with 0755 permissions: only the owner can write; the group and everybody else can just read. I want to restrict this so that only the owner/group can enter the directory, so I need the 0750 flag. In that case the web site no longer loads: I see a 404 error, and a permission denied error in the log files. Then I realized I also need to give access to www-data, because this user tries to enter the main directory. So I added www-data to the domain.tld group, but same problem: I still get permission denied. If I set the permission back to 0755, so everybody can read/enter the directory, it works again. Is there any way to set permissions so that the web page works but the directory is only accessible by the owner, www-data, and root? Thx for the help!
Peter From jgehrcke at googlemail.com Fri Jan 11 09:04:25 2013 From: jgehrcke at googlemail.com (Jan-Philip Gehrcke) Date: Fri, 11 Jan 2013 10:04:25 +0100 Subject: Nginx, PHP, Wordpress, VirtualBox In-Reply-To: References: <50EF5186.9080604@amfes.com> <1357861768.5359.27.camel@steve-new> Message-ID: <50EFD599.7030903@googlemail.com> > On 1/10/2013 3:49 PM, Steve Holdoway wrote: >> You don't say what the problem is, but if it's performance, look at: >> >> 1. Host database config >> 2. PHP config - memory use >> 3. Add an opcode cacher - APC seems to work best on php-fpm >> 4. Nginx config - compression, expiry headers, fpm resources. >> >> TBH I feel that WP caching options are for those without the ability to >> to the job properly - ie cannot tune their servers. Not sure what you are referring to here. The premise is that caching for WordPress is a must: I am running a WordPress site with nginx, php-fpm, and APC on a low performance machine with two cores. Without caching, it is able to serve about 5 page requests per second. At this request rate, php-fpm constantly produces 100 % CPU load on both cores. The take-home message is that WordPress is bloated and requires its resources. Maybe this is still tunable in order to increase performance by a few or even 100 percent. But this is not worth the effort, because with enabled nginx fastcgi_cache the server easily answers thousands of requests per second, i.e. exhibits several orders of magnitude more performance. Some people are using a caching plugin for WordPress itself -- which is a good solution when using a shared hosting platform without the chance to change the web stack or change the web server configuration. Others implement caching below the web application level as I did. This is a cleaner and probably faster solution. In any case, caching for WordPress is a must. 
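The approach argued for above (caching below the application level with nginx's fastcgi_cache) can be sketched roughly as follows; the cache path, zone name, timings, and socket path are illustrative assumptions, not the poster's actual configuration:

```nginx
# Rough sketch of page caching with fastcgi_cache (all names illustrative).
http {
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m
                       inactive=60m max_size=256m;

    # Logged-in users must bypass the cache; the WordPress login cookie
    # name contains a site-specific hash, so match it with a regex.
    map $http_cookie $wp_skip_cache {
        default              0;
        ~wordpress_logged_in 1;
    }

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;  # adjust to your pool

            fastcgi_cache        wpcache;
            fastcgi_cache_key    "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid  200 301 10m;
            fastcgi_cache_bypass $wp_skip_cache;  # serve these uncached
            fastcgi_no_cache     $wp_skip_cache;  # and don't store them
        }
    }
}
```

Only GET and HEAD responses are cached by default, so POSTs (comments, admin actions) always reach PHP, and logged-in traffic is neither served from nor stored in the cache.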
Cheers, Jan-Philip From yaoweibin at gmail.com Fri Jan 11 09:17:18 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Fri, 11 Jan 2013 17:17:18 +0800 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: I know nginx team are working on it. You can wait for it. If you are eager for this feature, you could try my patch: https://github.com/taobao/tengine/pull/91. This patch has been running in our production servers. 2013/1/11 li zJay > Hello! > > is it possible that nginx will not buffer the client body before handle > the request to upstream? > > we want to use nginx as a reverse proxy to upload very very big file to > the upstream, but the default behavior of nginx is to save the whole > request to the local disk first before handle it to the upstream, which > make the upstream impossible to process the file on the fly when the file > is uploading, results in much high request latency and server-side resource > consumption. > > Thanks! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Fri Jan 11 09:27:56 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 11 Jan 2013 22:27:56 +1300 Subject: Multiple site with PHP-FPM home directory permission In-Reply-To: <7c22dba875a323f47cc2568f15703564@mail1.vhost.hu> References: <7c22dba875a323f47cc2568f15703564@mail1.vhost.hu> Message-ID: <50EFDB1C.7090906@greengecko.co.nz> On 11/01/13 21:07, peter at donka.hu wrote: > Hi Guys! > > I have an nginx server with multiple virtual hosted site. Every site > running with unique user permission using PHP-FPM. > Its all fine, i see the user variable in the phpinfo page and i see the > right username. > > However i have a little problem. 
> Here an example what is have then i write what is the problem. > > in the /var/www directory i have all site webroot like: > > domain.tld > domain1.tld > > etc.. > > every folder have the connected php-fpm user rights like owner and group > > so domain.tld folder user and group is domain.tld > and have 0755 permission, so only the owner can write group and everybody > else just read. > > I want to restrict this to that only thy owner/group can enter this > directory, so i need 0750 flag. > In that case the web site no longer loaded i see 404 error and in the log > files a permission denied error. > Then i realize i need to gain access to the www-data too, because this > user try to enter to the main directory. > So i add www-data to the domain.tld group, but same problem. I all can get > the permission denied. > If i set back the 0755 permission, so everybody can read/enter this > directory it will working again. > > Is there any way to set a permission that the web page working fine but > the directory only accessible by the owner and www-data and root? > > Thx for the help! > Peter >

chgrp -R www-data .
find . -type d | xargs chmod 2750

will provide, and future-proof, read access to the web server. I assume there is a dedicated php-fpm process for each site, running as the appropriate owner. From steve at greengecko.co.nz Fri Jan 11 09:31:40 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 11 Jan 2013 22:31:40 +1300 Subject: Nginx, PHP, Wordpress, VirtualBox In-Reply-To: <50EFD599.7030903@googlemail.com> References: <50EF5186.9080604@amfes.com> <1357861768.5359.27.camel@steve-new> <50EFD599.7030903@googlemail.com> Message-ID: <50EFDBFC.4020705@greengecko.co.nz> On 11/01/13 22:04, Jan-Philip Gehrcke wrote: >> On 1/10/2013 3:49 PM, Steve Holdoway wrote: >>> You don't say what the problem is, but if it's performance, look at: >>> >>> 1. Host database config >>> 2. PHP config - memory use >>> 3.
Add an opcode cacher - APC seems to work best on php-fpm >>> 4. Nginx config - compression, expiry headers, fpm resources. >>> >>> TBH I feel that WP caching options are for those without the ability to >>> do the job properly - ie cannot tune their servers. > > Not sure what you are referring to here. The premise is that caching > for WordPress is a must: > > I am running a WordPress site with nginx, php-fpm, and APC on a low > performance machine with two cores. Without caching, it is able to > serve about 5 page requests per second. At this request rate, php-fpm > constantly produces 100 % CPU load on both cores. The take-home > message is that WordPress is bloated and requires its resources. Maybe > this is still tunable in order to increase performance by a few or > even 100 percent. But this is not worth the effort, because with > enabled nginx fastcgi_cache the server easily answers thousands of > requests per second, i.e. exhibits several orders of magnitude more > performance. > > Some people are using a caching plugin for WordPress itself -- which > is a good solution when using a shared hosting platform without the > chance to change the web stack or change the web server configuration. > > Others implement caching below the web application level as I did. > This is a cleaner and probably faster solution. In any case, caching > for WordPress is a must. > > Cheers, > > Jan-Philip So you agree with me then... From andrejaenisch at googlemail.com Fri Jan 11 11:29:00 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Fri, 11 Jan 2013 12:29:00 +0100 Subject: Zero day security hole in Java plugin Message-ID: Hello, a friend of mine called my attention to the following link: http://malware.dontneedcoffee.com/2013/01/0-day-17u10-spotted-in-while-disable.html I'm new to the server world, so I'm not sure whether this is "just" a Java problem or also an nginx one, since the server in question is nginx 1.0.15?
However, it might be a good idea to spread the word of this security hole. Regards, Andre Jaenisch From kasperg at benjamin.dk Fri Jan 11 11:34:28 2013 From: kasperg at benjamin.dk (Kasper Grubbe) Date: Fri, 11 Jan 2013 12:34:28 +0100 Subject: Zero day security hole in Java plugin In-Reply-To: References: Message-ID: It is in the Java plugin running in the browser, nothing to do with NGINX. The Java zeroday is webserver agnostic, which means it is compatible with Apache, NGINX, Lighttpd etc. It requires a webpage to show an applet, and everything goes to hell afterwards. Disable your Java plugin in your browser, and never activate it again. 2013/1/11 Andre Jaenisch > Hello, > > a friend of mine called my attention to the following link: > > http://malware.dontneedcoffee.com/2013/01/0-day-17u10-spotted-in-while-disable.html > > I'm new to the server world, so I'm not sure whether this is "just" > a Java problem or also an nginx one, since the server in question is > nginx 1.0.15? > However, it might be a good idea to spread the word of this security hole. > > Regards, > > > Andre Jaenisch > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 11 13:13:37 2013 From: nginx-forum at nginx.us (refercon) Date: Fri, 11 Jan 2013 08:13:37 -0500 Subject: who can help me ? Message-ID: I am using nginx as a proxy server... Sometimes it is OK, but sometimes I get: [error] 18528#0: *50881 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 222.178.73.129, server: oa.owenschool.com, request: "GET / HTTP/1.1", upstream: "http://10.2.4.33:80/", host: "oa.owens.com.cn" Why? Sometimes it works, and sometimes it errors....
The proxy config file: server { listen 80; server_name oa.owens.com.cn; location / { root html; index index.html index.htm; proxy_pass http://10.2.4.11; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Accept-Encoding ""; proxy_connect_timeout 60; proxy_send_timeout 60; proxy_read_timeout 60; proxy_buffer_size 512k; proxy_buffers 8 512k; proxy_busy_buffers_size 512k; } location = /50x.html { root html; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234954,234954#msg-234954 From mdounin at mdounin.ru Fri Jan 11 13:20:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Jan 2013 17:20:26 +0400 Subject: Request time of 60s when denying SSL requests? In-Reply-To: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> References: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> Message-ID: <20130111132026.GC80623@mdounin.ru> Hello! On Thu, Jan 10, 2013 at 11:59:06AM -0800, JB Hobbs wrote: > I purposely use the following rule to deny both http and https requests made to the root or our nginx server: > location = / { access_log /logs/nginx/forbidden.log main; deny all; } > If you enter http://whatever.domaina234.com into a browser then nginx immediately returns the 403 page to the browser, as expected. This shows up in the log as this: > "[10/Jan/2013:12:57:30 -0500]" "-" "400" "0" "80" "-" "-" "0.000" "-" "-" "-" > where 0.000 is the $request_time. > However, if you make the request using https, like this https://whatever.domaina234.com then nginx immediately displays a 408 page in the browser (why this instead of 403?). And the most troubling part is that nothing shows up in the logs until about 60 seconds later, and then shows like this: > "[10/Jan/2013:12:59:20 -0500]" "-" "408" "0" "443" "-" "-" "59.999" "-" "-" "-" > Sometimes the request_time is 59.999, sometimes it is 60.000. But it is always 60 seconds. 
> This is troubling because it seems nginx is in a wait state of some sort for 60 seconds before finishing up with the request. I am concerned this is tying up resources of some kind. I am using nginx to front-end Tomcat, but my understanding is that with the "deny all" the processing should end there? And even if it was passing this on to Jetty, it would get a valid response back within a few ms. > I am certain the above "location" rule is being triggered, because if I change "deny all" to "return 507;" (just to pick an arbitrary number) then the browser shows "507" as the error code. > This seems odd to me. I don't know why nginx is following the rule I set up to deny the request, yet still seems to be "in process" in some way to account for the 60 seconds. And this only happens for HTTPS. So it looks like nginx handles it from the client perspective immediately, but then expects something else to happen during that 60 seconds. I don't think nginx is really doing any work on this during the 60 seconds. It doesn't show in top and the cpu is at 0% (doing this on a testing box). I tried forcing keep alive off in these situations but the results is still the 60 second "request time". Nginx is being used to front end a web service and in no case should someone make a request to the root like this. Therefore my goal is to immediately terminate any such request and minimize the amount of cpu resources being used to service such requests. > Any ideas? Thank you so much in advance for any help you can provide! I would suggest that what you see in logs is actually empty connection (without any request sent) opened by your browser in addition to one which actually did a request. These are expected to show up as 400 if client closes connection, but 408 if it's closed by nginx, and the exact code might depend on browser behaviour. The odd thing is that 408 page is displayed in the browser. Could you please double check and provide full sample configuration to reproduce? 
I've just checked with the following config:

    daemon off;
    error_log /dev/stderr notice;
    events {
    }
    http {
        server {
            listen 8443 ssl;
            ssl_certificate test-ssl.crt;
            ssl_certificate_key test-ssl-nopasswd.key;
            access_log /dev/stderr combined;
            location / {
                deny all;
            }
        }
    }

and it returns 403 Forbidden as expected. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Fri Jan 11 13:23:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Jan 2013 17:23:20 +0400 Subject: who can help me ? In-Reply-To: References: Message-ID: <20130111132320.GD80623@mdounin.ru> Hello! On Fri, Jan 11, 2013 at 08:13:37AM -0500, refercon wrote: > The nginx make proxy server... > > Some time was ok,some time : > [error] 18528#0: *50881 recv() failed (104: Connection reset by peer) while > reading response header from upstream, client: 222.178.73.129, server: > oa.owenschool.com, request: "GET / HTTP/1.1", upstream: > "http://10.2.4.33:80/", host: "oa.owens.com.cn" > > Why? But some time is ok, some time is error.... You have to check what goes on on your backend server to find out why it resets the connection. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Fri Jan 11 14:48:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Jan 2013 18:48:12 +0400 Subject: OCSP_basic_verify() failed In-Reply-To: <88158ead6b8940bbef2dd00b430b7927.NginxMailingListEnglish@forum.nginx.org> References: <20130109094640.GA80623@mdounin.ru> <88158ead6b8940bbef2dd00b430b7927.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130111144812.GG80623@mdounin.ru> Hello! On Wed, Jan 09, 2013 at 05:02:11AM -0500, philipp wrote: > I have created a trust file both ways: > > cat www.hellmi.de.pem > www.hellmi.de.trust > cat subca.pem >> www.hellmi.de.trust > cat ca.pem >> www.hellmi.de.trust > > or > > cat subca.pem > www.hellmi.de.trust > cat ca.pem >> www.hellmi.de.trust > > and configured it as ssl_trusted_certificate, this did not help either.
How > do I create a trusted certificate for a StartCom CA? > > This chain looks like this: > > StartCom Certification Authority (ca.pem) > StartCom Class 1 Primary Intermediate Server CA (subca.pem) > www.hellmi.de (www.hellmi.de.pem) Something like

    cat sub.class1.server.ca.pem ca.pem > trusted.pem

should be enough (files named to match the ones available from StartCom). I've just tested with a free class 1 cert from StartCom, and it works fine. If you still see errors with ssl_trusted_certificate configured - you may want to provide more details. -- Maxim Dounin http://nginx.com/support.html From hobbsjb at yahoo.com Fri Jan 11 15:37:04 2013 From: hobbsjb at yahoo.com (JB Hobbs) Date: Fri, 11 Jan 2013 07:37:04 -0800 (PST) Subject: Request time of 60s when denying SSL requests? In-Reply-To: <20130111132026.GC80623@mdounin.ru> References: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> <20130111132026.GC80623@mdounin.ru> Message-ID: <1357918624.76621.YahooMailNeo@web142403.mail.bf1.yahoo.com> Thank you Maxim. I have a few follow up points and questions please: 1. I should have mentioned that I was doing this on Nginx 0.6.x. I just tried the same test on Nginx 1.2.6. With 1.2.6 it does return the 403 to the browser as expected. The following applies to my testing on Nginx 1.2.6: 2. I understand (and verified by closing the browser sooner) from your response that the browser (Chrome in this case) is keeping the connection open with Nginx for 60 seconds when it is HTTPS (and about 10 seconds with http). However, if a browser makes a request to the root, I want to tell Nginx to force the connection closed immediately after returning the 403. This is a high volume web service and I do not want browsers keeping requests open. Is there some sort of directive or option I can set within my location=/ block to tell nginx to drop the connection immediately upon returning the 403? This is highly desirable so I hope there is a way to do it. 3.
On a related note - as I mentioned nginx is serving as a front-end to Jetty. The way our web service works, a browser should only make a single request for one html page and never make another request until 24 hours later, when the cache period expires. With this in mind, even for the legitimate requests, I am wondering if it would be more efficient for the server if I turned off keep-alive because there will just be this single request. What do you think? Are there any other optimizations I can make to this or other settings to use considering nginx will be serving just one single request per 24 hours per unique browser? 4. I have an access_log directive that points to main.log outside of the "location" blocks so it serves as the default location for where Nginx should log requests to. Inside my "location=/" block I have another access_log directive that points to forbidden.log. When the above http and https requests are made to "/", I do get a log entry in the forbidden.log as desired. However I also get this log entry in my main.log file as well. What do I need to do so that nginx only logs this to the forbidden.log, without (hopefully) removing the main.log entry defined outside of the location blocks (since I use this as a default from many other location blocks). Thank you so much for the excellent support!! :) ============================================ I would suggest that what you see in logs is actually empty connection (without any request sent) opened by your browser in addition to one which actually did a request. These are expected to show up as 400 if client closes connection, but 408 if it's closed by nginx, and the exact code might depend on browser behaviour. The odd thing is that 408 page is displayed in the browser. Could you please double check and provide full sample configuration to reproduce? I've just checked with the following config:

    daemon off;
    error_log /dev/stderr notice;
    events {
    }
    http {
        server {
            listen 8443 ssl;
            ssl_certificate test-ssl.crt;
            ssl_certificate_key test-ssl-nopasswd.key;
            access_log /dev/stderr combined;
            location / {
                deny all;
            }
        }
    }

and it returns 403 Forbidden as expected. -- Maxim Dounin http://nginx.com/support.html _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmiller at amfes.com Fri Jan 11 17:49:34 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 11 Jan 2013 09:49:34 -0800 Subject: Nginx, PHP, Wordpress, VirtualBox In-Reply-To: <1357861768.5359.27.camel@steve-new> References: <50EF5186.9080604@amfes.com> <1357861768.5359.27.camel@steve-new> Message-ID: On 1/10/2013 3:49 PM, Steve Holdoway wrote: > You don't say what the problem is, but if it's performance, look at: > > 1. Host database config > 2. PHP config - memory use > 3. Add an opcode cacher - APC seems to work best on php-fpm > 4. Nginx config - compression, expiry headers, fpm resources. > > TBH I feel that WP caching options are for those without the ability to > do the job properly - ie cannot tune their servers. > > ...but it all sounds ok to me TBH. I run pure nginx servers on KVM VPSes > with 128MB - and they only use half of that. > Do you have php or other services running on other VM's than the nginx servers with such minimum settings? -- Daniel From mdounin at mdounin.ru Fri Jan 11 18:17:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Jan 2013 22:17:52 +0400 Subject: Request time of 60s when denying SSL requests?
In-Reply-To: <1357918624.76621.YahooMailNeo@web142403.mail.bf1.yahoo.com> References: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> <20130111132026.GC80623@mdounin.ru> <1357918624.76621.YahooMailNeo@web142403.mail.bf1.yahoo.com> Message-ID: <20130111181752.GI80623@mdounin.ru> Hello! On Fri, Jan 11, 2013 at 07:37:04AM -0800, JB Hobbs wrote: > Thank you Maxim. I have a few follow up points and questions > please: > > 1. I should have mentioned that I was doing this on Nginx 0.6.x. > I just tried the same test on Nginx 1.2.6. With 1.2.6 it does > return the 403 to the browser as expected. Well, ancient versions may do strange things. :) > The following applies to my testing on Nginx 1.2.6: > > 2. I understand (and verified by closing the browser sooner) from > your response that the browser (Chrome in this case) is keeping > the connection open with Nginx for 60 seconds when it is HTTPS > (and about 10 seconds with http). However, if a browser makes a > request to the root, I want to tell Nginx to force the > connection closed immediately after returning the 403. This is a > high volume web service and I do not want browsers keeping > requests open. > > Is there some sort of directive or option I can set within my > location=/ block to tell nginx to drop the connection > immediately upon returning the 403? This is highly desirable so > I hope there is a way to do it. You may disable keepalive by configuring keepalive_timeout to 0, see http://nginx.org/r/keepalive_timeout. It may be configured on a per-location basis, and hence you may configure nginx to not use keepalive after a 403 with something like:

    error_page 403 /403.html;
    location = /403.html {
        keepalive_timeout 0;
    }

Note well: 400 and 408 in your previous message aren't after 403. They are logged for connections without any single request got by nginx, and keepalive_timeout does not apply here.
To limit the time nginx will wait for a request you may tune client_header_timeout, see http://nginx.org/r/client_header_timeout. > 3. On a related note - as I mentioned nginx is serving as a > front-end to Jetty. The way our web service works, a browser > should only make a single request for one html page and never > make another request until 24 hours later, when the cache period > expires. With this in mind, even for the legitimate requests, I > am wondering if it would be more efficient for the server if I > turned off keep-alive because there will just be this single > request. What do you think? Are there any other optimizations I > can make to this or other settings to use considering nginx will > be serving just one single request per 24 hours per unique > browser? I think you are right, disabling keepalive completely may be beneficial in such case. (Well, nginx itself doesn't care much, but it should be less painful for your OS.) > 4. I have an access_log directive that points to main.log outside > of the "location" blocks so it serves as the default location > for where Nginx should log requests to. > Inside my "location=/" > block I have another access_log directive that points to > forbidden.log. When the above http and https requests are made > to "/", I do get a log entry in the forbidden.log as desired. > However I also get this log entry in my main.log file as well. > What do I need to do so that nginx only logs this to the > forbidden.log, without (hopefully) removing the main.log entry > defined outside of the location blocks (since I use this as a > default from many other location blocks). I believe you misunderstood what you actually get. Defining access_log inside the location overrides all access_log's defined on previous levels, so you'll only get requests logged to forbidden.log. What you see in your main.log is other connections opened by the browser.
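A sketch pulling together the suggestions in the reply above: the deny-all root location with its own access_log, and a 403 error page location with keepalive disabled. The file paths and the "internal" directive are illustrative additions, not taken from the original messages:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     example.crt;    # illustrative paths
    ssl_certificate_key example.key;

    access_log /logs/nginx/main.log main;   # server-wide default

    error_page 403 /403.html;

    location = / {
        # a per-location access_log fully overrides the server-level one,
        # so these requests appear only in forbidden.log
        access_log /logs/nginx/forbidden.log main;
        deny all;
    }

    location = /403.html {
        internal;               # only reachable via error_page
        keepalive_timeout 0;    # close the connection once the 403 is sent
    }
}
```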
-- Maxim Dounin http://nginx.com/support.html From steve at greengecko.co.nz Fri Jan 11 18:43:32 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sat, 12 Jan 2013 07:43:32 +1300 Subject: Nginx, PHP, Wordpress, VirtualBox In-Reply-To: References: <50EF5186.9080604@amfes.com> <1357861768.5359.27.camel@steve-new> Message-ID: On 12/01/2013, at 6:49 AM, "Daniel L. Miller" wrote: > On 1/10/2013 3:49 PM, Steve Holdoway wrote: >> You don't say what the problem is, but if it's performance, look at: >> >> 1. Host database config >> 2. PHP config - memory use >> 3. Add an opcode cacher - APC seems to work best on php-fpm >> 4. Nginx config - compression, expiry headers, fpm resources. >> >> TBH I feel that WP caching options are for those without the ability to >> to the job properly - ie cannot tune their servers. >> >> ...but it all sounds ok to me TBH. I run pure nginx servers on KVM VPSes >> with 128MB - and they only use half of that. >> > Do you have php or other services running on other VM's than the ngninx servers with such minimum settings? > -- > Daniel > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Yes, they were running on the physical server. Steve From hobbsjb at yahoo.com Fri Jan 11 19:18:27 2013 From: hobbsjb at yahoo.com (JB Hobbs) Date: Fri, 11 Jan 2013 11:18:27 -0800 (PST) Subject: Request time of 60s when denying SSL requests? In-Reply-To: <20130111181752.GI80623@mdounin.ru> References: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> <20130111132026.GC80623@mdounin.ru> <1357918624.76621.YahooMailNeo@web142403.mail.bf1.yahoo.com> <20130111181752.GI80623@mdounin.ru> Message-ID: <1357931907.11376.YahooMailNeo@web142403.mail.bf1.yahoo.com> > You may disable keepalive by configuring keepalive_timeout to 0,? > see http://nginx.org/r/keepalive_timeout. > ? ?error_page 403 /403.html; > ? ?location = /403.html { > ? ? 
keepalive_timeout 0; } Would that approach be any different than me just putting "keepalive_timeout 0;" directly into my "location = /" block? Or would that not work because the 403 page itself acts like a new request that then needs to have the keep-alive suppressed there? > Note well: 400 and 408 in your previous message aren't after 403. > They are logged for connections without any single request got by > nginx, and keepalive_timeout does not apply here. To limit the time > nginx will wait for a request you may tune client_header_timeout, > see http://nginx.org/r/client_header_timeout. The way our web service works, our users fetch a tiny file from us (front-ended by Nginx). They do this by making a simple GET request. I can't imagine the headers transmitted to us are more than 1Kb or so - just the user agent string and whatever default headers browsers send. There would not be any cookies and so forth. With this in mind, what do you think would be a reasonable timeout to use? For someone on a dial up connection in a far away land with high latency I couldn't imagine it taking more than 10 seconds? I want to tune it tight, but not overly aggressive. At any rate, I tried putting a client_header_timeout setting inside of my "location = /" block, but Nginx returned an error message in the logs stating that the directive is not allowed in there. Basically what I would like to do is use a VERY aggressive client_header_timeout, even 0 if that is allowed, but specifically just for requests made to the root (location = /). I can do this because such requests made to our service are invalid and just "stray" requests coming in over the net. Therefore I want to dispose of any system resources ASAP for such requests, even if the browser doesn't like it. Is there a way I can set this client_header_timeout differently based on the location block like what I tried? If not, is there an alternative approach?
> I think you are right, disabling keepalive completely may be > beneficial in such case. (Well, nginx itself doesn't care much, > but it should be less painful for your OS.) Is there a command I can run like netstat or something that shows me to what extent keepalive is taking up resources on my server? I would like to get a feel for what impact, if any, having it on is making on the system so that I can compare that to how things look after turning keepalive off. > I believe you misunderstood what you actually get. Defining > access_log inside the location overrides all access_log's defined > on previous levels, so you'll only get requests logged to > forbidden.log. What you see in your main.log is other connections > opened by the browser. Yes. I am seeing in forbidden.log the requests made to / as expected. Then about 10 seconds later for http, and 60 seconds later for https, I get the 400 or 408 log entry in main.log. I guess this is nginx's way of logging that it (408) or the browser (400) closed the connection. So then my question is how would I tell nginx to make this log entry somewhere else other than in main.log. As an example this is what it looks like in forbidden.log:

    "[11/Jan/2013:13:36:08 -0500]" "GET / HTTP/1.1" "403" "570" "443" "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17" "0.070" "-" "-" "-"
    (appears instantly upon making the request)

and this is what it looks like in main.log:

    "[11/Jan/2013:13:37:08 -0500]" "-" "408" "0" "443" "-" "-" "60.006" "-" "-" "-"
    (appears 60 seconds after the initial request - how do I get Nginx to log this somewhere else?)

Thank you once again for your very timely and detailed help. It is very much appreciated! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Fri Jan 11 19:31:10 2013 From: nginx-forum at nginx.us (daveyfx) Date: Fri, 11 Jan 2013 14:31:10 -0500 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: <68bd53a352d2d0c401163cff392aadc2.NginxMailingListEnglish@forum.nginx.org> lm011111 Wrote: ------------------------------------------------------- > Hello! > > is it possible that nginx will not buffer the client body before > handle the > request to upstream? > > we want to use nginx as a reverse proxy to upload very very big file > to the > upstream, but the default behavior of nginx is to save the whole > request to > the local disk first before handle it to the upstream, which make the > upstream impossible to process the file on the fly when the file is > uploading, results in much high request latency and server-side > resource > consumption. > > Thanks! > _______________________________________________ You could use the nginx upload module to do this. http://www.grid.net.ru/nginx/upload.en.html I currently am using this to support a video upload application built with Django and has full upload resume functionality as well. The only caveat is that I cannot support resumes across different sessions, but within the same session it works (uses the X-UPLOAD-SESSION-ID header or something similar). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,234964#msg-234964 From nginx-forum at nginx.us Sat Jan 12 07:33:10 2013 From: nginx-forum at nginx.us (nurettin) Date: Sat, 12 Jan 2013 02:33:10 -0500 Subject: nginx post response doesn't get cached In-Reply-To: <20130108193154.GH73378@mdounin.ru> References: <20130108193154.GH73378@mdounin.ru> Message-ID: <530beaeaf4745658671a0997cd0d167e.NginxMailingListEnglish@forum.nginx.org> Thanks a lot proxy_buffers 8 2m; proxy_buffer_size 10m; proxy_busy_buffers_size 10m; Now the response gets cached properly. 
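For reference, the directives above in block form, with what each one controls; the proxy_pass target, cache zone name, and surrounding location are illustrative, since the post shows only the buffer settings:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;   # illustrative backend
    proxy_cache my_cache;               # illustrative cache zone name
    proxy_buffer_size 10m;        # buffer for the first part of the response (headers)
    proxy_buffers 8 2m;           # 8 buffers of 2m each for the response body
    proxy_busy_buffers_size 10m;  # cap on buffers busy sending to the client
}
```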
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234567,234967#msg-234967 From mdounin at mdounin.ru Sat Jan 12 18:33:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 12 Jan 2013 22:33:06 +0400 Subject: Request time of 60s when denying SSL requests? In-Reply-To: <1357931907.11376.YahooMailNeo@web142403.mail.bf1.yahoo.com> References: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> <20130111132026.GC80623@mdounin.ru> <1357918624.76621.YahooMailNeo@web142403.mail.bf1.yahoo.com> <20130111181752.GI80623@mdounin.ru> <1357931907.11376.YahooMailNeo@web142403.mail.bf1.yahoo.com> Message-ID: <20130112183306.GB14602@mdounin.ru> Hello! On Fri, Jan 11, 2013 at 11:18:27AM -0800, JB Hobbs wrote: > > You may disable keepalive by configuring keepalive_timeout to > > 0, > > see http://nginx.org/r/keepalive_timeout. > > > > error_page 403 /403.html; > > location = /403.html { > > keepalive_timeout 0; } > > Would that approach be any different than me just putting > "keepalive_timeout 0;" directly into my "location = /" block? Or > would that not work because the 403 page itself acts > like a new request that then needs to have the keep-alive > suppressed there? As long as all requests in "location /" are rejected, and there is no error_page 403 defined, just "keepalive_timeout 0" would be enough. A separate 403 location is needed to distinguish between allowed and not allowed requests. > > Note well: 400 and 408 in your previous message aren't after 403. > > They are logged for connections without any single request got by > > nginx, and keepalive_timeout does not apply here. To limit the time > > nginx will wait for a request you may tune client_header_timeout, > > see http://nginx.org/r/client_header_timeout. > > The way our web service works, our users fetch a tiny file from > us (front-ended by Nginx). They do this by making a simple GET > request.
I can't imagine the headers transmitted to us are more > than 1Kb or so - just the user agent string and whatever default > headers browsers send. There would not be any cookies and so > forth. With this in mind, what do you think would be a > reasonable timeout to use? For someone on a dial up connection > in a far away land with high latency I couldn't imagine it > taking more than 10 seconds? I want to tune it tight, but not > overly aggressive. The minimal time one should allow, IMO, is slightly more than 3s, to tolerate one retransmit without the RTT known. With 10s, up to two retransmits in a row will be allowed (3s + 6s), and it's ok for most use cases. > At any rate, I tried putting a client_header_timeout setting > inside of my "location = /" block, but Nginx returned an error > message in the logs stating that the directive is not allowed in > there. > > Basically what I would like to do is use a VERY aggressive > client_header_timeout, even 0 if that is allowed, but > specifically just for requests made to the root (location = /). > I can do this because such requests made to our service are > invalid and just "stray" requests coming in over the net. > Therefore I want to dispose of any system resources ASAP for > such requests, even if the browser doesn't like it. > Is there a > way I can set this client_header_timeout differently based on > the location block like what I tried? If not, is there an > alternative approach? The request URI isn't known in advance, and therefore it's not possible to set different header timeouts for different locations. Moreover, please note it only works for the _default_ server on the listen socket in question (as the virtual host isn't known as well). Once request headers are received from the client and you know the request isn't legitimate, you may just close the connection by using return 444; See http://nginx.org/r/return. > > I think you are right, disabling keepalive completely may be > > beneficial in such case.
(Well, nginx itself doesn't care > > much, but it should be less painful for your OS.) > > Is there a command I can run like netstat or something that > shows me to what extent keepalive is taking up resources on my > server? I would like to get a feel for what impact, if any, > having it on is making on the system so that I can compare that > to how things look after turning keepalive off. This depends on the OS you are using. E.g. on FreeBSD "vmstat -z" will show something like this:

    ...
    socket:     412, 25605, 149, 1804, 43516452, 0
    unpcb:      172, 25622, 96, 686, 14762777, 0
    ipq:        32, 904, 0, 226, 22503, 0
    udp_inpcb:  220, 25614, 10, 134, 6521351, 0
    udpcb:      8, 25781, 10, 193, 6521351, 0
    tcp_inpcb:  220, 25614, 43, 6311, 22232147, 0
    tcpcb:      632, 25602, 34, 1148, 22232147, 0
    tcptw:      52, 5184, 9, 5175, 9010766, 114029
    syncache:   112, 15365, 0, 175, 14160824, 0
    hostcache:  76, 15400, 139, 261, 441570, 0
    tcpreass:   20, 1690, 0, 338, 497191, 0
    ...

Each established TCP connection uses at least socket + tcp_inpcb + tcpcb structures, i.e. about 1.5k. Additionally, each connection sits in the TCB hash and slows down lookups if there are lots of connections. This isn't a problem if you have a properly tuned system and enough memory, but if you are trying to keep lots of connections alive - you may want to start counting. > > I believe you misunderstood what you actually get. Defining > > access_log inside the location overrides all access_log's defined > > on previous levels, so you'll only get requests logged to > > forbidden.log. What you see in your main.log is other connections > > opened by the browser. > > > Yes. I am seeing in forbidden.log the requests made to / as > expected. Then about 10 seconds later for http, and 60 seconds > later for https, I get the 400 or 408 log entry in main.log. I > guess this is nginx's way of logging that it (408) or the > browser (400) closed the connection. Not exactly - it's about "closed the connection without sending any complete request in it".
400 here means "client opened the connection presumably to send a request, but closed the connection before it was able to send a request", and 408 "... but failed to send a request before timeout expired". > So then my question is how would I tell nginx to make this log > entry somewhere else other than in main.log. As an example this > is what it looks like in forbidden.log: If you want nginx to log 400 and 408 errors separately you have to configure error_page to handle these errors, and configure access_log there. E.g.:

    error_page 408 /408.html;
    location = /408.html {
        access_log /path/to/408.log combined;
    }

[...] -- Maxim Dounin http://nginx.com/support.html From hobbsjb at yahoo.com Sat Jan 12 20:19:15 2013 From: hobbsjb at yahoo.com (JB Hobbs) Date: Sat, 12 Jan 2013 12:19:15 -0800 (PST) Subject: Request time of 60s when denying SSL requests? References: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> <20130111132026.GC80623@mdounin.ru> <1357918624.76621.YahooMailNeo@web142403.mail.bf1.yahoo.com> <20130111181752.GI80623@mdounin.ru> <1357931907.11376.YahooMailNeo@web142403.mail.bf1.yahoo.com> <20130112183306.GB14602@mdounin.ru> Message-ID: <1358021955.57616.YahooMailNeo@web142402.mail.bf1.yahoo.com> > Request URI isn't known in advance, and therefore it's not > possible to set different header timeouts for different locations. > Moreover, please note it only works for _default_ server on the > listen socket in question (as virtual host isn't known as well). > Once request headers are got from client and you know the request > isn't legitimate, you may just close the connection by using > return 444; Thanks. I tested this. I think in some ways it is worse. In one way it seems better because with 444 I do not get a 408 from Nginx 60 seconds later. However, sending the 444 causes Chrome to try multiple times in a row.
For instance just entering https://mydomain/ one time in the browser and not refreshing the page at all gives this:

    "[12/Jan/2013:15:10:33 -0500]" "GET / HTTP/1.1" "444" "0" "443" "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17" "0.055" "-" "-" "-"
    "[12/Jan/2013:15:10:35 -0500]" "GET / HTTP/1.1" "444" "0" "443" "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17" "1.683" "-" "-" "-"
    "[12/Jan/2013:15:10:35 -0500]" "GET / HTTP/1.1" "444" "0" "443" "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17" "0.029" "-" "-" "-"
    "[12/Jan/2013:15:10:35 -0500]" "GET / HTTP/1.1" "444" "0" "443" "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17" "0.020" "-" "-" "-"

So it seems that returning the 444 makes Chrome want to try 4 more times before giving up. That's got to be worse than with the 403 and it trying once but keeping the connection, you think? I am wondering if I am concerning myself too much with this 60 second delay before nginx closes the connection. I can probably use client_header_timeout at 15s and still have that be safe and so the connection doesn't stay more than 15 seconds before Nginx closes it out. But I still wonder if having this connection stick around is wasting resources? > This depends on the OS you are using. E.g. on FreeBSD "vmstat -z" > will show something like this: > This isn't a problem if you have properly tuned > system and enough memory, but if you are trying to keep lots of > connections alive - you may want to start counting. Sorry I should have specified I am on Fedora Core 17. It has a vmstat but no -z option? Anyway, in looking at the output, how can one determine whether the amount of sockets and such being held is nearing the OS limits? Thanks again!
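As noted earlier in the thread, client_header_timeout cannot be set per-location (the URI isn't known while headers are still being read); it belongs at http or server level. A minimal sketch of that placement, with illustrative certificate paths and the 15s value discussed above:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     example.crt;   # illustrative paths
    ssl_certificate_key example.key;

    # Applies before any location is selected. Per the earlier rule of
    # thumb, ~10s already tolerates two TCP retransmits (3s + 6s).
    client_header_timeout 15s;

    location = / {
        deny all;   # stray requests to the root get a 403
    }
}
```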
-------------- next part -------------- An HTML attachment was scrubbed... URL: From root at numberchan.dyndns.dk Sat Jan 12 22:13:35 2013 From: root at numberchan.dyndns.dk (sgdisjigk) Date: Sat, 12 Jan 2013 14:13:35 -0800 Subject: fastcgi wrapper for windows Message-ID: <50F1E00F.4020305@numberchan.dyndns.dk> is there any? From mdounin at mdounin.ru Sun Jan 13 01:57:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 13 Jan 2013 05:57:41 +0400 Subject: Request time of 60s when denying SSL requests? In-Reply-To: <1358021955.57616.YahooMailNeo@web142402.mail.bf1.yahoo.com> References: <1357847946.19517.YahooMailNeo@web142405.mail.bf1.yahoo.com> <20130111132026.GC80623@mdounin.ru> <1357918624.76621.YahooMailNeo@web142403.mail.bf1.yahoo.com> <20130111181752.GI80623@mdounin.ru> <1357931907.11376.YahooMailNeo@web142403.mail.bf1.yahoo.com> <20130112183306.GB14602@mdounin.ru> <1358021955.57616.YahooMailNeo@web142402.mail.bf1.yahoo.com> Message-ID: <20130113015741.GF14602@mdounin.ru> Hello! On Sat, Jan 12, 2013 at 12:19:15PM -0800, JB Hobbs wrote: > > return 444; > > Thanks. I tested this. I think in some ways it is worse. ?In one > way it seems better because with 444 I do not get a 408 from > Nginx 60 seconds later. > > However, sending the 444 causes Chrome to try multiple times in > a row. For instance just entering https://mydomain/ one time in > the browser and not refreshing the page at all gives this: Yes, for https hosts most browsers retry with various workarounds if connection is closed, and use of "return 444" with https is probably not a good idea. [...] > I am wondering if I am concerning myself too much with this 60 > second delay before nginx closes the connection. I can probably > use client_header_timeout at 15s and still have that be safe and > so the connection doesn't stay more than 15 seconds before Nginx > closes it out. ?But I still wonder if having this connection > stick around is wasting resources? 
It is, but a) most likely you wouldn't even notice, and b) you anyway can't avoid it completely. > > This depends on the OS you are using. E.g. on FreeBSD "vmstat -z" > > will show something like this: > > > This isn't a problem if you have properly tuned > > system and enough memory, but if you are trying to keep lots of > > connections alive - you may want to start counting. > > Sorry I should have specified I am on Fedora Core 17. It has a > vmstat but no -z option? Anyway, in looking at the output, how > can one determine whether the amount of sockets and such being > held is nearing the OS limits? Sorry, I'm not familiar with Linux and can't help. Google returns about 1 mln results on "linux tuning c10k" query though. -- Maxim Dounin http://nginx.com/support.html From dmiller at amfes.com Sun Jan 13 05:40:14 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Sat, 12 Jan 2013 21:40:14 -0800 Subject: Variables and includes Message-ID: Is it possible to use a variable from one configuration in an included config file? Example:

    set $a = "hello";
    include test.conf;

    [test.conf]
    if ($a = "hello") {
        set $a = "world";
    }
    # something that works with $a

Within the scope of the commands of test.conf, will $a be "hello" or "world"? Currently my usage like this gives me a "using uninitialized variable" warning. -- Daniel From dmiller at amfes.com Sun Jan 13 05:52:31 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Sat, 12 Jan 2013 21:52:31 -0800 Subject: Perceived poor performance Message-ID: I don't know if this actually IS poor performance - it just feels like it. I'm running nginx and php-fpm on a VirtualBox virtual server. No other services are running (other than the standard Ubuntu Precise minor items for a server). The VM has four cores and 1G allocated. Any time I connect to my server it seems to take 3 seconds before the request is processed.
I've seen some references via Google that indicate this is TCP related - but I don't know where to look to find the break. The second item - which is probably meaningless until the 3-second delay is resolved - is that stress tests from an external server (I'm trying http://loader.io) start dropping connections after about 30 simultaneous connections. From what little I've gathered about nginx performance this is absurd. -- Daniel From dmiller at amfes.com Sun Jan 13 06:17:50 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Sat, 12 Jan 2013 22:17:50 -0800 Subject: Perceived poor performance In-Reply-To: <50F24B9F.60401@amfes.com> References: <50F24B9F.60401@amfes.com> Message-ID: On 1/12/2013 9:52 PM, Daniel L. Miller wrote: > I don't know if this actually IS poor performance - it just feels like > it. > > I'm running nginx and php-fpm on a VirtualBox virtual server. No > other services are running (other than the standard Ubuntu Precise > minor items for a server). The VM has four cores and 1G allocated. > > Any time I connect to my server it seems to take 3 seconds before the > request is processed. I've seen some references via Google that > indicate this is TCP related - but I don't know where to look to find > the break. Further tests show me it isn't for all requests - only requests to my Wordpress sites, which have a much more complex nginx configuration than other sites, which give me instantaneous response. What (improper) commands would have such a major slowdown effect? -- Daniel From steve at greengecko.co.nz Sun Jan 13 06:28:31 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sun, 13 Jan 2013 19:28:31 +1300 Subject: Perceived poor performance In-Reply-To: References: <50F24B9F.60401@amfes.com> Message-ID: <50F2540F.90403@greengecko.co.nz> On 13/01/13 19:17, Daniel L. Miller wrote: > On 1/12/2013 9:52 PM, Daniel L. Miller wrote: >> I don't know if this actually IS poor performance - it just feels >> like it.
>> >> I'm running nginx and php-fpm on a VirtualBox virtual server. No >> other services are running (other than the standard Ubuntu Precise >> minor items for a server). The VM has four cores and 1G allocated. >> >> Any time I connect to my server it seems to take 3 seconds before the >> request is processed. I've seen some references via Google that >> indicate this is TCP related - but I don't know where to look to find >> the break. > Further tests show me it isn't for all requests - only requests to my > Wordpress sites, which have a much more complex nginx configuration > that other sites which give me instantaneous response. What > (improper) commands would have such a major slowdown effect? a wp config isn't particularly complex. What database tuning have you done (are the fast ones also using a database)? yes I know it's remote, but... What memory are you allocating to the server? How busy is the physical server? Are you running a local DNS server? Sorry I'm not versed in VB, but I run plenty of WP sites off KVM servers. From dmiller at amfes.com Sun Jan 13 06:45:17 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Sat, 12 Jan 2013 22:45:17 -0800 Subject: Perceived poor performance In-Reply-To: <50F2540F.90403@greengecko.co.nz> References: <50F24B9F.60401@amfes.com> <50F2518E.3080803@amfes.com> <50F2540F.90403@greengecko.co.nz> Message-ID: On 1/12/2013 10:28 PM, Steve Holdoway wrote: > On 13/01/13 19:17, Daniel L. Miller wrote: >> On 1/12/2013 9:52 PM, Daniel L. Miller wrote: >>> I don't know if this actually IS poor performance - it just feels >>> like it. >>> >>> I'm running nginx and php-fpm on a VirtualBox virtual server. No >>> other services are running (other than the standard Ubuntu Precise >>> minor items for a server). The VM has four cores and 1G allocated. >>> >>> Any time I connect to my server it seems to take 3 seconds before >>> the request is processed. 
I've seen some references via Google that >>> indicate this is TCP related - but I don't know where to look to >>> find the break. >> Further tests show me it isn't for all requests - only requests to my >> Wordpress sites, which have a much more complex nginx configuration >> that other sites which give me instantaneous response. What >> (improper) commands would have such a major slowdown effect? > a wp config isn't particularly complex. > > What database tuning have you done (are the fast ones also using a > database)? yes I know it's remote, but... > What memory are you allocating to the server? > How busy is the physical server? > Are you running a local DNS server? > > Sorry I'm not versed in VB, but I run plenty of WP sites off KVM servers. > The fast ones are using the same Mysql database. I've done a little bit of Mysql tweaking - I periodically run mysqltuner - but I truly don't think that's the issue. I've got 1G of RAM allocated to the VM. The physical server has 16G. The physical server is a 6-core Opteron. Under "load" - neither the host nor the guest show much usage. I am running a local DNS server (PowerDNS). While I've already admitted my overall ignorance - I truly don't think it's anything else. I've probably munged my nginx wordpress config - I just don't know HOW! Getting wordpress multi-site running was quite a frustration. I may try to rebuild my config. One of my issues is I may have tried to be too "elegant" - I like using multiple include files to try to avoid duplication. That way multiple configs can share common setups. I may have broken the config by breaking things up too far! And I don't know if combining fast-cgi caching with W3 Total Cache is helping or hurting (assuming of course I'm doing it correctly). For me, the whole "rewrite" process, particularly the combination of Wordpress' logic and nginx syntax, has me quite confused. 
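The rewrite confusion described above usually comes down to one rule: the canonical WordPress permalink handling in nginx is just a `try_files` fallback plus a PHP handler. A minimal sketch (the PHP-FPM socket path is an assumption and must match the pool configuration):

```nginx
location / {
    # Serve an existing file or directory; otherwise hand the request
    # to WordPress's front controller with the original query string.
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Assumed socket path - adjust to the local php-fpm setup.
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```

Multisite setups layer extra rewrites on top of this, but keeping the base this small makes it easier to spot which included fragment introduces a delay.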
-- Daniel From nginx-list at puzzled.xs4all.nl Sun Jan 13 07:17:23 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Sun, 13 Jan 2013 08:17:23 +0100 Subject: Perceived poor performance In-Reply-To: References: <50F24B9F.60401@amfes.com> Message-ID: <50F25F83.3070601@puzzled.xs4all.nl> On 01/13/2013 07:17 AM, Daniel L. Miller wrote: [snip] > Further tests show me it isn't for all requests - only requests to my > Wordpress sites, which have a much more complex nginx configuration that > other sites which give me instantaneous response. What (improper) > commands would have such a major slowdown effect? For my Wordpress site I installed nginx-helper in Wordpress: http://wordpress.org/extend/plugins/nginx-helper/ And added the cache config inspired by these 2 pages: http://rtcamp.com/tutorials/nginx-wordpress-fastcgi_cache-with-conditional-purging/ http://codex.wordpress.org/Nginx And the Wordpress site became much faster. You do need to build nginx with the ngx_cache_purge module from: https://github.com/FRiCKLE/ngx_cache_purge Wordpress multi-site info here: http://rtcamp.com/tutorials/wordpress-multisite-subdirectory-subdomain-domain-mapping-overview/ http://rtcamp.com/tutorials/nginx-wordpress-multisite-subdirectories-fastcgi_cache-with-conditional-purging/ Check the source of the generated pages. It shows some info on how long it took to generate the page. You may also want to look at your DB and optimize where possible. Regards, Patrick From zjay1987 at gmail.com Sun Jan 13 09:01:43 2013 From: zjay1987 at gmail.com (li zJay) Date: Sun, 13 Jan 2013 17:01:43 +0800 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: Hello! @yaoweibin > If you are eager for this feature, you could try my patch: > https://github.com/taobao/tengine/pull/91. This patch has been running in > our production servers. what's the nginx version your patch is based on? Thanks! On Fri, Jan 11, 2013 at 5:17 PM, ???
wrote: > I know nginx team are working on it. You can wait for it. > > If you are eager for this feature, you could try my patch: > https://github.com/taobao/tengine/pull/91. This patch has been running in > our production servers. > > 2013/1/11 li zJay > >> Hello! >> >> is it possible that nginx will not buffer the client body before handle >> the request to upstream? >> >> we want to use nginx as a reverse proxy to upload very very big file to >> the upstream, but the default behavior of nginx is to save the whole >> request to the local disk first before handle it to the upstream, which >> make the upstream impossible to process the file on the fly when the file >> is uploading, results in much high request latency and server-side resource >> consumption. >> >> Thanks! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jan 13 11:03:17 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 13 Jan 2013 06:03:17 -0500 Subject: fastcgi wrapper for windows In-Reply-To: <50F1E00F.4020305@numberchan.dyndns.dk> References: <50F1E00F.4020305@numberchan.dyndns.dk> Message-ID: <0e5147571376f1a3883eb5422d994ddf.NginxMailingListEnglish@forum.nginx.org> Look for srvany, http://support.microsoft.com/kb/137890 Then create a batchfile (.cmd) which loads php-cgi.exe with its parameters or make the service run php-cgi directly. 
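The srvany approach above needs only a tiny wrapper script for the service to run. A sketch of such a batch file, assuming PHP is installed under C:\php and nginx connects over FastCGI on a local port (the path and port are assumptions):

```bat
@echo off
rem Keep php-cgi.exe resident, listening for FastCGI connections
rem from nginx on a local port (path and port are assumptions).
C:\php\php-cgi.exe -b 127.0.0.1:9000
```

On the nginx side this would pair with `fastcgi_pass 127.0.0.1:9000;` in the PHP location block.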
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234982,234998#msg-234998 From yaoweibin at gmail.com Sun Jan 13 12:22:17 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Sun, 13 Jan 2013 20:22:17 +0800 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: This patch should work between nginx-1.2.6 and nginx-1.3.8. The documentation is here:

## client_body_postpone_sending ##
Syntax: **client_body_postpone_sending** `size`
Default: 64k
Context: `http, server, location`

If you specify `proxy_request_buffering` or `fastcgi_request_buffering` to be off, Nginx will send the body to the backend when it receives more than `size` data or when the whole request body has been received. It can save connections and reduce the number of IO operations with the backend.

## proxy_request_buffering ##
Syntax: **proxy_request_buffering** `on | off`
Default: `on`
Context: `http, server, location`

Specifies whether the request body will be buffered to disk or not. If it's off, the request body will be stored in memory and sent to the backend after Nginx receives more than `client_body_postpone_sending` data. It can save disk IO with large request bodies. Note that if you set it to off, the nginx retry mechanism for unsuccessful responses is broken once part of the request has been sent to the backend; nginx will just return 500 when it encounters such an unsuccessful response. This directive also breaks these variables: $request_body, $request_body_file. You should not use these variables any more, as their values are undefined.

## fastcgi_request_buffering ##
Syntax: **fastcgi_request_buffering** `on | off`
Default: `on`
Context: `http, server, location`

The same as `proxy_request_buffering`.

2013/1/13 li zJay > Hello! > > @yaoweibin > >> If you are eager for this feature, you could try my patch: >> https://github.com/taobao/tengine/pull/91. This patch has been running
> > what's the nginx version your patch based on? > > Thanks! > > On Fri, Jan 11, 2013 at 5:17 PM, ??? wrote: > >> I know nginx team are working on it. You can wait for it. >> >> If you are eager for this feature, you could try my patch: >> https://github.com/taobao/tengine/pull/91. This patch has been running >> in our production servers. >> >> 2013/1/11 li zJay >> >>> Hello! >>> >>> is it possible that nginx will not buffer the client body before handle >>> the request to upstream? >>> >>> we want to use nginx as a reverse proxy to upload very very big file to >>> the upstream, but the default behavior of nginx is to save the whole >>> request to the local disk first before handle it to the upstream, which >>> make the upstream impossible to process the file on the fly when the file >>> is uploading, results in much high request latency and server-side resource >>> consumption. >>> >>> Thanks! >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> Weibin Yao >> Developer @ Server Platform Team of Taobao >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: no_buffer.patch Type: application/octet-stream Size: 39692 bytes Desc: not available URL: From nginx-forum at nginx.us Sun Jan 13 15:39:14 2013 From: nginx-forum at nginx.us (PascalTurbo) Date: Sun, 13 Jan 2013 10:39:14 -0500 Subject: otrs on nginx with fcgiwrap Message-ID: <2d2b8627ef71a4b4bce97f4fb2bac1c4.NginxMailingListEnglish@forum.nginx.org> Hi There, I'm trying to run OTRS on Debian with nginx and fcgiwrap. But all I get is this error: FastCGI sent in stderr: "Cannot get script name, is DOCUMENT_ROOT and SCRIPT_NAME set and is the script executable?"
while reading response header from upstream, client: 123.123.123.123, server: support.example.com, request: "GET /otrs/index.pl HTTP/1.1", upstream: "fastcgi://unix:/var/www/sockets/fcgiwrap.socket:", host: "support.example.com", referrer: "http://support.examle.com/" Here's my configuration:

server {
    server_name support.example.com;
    access_log /var/www/support/log/access.log;
    access_log /var/log/nginx/access.log;
    error_log /var/www/support/log/error.log info;
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    root /var/www/support/otrs/var/httpd/htdocs;
    index index.html;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location /otrs-web/ {
        alias /var/www/support/otrs/var/httpd/htdocs;
    }

    location ~ ^/otrs/(.*\.pl)(/.*)?$ {
        gzip off;
        fastcgi_pass unix:/var/www/sockets/fcgiwrap.socket;
        fastcgi_index index.pl;
        fastcgi_param SCRIPT_FILENAME /var/www/support/otrs/bin/fcgi-bin/$1;
        include fastcgi_params;
    }
}

and the fastcgi_params:

fastcgi_connect_timeout 65;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param HTTPS $https;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;

Any idea what could be the problem?
Kind regards Pascal Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235011,235011#msg-235011 From francis at daoine.org Sun Jan 13 21:51:17 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 13 Jan 2013 21:51:17 +0000 Subject: otrs on nginx with fcgiwrap In-Reply-To: <2d2b8627ef71a4b4bce97f4fb2bac1c4.NginxMailingListEnglish@forum.nginx.org> References: <2d2b8627ef71a4b4bce97f4fb2bac1c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130113215117.GG4332@craic.sysops.org> On Sun, Jan 13, 2013 at 10:39:14AM -0500, PascalTurbo wrote: Hi there, > I'm trying to run OTRS on Debian with nginx and fcgiwrap. But all I get is > this error: > > FastCGI sent in stderr: "Cannot get script name, is DOCUMENT_ROOT and > SCRIPT_NAME set > and is the script executable?" while reading response header from upstream, All untested, but: that message suggests that your fastcgi server (==fcgiwrap) uses DOCUMENT_ROOT and SCRIPT_NAME to decide which file to process. Which file do you want fcgiwrap to process? What is DOCUMENT_ROOT? What is SCRIPT_NAME? Does combining them lead you to that file? Is that file executable? It may be that switching to a fcgiwrap that uses SCRIPT_FILENAME is the simplest fix for you. Or maybe your fcgiwrap does already use SCRIPT_FILENAME, but it uses the last one that it receives -- in that case putting your "include" line before your "fastcgi_param" line may be sufficient. You must match your nginx "fastcgi_param" configuration to whatever it is that your fastcgi server requires. Your fastcgi server documentation may give a better indication of what that is. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Jan 14 02:50:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jan 2013 06:50:20 +0400 Subject: Variables and includes In-Reply-To: References: Message-ID: <20130114025020.GE25043@mdounin.ru> Hello! On Sat, Jan 12, 2013 at 09:40:14PM -0800, Daniel L. 
Miller wrote: > Is it possible to use a variable from one configuration in a > included config file? Example: > > set $a = "hello"; > include test.conf; > > [test.conf] > if ($a = "hello") { > set $a = "world"; > } > # something that works with $a > > Within the scope of the commands of test.conf, will $a be "hello" or > "world"? Currently my usage like this gives me a, "using > unitialized variable" warning. The "include" directive works during configuration parsing and is completely transparent to everything else. That is, you may set a variable in one file and then use it in an included file; it is expected to work fine. On the other hand, the example you've provided is syntactically invalid and will result in the following error during configuration parsing: nginx: [emerg] invalid number of arguments in "set" directive in ... The correct way to write it would be set $a "hello"; Note there is no "=" character. See http://nginx.org/r/set for details. Note well that, after fixing the example, the $a at the end of test.conf will be either "world" (if test.conf goes after 'set $a "hello";') or uninitialized if it's included somewhere else. -- Maxim Dounin http://nginx.com/support.html From dmiller at amfes.com Mon Jan 14 04:18:35 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Sun, 13 Jan 2013 20:18:35 -0800 Subject: Variables and includes In-Reply-To: <20130114025020.GE25043@mdounin.ru> References: <50F248BE.1050202@amfes.com> <20130114025020.GE25043@mdounin.ru> Message-ID: On 1/13/2013 6:50 PM, Maxim Dounin wrote: > Hello! > > On Sat, Jan 12, 2013 at 09:40:14PM -0800, Daniel L. Miller wrote: > > > >> Is it possible to use a variable from one configuration in a > >> included config file? Example: > >> > >> set $a = "hello"; > >> include test.conf; > >> > >> [test.conf] > >> if ($a = "hello") { > >> set $a = "world"; > >> } > >># something that works with $a > >> > >> Within the scope of the commands of test.conf, will $a be "hello" or > >> "world"?
Currently my usage like this gives me a, "using >> unitialized variable" warning. > The "include" directive works during configuration parsing and > completely transparent to everything else. That is, you may set a > variable in one file and then use it in an included file, it is > expected to work fine. > > On the other hand, example you've provided is syntactically > invalid and will result in the following error during > configuration parsing: > > nginx: [emerg] invalid number of arguments in "set" directive in ... > > Correct way to write it would be > > set $a "hello"; > > Note there is no "=" character. See http://nginx.org/r/set for > details. Shows what happens when you quickly type up an example...thanks for catching it. So I repeat my question - why, given the above example (with correct syntax), would I see "uninitialized variable" warnings in "test.conf", given that the variable is declared prior to the include statement? -- Daniel From yaoweibin at gmail.com Mon Jan 14 08:11:20 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Mon, 14 Jan 2013 16:11:20 +0800 Subject: A problem with the keepalive module and the directive proxy_next_upstream Message-ID: Hi, folks, We have found a bug with the keepalive module. When we used the keepalive module, the directive proxy_next_upstream seems to be disabled. We use Nginx as a reverse proxy server. Our backend servers simply close the connection when they read abnormal packets. Nginx will call the function ngx_http_upstream_next() and try to use the next server. The ft_type is NGX_HTTP_UPSTREAM_FT_ERROR. We want to turn off the retry mechanism for such packets. Otherwise, it will try all the servers every time. We use the directive proxy_next_upstream off. If it's not a keepalive connection, everything is fine.
If it's a keepalive connection, it will run code like this:

2858     if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
2859         status = 0;
2860
2861         /* TODO: inform balancer instead */
2862
2863         u->peer.tries++;
2864

The status is cleared to 0. The code below will never be touched:

2896     if (status) {
2897         u->state->status = status;
2898
2899         if (u->peer.tries == 0 || !(u->conf->next_upstream & ft_type)) {

The tries variable and u->conf->next_upstream become useless. I don't know why the cached connection should clear the status. Can we just remove the code from lines 2858 to 2864? Is there any side effect? -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 14 10:51:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jan 2013 14:51:24 +0400 Subject: A problem with the keepalive module and the directive proxy_next_upstream In-Reply-To: References: Message-ID: <20130114105124.GI25043@mdounin.ru> Hello! On Mon, Jan 14, 2013 at 04:11:20PM +0800, ??? wrote: > Hi, folks, > > We have found a bug with the keepalive module. When we used the keepalive > module, the directive proxy_next_upstream seems disabled. > > We use Nginx as reverse server. Our backend servers simply close connection > when read some abnormal packets. Nginx will call the function > ngx_http_upstream_next() and try to use the next server. The ft_type > is NGX_HTTP_UPSTREAM_FT_ERROR. We want to turn off the try mechanism with > such packets. Otherwise, it will try all the servers every time. We use > directive proxy_next_upstream off. If it's not keepalive connection, > everything is fine. If it's keepalive connection, it will run such code: > > 2858 if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { > 2859 status = 0; > 2860 > 2861 /* TODO: inform balancer instead */ > 2862 > 2863 u->peer.tries++; > 2864 > > The status is cleared to be 0.
The below code will never be touched: > > 2896 if (status) { > 2897 u->state->status = status; > 2898 > 2899 if (u->peer.tries == 0 || !(u->conf->next_upstream & ft_type)) > { > > The variable of tries and u->conf->next_upstream become useless. > > I don't know why the cached connection should clear the status, Can we just > remove the code from line 2858 to 2864? Is there any side effect? Cached connection might be (legitimately) closed by an upstream server at any time, so the code always retries if sending request failed. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Mon Jan 14 11:09:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jan 2013 15:09:30 +0400 Subject: Variables and includes In-Reply-To: References: <50F248BE.1050202@amfes.com> <20130114025020.GE25043@mdounin.ru> Message-ID: <20130114110930.GN25043@mdounin.ru> Hello! On Sun, Jan 13, 2013 at 08:18:35PM -0800, Daniel L. Miller wrote: > On 1/13/2013 6:50 PM, Maxim Dounin wrote: > >Hello! > > > >On Sat, Jan 12, 2013 at 09:40:14PM -0800, Daniel L. Miller wrote: > > > >>Is it possible to use a variable from one configuration in a > >>included config file? Example: > >> > >>set $a = "hello"; > >>include test.conf; > >> > >>[test.conf] > >>if ($a = "hello") { > >> set $a = "world"; > >>} > >># something that works with $a > >> > >>Within the scope of the commands of test.conf, will $a be "hello" or > >>"world"? Currently my usage like this gives me a, "using > >>unitialized variable" warning. > >The "include" directive works during configuration parsing and > >completely transparent to everything else. That is, you may set a > >variable in one file and then use it in an included file, it is > >expected to work fine. > > > >On the other hand, example you've provided is syntactically > >invalid and will result in the following error during > >configuration parsing: > > > >nginx: [emerg] invalid number of arguments in "set" directive in ... 
> > > >Correct way to write it would be > > > > set $a "hello"; > > > >Note there is no "=" character. See http://nginx.org/r/set for > >details. > Shows what happens when you quickly type up an example...thanks for > catching it. > > So I repeat my question - why, given the above example (with correct > syntax), would I see warnings for "uninitialized variable" for the > above in the "test.conf", as the variable is declared prior to the > include statement? Because of another problem resulting from quick typing? Show the exact configuration which produces the warning for you. -- Maxim Dounin http://nginx.com/support.html From peter at donka.hu Mon Jan 14 12:04:10 2013 From: peter at donka.hu (peter at donka.hu) Date: Mon, 14 Jan 2013 13:04:10 +0100 Subject: Multiple site with PHP-FPM home directory permission Message-ID: Thx! It's working like this. On 11/01/13 21:07, peter at donka.hu wrote: > Hi Guys! > > I have an nginx server with multiple virtual hosted site. Every site > running with unique user permission using PHP-FPM. > Its all fine, i see the user variable in the phpinfo page and i see the > right username. > > However i have a little problem. > Here an example what is have then i write what is the problem. > > in the /var/www directory i have all site webroot like: > > domain.tld > domain1.tld > > etc.. > > every folder have the connected php-fpm user rights like owner and group > > so domain.tld folder user and group is domain.tld > and have 0755 permission, so only the owner can write group and everybody > else just read. > > I want to restrict this to that only thy owner/group can enter this > directory, so i need 0750 flag. > In that case the web site no longer loaded i see 404 error and in the log > files a permission denied error. > Then i realize i need to gain access to the www-data too, because this > user try to enter to the main directory. > So i add www-data to the domain.tld group, but same problem. I all can get > the permission denied.
> If i set back the 0755 permission, so everybody can read/enter this > directory it will working again. > > Is there any way to set a permission that the web page working fine but > the directory only accessible by the owner and www-data and root? > > Thx for the help! > Peter >

chgrp -R www-data .
find . -type d | xargs chmod 2750

will provide future-proof read access for the web server. I assume there is a dedicated php-fpm process for each site, running as the appropriate owner. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From peter at donka.hu Mon Jan 14 12:06:04 2013 From: peter at donka.hu (peter at donka.hu) Date: Mon, 14 Jan 2013 13:06:04 +0100 Subject: htaccess to nginx Message-ID: Hi guys! I have this rewrite rule in apache:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^([\w\.-]*)(?:\/(.*))?$ $1.php?q=$2 [QSA,L]

I need to translate it to nginx. I tried this: http://www.anilcetin.com/convert-apache-htaccess-to-nginx/ and this: http://winginx.com/htaccess Neither of them works properly. Can anybody help me out with this? Thx! Peter From yaoweibin at gmail.com Mon Jan 14 14:14:01 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Mon, 14 Jan 2013 22:14:01 +0800 Subject: A problem with the keepalive module and the directive proxy_next_upstream In-Reply-To: <20130114105124.GI25043@mdounin.ru> References: <20130114105124.GI25043@mdounin.ru> Message-ID: The nginx end closes the cached connection actively in this next-upstream function. I don't know why it should always *retry* another server and not honor the tries and u->conf->next_upstream variables. Thanks. 2013/1/14 Maxim Dounin > Hello! > > On Mon, Jan 14, 2013 at 04:11:20PM +0800, ??? wrote: > > > Hi, folks, > > > > We have found a bug with the keepalive module.
When we used the keepalive > > module, the directive proxy_next_upstream seems disabled. > > > > We use Nginx as reverse server. Our backend servers simply close > connection > > when read some abnormal packets. Nginx will call the function > > ngx_http_upstream_next() and try to use the next server. The ft_type > > is NGX_HTTP_UPSTREAM_FT_ERROR. We want to turn off the try mechanism with > > such packets. Otherwise, it will try all the servers every time. We use > > directive proxy_next_upstream off. If it's not keepalive connection, > > everything is fine. If it's keepalive connection, it will run such code: > > > > 2858 if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { > > 2859 status = 0; > > 2860 > > 2861 /* TODO: inform balancer instead */ > > 2862 > > 2863 u->peer.tries++; > > 2864 > > > > The status is cleared to be 0. The below code will never be touched: > > > > 2896 if (status) { > > 2897 u->state->status = status; > > 2898 > > 2899 if (u->peer.tries == 0 || !(u->conf->next_upstream & > ft_type)) > > { > > > > The variable of tries and u->conf->next_upstream become useless. > > > > I don't know why the cached connection should clear the status, Can we > just > > remove the code from line 2858 to 2864? Is there any side effect? > > Cached connection might be (legitimately) closed by an upstream > server at any time, so the code always retries if sending request > failed. > > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... 
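For readers following the keepalive thread above, here is a minimal sketch of the kind of configuration being discussed (the backend addresses are hypothetical). With an upstream keepalive pool, a cached connection may legitimately be closed by the backend at any moment, which is why nginx retries the request on another server even when proxy_next_upstream is off:

```nginx
# Sketch only: hypothetical backend addresses.
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    keepalive 16;                    # pool of cached upstream connections
}

server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required so upstream connections stay open
        proxy_next_upstream off;          # does not suppress the retry of a request that
                                          # failed on a cached (possibly stale) connection
        proxy_pass http://backend;
    }
}
```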
URL:

From nginx-forum at nginx.us Mon Jan 14 08:23:49 2013
From: nginx-forum at nginx.us (philipp)
Date: Mon, 14 Jan 2013 03:23:49 -0500
Subject: OCSP_basic_verify() failed
In-Reply-To: <20130111144812.GG80623@mdounin.ru>
References: <20130111144812.GG80623@mdounin.ru>
Message-ID: <826e143399bc9bf47be8722990f609a9.NginxMailingListEnglish@forum.nginx.org>

Thanks for your help, I guess I found the problem... I had two vhosts with OCSP, but only one host had a working trusted certificate.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234832,235032#msg-235032

From nginx-forum at nginx.us Mon Jan 14 08:20:24 2013
From: nginx-forum at nginx.us (refercon)
Date: Mon, 14 Jan 2013 03:20:24 -0500
Subject: The question will kill me
Message-ID:

Using nginx as a proxy server; the backend servers are nginx and apache. Sometimes the web is not OK:

An error occurred. Sorry, the page you are looking for is currently unavailable. Please try again later. If you are the system administrator of this resource then you should check the error log for details. Faithfully yours, nginx.

Most of the time the web is OK; occasionally there is an error... The nginx error log is:

2013/01/14 16:02:26 [error] 31316#0: *1154349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 61.186.137.140, server: m.xx.cn, request: "GET /pnp4nagios/index.php/graph?host=sxcq-web&srv=PING HTTP/1.1", upstream: "http://10.2.4.10:80/pnp4nagios/index.php/graph?host=sxcq-web&srv=PING", host: "m.xx.cn", referrer: "http://m.xx.cn/pnp4nagios/index.php/graph?host=sxcq-web&srv=PING"
2013/01/14 16:02:26 [error] 31310#0: *1161676 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 123.151.148.201, server: www.xx.gov.cn, request: "GET /plus/view.php?aid=3402 HTTP/1.1", upstream: "http://10.2.4.4:80/plus/view.php?aid=3402", host: "www.xx.gov.cn"

I checked the backend server's log... No errors in it...
My nginx proxy conf is:

Http's section:

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 120;
    client_body_timeout 300;
    server_tokens off;
    send_timeout 3m;
    gzip on;
    gzip_http_version 1.0;
    gzip_min_length 1000;
    gzip_buffers 4 8k;
    gzip_comp_level 6;
    gzip_types text/plain image/gif image/jpeg image/png text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    client_max_body_size 64M;

    server {
        listen 80 default;
        server_name _;
        return 403;
    }

Proxy's section:

upstream xx {
    server 10.2.4.11;
}

server {
    listen 80;
    server_name www.xx.cn;
    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root html;
        index index.html index.htm;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding "";
        client_max_body_size 1000m;
        proxy_connect_timeout 15;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 512k;
        proxy_buffers 8 512k;
        proxy_busy_buffers_size 512k;
        proxy_temp_file_write_size 512k;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_pass http://xx;
        include attack.conf;
    }

    location ~ .*\.(html|js|css|jpg|png|gif|flv|ico|swf)$ {
        expires max;
        root /home/nginx_cache/www.sxcq.cn;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding "";
        proxy_connect_timeout 15;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 512k;
        proxy_buffers 8 512k;
        proxy_busy_buffers_size 512k;
        proxy_temp_file_write_size 512k;
        proxy_store_access user:rw group:rw all:rw;
        proxy_temp_path /home/nginx_cache/www.xx.cn;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_store on;
        if ( !-e $request_filename) {
            proxy_pass http://xx;
        }
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

Pls help me... Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235031,235031#msg-235031

From al-nginx at none.at Mon Jan 14 15:35:19 2013
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 14 Jan 2013 16:35:19 +0100
Subject: nginx deployments survey
In-Reply-To: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com>
References: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com>
Message-ID: <15834264c8bd5547f88951f809ff0817@none.at>

Hai Andrew,

On 09-01-2013 11:01, Andrew Alexeev wrote:
> Hello,

[snipp]

> http://nginx-survey.questionpro.com/
>
> This survey will help us greatly to define goals and adjust
> priorities in 2013 and to make nginx better.
>
> Many thanks in advance!

It would be nice to have a survey response analysis and to know what it means for the nginx roadmap.

Best regards
Aleks

From andrew at nginx.com Mon Jan 14 15:37:48 2013
From: andrew at nginx.com (Andrew Alexeev)
Date: Mon, 14 Jan 2013 19:37:48 +0400
Subject: nginx deployments survey
In-Reply-To: <15834264c8bd5547f88951f809ff0817@none.at>
References: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com> <15834264c8bd5547f88951f809ff0817@none.at>
Message-ID: <6984DDF3-A161-4D57-A9C5-C4D8DDA78E89@nginx.com>

On Jan 14, 2013, at 7:35 PM, Aleksandar Lazic wrote:

> Hai Andrew,
>
> On 09-01-2013 11:01, Andrew Alexeev wrote:
>> Hello,
>
> [snipp]
>
>> http://nginx-survey.questionpro.com/
>> This survey will help us greatly to define goals and adjust
>> priorities in 2013 and to make nginx better.
>> Many thanks in advance!
>
> It would be nice to have a survey response analysis and to know what it means for the nginx roadmap.

Definitely. But first we need more responses to make it statistically valid :) And then we'll definitely plan to make a blog post or something with an aggregated summary and notes about roadmap. Thanks a lot for those participating!
From nginx-forum at nginx.us Mon Jan 14 16:06:25 2013 From: nginx-forum at nginx.us (philipp) Date: Mon, 14 Jan 2013 11:06:25 -0500 Subject: patch.spdy-55_1.3.11 broken? Message-ID: I have patched the nginx sources with the latest spdy patch: /usr/src/nginx-1.3.11# patch -p1 < patch.spdy-55_1.3.11.txt but building the package isn't possible anymore: dpkg-buildpackage -rfakeroot -uc -b ... gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/ngx_http_write_filter_module.o \ src/http/ngx_http_write_filter_module.c gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/ngx_http_copy_filter_module.o \ src/http/ngx_http_copy_filter_module.c gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/modules/ngx_http_log_module.o \ src/http/modules/ngx_http_log_module.c gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/ngx_http_request_body.o \ src/http/ngx_http_request_body.c src/http/ngx_http_request_body.c: In function 'ngx_http_discard_request_body': src/http/ngx_http_request_body.c:479:10: error: 'ngx_http_request_t' has no member named 'spdy_stream' make[3]: *** [objs/src/http/ngx_http_request_body.o] Error 1 make[3]: Leaving directory `/usr/src/nginx-1.3.11' make[2]: *** [build] Error 2 make[2]: Leaving directory `/usr/src/nginx-1.3.11' dh_auto_build: make -j1 returned exit code 2 make[1]: *** [override_dh_auto_build] Error 2 make[1]: Leaving directory 
`/usr/src/nginx-1.3.11' make: *** [build] Error 2 dpkg-buildpackage: error: debian/rules build gave error exit status 2 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235059,235059#msg-235059 From vbart at nginx.com Mon Jan 14 16:53:33 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 14 Jan 2013 20:53:33 +0400 Subject: patch.spdy-55_1.3.11 broken? In-Reply-To: References: Message-ID: <201301142053.33218.vbart@nginx.com> On Monday 14 January 2013 20:06:25 philipp wrote: > I have patched the nginx sources with the latest spdy patch: > > /usr/src/nginx-1.3.11# patch -p1 < patch.spdy-55_1.3.11.txt > > but building the package isn't possible anymore: > Yeap, fixed. Also, please note, the "--with-http_spdy_module" configure option is now mandatory for SPDY. See: http://nginx.org/patches/spdy/CHANGES.txt http://nginx.org/patches/spdy/README.txt wbr, Valentin V. Bartenev > dpkg-buildpackage -rfakeroot -uc -b > ... > gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat > -Wformat-security -I src/core -I src/event -I src/event/modules -I > src/os/unix -I objs -I src/http -I src/http/modules \ > -o objs/src/http/ngx_http_write_filter_module.o \ > src/http/ngx_http_write_filter_module.c > gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat > -Wformat-security -I src/core -I src/event -I src/event/modules -I > src/os/unix -I objs -I src/http -I src/http/modules \ > -o objs/src/http/ngx_http_copy_filter_module.o \ > src/http/ngx_http_copy_filter_module.c > gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat > -Wformat-security -I src/core -I src/event -I src/event/modules -I > src/os/unix -I objs -I src/http -I src/http/modules \ > -o objs/src/http/modules/ngx_http_log_module.o \ > src/http/modules/ngx_http_log_module.c > gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat > -Wformat-security -I src/core -I src/event -I src/event/modules -I > src/os/unix -I objs -I src/http -I src/http/modules \
> -o objs/src/http/ngx_http_request_body.o \ > src/http/ngx_http_request_body.c > src/http/ngx_http_request_body.c: In function > 'ngx_http_discard_request_body': > src/http/ngx_http_request_body.c:479:10: error: 'ngx_http_request_t' has no > member named 'spdy_stream' > make[3]: *** [objs/src/http/ngx_http_request_body.o] Error 1 > make[3]: Leaving directory `/usr/src/nginx-1.3.11' > make[2]: *** [build] Error 2 > make[2]: Leaving directory `/usr/src/nginx-1.3.11' > dh_auto_build: make -j1 returned exit code 2 > make[1]: *** [override_dh_auto_build] Error 2 > make[1]: Leaving directory `/usr/src/nginx-1.3.11' > make: *** [build] Error 2 > dpkg-buildpackage: error: debian/rules build gave error exit status 2 From oliviermo75 at gmail.com Mon Jan 14 17:46:42 2013 From: oliviermo75 at gmail.com (Olivier Morel) Date: Mon, 14 Jan 2013 18:46:42 +0100 Subject: i have a problem with my virtual block Message-ID: Hi, when i try to go to my website in local like http://mediawiki.dedibox.fr/, i get the message "It works! This is the default web page for this server." etc... but normally i must have my index.php and not that; i don't understand why i get this message.
this is my nginx.conf:

events {
    worker_connections 1024;
}

http {
    include mime.types;
    passenger_root /usr/local/xxxx/ruby-1.9.3-p125/gems/passenger-3.0.19;
    passenger_ruby /usr/local/xxxx/ruby-1.9.3-p125/ruby;
    passenger_max_pool_size 10;

    default_type application/octet-stream;

    sendfile on;
    ## TCP options
    tcp_nopush on;
    tcp_nodelay on;
    ## Timeout
    keepalive_timeout 65;

    types_hash_max_size 2048;
    server_names_hash_bucket_size 128;
    proxy_cache_path /mnt/donner/nginx/cache levels=1:2 keys_zone=one:10m;
    gzip on;
    server_tokens off;

    include /usr/local/xxx/nginx/vhosts-available/*;
    include /usr/local/xxx/nginx/vhosts-available/*.conf;

    server {
        listen 80;
        passenger_enabled on;
        passenger_use_global_queue on;

        error_log /home/logs/nginx/error.log;
        access_log /home/logs/nginx/access-global.log;
    }
}

vhosts-available/mediawiki.conf:

server {
    listen 80;
    server_name mediawiki.dedibox.fr;
    root /home/sites_web/mediawiki;
    index index.php index.html;
    error_log /home/logs/mediawiki/error.log;
    access_log /home/logs/mediawiki/access.log;

    location ~ .php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/sites_web/mediawiki/$fastcgi_script_name;
        include /usr/local/centOs/nginx/conf/fastcgi_params;
    }
}

thank you very much for your help. And have a good day

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From steve at greengecko.co.nz Mon Jan 14 18:11:17 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Tue, 15 Jan 2013 07:11:17 +1300
Subject: i have a problem with my virtual block
In-Reply-To:
References:
Message-ID: <5EBA7D2B-C4EC-4183-BC63-0C8C294E905C@greengecko.co.nz>

Your site is using the default server block in conf.d. There is a hierarchy in the listen format; it looks like the default ranks above your plain "listen 80;". The simplest way is to either remove the default, or name your site.
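A sketch of the fix suggested here (names and paths taken from the original message; the distribution's default vhost is assumed to live in conf.d). Either remove that default vhost, or mark your own server block as the default for port 80 and make sure server_name matches the Host header:

```nginx
server {
    listen 80 default_server;          # wins when no server_name matches the Host header
    server_name mediawiki.dedibox.fr;
    root /home/sites_web/mediawiki;
    index index.php index.html;
}
```

With exactly one block per listen address marked default_server, requests for unknown hostnames fall through to it instead of to the distribution's "It works!" page.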
On 15/01/2013, at 6:46 AM, Olivier Morel wrote: > > hy > when i try to go to my website in local like http://mediawiki.dedibox.fr/ , i get the message > It works! > > This is the default web page for this server. etc... > > > > but normally i must have my index.php and not that , i don' t understand why i get this message . > > > > this is my nginx.conf > > > > events { > worker_connections 1024; > } > > > http { > include mime.types; > passenger_root /usr/local/xxxx/ruby-1.9.3-p125/gems/passenger-3.0.19; > passenger_ruby /usr/local/xxxx/ruby-1.9.3-p125/ruby; > passenger_max_pool_size 10; > > > > default_type application/octet-stream; > > sendfile on; > ## TCP options > tcp_nopush on; > tcp_nodelay on; > ## Timeout > keepalive_timeout 65; > > types_hash_max_size 2048; > server_names_hash_bucket_size 128; > proxy_cache_path /mnt/donner/nginx/cache levels=1:2 keys_zone=one:10m; > gzip on; > server_tokens off; > > > include /usr/local/xxx/nginx/vhosts-available/*; > include /usr/local/xxx/nginx/vhosts-available/*.conf; > > server { > > listen 80; > passenger_enabled on; > passenger_use_global_queue on; > > error_log /home/logs/nginx/error.log; > access_log /home/logs/nginx/access-global.log; > } > } > > > vhosts-available/mediawiki.conf > > server { > > listen 80 ; > server_name mediawiki.dedibox.fr; > root /home/sites_web/mediawiki; > index index.php index.html; > error_log /home/logs/mediawiki/error.log; > access_log /home/logs/mediawiki/access.log; > > > location ~ .php$ { > > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /home/sites_web/mediawiki/$fastcgi_script_name; > include /usr/local/centOs/nginx/conf/fastcgi_params; > > } > } > > > thank you very much for your help . > And have a good day > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From nginx-forum at nginx.us Mon Jan 14 19:41:30 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 14 Jan 2013 14:41:30 -0500
Subject: nginx deployments survey
In-Reply-To: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com>
References: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com>
Message-ID:

Done! Maybe something to think about: allow a feature request to be sponsored. For example, SiT has this; if many sponsor a request it gets done faster, and if few sponsor a request, it requires higher sponsor fees to get it done.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234835,235071#msg-235071

From andrew at nginx.com Mon Jan 14 19:54:05 2013
From: andrew at nginx.com (Andrew Alexeev)
Date: Mon, 14 Jan 2013 23:54:05 +0400
Subject: nginx deployments survey
In-Reply-To:
References: <49F0DB49-F8BF-4A82-A32E-E21B9FBBD260@nginx.com>
Message-ID: <512EF8E9-5502-4D0C-8D5E-0F28212230B1@nginx.com>

On Jan 14, 2013, at 11:41 PM, itpp2012 wrote:

> Done! Maybe something to think about: allow a feature request to be
> sponsored. For example, SiT has this; if many sponsor a request it gets done
> faster, and if few sponsor a request, it requires higher sponsor fees to get
> it done.
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234835,235071#msg-235071

Thanks for completing the survey and thanks for the suggestion. We actually have this, though it's currently structured a bit differently (single/few sponsors per feature vs.
crowdfunding)

During 2012 the following features were sponsored:

- SPDY (Automattic)
- OCSP Stapling (GlobalSign, Comodo, DigiCert)
- Chunked transfer encoding on input (sponsor preferred to refrain from public announcement)
- WebSockets (due to be ready soon, and the sponsors list to be disclosed)

From oliviermo75 at gmail.com Mon Jan 14 20:33:11 2013
From: oliviermo75 at gmail.com (Olivier Morel)
Date: Mon, 14 Jan 2013 21:33:11 +0100
Subject: i have a problem with my virtual block
In-Reply-To: <5EBA7D2B-C4EC-4183-BC63-0C8C294E905C@greengecko.co.nz>
References: <5EBA7D2B-C4EC-4183-BC63-0C8C294E905C@greengecko.co.nz>
Message-ID:

i have changed the name in the server_name parameter, but it still points to "It works!" etc...

2013/1/14 Steve Holdoway

> Your site is using the default server block in conf.d. There is a
> hierarchy in the listen format, looks like the default is above listen 80;
> simplest way is to either remove the default, or name your site.
>
> On 15/01/2013, at 6:46 AM, Olivier Morel wrote:
>
> hy
> when i try to go to my website in local like http://mediawiki.dedibox.fr/, i get the message
> It works!
>
> This is the default web page for this server. etc...
>
> but normally i must have my index.php and not that , i don't understand
> why i get this message .
> > > this is my nginx.conf > > > > *events { > worker_connections 1024; > } > > > http { > include mime.types; > passenger_root /usr/local/xxxx/ruby-1.9.3-p125/gems/passenger-3.0.19; > passenger_ruby /usr/local/xxxx/ruby-1.9.3-p125/ruby; > passenger_max_pool_size 10; > > > > default_type application/octet-stream; > > sendfile on; > ## TCP options > tcp_nopush on; > tcp_nodelay on; > ## Timeout > keepalive_timeout 65; > > types_hash_max_size 2048; > server_names_hash_bucket_size 128; > proxy_cache_path /mnt/donner/nginx/cache levels=1:2 keys_zone=one:10m; > gzip on; > server_tokens off; > > * > > * include /usr/local/xxx/nginx/vhosts-available/*; > include /usr/local/xxx/nginx/vhosts-available/*.conf; > * > > server { > * listen 80; > passenger_enabled on; > passenger_use_global_queue on; > > error_log /home/logs/nginx/error.log; > access_log /home/logs/nginx/access-global.log; > } > }* > * > > vhosts-available/mediawiki.conf* > > server { > > listen 80 ; > server_name mediawiki.dedibox.fr; > root /home/sites_web/mediawiki; > index index.php index.html; > error_log /home/logs/mediawiki/error.log; > access_log /home/logs/mediawiki/access.log; > > > location ~ .php$ { > > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /home/sites_web/mediawiki/$fastcgi_script_name; > include /usr/local/centOs/nginx/conf/fastcgi_params; > > } > } > > > thank you very much for your help . > And have a good day > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Cordialement Olivier Morel tel : 06.62.25.03.77 -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From agus.262 at gmail.com Mon Jan 14 22:02:41 2013
From: agus.262 at gmail.com (Agus)
Date: Mon, 14 Jan 2013 19:02:41 -0300
Subject: Custom error_page basic config problem.
Message-ID:

Hi fellows,

I was having trouble creating a custom error_page. Here's the simple test config i did:

server_name www.test1.com.ar;
error_log logs/www.test1.com.ar.http.error.log debug;
access_log logs/www.test1.com.ar.http.access.log main;
root /usr/local/www/www.test1;

location / {
    # This is to simulate geoip with an if.
    if ( $remote_addr = "10.24.18.2" ) {
        error_page 401 /custom/404b.html;
        return 401;
    }
}

With that, i only got the nginx default error page. After turning on debug i saw that when nginx goes to fetch the error_page mentioned, it searches in location /, so it denies it and sends me the default error. Now i added a location like this:

location = /custom/404b.html {
    internal;
}

Which made it work. My question is whether this is OK - if my solution is the correct one, or perhaps there's a better one. Also, this test is easy because it's local, but i want to implement this in a proxy_pass situation. Probably with proxy_intercept_errors...

Thanks for any hints you can give.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Mon Jan 14 22:50:14 2013
From: nginx-forum at nginx.us (digitalpoint)
Date: Mon, 14 Jan 2013 17:50:14 -0500
Subject: 1.3.11 Issues?
Message-ID: <5baee6252154c74ad98ee13545bf738d.NginxMailingListEnglish@forum.nginx.org>

Has anyone else seen problems with Nginx with 1.3.11? Everything works perfectly fine with same compile options under 1.3.10... It's a pretty vanilla compile as far as modules... proxy, SSI, SSL and status modules. Using SPDY patch (using the correct patch for each version of Nginx) (and the correct SPDY module compile option on 1.3.11)...
Right when 1.3.11 starts being used, we get this stuff in the error log: *** Error in `nginx: worker process': malloc(): memory corruption: 0x00000000009084f0 *** 2013/01/14 14:34:16 [alert] 16599#0: *137 getsockname() failed (9: Bad file descriptor) while SPDY processing, client: 174.70.187.121, server: dpstatic.com, request: "GET /j/tinymce/tiny_mce.js?_v=bba17b4a HTTP/1.1", host: "x.dpstatic.com", referrer: "https://dev.digitalpoint.com/threads/anybody-has-experience-with-ordering-at-answerserp-com-for-yahoo-backlinks.2624660/" 2013/01/14 14:34:16 [alert] 16599#0: *137 getsockname() failed (9: Bad file descriptor) while SPDY processing, client: 174.70.187.121, server: dpstatic.com, request: "GET /j/tinymce/tiny_mce.js?_v=bba17b4a HTTP/1.1", host: "x.dpstatic.com", referrer: "https://dev.digitalpoint.com/threads/anybody-has-experience-with-ordering-at-answerserp-com-for-yahoo-backlinks.2624660/" *** Error in `nginx: worker process': malloc(): memory corruption: 0x000000000088cc00 *** Ended up going back to 1.3.10, and everything back to normal... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235090#msg-235090 From vbart at nginx.com Mon Jan 14 23:01:24 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Jan 2013 03:01:24 +0400 Subject: 1.3.11 Issues? In-Reply-To: <5baee6252154c74ad98ee13545bf738d.NginxMailingListEnglish@forum.nginx.org> References: <5baee6252154c74ad98ee13545bf738d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301150301.24920.vbart@nginx.com> On Tuesday 15 January 2013 02:50:14 digitalpoint wrote: > Has anyone else seen problems with Nginx with 1.3.11? Everything works > perfectly fine with same compile options under 1.3.10... It's a pretty > vanilla compile as far as modules... proxy, SSI, SSL and status modules. > Using SPDY patch (using the correct patch for each version of Nginx) (and > the correct SPDY module compile option on 1.3.11)...
> > RIght right 1.3.11 starts being used, we get this stuff in the error log: > > *** Error in `nginx: worker process': malloc(): memory corruption: > 0x00000000009084f0 *** > 2013/01/14 14:34:16 [alert] 16599#0: *137 getsockname() failed (9: Bad file > descriptor) while SPDY processing, client: 174.70.187.121, server: > dpstatic.com, request: "GET /j/tinymce/tiny_mce.js?_v=bba17b4a HTTP/1.1", > host: "x.dpstatic.com", referrer: > "https://dev.digitalpoint.com/threads/anybody-has-experience-with-ordering- > at-answerserp-com-for-yahoo-backlinks.2624660/" 2013/01/14 14:34:16 [alert] > 16599#0: *137 getsockname() failed (9: Bad file descriptor) while SPDY > processing, client: 174.70.187.121, server: dpstatic.com, request: "GET > /j/tinymce/tiny_mce.js?_v=bba17b4a HTTP/1.1", host: "x.dpstatic.com", > referrer: > "https://dev.digitalpoint.com/threads/anybody-has-experience-with-ordering- > at-answerserp-com-for-yahoo-backlinks.2624660/" *** Error in `nginx: worker > process': malloc(): memory corruption: 0x000000000088cc00 *** > > Ended up going back to 1.3.10, and everything back to normal... > Could you show nginx -V ? wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Jan 14 23:05:26 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Mon, 14 Jan 2013 18:05:26 -0500 Subject: 1.3.11 Issues? In-Reply-To: <201301150301.24920.vbart@nginx.com> References: <201301150301.24920.vbart@nginx.com> Message-ID: <06ab5cc3b7268e763d8952026f6fee5f.NginxMailingListEnglish@forum.nginx.org> ya sorry, was just coming back to follow up with that... after I posted that, I was wondering why in the hell I was compiling with SSI module... it's actually WITHOUT SSI. 
:) This is back on 1.3.10 after I rolled it back after the issues: nginx version: nginx/1.3.10 built by gcc 4.5.1 20101208 [gcc-4_5-branch revision 167585] (SUSE Linux) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/usr/log/ngnix/error.log --http-log-path=/usr/log/ngnix/access.log --with-openssl=/home/software_source/openssl-1.0.1c --with-cc-opt='-I /usr/local/ssl/include' --with-ld-opt='-L /usr/local/ssl/lib' --without-http_proxy_module --without-http_ssi_module --with-http_ssl_module --with-http_stub_status_module The only difference with the compile options when I did it for 1.3.11 was I added "--with-http_spdy_module" to the end since it needs it for 1.3.11+ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235094#msg-235094 From vbart at nginx.com Mon Jan 14 23:16:14 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Jan 2013 03:16:14 +0400 Subject: 1.3.11 Issues? In-Reply-To: <06ab5cc3b7268e763d8952026f6fee5f.NginxMailingListEnglish@forum.nginx.org> References: <201301150301.24920.vbart@nginx.com> <06ab5cc3b7268e763d8952026f6fee5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301150316.14516.vbart@nginx.com> On Tuesday 15 January 2013 03:05:26 digitalpoint wrote: > ya sorry, was just coming back to follow up with that... after I posted > that, I was wondering why in the hell I was compiling with SSI module... > it's actually WITHOUT SSI. 
:) > > This is back on 1.3.10 after I rolled it back after the issues: > > nginx version: nginx/1.3.10 > built by gcc 4.5.1 20101208 [gcc-4_5-branch revision 167585] (SUSE Linux) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --pid-path=/var/run/nginx.pid --error-log-path=/usr/log/ngnix/error.log > --http-log-path=/usr/log/ngnix/access.log > --with-openssl=/home/software_source/openssl-1.0.1c --with-cc-opt='-I > /usr/local/ssl/include' --with-ld-opt='-L /usr/local/ssl/lib' > --without-http_proxy_module --without-http_ssi_module > --with-http_ssl_module --with-http_stub_status_module > > The only difference with the compile options when I did it for 1.3.11 was I > added "--with-http_spdy_module" to the end since it needs it for 1.3.11+ > Ok, thank you. It seems I already figured out where is the problem. Please, try the new patch: http://nginx.org/patches/spdy/patch.spdy-56_1.3.11.txt wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From kevin at my.walr.us Mon Jan 14 23:25:26 2013 From: kevin at my.walr.us (KT Walrus) Date: Mon, 14 Jan 2013 18:25:26 -0500 Subject: cookie and first load balancing Message-ID: <7957BF86-B3DA-4DB6-B1F1-D66DB304EE65@my.walr.us> The only reason I need haproxy in my software stack is for "cookie persistence" and "first" load balancing. I'd like to use nginx instead. Can I do the equivalent of haproxy "cookie persistence" in nginx by testing if the request has a cookie set and then testing this cookie's value to decide which upstream to use? Seems possible to me. As for haproxy's "first" load balancing, is there a variable in nginx that I can test for a given server which yields the current number of active connections that nginx has with that server? I haven't found any such variable in nginx's documentation. 
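For the cookie-persistence part of the question above, one commonly shown approach is to pick the upstream group with a map on the cookie value. This is only a sketch with hypothetical cookie and server names, not a full equivalent of haproxy's "cookie" option (nginx does not set the persistence cookie for you here, and stock open-source nginx of this era exposes no per-upstream-server active-connection variable for a "first"-style algorithm):

```nginx
# Sketch: choose an upstream group based on a persistence cookie.
# The cookie name "srv" and the addresses below are hypothetical.
map $cookie_srv $chosen_upstream {
    default  all_backends;     # no (or unknown) cookie: normal load balancing
    web1     backend_web1;
    web2     backend_web2;
}

upstream all_backends { server 10.0.0.1; server 10.0.0.2; }
upstream backend_web1 { server 10.0.0.1; }
upstream backend_web2 { server 10.0.0.2; }

server {
    listen 80;
    location / {
        # a variable in proxy_pass resolves against the named upstream groups
        proxy_pass http://$chosen_upstream;
    }
}
```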
Kevin

From nginx-forum at nginx.us Mon Jan 14 23:33:02 2013
From: nginx-forum at nginx.us (Nullivex)
Date: Mon, 14 Jan 2013 18:33:02 -0500
Subject: Socket address relative to prefix
In-Reply-To: <6121211db1192d560ddd3bbc5f73d530.NginxMailingListEnglish@forum.nginx.org>
References: <6121211db1192d560ddd3bbc5f73d530.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

I know this was posted ages ago, however it's the number 1 post in Google and has no answer. NGINX uses a path relative to where it was started from, when it comes to reading sockets at least. I am unsure if this affects additional functionality. At first I thought it was reading from the compile prefix, but this doesn't seem to be the case and I also don't see any documentation on this. Here is an example. Starting NGINX manually from the NGINX folder:

cd ~/nginx; sbin/nginx

Now all SOCKET paths are relative to ~/nginx. However:

cd ~; nginx/sbin/nginx

SOCKET paths are now relative to ~. Just figured I would help document.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,146304,235097#msg-235097

From nginx-forum at nginx.us Mon Jan 14 23:46:43 2013
From: nginx-forum at nginx.us (digitalpoint)
Date: Mon, 14 Jan 2013 18:46:43 -0500
Subject: 1.3.11 Issues?
In-Reply-To: <201301150316.14516.vbart@nginx.com>
References: <201301150316.14516.vbart@nginx.com>
Message-ID: <26213a823cde5e05cc761ab34913506c.NginxMailingListEnglish@forum.nginx.org>

Nope... still same issue.
nginx version: nginx/1.3.11 built by gcc 4.5.1 20101208 [gcc-4_5-branch revision 167585] (SUSE Linux) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/usr/log/ngnix/error.log --http-log-path=/usr/log/ngnix/access.log --with-openssl=/home/software_source/openssl-1.0.1c --with-cc-opt='-I /usr/local/ssl/include' --with-ld-opt='-L /usr/local/ssl/lib' --without-http_proxy_module --without-http_ssi_module --with-http_ssl_module --with-http_stub_status_module --with-http_spdy_module ---- End of error.log: *** Error in `nginx: worker process': malloc(): memory corruption: 0x00000000009947f0 *** Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235098#msg-235098 From vbart at nginx.com Tue Jan 15 00:07:09 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Jan 2013 04:07:09 +0400 Subject: 1.3.11 Issues? In-Reply-To: <26213a823cde5e05cc761ab34913506c.NginxMailingListEnglish@forum.nginx.org> References: <201301150316.14516.vbart@nginx.com> <26213a823cde5e05cc761ab34913506c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301150407.10062.vbart@nginx.com> On Tuesday 15 January 2013 03:46:43 digitalpoint wrote: > Nope... still same issue. 
> > nginx version: nginx/1.3.11 > built by gcc 4.5.1 20101208 [gcc-4_5-branch revision 167585] (SUSE Linux) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --pid-path=/var/run/nginx.pid --error-log-path=/usr/log/ngnix/error.log > --http-log-path=/usr/log/ngnix/access.log > --with-openssl=/home/software_source/openssl-1.0.1c --with-cc-opt='-I > /usr/local/ssl/include' --with-ld-opt='-L /usr/local/ssl/lib' > --without-http_proxy_module --without-http_ssi_module > --with-http_ssl_module --with-http_stub_status_module > --with-http_spdy_module > > ---- > End of error.log: > > *** Error in `nginx: worker process': malloc(): memory corruption: > 0x00000000009947f0 *** > Thank you for testing. Could you create a debug log for the issue? See this link for instructions: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jan 15 00:21:43 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Mon, 14 Jan 2013 19:21:43 -0500 Subject: 1.3.11 Issues? In-Reply-To: <201301150407.10062.vbart@nginx.com> References: <201301150407.10062.vbart@nginx.com> Message-ID: <397dc288d385598065919430a5732b02.NginxMailingListEnglish@forum.nginx.org> Here ya go: http://www.shawnhogan.com/error.log.gz Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235100#msg-235100 From vbart at nginx.com Tue Jan 15 01:23:17 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Jan 2013 05:23:17 +0400 Subject: 1.3.11 Issues? In-Reply-To: <397dc288d385598065919430a5732b02.NginxMailingListEnglish@forum.nginx.org> References: <201301150407.10062.vbart@nginx.com> <397dc288d385598065919430a5732b02.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301150523.17555.vbart@nginx.com> On Tuesday 15 January 2013 04:21:43 digitalpoint wrote: > Here ya go: > > http://www.shawnhogan.com/error.log.gz > Thanks a lot. 
It was really helpful. I believe the problem is fixed now: http://nginx.org/patches/spdy/patch.spdy-57_1.3.11.txt wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jan 15 01:55:43 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Mon, 14 Jan 2013 20:55:43 -0500 Subject: 1.3.11 Issues? In-Reply-To: <201301150523.17555.vbart@nginx.com> References: <201301150523.17555.vbart@nginx.com> Message-ID: <665fc7d81dd4048bb3c6a3d9d1ab1909.NginxMailingListEnglish@forum.nginx.org> Well... the underlying errors went away, but it seems the new SPDY patch broke being able to handle multiple hosts on the same SPDY connection now (it worked under 1.3.10 just fine). For example, we have a SSL cert for both digitalpoint.com and dpstatic.com (dpstatic.com is a cookieless domain for serving static content), so SPDY attempts to use the same connection for multiple hosts. See SPDY session list here: http://f.cl.ly/items/0T1u3g0h0e1A0D1g2N0s/Image%202013.01.08%2011:59:48%20AM.png With the SPDY patch for 1.3.11, now requests to *.dpstatic.com are *actually* being sent to digitalpoint.com (and getting a file not found). So somehow during a SPDY connection, the host for an individual request is being ignored somewhere along the way. Top browser is Chrome (SPDY connection), bottom browser is Safari (no SPDY support)... the end result is a SPDY connection will yield different results vs the "traditional" SSL connection: http://f.cl.ly/items/3K1Q2N1I3B000c0b0614/Image%202013.01.14%205:52:31%20PM.png Again, this worked as expected (ability for SPDY to properly share a connection across multiple hosts) with 1.3.10. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235105#msg-235105 From vbart at nginx.com Tue Jan 15 03:17:49 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Jan 2013 07:17:49 +0400 Subject: 1.3.11 Issues? 
In-Reply-To: <665fc7d81dd4048bb3c6a3d9d1ab1909.NginxMailingListEnglish@forum.nginx.org> References: <201301150523.17555.vbart@nginx.com> <665fc7d81dd4048bb3c6a3d9d1ab1909.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301150717.49576.vbart@nginx.com> On Tuesday 15 January 2013 05:55:43 digitalpoint wrote: > Well... the underlying errors went away, but it seems the new SPDY patch > broke being able to handle multiple hosts on the same SPDY connection now > (it worked under 1.3.10 just fine). > > For example, we have a SSL cert for both digitalpoint.com and dpstatic.com > (dpstatic.com is a cookieless domain for serving static content), so SPDY > attempts to use the same connection for multiple hosts. See SPDY session > list here: > http://f.cl.ly/items/0T1u3g0h0e1A0D1g2N0s/Image%202013.01.08%2011:59:48%20A > M.png > > With the SPDY patch for 1.3.11, now requests to *.dpstatic.com are > *actually* being sent to digitalpoint.com (and getting a file not found). > So somehow during a SPDY connection, the host for an individual request is > being ignored somewhere along the way. > > Top browser is Chrome (SPDY connection), bottom browser is Safari (no SPDY > support)... the end result is a SPDY connection will yield different > results vs the "traditional" SSL connection: > http://f.cl.ly/items/3K1Q2N1I3B000c0b0614/Image%202013.01.14%205:52:31%20PM > .png > > Again, this worked as expected (ability for SPDY to properly share a > connection across multiple hosts) with 1.3.10. > There is no difference between 1.3.10 and 1.3.11 in terms of SPDY. In fact, 1.3.10 has serious bugs (see: http://nginx.org/en/CHANGES), and you should use 1.3.11 instead. The big difference is between the spdy54 and spdy55+ patches. A large part of the SPDY implementation was rewritten in spdy55, and some relevant parts of nginx also got new code. One of those changes makes nginx more RFC 6066 compliant. Here are some quotes: 3. Server Name Indication [...]
If an application negotiates a server name using an application protocol and then upgrades to TLS, and if a server_name extension is sent, then the extension SHOULD contain the same name that was negotiated in the application protocol. If the server_name is established in the TLS session handshake, the client SHOULD NOT attempt to request a different server name at the application layer. 11.1. Security Considerations for server_name [...] Since it is possible for a client to present a different server_name in the application protocol, application server implementations that rely upon these names being the same MUST check to make sure the client did not present a different name in the application protocol. from http://tools.ietf.org/html/rfc6066 And you will not find in the SPDY draft 2 specification any information about the "ability for SPDY to properly share a connection across multiple hosts": http://dev.chromium.org/spdy/spdy-protocol/spdy-protocol-draft2 Apparently, by making TLS SNI in nginx more RFC-compliant, I unintentionally broke SPDY. Well, it's safe to use spdy54 with 1.3.11: http://nginx.org/patches/spdy/patch.spdy-54.txt and I recommend using it while I think about a solution. Thanks again for testing. I hope to fix the issue soon. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jan 15 03:50:30 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Mon, 14 Jan 2013 22:50:30 -0500 Subject: 1.3.11 Issues? In-Reply-To: <201301150717.49576.vbart@nginx.com> References: <201301150717.49576.vbart@nginx.com> Message-ID: <3a941aa89f0f5db9eb60fb6b0a25f98f.NginxMailingListEnglish@forum.nginx.org> Yeah... the problem is that while it might not be part of the SPDY 2 draft to share connections across multiple hosts, Chrome most certainly is doing it (and probably other browsers) as you can see from the previous screenshot. Either way, you guys are doing a crazy awesome job...
keep it up. :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235107#msg-235107 From nginx-list at puzzled.xs4all.nl Tue Jan 15 04:08:15 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 15 Jan 2013 05:08:15 +0100 Subject: Howto set geoip_country for IPv4 and IPv6 databases? Message-ID: <50F4D62F.7070107@puzzled.xs4all.nl> Hi, Next to their IPv4 country and city database, Maxmind now also has an IPv6 country and city database at http://dev.maxmind.com/geoip/geolite Is there a way I can make geo use the IPv6 databases too? This does not work: geoip_country /etc/nginx/GeoIP.dat; geoip_country /etc/nginx/GeoIPv6.dat; geoip_city /etc/nginx/GeoLiteCity.dat; geoip_city /etc/nginx/GeoLiteCityv6.dat; Because I see this error when checking the config: nginx: [emerg] "geoip_country" directive is duplicate in /etc/nginx/nginx.conf:116 Thanks for any advice. Regards, Patrick From edwardsongy at bellsouth.net Tue Jan 15 04:33:33 2013 From: edwardsongy at bellsouth.net (Edward Songy) Date: Mon, 14 Jan 2013 22:33:33 -0600 Subject: 1st Post Message-ID: <50F4DC1D.6070708@bellsouth.net> Hi All, I have embarked on the path to get nginx to work with MS Windows. I am going to follow the instructions provided in the sites of Eksith Rodrigo explicitly, and ask that any who have succeeded in getting 'phpinfo.php' to display its information properly share your 'nginx.conf' settings and your 'php.ini' directive settings, and your advice. I acknowledge the advantages of using Linux, OpenBSD, etc., but have not touched "....inx' since installing SCO xenix thirty-eight (38) years ago, and although confident then ( I was age 40 then ), would not like to 're-learn' '...inx' until it is needed for a 'production' site, therefore the desire to stick with one of the MS operating systems. I am comfortable with installing, configuring, creating databases and tables and using the MySQL monitor.
Had the MySQL application working on my notebook with Apache and PHP, but have removed it all in order to start 'clean' with the nginx endeavor. Have downloaded: mysql-5.5.29-win32.zip and mysql-5.5.29-win32.msi. I know how to install via the - - -.msi, but do not know with the - - -.zip. It seems that the Windows tools for installing, starting, stopping, and un-installing presume installation via an installer. The RDBMS IDE 'PFXplus' that I have been using for all versions, SCO xenix thru MS Windows, can be installed as multiple instances with no changes to the registry. All that is required for an 'install' is that the files be copied into the desired drive:directory and it works! Simple and wonderful. Since Eksith instructs to use the .... zip versions, am I now going to learn to 'install' and configure MySQL without the .msi version? ..and do the same with PHP? Have downloaded: php-5.4.10-win32-VC9-x86.zip. Do not believe that there exists a: php-5.4.10-win32-VC9-x86.msi. Have downloaded: nginx-1.3.11.zip and nginx-1[1].3.10-win32-setup.exe From bruno.premont at restena.lu Tue Jan 15 08:08:30 2013 From: bruno.premont at restena.lu (Bruno Prémont) Date: Tue, 15 Jan 2013 09:08:30 +0100 Subject: Howto set geoip_country for IPv4 and IPv6 databases? In-Reply-To: <50F4D62F.7070107@puzzled.xs4all.nl> References: <50F4D62F.7070107@puzzled.xs4all.nl> Message-ID: <20130115090830.60080188@pluto.restena.lu> Hi, It should be sufficient to just list the IPv6 database (which includes the IPv4 database as mapped addresses), though the 1.2.x branch needs the attached patch. I don't know if Maxim has already applied it, or a variant of it, to the 1.3.x branch. Also search the archives for GeoIP and IPv6 support, there were a few posts about it. The attached patch was initially posted in http://mailman.nginx.org/pipermail/nginx-devel/2011-June/000971.html (though I've updated it for 1.2.2). Regards, Bruno On Tue, 15 Jan 2013 05:08:15 +0100 Patrick Lists wrote: > Hi, > > Next to their IPv4 country and city database, Maxmind now also has an > IPv6 country and city database at http://dev.maxmind.com/geoip/geolite > > Is there a way I can make geo use the IPv6 databases too? > > This does not work: > > geoip_country /etc/nginx/GeoIP.dat; > geoip_country /etc/nginx/GeoIPv6.dat; > geoip_city /etc/nginx/GeoLiteCity.dat; > geoip_city /etc/nginx/GeoLiteCityv6.dat; > > Because I see this error when checking the config: > > nginx: [emerg] "geoip_country" directive is duplicate in > /etc/nginx/nginx.conf:116 > > Thanks for any advice. > > Regards, > Patrick -------------- next part -------------- A non-text attachment was scrubbed...
Name: nginx-1.2.2-geoip.patch Type: text/x-patch Size: 10108 bytes Desc: not available URL: From ru at nginx.com Tue Jan 15 13:36:09 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 15 Jan 2013 17:36:09 +0400 Subject: Howto set geoip_country for IPv4 and IPv6 databases? In-Reply-To: <20130115090830.60080188@pluto.restena.lu> References: <50F4D62F.7070107@puzzled.xs4all.nl> <20130115090830.60080188@pluto.restena.lu> Message-ID: <20130115133609.GA68241@lo0.su> On Tue, Jan 15, 2013 at 09:08:30AM +0100, Bruno Prémont wrote: > It should be sufficient to just list the IPv6 database (which includes > the IPv4 database as mapped addresses) though the 1.2.x branch needs > the attached patch. > > I don't know if Maxim has already applied it or a variant of it to > the 1.3.x branch. It's in the works, should be done real soon now. From andy.jewell at sysmicro.co.uk Tue Jan 15 13:56:54 2013 From: andy.jewell at sysmicro.co.uk (Andy D'Arcy Jewell) Date: Tue, 15 Jan 2013 13:56:54 +0000 Subject: Why does the unauthenticated SMTP proxy include auth_http statement? Message-ID: <50F56026.90402@sysmicro.co.uk> Hi all, Can anyone throw some light on this for me please? Looking at: http://wiki.nginx.org/Faq#How_can_Nginx_be_deployed_as_an_SMTP_proxy.2C_with_a_Postfix_backend.3F It says "The example is for unauthenticated e-mail as you can see", but the example clearly shows authentication: --------------------------------------------------------- mail { server_name mail.somedomain.com; auth_http localhost:8008/auth-smtppass.php; --------------------------------------------------------- So I'm confused - should this be in there? Because if so, I'm obviously missing something about what unauthenticated means...
-Andy -- Andy D'Arcy Jewell SysMicro Limited Linux Support E: andy.jewell at sysmicro.co.uk W: www.sysmicro.co.uk From nginx-list at puzzled.xs4all.nl Tue Jan 15 16:22:34 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 15 Jan 2013 17:22:34 +0100 Subject: Howto set geoip_country for IPv4 and IPv6 databases? In-Reply-To: <20130115133609.GA68241@lo0.su> References: <50F4D62F.7070107@puzzled.xs4all.nl> <20130115090830.60080188@pluto.restena.lu> <20130115133609.GA68241@lo0.su> Message-ID: <50F5824A.300@puzzled.xs4all.nl> Hi Ruslan, On 01/15/2013 02:36 PM, Ruslan Ermilov wrote: > On Tue, Jan 15, 2013 at 09:08:30AM +0100, Bruno Prémont wrote: >> It should be sufficient to just list the IPv6 database (which includes >> the IPv4 database as mapped addresses) though the 1.2.x branch needs >> the attached patch. >> >> I don't know if Maxim has already applied it or a variant of it to >> the 1.3.x branch. > > It's in the works, should be done real soon now. Thanks, that's good to know. I'll wait for the new 1.3 release with this functionality and report back. Regards, Patrick From nginx-forum at nginx.us Tue Jan 15 18:31:10 2013 From: nginx-forum at nginx.us (Rad3k) Date: Tue, 15 Jan 2013 13:31:10 -0500 Subject: Socket address relative to prefix In-Reply-To: References: <6121211db1192d560ddd3bbc5f73d530.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2ce43212385ea98eaa4abaddf1bba7c8.NginxMailingListEnglish@forum.nginx.org> Thanks for answering :) By the way, excellent timing - I have just returned, after a long break, to experiments with web development, so I'll definitely make use of this knowledge. Back then I worked around this problem by preprocessing config files each time the service was started, and I'll probably still have to do this for php and mysql. But it's nice that I won't need it for Nginx anymore.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,146304,235152#msg-235152 From lists at ruby-forum.com Tue Jan 15 19:02:07 2013 From: lists at ruby-forum.com (Buzi Buzi) Date: Tue, 15 Jan 2013 20:02:07 +0100 Subject: Does anyone know of an Interface that can provide/manage text configuration snippets for nginx Message-ID: <93040888d579a6877939036ca964994b@ruby-forum.com> Hi, I'm a newbie to Nginx. I'm in search of an interface that can provide/manage "text configuration snippets" for Nginx, as my server is intended to serve as a reverse proxy for multiple sites with dynamic, fast-changing sub_filter configurations. Can anyone point me to a starting point? If such an interface doesn't exist and you had to develop one, how would you go about doing it for best performance on Linux (software architecture/language)? Hope it's clear and my question is in the right place. Thanks -- Posted via http://www.ruby-forum.com/. From cmfileds at gmail.com Tue Jan 15 19:28:46 2013 From: cmfileds at gmail.com (CM Fields) Date: Tue, 15 Jan 2013 14:28:46 -0500 Subject: nginx 1.3.11 and spdy v58 - thank you Message-ID: I just wanted to thank the developers for releasing Nginx v1.3.11 and the optimized SPDY patch v58. I run a non-profit https site serving static content (10KB-1MB) averaging 500 connections per second on FreeBSD 9.1 with zfs. With nginx v1.3.9 and spdy v54 we were completing client requests in less than 10ms, 72% of the time. With nginx 1.3.11 and spdy v58 we now complete requests in less than 10ms, 94% of the time. Just incredible. Again, my sincerest thanks for such a powerful web server. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From contact at jpluscplusm.com Tue Jan 15 19:30:05 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 15 Jan 2013 19:30:05 +0000 Subject: Does anyone know of an Interface that can provide/manage text configuration snippets for nginx In-Reply-To: <93040888d579a6877939036ca964994b@ruby-forum.com> References: <93040888d579a6877939036ca964994b@ruby-forum.com> Message-ID: On 15 January 2013 19:02, Buzi Buzi wrote: > Hi,im a newbie to Nginx. > I'm in search of an Interface that can provide/manage "text > configuration-snippets" for Nginx. > As my server is intended to serve as a reverse proxy for multiple sites > with dynamic fast-changing sub_filter configurations. > > can anyone point me to a starting point ? Tools like these, which don't explicitly include a GUI, are where much good work is being done these days: * http://community.opscode.com/cookbooks/nginx * https://github.com/puppetlabs/puppetlabs-nginx > if such an interface doesn't exist and you had to develop one, how would > you go upon doing it for best performance on Linux (software > architecture/language) ? Given that you say you're new to nginx, may I suggest you don't jump straight in to fill what you perceive as a tools gap. Perhaps you should get some experience running it in production so you know what would /actually/ be important to you, hence which tooling gaps need filling and which really don't. Cheers, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From andrew at nginx.com Tue Jan 15 19:32:14 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 15 Jan 2013 23:32:14 +0400 Subject: nginx 1.3.11 and spdy v58 - thank you In-Reply-To: References: Message-ID: On Jan 15, 2013, at 11:28 PM, CM Fields wrote: > I just wanted to thank the developers for releasing Nginx v1.3.11 and the optimized SPDY patch v58. 
> > I run a non-profit https site serving static content (10KB-1MB) averaging 500 connections per second on FreeBSD 9.1 with zfs. With nginx v1.3.9 and spdy v54 we were completing client requests in less than 10ms, 72% of the time. With nginx 1.3.11 and spdy v58 we now complete requests in less than 10ms, 94% of the time. Just incredible. > > Again, my sincerest thanks for such a powerful web server. You're welcome, and thanks for sharing this one! More precisely, Valentin (vbart) Bartenev is the developer who's implemented SPDY in nginx :) From becassel at uwaterloo.ca Tue Jan 15 22:03:29 2013 From: becassel at uwaterloo.ca (Ben Cassell) Date: Tue, 15 Jan 2013 17:03:29 -0500 Subject: Difficulties with LD_PRELOAD and Intercepting Sendfile Message-ID: <8F00F18177154FCAAF89155EF48EB24C@cs.uwaterloo.ca> We have a custom library shim that we are using to intercept libc calls from applications. We are loading this shim beneath nginx 1.2.3 using the LD_PRELOAD flag on both Linux and FreeBSD environments. After setting env LD_PRELOAD in the nginx.conf file, our shim is able to intercept a host of libc calls including read and write (as expected). On FreeBSD, sendfile is also intercepted from nginx as expected by our shim. On Linux however, our library is never reached when calling sendfile, even though we have verified that sendfile is being included from sys/sendfile.h (both through checking the bindings of the calls themselves using debugging tools and through tracing the code inside .../nginx-1.2.3/src/os/unix/ngx_linux_sendfile_chain.c). Calling sendfile through other applications results in a successful redirection to our library. I'm wondering if anyone has encountered difficulty with intercepting sendfile from nginx on Linux before, or if this is possibly an oversight on our part. Thanks in advance for any insight, Ben -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lists at ruby-forum.com Wed Jan 16 09:28:33 2013 From: lists at ruby-forum.com (Buzi Buzi) Date: Wed, 16 Jan 2013 10:28:33 +0100 Subject: Does anyone know of an Interface that can provide/manage text configuration snippets for nginx In-Reply-To: <93040888d579a6877939036ca964994b@ruby-forum.com> References: <93040888d579a6877939036ca964994b@ruby-forum.com> Message-ID: <218adc02091a680003cdff2e0f0164a3@ruby-forum.com> Thanks for your answers Jonathan, very interesting. As to the second part of the question: I have a very specific need which has led me to use Nginx: I'm using it as a reverse proxy, specifically for its sub_filter capabilities, to manage text substitution at the output chain buffer level. Since the Nginx sub_filter module derives its configuration from included *.conf files, and since my intention is to use plenty of them in a hierarchical structure (almost like xml files are used as a data source), in large amounts, changing them very often, I need to be able to comfortably generate/edit large amounts of these config text snippets, preferably from an interface which I intend to customize. This is currently the tool gap I need filled. Hope this makes my question clearer. I have already set up a server and found it suitable for my needs. I know these are deep waters and I might be in over my head here, but what I'm looking for is a good starting point or tool that can get me going. If such a tool doesn't exist, what programming language would be suitable for interfacing with the Nginx configuration snippets? Thanks again. -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Wed Jan 16 10:25:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Jan 2013 14:25:12 +0400 Subject: Why does the unauthenticated SMTP proxy include auth_http statement? In-Reply-To: <50F56026.90402@sysmicro.co.uk> References: <50F56026.90402@sysmicro.co.uk> Message-ID: <20130116102511.GI25043@mdounin.ru> Hello!
On Tue, Jan 15, 2013 at 01:56:54PM +0000, Andy D'Arcy Jewell wrote: > Hi all, > > Can anyone throw some light on this for me please? > > Looking at: http://wiki.nginx.org/Faq#How_can_Nginx_be_deployed_as_an_SMTP_proxy.2C_with_a_Postfix_backend.3F > > It says "The example is for unauthenticated e-mail as you can see", > but the example clearly shows authentication: > > --------------------------------------------------------- > mail { server_name mail.somedomain.com; > auth_http localhost:8008/auth-smtppass.php; > --------------------------------------------------------- > > So I'm confused - should this be in there? Because if so, I'm > obviously missing something about what unauthenticated means... The "auth_method none" still doesn't mean there is no authentication/authorization at all, it means that it's done without requiring a user to provide login/password. Auth script is still expected to do some form of authorization, e.g. by checking ip and source/destination addresses provided. -- Maxim Dounin http://nginx.com/support.html From jaap at q42.nl Wed Jan 16 10:59:06 2013 From: jaap at q42.nl (Jaap Taal) Date: Wed, 16 Jan 2013 11:59:06 +0100 Subject: server_group { } directive? Message-ID: Hey there, I was reading about global if's / locations and why it isn't possible. I would like to suggest the following, maybe it's been suggested before, who knows: server_group { # this if will be duplicated to each of the server { } configs in this server_group if ( $http_user_agent ~* Googlebot ) { return 403; } server { server_name srv1.example.com; } server { server_name srv2.example.com; } } It's just a way for me to keep the server { } configs very simple, runtime it should be just like the if would appear twice. 
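As a sketch of the closest approximation available with current directives, the shared check can be factored into a snippet that every server block includes (filename hypothetical):

```nginx
# shared-checks.conf (hypothetical filename) would contain the common check:
#   if ($http_user_agent ~* Googlebot) { return 403; }

server {
    server_name srv1.example.com;
    include shared-checks.conf;   # must be repeated in each server block
}

server {
    server_name srv2.example.com;
    include shared-checks.conf;
}
```

The proposed server_group would remove exactly this per-server repetition, and with it the risk of one server block forgetting the include.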
With includes this makes more sense: nginx.conf: server_group { # this if will be duplicated to each of the server { } configs in this server_group if ( $http_user_agent ~* Googlebot ) { return 403; } include protected-servers/*.conf; } protected-servers/srv1.conf: server { server_name srv1.example.com; } protected-servers/srv2.conf: server { server_name srv2.example.com; } This way I can enforce the servers in protected-server/*.conf to have the googlebot check. Otherwise I would have to include a file in each .conf file, which could lead to human mistakes. Jaap -------------- next part -------------- An HTML attachment was scrubbed... URL: From pasik at iki.fi Wed Jan 16 15:15:12 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Wed, 16 Jan 2013 17:15:12 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: <20130116151511.GS8912@reaktio.net> On Sun, Jan 13, 2013 at 08:22:17PM +0800, ?????? wrote: > This patch should work between nginx-1.2.6 and nginx-1.3.8. > The documentation is here: > ## client_body_postpone_sending ## > Syntax: **client_body_postpone_sending** `size` > Default: 64k > Context: `http, server, location` > If you specify the `proxy_request_buffering` or > `fastcgi_request_buffering` to be off, Nginx will send the body to backend > when it receives more than `size` data or the whole request body has been > received. It could save the connection and reduce the IO number with > backend. > > ## proxy_request_buffering ## > Syntax: **proxy_request_buffering** `on | off` > Default: `on` > Context: `http, server, location` > Specify the request body will be buffered to the disk or not. If it's off, > the request body will be stored in memory and sent to backend after Nginx > receives more than `client_body_postpone_sending` data. It could save the > disk IO with large request body. 
> > > Note that, if you specify it to be off, the nginx retry mechanism > with unsuccessful response will be broken after you sent part of the > request to backend. It will just return 500 when it encounters such > unsuccessful response. This directive also breaks these variables: > $request_body, $request_body_file. You should not use these variables any > more while their values are undefined. > Hello, This patch sounds exactly like what I need aswell! I assume it works for both POST and PUT requests? Thanks, -- Pasi > Hello! > @yaoweibin > > If you are eager for this feature, you could try my > patch: [2]https://github.com/taobao/tengine/pull/91. This patch has > been running in our production servers. > > what's the nginx version your patch based on? > Thanks! > On Fri, Jan 11, 2013 at 5:17 PM, ?????? <[3]yaoweibin at gmail.com> wrote: > > I know nginx team are working on it. You can wait for it. > If you are eager for this feature, you could try my > patch: [4]https://github.com/taobao/tengine/pull/91. This patch has > been running in our production servers. > > 2013/1/11 li zJay <[5]zjay1987 at gmail.com> > > Hello! > is it possible that nginx will not buffer the client body before > handle the request to upstream? > we want to use nginx as a reverse proxy to upload very very big file > to the upstream, but the default behavior of nginx is to save the > whole request to the local disk first before handle it to the > upstream, which make the upstream impossible to process the file on > the fly when the file is uploading, results in much high request > latency and server-side resource consumption. > Thanks! 
> _______________________________________________ > nginx mailing list > [6]nginx at nginx.org > [7]http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > _______________________________________________ > nginx mailing list > [8]nginx at nginx.org > [9]http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > [10]nginx at nginx.org > [11]http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > References > > Visible links > 1. mailto:zjay1987 at gmail.com > 2. https://github.com/taobao/tengine/pull/91 > 3. mailto:yaoweibin at gmail.com > 4. https://github.com/taobao/tengine/pull/91 > 5. mailto:zjay1987 at gmail.com > 6. mailto:nginx at nginx.org > 7. http://mailman.nginx.org/mailman/listinfo/nginx > 8. mailto:nginx at nginx.org > 9. http://mailman.nginx.org/mailman/listinfo/nginx > 10. mailto:nginx at nginx.org > 11. http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From cloos at jhcloos.com Wed Jan 16 22:16:09 2013 From: cloos at jhcloos.com (James Cloos) Date: Wed, 16 Jan 2013 17:16:09 -0500 Subject: patch.spdy-55_1.3.11 broken? In-Reply-To: <201301142053.33218.vbart@nginx.com> (Valentin V. Bartenev's message of "Mon, 14 Jan 2013 20:53:33 +0400") References: <201301142053.33218.vbart@nginx.com> Message-ID: >>>>> "VVB" == Valentin V Bartenev writes: VVB> Yeap, fixed. The patch at: http://nginx.org/patches/spdy/patch.spdy-58_1.3.11.txt fails to apply to nginx-1.3.11 with 37 FAILs and 143 succeeds, 22 of which need fuzz. 
-JimC
-- 
James Cloos          OpenPGP: 1024D/ED7DAEA6

From yaoweibin at gmail.com Thu Jan 17 03:15:58 2013
From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=)
Date: Thu, 17 Jan 2013 11:15:58 +0800
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130116151511.GS8912@reaktio.net>
References: <20130116151511.GS8912@reaktio.net>
Message-ID: 

Yes. It should work for any request method.

2013/1/16 Pasi Kärkkäinen

> On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> > This patch should work between nginx-1.2.6 and nginx-1.3.8.
> > The documentation is here:
> >
> > ## client_body_postpone_sending ##
> > Syntax: **client_body_postpone_sending** `size`
> > Default: 64k
> > Context: `http, server, location`
> > If you specify the `proxy_request_buffering` or
> > `fastcgi_request_buffering` to be off, Nginx will send the body to backend
> > when it receives more than `size` data or the whole request body has been
> > received. It could save the connection and reduce the IO number with
> > backend.
> >
> > ## proxy_request_buffering ##
> > Syntax: **proxy_request_buffering** `on | off`
> > Default: `on`
> > Context: `http, server, location`
> > Specify whether the request body will be buffered to disk or not. If it's
> > off, the request body will be stored in memory and sent to backend after
> > Nginx receives more than `client_body_postpone_sending` data. It could
> > save disk IO with a large request body.
> >
> > Note that, if you specify it to be off, the nginx retry mechanism
> > with unsuccessful response will be broken after you sent part of the
> > request to backend. It will just return 500 when it encounters such an
> > unsuccessful response. This directive also breaks these variables:
> > $request_body, $request_body_file. You should not use these variables
> > any more while their values are undefined.
>
> Hello,
>
> This patch sounds exactly like what I need as well!
> I assume it works for both POST and PUT requests?
>
> Thanks,
>
> -- Pasi
>
> [rest of the earlier thread quoted in full above; trimmed]

-- 
Weibin Yao
Developer @ Server Platform Team of Taobao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robm at fastmail.fm Thu Jan 17 04:41:59 2013
From: robm at fastmail.fm (Robert Mueller)
Date: Thu, 17 Jan 2013 15:41:59 +1100
Subject: Logging errors via error_page + post_action?
References: <1355370282.17475.140661165396829.04E700D9@webmail.messagingengine.com>
Message-ID: <1358397719.10080.140661178656361.3DD44B9B@webmail.messagingengine.com>

I posted this about a month ago and didn't hear anything, so I'm reposting again to hopefully catch some new eyes and see if anyone has any ideas.

---

Hi

In our nginx setup we use proxy_pass to pass most requests to backend servers. We like to monitor our logs regularly for any errors to see that everything is working as expected.
We can grep the nginx logs, but:

a) That's not real time
b) We can't get extended information about the request, like if it's a
   POST, what the POST body actually was

So what we wanted to do was use an error_page handler in nginx so that if any backend returned an error, we resent the request details to an error handler script, something like:

location / {
    proxy_pass http://backend/;
}

error_page 500 /internal_error_page_500;

location /internal_error_page_500 {
    internal;
    proxy_set_header X-URL "$host$request_uri";
    proxy_set_header X-Post $request_body;
    proxy_set_header X-Method $request_method;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://local/cgi-bin/error.pl;
}

The problem is that this replaces any result content from the main / proxy_pass with the content that error.pl generates. We don't want that; we want to keep the original result, but just use the error_page handler to effectively "log" the error for later.

I thought maybe we could replace:

proxy_pass http://local/cgi-bin/error.pl;

With:

post_action http://local/cgi-bin/error.pl;

But that just causes nginx to return a "404 Not Found" error instead.

Is there any way to do this? Return the original result content of a proxy_pass directive, but if that proxy_pass returns an error code (eg 500, etc), do a request to another URL with "logging" information (eg URL, method, POST body content, etc)?

-- 
Rob Mueller
robm at fastmail.fm

From nginx-forum at nginx.us Thu Jan 17 07:25:04 2013
From: nginx-forum at nginx.us (Krys)
Date: Thu, 17 Jan 2013 02:25:04 -0500
Subject: Default Welcome page
Message-ID: <79d1b7d0448e448bf869ba1b42061fe4.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am a newbie to NGINX. I wish to learn how to make a custom default index page that gets served if no index.html or index.php file is available in the document root. The custom index.html page should be used for every new server block. How can I have a common syntax block - maybe an include file?
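For the default-index question above, a sketch of the shared-include approach; every file name and path here is an example, not something the poster specified:

```nginx
# Hypothetical shared snippet, e.g. /etc/nginx/snippets/default-index.conf,
# included from each server block. For the root URL it serves the vhost's
# own index if present, otherwise a site-wide fallback page.
location = / {
    try_files /index.html /default-index.html;
}

location = /default-index.html {
    root /usr/share/nginx/custom;   # assumed home of the shared fallback page
}
```

Each server block then only needs `include /etc/nginx/snippets/default-index.conf;` to pick up the common behavior.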
Any help is highly appreciated. Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235196,235196#msg-235196 From maxim at nginx.com Thu Jan 17 10:03:39 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 17 Jan 2013 14:03:39 +0400 Subject: patch.spdy-55_1.3.11 broken? In-Reply-To: References: <201301142053.33218.vbart@nginx.com> Message-ID: <50F7CC7B.9050307@nginx.com> Hi James, On 1/17/13 2:16 AM, James Cloos wrote: >>>>>> "VVB" == Valentin V Bartenev writes: > > VVB> Yeap, fixed. > > The patch at: > > http://nginx.org/patches/spdy/patch.spdy-58_1.3.11.txt > > fails to apply to nginx-1.3.11 with 37 FAILs and 143 succeeds, 22 of > which need fuzz. > The following script works for me on FreeBSD box: fetch -o- http://www.nginx.org/download/nginx-1.3.11.tar.gz | tar zxvf - && fetch http://nginx.org/patches/spdy/patch.spdy-58_1.3.11.txt && cd nginx-1.3.11 && patch -p1 <../patch.spdy-58_1.3.11.txt Please note that 'patch -C' could produce the (false) errors you describe. -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/support.html From nginx-forum at nginx.us Thu Jan 17 15:10:38 2013 From: nginx-forum at nginx.us (philipp) Date: Thu, 17 Jan 2013 10:10:38 -0500 Subject: patch.spdy-55_1.3.11 broken? In-Reply-To: <50F7CC7B.9050307@nginx.com> References: <50F7CC7B.9050307@nginx.com> Message-ID: <6871ea9c81333d76fc34a9c84821f691.NginxMailingListEnglish@forum.nginx.org> Yes the latest patch (58) works fine. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235059,235228#msg-235228 From cloos at jhcloos.com Thu Jan 17 16:18:08 2013 From: cloos at jhcloos.com (James Cloos) Date: Thu, 17 Jan 2013 11:18:08 -0500 Subject: patch.spdy-55_1.3.11 broken? 
In-Reply-To: <50F7CC7B.9050307@nginx.com> (Maxim Konovalov's message of "Thu, 17 Jan 2013 14:03:39 +0400")
References: <201301142053.33218.vbart@nginx.com> <50F7CC7B.9050307@nginx.com>
Message-ID: 

>>>>> "MK" == Maxim Konovalov writes:

MK> The following script works for me on FreeBSD box:

MK> fetch -o- http://www.nginx.org/download/nginx-1.3.11.tar.gz | tar
MK> zxvf - && fetch
MK> http://nginx.org/patches/spdy/patch.spdy-58_1.3.11.txt && cd
MK> nginx-1.3.11 && patch -p1 <../patch.spdy-58_1.3.11.txt

MK> Please note that 'patch -C' could produce the (false) errors you
MK> describe.

On Linux, using GNU patch, patch(1) and patch -C both tell me that -C is not a valid option to patch.

But I see the problem; the single patch file is a set of incremental patches. Unfortunately, gentoo's portage uses the --dry-run flag first, to determine which -p option is required. The earlier spdy patches worked with that automation, but the hg changeset does not. I'll post a bugz with them about that issue.

Is the hg repo with the spdy changes available anywhere?

-JimC
-- 
James Cloos          OpenPGP: 1024D/ED7DAEA6

From gustavo at tenrreiro.com Thu Jan 17 17:09:19 2013
From: gustavo at tenrreiro.com (Gustavo Tenrreiro)
Date: Thu, 17 Jan 2013 11:09:19 -0600
Subject: Intermittent 404s when requesting a CSS file
Message-ID: <03a001cdf4d5$63e6bd90$2bb438b0$@tenrreiro.com>

Hi,

Just installed nginx and I am also using php5-fpm on Ubuntu. I created a very simple php page that has a css stylesheet reference at the top of the generated HTML. Every 2 or 3 requests, however, the browser request for the css file returns a 404 (nginx 404). When that happens, in the response headers I see:

X-Powered-By PHP/5.3.10-1ubuntu3.4

Which I think means that nginx sent the request to the php service instead of serving the css file itself. Can someone help me figure out why this is happening?
( see below for my nginx default.conf ) server { listen 80; server_name localhost; expires off; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; root /home/gustavo/nginx/html; location ~* /css/.*\.css$ { add_header Content-Type text/css; } location ~ \.js { add_header Content-Type application/x-javascript; } location / { #root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { #root /usr/share/nginx/html; #root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # enforce NO www if ($host ~* ^www\.(.*)) { set $host_without_www $1; rewrite ^/(.*)$ $scheme://$host_without_www/$1 permanent; } # canonicalize codeigniter url end points # if your default controller is something other than "welcome" you should change the following if ($request_uri ~* ^(/welcome(/index)?|/index(.php)?)/?$) { rewrite ^(.*)$ / permanent; } # removes trailing "index" from all controllers if ($request_uri ~* index/?$) { rewrite ^/(.*)/index/?$ /$1 permanent; } # removes trailing slashes (prevents SEO duplicate content issues) if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } # removes access to "system" folder, also allows a "System.php" controller if ($request_uri ~* ^/system) { rewrite ^/(.*)$ /index.php?/$1 last; break; } # unless the request is for a valid file (image, js, css, etc.), send to bootstrap if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?/$1 last; break; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { #root /home/gustavo/nginx/html; try_files $uri = 404; #fastcgi_split_path_info ^(.+\.php)(/.+)$; #fastcgi_pass 127.0.0.1:9000; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME 
$document_root$fastcgi_script_name;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vbart at nginx.com Thu Jan 17 18:27:55 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 17 Jan 2013 22:27:55 +0400
Subject: patch.spdy-55_1.3.11 broken?
In-Reply-To: 
References: <50F7CC7B.9050307@nginx.com>
Message-ID: <201301172227.55232.vbart@nginx.com>

On Thursday 17 January 2013 20:18:08 James Cloos wrote:
[...]
> Is the hg repo with the spdy changes available anywhere?
>

No, there's not. I use the mq extension to maintain these patches based on our mercurial mirror: http://hg.nginx.org/

wbr, Valentin V. Bartenev

-- 
http://nginx.com/support.html
http://nginx.org/en/donation.html

From agus.262 at gmail.com Thu Jan 17 18:59:48 2013
From: agus.262 at gmail.com (Agus)
Date: Thu, 17 Jan 2013 15:59:48 -0300
Subject: Custom error_page basic config problem.
In-Reply-To: 
References: 
Message-ID: 

Weird. No one?

Cheers

2013/1/14 Agus

> Hi fellows,
>
> I was having trouble creating a custom error_page. Here's the simple test
> config I did:
>
> server_name www.test1.com.ar;
>
> error_log logs/www.test1.com.ar.http.error.log debug;
> access_log logs/www.test1.com.ar.http.access.log main;
>
> root /usr/local/www/www.test1;
>
> location / {
>     # This is to simulate geoip with an if.
>     if ( $remote_addr = "10.24.18.2" ) {
>         error_page 401 /custom/404b.html;
>         return 401;
>     }
> }
>
> With that, I only got the nginx default error page. After turning on debug
> I saw that when nginx goes to fetch the error_page mentioned, it searches in
> location /, so it denies and sends me the default error. Now I added a
> location like this:
>
> location = /custom/404b.html {
>     internal;
> }
>
> Which made it work.
>
> My question is: is this OK? Is my solution the correct one, or is there
> perhaps a better one?
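Agus goes on to ask about doing this in a proxy_pass situation; there the usual companion directive is `proxy_intercept_errors`. A sketch, with a hypothetical upstream name and the same illustrative /custom/ path:

```nginx
# Sketch: let error_page handle error codes coming back from a proxied
# backend. "backend" is an example upstream, not from the post.
location / {
    proxy_pass http://backend;
    # Without this, nginx passes the backend's own error page through
    # untouched and error_page never fires for proxied responses.
    proxy_intercept_errors on;
    error_page 401 /custom/404b.html;
}

location = /custom/404b.html {
    internal;
}
```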
> Also, this test is easy because it's local, but I want to implement this
> in a proxy_pass situation. Probably with intercept_errors..
>
> Thanks for any hints you can give.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mikydevel at yahoo.fr Thu Jan 17 21:35:52 2013
From: mikydevel at yahoo.fr (Mik J)
Date: Thu, 17 Jan 2013 21:35:52 +0000 (GMT)
Subject: Not logging access to favicon.ico
Message-ID: <1358458552.16395.YahooMailNeo@web171801.mail.ir2.yahoo.com>

Hello,

I'm really new to Nginx so forgive me if this question seems obvious to you.

I have Nginx with virtual hosts. In my nginx.conf I have

http {
...
    include /etc/nginx/sites-enabled/*;
...

In /etc/nginx/sites-enabled/ I have my configuration, such as default:

server {
    listen 80 default_server;
    server_name _;
    index index.html;
    root /var/nginx/html;
    access_log /var/log/nginx/default.access.log;
}

I would like all my virtual hosts to have some global properties such as

location = /favicon.ico {
    return 204;
    access_log off;
    log_not_found off;
}

so that I won't log access and errors relative to favicon.ico.

But it's not clear to me where I should put this statement in order to have a minimalistic configuration.
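A common way to get such a global property without repeating the block by hand is a shared snippet file pulled in per server block. The file name below is an example, not anything the poster specified:

```nginx
# Example snippet file, e.g. /etc/nginx/snippets/favicon.conf (name is
# illustrative). location blocks cannot live at http{} level, so each
# server block pulls the snippet in with an include.
location = /favicon.ico {
    return 204;
    access_log off;
    log_not_found off;
}
```

Then every server block only needs one line: `include /etc/nginx/snippets/favicon.conf;`.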
Thank you

From agus.262 at gmail.com Thu Jan 17 22:53:32 2013
From: agus.262 at gmail.com (Agus)
Date: Thu, 17 Jan 2013 19:53:32 -0300
Subject: Not logging access to favicon.ico
In-Reply-To: <1358458552.16395.YahooMailNeo@web171801.mail.ir2.yahoo.com>
References: <1358458552.16395.YahooMailNeo@web171801.mail.ir2.yahoo.com>
Message-ID: 

location is only available in server block. Though you could create a file with the location /favicon... and then include it in every server block, which will save you typing.

2013/1/17 Mik J

> [original message quoted in full above; trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at jpluscplusm.com Thu Jan 17 23:22:12 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Thu, 17 Jan 2013 23:22:12 +0000
Subject: Not logging access to favicon.ico
In-Reply-To: 
References: <1358458552.16395.YahooMailNeo@web171801.mail.ir2.yahoo.com>
Message-ID: 

On 17 January 2013 22:53, Agus wrote:
> location is only available in server block. Though you could create a file
> with the location /favicon...
> and then include it in every server block, which will save you typing.

I do this. I'm /sure/ I've seen some historic user-agents requesting "favico.ico" or something like that, so I place it under a location of just /favico.

Mik - you may find that a 204 is not what you want to return, as browsers /may/ not cache such a 0-byte file for the favicon. I use this, and include it in every server{} stanza:

location /favico {
    access_log off;
    error_log /dev/null crit;
    empty_gif;
}

I'm fully aware it doesn't work on IE, due to IE only supporting properly-formed Icon format files; fuck those guys.

Cheers,
Jonathan
-- 
Jonathan Matthews // Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From mikydevel at yahoo.fr Fri Jan 18 08:00:54 2013
From: mikydevel at yahoo.fr (Mik J)
Date: Fri, 18 Jan 2013 08:00:54 +0000 (GMT)
Subject: Not logging access to favicon.ico
In-Reply-To: 
References: <1358458552.16395.YahooMailNeo@web171801.mail.ir2.yahoo.com>
Message-ID: <1358496054.44479.YahooMailNeo@web171806.mail.ir2.yahoo.com>

----- Mail original -----
> De : Jonathan Matthews
>
> On 17 January 2013 22:53, Agus wrote:
>> location is only available in server block. Though you could create a file
>> with the location /favicon... and then include it in every server block
>> which will save you typing.
> ... I use this, and include it in every server{} stanza:
>
> location /favico {
>     access_log off;
>     error_log /dev/null crit;
>     empty_gif;
> }

Thank you guys, so I'll put a similar configuration as below for all my virtual hosts:

# cat /etc/nginx/sites-available/default
server {
    listen 80 default_server;
    server_name _;
    index index.html;
    root /var/nginx/html;
    access_log /var/log/nginx/default.access.log;

    location /favico {
        access_log off;
        error_log /dev/null crit;
        empty_gif;
    }
}

I was wondering: since this "location" block (and probably other settings) is going to be repeated in every one of my virtual hosts, is there a way to configure this globally? I don't have any server stanza since I use include /etc/nginx/sites-enabled/*;

I'll do as Jonathan says unless someone has another suggestion.

Bye

From pasik at iki.fi Fri Jan 18 08:38:21 2013
From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=)
Date: Fri, 18 Jan 2013 10:38:21 +0200
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: 
References: <20130116151511.GS8912@reaktio.net>
Message-ID: <20130118083821.GA8912@reaktio.net>

On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> Yes. It should work for any request method.
>

Great, thanks, I'll let you know how it works for me. Probably in two weeks or so.

-- Pasi

> [rest of the thread quoted in full above; trimmed]

From nginx-forum at nginx.us Fri Jan 18 10:26:42 2013
From: nginx-forum at nginx.us (sachintheonly)
Date: Fri, 18 Jan 2013 05:26:42 -0500
Subject: Enabling proxy cache causes all responses to be buffered
Message-ID: 

Hi,

I am using nginx as a download proxy cache, caching files locally. I have configured nginx to cache certain paths; for the rest I set proxy_cache_bypass to make sure they are not cached. The config looks something like this:

proxy_buffering on;
proxy_cache_path _ at cache_path_@ levels=1:2 keys_zone=cache_one:128m inactive=1440m max_size=2000m;
proxy_cache_methods GET;
...
...
location /service {
    # Send request to java objectStoreService servlet
    proxy_cache_bypass true;
    proxy_pass url;
}

location ~ /get_file$ {
    proxy_pass url;
}

However, I see that every response is being buffered and saved for future use even when proxy_cache_bypass is set to true. How can I skip caching and buffering altogether for unwanted paths, for example /service in the above conf?

Any help is much appreciated, thanks!

Thanks
Sachin

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235264,235264#msg-235264

From m6rkalan at gmail.com Fri Jan 18 11:08:10 2013
From: m6rkalan at gmail.com (Mark Alan)
Date: Fri, 18 Jan 2013 11:08:10 +0000
Subject: Not logging access to favicon.ico
In-Reply-To: <1358496054.44479.YahooMailNeo@web171806.mail.ir2.yahoo.com>
References: <1358458552.16395.YahooMailNeo@web171801.mail.ir2.yahoo.com> <1358496054.44479.YahooMailNeo@web171806.mail.ir2.yahoo.com>
Message-ID: <50f92d1d.c518b40a.71b3.ffffd80e@mx.google.com>

On Fri, 18 Jan 2013 08:00:54 +0000 (GMT), Mik J wrote:
> Thank you guys, so I'll put a similar configuration as below for all
> my virtual hosts: # cat /etc/nginx/sites-available/default
> server {
>     listen 80 default_server;
>     server_name _;
>     index index.html;
>     root /var/nginx/html;
>     access_log /var/log/nginx/default.access.log;
>
>     location /favico {
>         access_log off;
>         error_log /dev/null crit;
>         empty_gif;
>     }
> }

There is no need to use the 'empty_gif' blob or the related ngx_http_empty_gif_module anymore. Try this instead:

location = /favicon.ico {
    access_log off;
    log_not_found off;
    expires 30d;
    try_files /favicon.ico =204;
}

Regarding that 204 status code, do see:
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#204

r.
M

From nginx-forum at nginx.us Fri Jan 18 11:11:06 2013
From: nginx-forum at nginx.us (wingoo)
Date: Fri, 18 Jan 2013 06:11:06 -0500
Subject: curl can response gzip but browser can't
Message-ID: <22402b6eab60f1314253f050e3654d98.NginxMailingListEnglish@forum.nginx.org>

My environment is nginx (1.3.11) + php-fpm.

curl -I -H "Accept-Encoding: gzip,deflate" http://www.ihezhu.com/

HTTP/1.1 200 OK
Server: nginx
Date: Fri, 18 Jan 2013 05:18:31 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding
Set-Cookie: PHPSESSID=2quqa651uglt62ku49re2nt1n4; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Encoding: gzip

But when I use a browser like Chrome, the response does not contain Content-Encoding. What's wrong?

My nginx settings are:

gzip on;
gzip_buffers 4 16k;
gzip_comp_level 3;
gzip_http_version 1.1;
gzip_min_length 1k;
gzip_proxied any;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_vary on;
gzip_disable msie6;

thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235266,235266#msg-235266

From jgehrcke at googlemail.com Fri Jan 18 12:21:44 2013
From: jgehrcke at googlemail.com (Jan-Philip Gehrcke)
Date: Fri, 18 Jan 2013 13:21:44 +0100
Subject: How to not 'expose' directory tree by default
Message-ID: <50F93E58.1030408@googlemail.com>

Hello,

error 403 means that the location exists and access is not allowed, while 404 means that the location does not exist. Based on this, with mostly default settings, it is (in theory) possible to determine the directory structure below the document root via guessing or a dictionary attack. This may or may not be considered a security risk (what do you think?).

I know that there are ways to make nginx return 404 for specific locations, including directories.
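One of those per-location techniques, as a sketch: remap the 403 that a forbidden directory produces into a 404 with `error_page 403 =404`. The directory and error-page names below are illustrative:

```nginx
# Sketch: make a protected directory answer 404 instead of 403, so
# probing cannot distinguish "exists but forbidden" from "absent".
# /private/ and /404.html are example names, not from the post.
location /private/ {
    autoindex off;
    error_page 403 =404 /404.html;
}
```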
I am wondering, however, if there is a neat approach to make nginx return 404 generally for each directory that - has not explicitly enabled autoindex and - contains no 'index' file (HttpIndexModule) Thanks, Jan-Philip From someukdeveloper at gmail.com Fri Jan 18 17:45:13 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Fri, 18 Jan 2013 17:45:13 +0000 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? Message-ID: <50F98A29.8060105@googlemail.com> Hi, I was wondering if there was any interest in extending the FastCGI and SCGI implementations in Nginx to allow TLS encryption to the application backend? Currently if you have Nginx on one machine and your FastCGI / SCGI application on another machine, then communications between the two will be unencrypted. Of course you can use something like stunnel (which someone on this list told me about helpfully a while ago) to encrypt the communications, but that seems a bit messy. If Nginx supported TLS encryption natively, then applications using FastCGI or SCGI could be upgraded to take advantage of that fact if the developer thought it was worthwhile. I can't be the only person who wants a 100% encrypted connection from the browser to Nginx to the FastCGI application to the database. From andrew at andrewloe.com Fri Jan 18 22:43:06 2013 From: andrew at andrewloe.com (W. Andrew Loe III) Date: Fri, 18 Jan 2013 14:43:06 -0800 Subject: SPDY patch and mod_zip crashing workers Message-ID: I'm trying to get nginx 1.3.8 to play well with the SPDY patch and mod_zip. I don't want to move to 1.3.9+ because I rely on the upload modules that have not yet been modified to handle the chunked uploads. When initiating a download with mod_zip, things appear to go ok until mod_zip starts to feed the content of the subrequest into the output handler (spdy in this case).
I get this in my debug log: [debug] 5244#0: *1 http init upstream, client timer: 0 [notice] 5230#0: signal 20 (SIGCHLD) received nginx version: nginx/1.3.8 built by clang 4.0 (tags/Apple/clang-421.0.60) (based on LLVM 3.1svn) TLS SNI support enabled configure arguments: --prefix=/usr/local/Cellar/nginx/1.3.8 --conf-path=/usr/local/etc/nginx/nginx.conf --error-log-path=/usr/local/var/nginx/error.log --http-log-path=/usr/local/var/nginx/access.log --http-client-body-temp-path=/usr/local/var/cache/nginx/client_temp --http-proxy-temp-path=/usr/local/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/usr/local/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/usr/local/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/usr/local/var/cache/nginx/scgi_temp --lock-path=/usr/local/var/lock/nginx.lock --pid-path=/usr/local/var/run/nginx.pid --with-pcre-jit --with-debug --with-ipv6 --without-http_browser_module --without-http_empty_gif_module --without-http_fastcgi_module --without-http_geo_module --without-http_memcached_module --without-http_referer_module --without-http_scgi_module --without-http_split_clients_module --without-http_ssi_module --without-http_userid_module --without-http_uwsgi_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_gzip_static_module --with-http_realip_module --with-http_ssl_module --with-http_stub_status_module --add-module=nginx_upload_module-2.0.12c --add-module=mod_zip-1.1.6 --add-module=headers-more-nginx-module-0.19 I've attached the complete debug log. -------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: application/octet-stream Size: 72716 bytes Desc: not available URL: From ondanomala_albertelli at yahoo.it Sat Jan 19 01:21:18 2013 From: ondanomala_albertelli at yahoo.it (OndanomalA) Date: Sat, 19 Jan 2013 02:21:18 +0100 Subject: Redirect specific query to a page Message-ID: I moved my website from Joomla to WordPress.. 
I'd like to redirect www.website.com/?option=com_content&view=article&id=164&Itemid=139 to www.website.com/listituto/contatti/ (so, same domain). I tried with this line (both within the / location and outside it): rewrite /?option=com_content&view=article&id=164&Itemid=139 /listituto/contatti/ permanent; ... with no luck. What am I doing wrong? -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Sat Jan 19 01:51:59 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sat, 19 Jan 2013 02:51:59 +0100 Subject: Redirect specific query to a page In-Reply-To: References: Message-ID: <87hame2b40.wl%appa@perusio.net> On 19 Jan 2013 02h21 CET, ondanomala_albertelli at yahoo.it wrote: > I moved my website from Joomla to WordPress.. I'd like to redirect > www.website.com/?option=com_content&view=article&id=164&Itemid=139 > to www.website.com/listituto/contatti/ (so, same domain). > > I tried with this line (both within the / location and outside it): > > rewrite /?option=com_content&view=article&id=164&Itemid=139 > /listituto/contatti/ permanent; Try with a map. At the http level: set $arg_str $arg_option$arg_view$arg_id$arg_itemid; map $arg_str $redirect_contact { default 0; com_contentarticle164139 1; } Then do at the server (vhost) level: if ($redirect_contact) { return 301 $scheme://$host/listituto/contatti; } Remember that the argument order can change and that your rewrite already takes the arguments into consideration. You need to add a '?' at the end of the replacement to omit the arguments from it.
See: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite --- appa From anoopalias01 at gmail.com Sat Jan 19 09:13:26 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 19 Jan 2013 14:43:26 +0530 Subject: Announcing cpXstack - The complete nginX+ PHP-FPM stack for cpanel Message-ID: cpXstack cpXstack is a GPL-licensed cpanel plugin developed by SysAlly ( http://SysAlly.net ) that implements the full LEMP (Linux - nginx - MySQL - PHP-FPM) stack. More information about the project can be found at http://cpxstack.sysally.net/ Documentation at http://manage.piserve.com/projects/cpxstack/wiki/Documentation The software is FREE (as in free beer) to use. SysAlly provides: 1. Installation Support - 10 USD 2. Technical Support - 10 USD/hr 3. Value Added Service - 25-checkpoint server optimization and security hardening with PDF report + FREE cpXstack installation - worth 45 USD, at 30 USD/server. About SysAlly SysAlly is a division of PiServe Technologies Private Limited that provides RIM (Remote IT Infrastructure Management Services). At SysAlly, we are proud of our ability to deliver superior and cost-effective services and products on time, thereby enhancing the business value to our esteemed clients. Our products and services offer true value for money and involve low cost of ownership. Thank you, -- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sat Jan 19 13:31:15 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 19 Jan 2013 18:31:15 +0500 Subject: Optimize nginx stream Message-ID: We're running a high-traffic streaming website similar to YouTube; due to the large number of streams on a daily basis, our server is consuming 10~12 TB of bandwidth per day. We're using nginx-1.2.1 and want to restrict users from downloading videos, but don't want to restrict streams from our website.
We also tried limiting download connections per IP (limit_conn addr 1), but the problem is that if 2~3 users are streaming videos from our site on the same LAN with a single IP, the stream will stop working for the 2 others and will display a (stream not found) error. I am a newbie to nginx; can anyone help me with optimizing bandwidth security for nginx? My particular host config settings are given below: limit_conn_zone $binary_remote_addr zone=addr:5m; server { listen 80; server_name content.com; client_max_body_size 800m; limit_rate 100k; # access_log /websites/theos.in/logs/access.log main; location / { root /var/www/html/site; index index.html index.htm index.php; } location /files/videos { root /var/www/html/site; limit_conn addr 1; } location ~ \.(flv)$ { flv; root /var/www/html/site; limit_conn addr 1; valid_referers none blocked content.com; if ($invalid_referer) { return 403; } } I think the above config is enough to identify the issue. Please let me know if you understand what I am trying to explain. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondanomala_albertelli at yahoo.it Sat Jan 19 14:44:24 2013 From: ondanomala_albertelli at yahoo.it (OndanomalA) Date: Sat, 19 Jan 2013 15:44:24 +0100 Subject: Redirect specific query to a page In-Reply-To: References: Message-ID: Thanks Antonio for the reply! :) The fact is that I don't care so much about these redirects, I just want 4/5 pages of the old permalink structure to be correctly redirected to the new pages. These 4 pages are (for example): /?option=com_content&view=category&id=40&Itemid=106 /?option=com_content&view=article&id=164&Itemid=139 /?option=com_content&view=article&id=288&Itemid=90 /?option=com_content&view=category&layout=blog&id=1&Itemid=50 All should be redirected to 4 different pages.
Is there a way - without looking at parameters (I just need to redirect THOSE 4/5 pages, not the whole perm structure) - to tell to singly redirect each to the corresponding new page (one line rewrite for each)? For example: /?option=com_content&view=category&id=40&Itemid=106 -> /blahblah/page1 /?option=com_content&view=article&id=164&Itemid=139 -> /blahblah/page2 2013/1/19 OndanomalA > I moved my website from Joomla to WordPress.. I'd like to redirect > www.website.com/?option=com_content&view=article&id=164&Itemid=139 to > www.website.com/listituto/contatti/ (so, same domain). > > I tried with this line (both within the / location and outside it): > > rewrite /?option=com_content&view=article&id=164&Itemid=139 > /listituto/contatti/ permanent; > > ... with no luck. What am I doing wrong? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Sat Jan 19 17:57:53 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sat, 19 Jan 2013 18:57:53 +0100 Subject: Redirect specific query to a page In-Reply-To: References: Message-ID: <87fw1x2gym.wl%appa@perusio.net> On 19 Jan 2013 15h44 CET, ondanomala_albertelli at yahoo.it wrote: > Thanks Antonio for the reply! :) > > The fact is that I don't care so much about these redirects, I just > want 4/5 pages of the old permalink structure to be correctly > redirected to the new pages. This 4 page are (for example): > /?option=com_content&view=category&id=40&Itemid=106 > /?option=com_content&view=article&id=164&Itemid=139 > /?option=com_content&view=article&id=288&Itemid=90 > /?option=com_content&view=category&layout=blog&id=1&Itemid=50 > > All should be redirected to 4 different pages. Is there a way - > without looking at parameters (I just need to redirect THOSE 4/5 > pages, not the whole perm structure) - to tell to singly redirect > each to the corresponding new page (one line rewrite for each)? 
> > For example: /?option=com_content&view=category&id=40&Itemid=106 -> > /blahblah/page1 /?option=com_content&view=article&id=164&Itemid=139 > -> /blahblah/page2 You could. Just use map to map the old onto the new. At the http level. ## String composed of all the arguments on the URI. set $arg_str $arg_option$arg_view$arg_id$arg_itemid; map $arg_str $new_uri { default 0; com_contentarticle40106 /blahblah/page1; com_contentarticle164139 /blahblah/page2; ## add as many string -> new uri lines as needed. } then at the server level do: if ($new_uri) { rewrite ^ $scheme://$host$new_uri permanent; } Try it out. --- appa From ondanomala_albertelli at yahoo.it Sun Jan 20 01:28:52 2013 From: ondanomala_albertelli at yahoo.it (OndanomalA) Date: Sun, 20 Jan 2013 02:28:52 +0100 Subject: Redirect specific query to a page In-Reply-To: References: Message-ID: I tried it but I get the error "nginx: [emerg] "set" directive is not allowed here" (as you said I put set and map at http level and rewrite at server level). 2013/1/19 OndanomalA > Thanks Antonio for the reply! :) > > The fact is that I don't care so much about these redirects, I just want > 4/5 pages of the old permalink structure to be correctly redirected to the > new pages. This 4 page are (for example): > /?option=com_content&view=category&id=40&Itemid=106 > /?option=com_content&view=article&id=164&Itemid=139 > /?option=com_content&view=article&id=288&Itemid=90 > /?option=com_content&view=category&layout=blog&id=1&Itemid=50 > > All should be redirected to 4 different pages. Is there a way - without > looking at parameters (I just need to redirect THOSE 4/5 pages, not the > whole perm structure) - to tell to singly redirect each to the > corresponding new page (one line rewrite for each)?
> > For example: > /?option=com_content&view=category&id=40&Itemid=106 -> /blahblah/page1 > /?option=com_content&view=article&id=164&Itemid=139 -> /blahblah/page2 > > > > 2013/1/19 OndanomalA > >> I moved my website from Joomla to WordPress.. I'd like to redirect >> www.website.com/?option=com_content&view=article&id=164&Itemid=139 to >> www.website.com/listituto/contatti/ (so, same domain). >> >> I tried with this line (both within the / location and outside it): >> >> rewrite /?option=com_content&view=article&id=164&Itemid=139 >> /listituto/contatti/ permanent; >> >> ... with no luck. What am I doing wrong? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Sun Jan 20 02:43:18 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sun, 20 Jan 2013 03:43:18 +0100 Subject: Redirect specific query to a page In-Reply-To: References: Message-ID: <87ehhg377d.wl%appa@perusio.net> On 20 Jan 2013 02h28 CET, ondanomala_albertelli at yahoo.it wrote: > I tried it but I get the error "nginx: [emerg] "set" directive is > not allowed here" (as you said I put set and map at http level and > rewrite at server level). > Indeed. set is not allowed at the http level, although it could be quite useful IMHO. Either that or allowing to combine variables as the left side of the map. You have to do a more complex mapping. Remove the set and use the following map instead (while maintaining the if at the server level): map $query_string $new_uri { option=com_content&view=category&id=40&Itemid=106 /blahblah/page1; option=com_content&view=article&id=164&Itemid=139 /blahblah/page2; ## add as many query string -> new uri lines as needed. } Now it depends on the order, which is bad. The other option is to use only set and if. Like this: At the server level: ## String composed of all the arguments on the URI.
set $arg_str $arg_option$arg_view$arg_id$arg_itemid; if ($arg_str = com_contentarticle40106) { rewrite ^ $scheme://$host/blahblah/page1 permanent; } if ($arg_str = com_contentarticle164139) { rewrite ^ $scheme://$host/blahblah/page2 permanent; } Add as many if blocks as needed. It's ugly but since you have only a few redirects, it's manageable IMHO. Now you no longer depend on the order. --- appa From ondanomala_albertelli at yahoo.it Sun Jan 20 12:16:35 2013 From: ondanomala_albertelli at yahoo.it (OndanomalA) Date: Sun, 20 Jan 2013 13:16:35 +0100 Subject: Redirect specific query to a page In-Reply-To: References: Message-ID: Yeah! Using only set and if worked! :D Thanks! 2013/1/20 OndanomalA > I tried it but I get the error "nginx: [emerg] "set" directive is not > allowed here" (as you said I put set and map at http level and rewrite at > server level). > > > 2013/1/19 OndanomalA > >> Thanks Antonio for the reply! :) >> >> The fact is that I don't care so much about these redirects, I just want >> 4/5 pages of the old permalink structure to be correctly redirected to the >> new pages. This 4 page are (for example): >> /?option=com_content&view=category&id=40&Itemid=106 >> /?option=com_content&view=article&id=164&Itemid=139 >> /?option=com_content&view=article&id=288&Itemid=90 >> /?option=com_content&view=category&layout=blog&id=1&Itemid=50 >> >> All should be redirected to 4 different pages. Is there a way - without >> looking at parameters (I just need to redirect THOSE 4/5 pages, not the >> whole perm structure) - to tell to singly redirect each to the >> corresponding new page (one line rewrite for each)? >> >> For example: >> /?option=com_content&view=category&id=40&Itemid=106 -> /blahblah/page1 >> /?option=com_content&view=article&id=164&Itemid=139 -> /blahblah/page2 >> >> >> >> 2013/1/19 OndanomalA >> >>> I moved my website from Joomla to WordPress.. 
I'd like to redirect >>> www.website.com/?option=com_content&view=article&id=164&Itemid=139 to >>> www.website.com/listituto/contatti/ (so, same domain). >>> >>> I tried with this line (both within the / location and outside it): >>> >>> rewrite /?option=com_content&view=article&id=164&Itemid=139 >>> /listituto/contatti/ permanent; >>> >>> ... with no luck. What am I doing wrong? >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jan 20 12:38:46 2013 From: nginx-forum at nginx.us (blubb123) Date: Sun, 20 Jan 2013 07:38:46 -0500 Subject: Temporary "File not found" error Message-ID: About one month ago I set up a new server configuration; since then I have struggled with an annoying temporary error. Most of the time everything works fine, but after some period of time, I sometimes receive "File not found" errors when I try to access a PHP file. access log: a.b.c.d - - [20/Jan/2013:13:14:38 +0100] "GET /intern/phpMyAdmin/ HTTP/1.1" 404 47 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0" "-" a.b.c.d - - [20/Jan/2013:13:14:39 +0100] "GET /intern/phpMyAdmin/ HTTP/1.1" 200 2378 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0" "-" error log: 2013/01/20 13:14:38 [error] 26428#0: *6892 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: a.b.c.d, server: abc.de, request: "GET /intern/phpMyAdmin/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:10000", host: "abc.de" Two requests: the first one throws an error, the second one is fine. So only one of the two php-fpm workers is affected? Since it is working most of the time, I don't think it has anything to do with file permissions? Perhaps someone has an idea what's wrong with my configuration? Thanks in advance!
nginx config: root /srv/www/abc.de/public/www; access_log /srv/www/abc.de/log/access.log main; error_log /srv/www/abc.de/log/error.log warn; location / { index index.html index.htm index.php; } location ~ \.php$ { if ( !-f $request_filename ) { return 404; } include /etc/nginx/fastcgi_params; fastcgi_split_path_info ^((?U).+\.php)(/?.+)$; fastcgi_param DOCUMENT_ROOT /public/www; fastcgi_param SCRIPT_FILENAME /public/www$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_pass 127.0.0.1:10000; fastcgi_index index.php; } fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; fastcgi_connect_timeout 120; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; php-fpm pool config: [abc] prefix = /srv/www/abc.de/ listen = 127.0.0.1:10000 listen.backlog = -1 listen.allowed_clients = 127.0.0.1 listen.owner = www-data listen.group = www-data listen.mode = 666 user = www-data group = www-data pm = dynamic pm.max_children = 30 
pm.start_servers = 2 pm.min_spare_servers = 1 pm.max_spare_servers = 10 pm.max_requests = 500 pm.status_path = /php_pool_abc_status ping.path = /abc_ping ping.response = abc_pong request_terminate_timeout = 0 request_slowlog_timeout = 1 slowlog = log/php-slow.log access.log = /srv/www/abc.de/log/abc_pool.access.log chroot = $prefix chdir = /public/ security.limit_extensions = .php .php3 .php4 .php5 env[HOSTNAME] = $HOSTNAME ; env[PATH] = /usr/local/bin:/usr/bin:/bin env[TMP] = /tmp env[TMPDIR] = /tmp env[TEMP] = /tmp php_flag[display_errors] = off php_admin_value[error_log] = /log/php_err.log php_admin_flag[log_errors] = on php_flag[suhosin.session.encrypt] = off Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235335,235335#msg-235335 From vadim.lazovskiy at gmail.com Sun Jan 20 13:00:03 2013 From: vadim.lazovskiy at gmail.com (Vadim Lazovskiy) Date: Sun, 20 Jan 2013 17:00:03 +0400 Subject: Redirect specific query to a page In-Reply-To: <87ehhg377d.wl%appa@perusio.net> References: <87ehhg377d.wl%appa@perusio.net> Message-ID: Hello, http://nginx.org/en/docs/http/ngx_http_map_module.html Before version 0.9.0 only a single variable could be specified in the first parameter. map $arg_option$arg_view$arg_id$arg_itemid $redirect_uri { com_contentarticle40106 /blahblah/page1; com_contentarticle164139 /blahblah/page2; } ... location / { if ($redirect_uri) { return 301 $scheme://$host$redirect_uri; } } It's also order-independent. And this is an Orthodox way :) 2013/1/20 Ant?nio P. P. Almeida > On 20 Jan 2013 02h28 CET, ondanomala_albertelli at yahoo.it wrote: > > > I tried it but I get the error "nginx: [emerg] "set" directive is > > not allowed here" (as you said I put set and map at http level and > > rewrite at server level). > > > > Indeed. set is not allowed at the http level, although it could be > quite useful IMHO. Either that or allowing to combine variables as the > left side of the map. > > You have to do a more complex mapping. 
Remove the set and use the > following map instead (while mantaining the if at the server level): > > map $query_string $new_uri { > option=com_content&view=category&id=40&Itemid=106 /blahblah/page1; > option=com_content&view=article&id=164&Itemid=139 /blahblah/page2; > ## add as query string -> new uri lines as needed. > } > > Now it depends on the order, which is bad. Other option is to use only > set and if. Like this: > > At the server level: > > ## String composed of all the arguments on the URI. > set $arg_str $arg_option$arg_view$arg_id$arg_itemid; > > if ($arg_str = com_contentarticle40106) { > rewrite ^ $scheme://$host/blahblah/page1 permanent; > } > > if ($arg_str = com_contentarticle164139) { > rewrite ^ $scheme://$host/blahblah/page2 permanent; > } > > Add as many if blocks as needed. > > It's ugly but since you have only a few redirects, it's manageable > IMHO. Now you no longer depend on the order. > > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Best Regards, Vadim Lazovskiy -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter at vereshagin.org Sun Jan 20 15:10:20 2013 From: peter at vereshagin.org (Peter Vereshagin) Date: Sun, 20 Jan 2013 19:10:20 +0400 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? In-Reply-To: <50F98A29.8060105@googlemail.com> References: <50F98A29.8060105@googlemail.com> Message-ID: <20130120151019.GA14521@external.screwed.box> Hello. 2013/01/18 17:45:13 +0000 Some Developer => To nginx at nginx.org : SD> be unencrypted. Of course you can use something like stunnel (which SD> someone on this list told me about helpfully a while ago) to encrypt the SD> communications but that seems a bit messy. If Nginx supported TLS *CGI interfaces look to be the previous century demand. 
HTTP(S) seems to be the trend, even for the newly developed databases. What's messy with your 'stunnel'? Why shouldn't you use the 'nginx' on the backend side with https as an uplink protocol? Your 'fastcgi client' nginx should then use the 'nginx on the backend side' as an https upstream. SD> I can't be the only person who wants a 100% encrypted connection from SD> the browser to Nginx to the FastCGI application to the database. Are there any other web server(s) having this feature implemented then? Thank you. -- Peter Vereshagin (http://vereshagin.org) pgp: 1754B9C1 From mikydevel at yahoo.fr Sun Jan 20 15:22:07 2013 From: mikydevel at yahoo.fr (Mik J) Date: Sun, 20 Jan 2013 15:22:07 +0000 (GMT) Subject: Rewrite rules with NGinx In-Reply-To: References: <87ehhg377d.wl%appa@perusio.net> Message-ID: <1358695327.26525.YahooMailNeo@web171805.mail.ir2.yahoo.com> Hello, I have read the thread "Redirect specific query to a page" carefully as I need something similar, but I've also searched on different websites and they don't use the map solution. Action 1: I would like that, when people access www.domain.org/nginx, the system queries the webpage www.domain.org/page.php?arg=nginx With Apache I did RewriteRule ^nginx$ /page.php?arg=nginx [L] For Nginx I found this equivalent rewrite ^nginx$ /page.php?arg=nginx last; Action 2: For people who try to access www.domain.org/page.php?arg=nginx, they are redirected to www.domain.org/nginx RewriteCond %{ENV:REDIRECT_STATUS} 200 RewriteRule .* - [L] RewriteCond %{QUERY_STRING} ^arg=nginx$ RewriteRule ^page\.php$ /nginx? [R=302,L] When I wrote these rules with Apache I had to add RewriteCond %{ENV:REDIRECT_STATUS} 200 RewriteRule .* - [L] to prevent the loop. How could I write this second action with NGinx? Is my first action rule correct? When should I use map? >________________________________ > De : Vadim Lazovskiy > À : nginx at nginx.org >Envoyé
le : Dimanche 20 janvier 2013 14h00 >Objet : Re: Redirect specific query to a page > > >Hello, > > >http://nginx.org/en/docs/http/ngx_http_map_module.html > >Before version 0.9.0 only a single variable could be specified in the first parameter. > > > >map $arg_option$arg_view$arg_id$arg_itemid $redirect_uri { >      com_contentarticle40106 /blahblah/page1; >      com_contentarticle164139 /blahblah/page2; >} > > >... > > >location / { >      if ($redirect_uri) { >            return 301 $scheme://$host$redirect_uri; >      } >} > > >It's also order-independent. And this is an Orthodox way :) > > > >2013/1/20 António P. P. Almeida > >On 20 Jan 2013 02h28 CET, ondanomala_albertelli at yahoo.it wrote: >> >>> I tried it but I get the error "nginx: [emerg] "set" directive is >>> not allowed here" (as you said I put set and map at http level and >>> rewrite at server level). >>> >> >>Indeed. set is not allowed at the http level, although it could be >>quite useful IMHO. Either that or allowing to combine variables as the >>left side of the map. >> >>You have to do a more complex mapping. Remove the set and use the >>following map instead (while mantaining the if at the server level): >> >>map $query_string $new_uri { >>    option=com_content&view=category&id=40&Itemid=106 /blahblah/page1; >>    option=com_content&view=article&id=164&Itemid=139 /blahblah/page2; >>    ## add as query string -> new uri lines as needed. >>} >> >>Now it depends on the order, which is bad. Other option is to use only >>set and if. Like this: >> >>At the server level: >> >> >>## String composed of all the arguments on the URI. >>set $arg_str $arg_option$arg_view$arg_id$arg_itemid; >> >>if ($arg_str = com_contentarticle40106) { >>   rewrite ^ $scheme://$host/blahblah/page1 permanent; >>} >> >>if ($arg_str = com_contentarticle164139) { >>   rewrite ^ $scheme://$host/blahblah/page2 permanent; >>} >> >>Add as many if blocks as needed.
>> >>It's ugly but since you have only a few redirects, it's manageable >>IMHO. Now you no longer depend on the order. >> >> >>--- appa >> >>_______________________________________________ >>nginx mailing list >>nginx at nginx.org >>http://mailman.nginx.org/mailman/listinfo/nginx >> > > > >-- > >Best Regards, > >Vadim Lazovskiy >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > > From francis at daoine.org Sun Jan 20 15:50:24 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 20 Jan 2013 15:50:24 +0000 Subject: Rewrite rules with NGinx In-Reply-To: <1358695327.26525.YahooMailNeo@web171805.mail.ir2.yahoo.com> References: <87ehhg377d.wl%appa@perusio.net> <1358695327.26525.YahooMailNeo@web171805.mail.ir2.yahoo.com> Message-ID: <20130120155024.GJ4332@craic.sysops.org> On Sun, Jan 20, 2013 at 03:22:07PM +0000, Mik J wrote: Hi there, Untested, but: it feels nicer to avoid rewrite if possible. > Action 1: > I would like that when people access to www.domain.org/nginx the system queries the webpage www.domain.org/page.php?arg=nginx location = /nginx { # proxy_pass or fastcgi_pass and fastcgi_param, or whatever is appropriate } "appropriate" depends on which non-nginx thing you use to process php. > Action 2: > For people who try to access to www.domain.org/page.php?arg=nginx, they are redirected to www.domain.org/nginx location = /page.php { if (#this_should_redirect) { return 302 /nginx; } # proxy_pass or fastcgi_pass, or whatever is appropriate } "this_should_redirect" might be based on $arg_arg, or on $query_string, or on something similar. What should happen for /page.php?arg=other? And for /page.php?arg=nginx&other? 
f -- Francis Daly francis at daoine.org From mikydevel at yahoo.fr Sun Jan 20 16:35:39 2013 From: mikydevel at yahoo.fr (Mik J) Date: Sun, 20 Jan 2013 16:35:39 +0000 (GMT) Subject: Rewrite rules with NGinx In-Reply-To: <20130120155024.GJ4332@craic.sysops.org> References: <87ehhg377d.wl%appa@perusio.net> <1358695327.26525.YahooMailNeo@web171805.mail.ir2.yahoo.com> <20130120155024.GJ4332@craic.sysops.org> Message-ID: <1358699739.80593.YahooMailNeo@web171805.mail.ir2.yahoo.com> ----- Mail original ----- > De : Francis Daly > > Hi there, > > Untested, but: it feels nicer to avoid rewrite if possible. Hello Francis, Thank you for your answer. >> Action 1: >> I would like that when people access to www.domain.org/nginx the system > queries the webpage www.domain.org/page.php?arg=nginx > >   location = /nginx { >     # proxy_pass or fastcgi_pass and fastcgi_param, or whatever is appropriate >   } > > "appropriate" depends on which non-nginx thing you use to process php. I have not installed anything like that at the moment but I'll use fastcgi I think. I'm not sure I fully understand your answer though. >> Action 2: >> For people who try to access to www.domain.org/page.php?arg=nginx, they are > redirected to www.domain.org/nginx > >   location = /page.php { >     if (#this_should_redirect) { >       return 302 /nginx; >     } >     # proxy_pass or fastcgi_pass, or whatever is appropriate >   } > > "this_should_redirect" might be based on $arg_arg, or on > $query_string, > or on something similar. According to your answer I should write something like this location = /page.php { if ($arg_arg = nginx) { return 302 /nginx; } # proxy_pass or fastcgi_pass, or whatever is appropriate } > What should happen for /page.php?arg=other? I didn't think about this case but if arg=other, I would like the redirection to go to www.domain.org/http > And for /page.php?arg=nginx&other? I'm not sure I understand but the arguments provided will be arg=nginx&language=ru.
So yes there could be more than one argument. But my Apache setting has only one argument at the moment and I'd like to move from Apache to Nginx. From francis at daoine.org Sun Jan 20 17:01:00 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 20 Jan 2013 17:01:00 +0000 Subject: Rewrite rules with NGinx In-Reply-To: <1358699739.80593.YahooMailNeo@web171805.mail.ir2.yahoo.com> References: <87ehhg377d.wl%appa@perusio.net> <1358695327.26525.YahooMailNeo@web171805.mail.ir2.yahoo.com> <20130120155024.GJ4332@craic.sysops.org> <1358699739.80593.YahooMailNeo@web171805.mail.ir2.yahoo.com> Message-ID: <20130120170100.GK4332@craic.sysops.org> On Sun, Jan 20, 2013 at 04:35:39PM +0000, Mik J wrote: Hi there, > > ? location = /nginx { > > ???# proxy_pass or fastcgi_pass and fastcgi_param, or whatever is appropriate > > ? } > > > > "appropriate" depends on which non-nginx thing you use to process php. > > I have not installed anything like that at the moment but I'll use fastcgi I think. I'm not sure I fully understand your answer though. Apache "does" php internally (typically). Nginx "does" php by being a client to some other server that "does" php itself. You need to configure nginx to send the right requests to that other server. For fastcgi, the usual way to tell the fastcgi server which file it should process is by sending something like (in this case) fastcgi_param SCRIPT_FILENAME $document_root/page.php; and since you also want to use a particular query string: fastcgi_param QUERY_STRING arg=nginx; No need for a rewrite. > According to your answer I should write something like this > location = /page.php { > if ($arg_arg = nginx) { > ?? ? return 302 /nginx; > ???? } > # proxy_pass or fastcgi_pass, or whatever is appropriate > } Yes; that will redirect /page.php?k1=v1&arg=nginx&k2=v2 to /nginx (for any k1, v1, k2, v2). And then you'll want to add the rest of the configuration for any /page.php request that does not match that "if". 
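Taken together, the two pieces above can be sketched as one untested server block; the server name and query string are from the thread, while the document root, the fastcgi backend address, and the fastcgi_params include are illustrative assumptions:

```nginx
server {
    listen 80;
    server_name www.domain.org;
    root /var/www/domain.org;            # illustrative

    # Action 1: /nginx is answered by page.php?arg=nginx internally,
    # no rewrite needed -- just override the fastcgi parameters.
    location = /nginx {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/page.php;
        fastcgi_param QUERY_STRING arg=nginx;
        fastcgi_pass 127.0.0.1:9000;     # illustrative backend
    }

    # Action 2: a direct request for /page.php?arg=nginx is redirected
    # to the clean URL; anything else is processed normally.
    location = /page.php {
        if ($arg_arg = nginx) {
            return 302 /nginx;
        }
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/page.php;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```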
> > What should happen for /page.php?arg=other ? > I didn't think about this case but if arg=other, I would like the redirection to go to www.domain.org/http When you start handling more than one case specially, it is probably time to use a "map" to map between inputs and desired outputs. Or just let the page.php script do the logic to appropriately handle the arguments. > > And for /page.php?arg=nginx&other? > I'm not sure I understand but the arguements provided will be arg=nginx&language=ru. So yes there could be more than one argument. If you don't care about other arguments, then $arg_arg is the variable to use. If you care that there is only one argument, then $query_string is the variable to use. > But my Apache setting has only one argument at the moment and I'd like to move from Apache to Nginx. You'll need to move to something like nginx + fastcgi. The hard part of migrations is usually deciding exactly what behaviours you do and do not want. Once you know that, it should be straightforward to have a test system which will show you that your current attempt does or does not match the requirements. Good luck with it, f -- Francis Daly francis at daoine.org From appa at perusio.net Sun Jan 20 17:22:08 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sun, 20 Jan 2013 18:22:08 +0100 Subject: Rewrite rules with NGinx In-Reply-To: <1358699739.80593.YahooMailNeo@web171805.mail.ir2.yahoo.com> References: <87ehhg377d.wl%appa@perusio.net> <1358695327.26525.YahooMailNeo@web171805.mail.ir2.yahoo.com> <20130120155024.GJ4332@craic.sysops.org> <1358699739.80593.YahooMailNeo@web171805.mail.ir2.yahoo.com> Message-ID: <87bocj3h33.wl%appa@perusio.net> On 20 Jan 2013 17h35 CET, mikydevel at yahoo.fr wrote: > ----- Mail original ----- > >> De?: Francis Daly >> >> Hi there, >> >> Untested, but: it feels nicer to avoid rewrite if possible. > > Hello Francis, > Thank you for your answer. 
> >>> Action 1: I would like that when people access to >>> www.domain.org/nginx the system >> queries the webpage www.domain.org/page.php?arg=nginx >> >> location = /nginx { # proxy_pass or fastcgi_pass and >> fastcgi_param, or whatever is appropriate } >> >> "appropriate" depends on which non-nginx thing you use to process >> php. > > I have not installed anything like that at the moment but I'll use > fastcgi I think. I'm not sure I fully understand your answer though. > >>> Action 2: For people who try to access to >>> www.domain.org/page.php?arg=nginx, they are >> redirected to www.domain.org/nginx >> >> location = /page.php { >> if (#this_should_redirect) { >> return 302 /nginx; >> } >> # proxy_pass or fastcgi_pass, or whatever is appropriate >> } >> >> "this_should_redirect" might be based on $arg_arg, or on >> $query_string, >> or on something similar. > > According to your answer I should write something like this > location = /page.php { > if ($arg_arg = nginx) { > return 302 /nginx; > } > # proxy_pass or fastcgi_pass, or whatever is appropriate > } > > > >> What should happen for /page.php?arg=other ? > I didn't think about this case but if arg=other, I would like the > redirection to go to www.domain.org/http > >> And for /page.php?arg=nginx&other? > I'm not sure I understand but the arguments provided will be > arg=nginx&language=ru. So yes there could be more than one > argument. But my Apache setting has only one argument at the moment > and I'd like to move from Apache to Nginx. I'm not sure I understand what you want to do. It seems you'll get a redirect loop. But anyway, try this (untested): map $arg_arg$arg_language $redirect { default 0; nginxru 1; ## Add as many lines as arg_arg and language combinations needed... } location = /page.php { error_page 418 =200 /nginx; if ($redirect) { return 418; } ## FCGI content handler here... 
} location = /nginx { if ($status != 200) { return 302 /page.php?$query_string; } try_files /page.php?$query_string =404; } --- appa From mac_man2008 at yahoo.it Sun Jan 20 17:53:10 2013 From: mac_man2008 at yahoo.it (mac_man2008 at yahoo.it) Date: Mon, 21 Jan 2013 01:53:10 +0800 Subject: Failed to install Message-ID: <50FC2F06.8060407@yahoo.it> Hi to all, I am in China and I just followed the instructions reported here: http://www.nginx.org/en/download.html but after running the "sudo apt-get update" command I get the following error: W: Impossibile recuperare http://nginx.org/packages/debian/dists/quantal/nginx/source/Sources 404 Not Found W: Impossibile recuperare http://nginx.org/packages/debian/dists/quantal/nginx/binary-i386/Packages 404 Not Found E: Impossibile scaricare alcuni file di indice: saranno ignorati o verranno usati quelli vecchi. What can I do? Thanks. Kubuntu 12.10 From vbart at nginx.com Sun Jan 20 18:47:15 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 20 Jan 2013 22:47:15 +0400 Subject: Failed to install In-Reply-To: <50FC2F06.8060407@yahoo.it> References: <50FC2F06.8060407@yahoo.it> Message-ID: <201301202247.15261.vbart@nginx.com> On Sunday 20 January 2013 21:53:10 mac_man2008 at yahoo.it wrote: > Hi to all, > I am in China and I just followed the instructions reported here: > http://www.nginx.org/en/download.html > but after running the "sudo apt-get update" command I get the following > error: > > W: Impossibile recuperare > http://nginx.org/packages/debian/dists/quantal/nginx/source/Sources 404 > Not Found > > W: Impossibile recuperare > http://nginx.org/packages/debian/dists/quantal/nginx/binary-i386/Packages > 404 Not Found > > E: Impossibile scaricare alcuni file di indice: saranno ignorati o > verranno usati quelli vecchi. > > > What can I do? > Thanks. > Kubuntu 12.10 > Please, check your "/etc/apt/sources.list". It looks like you have confused ubuntu with debian. wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Jan 21 00:38:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jan 2013 04:38:26 +0400 Subject: curl can response gzip but browser can't In-Reply-To: <22402b6eab60f1314253f050e3654d98.NginxMailingListEnglish@forum.nginx.org> References: <22402b6eab60f1314253f050e3654d98.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130121003826.GG99404@mdounin.ru> Hello! On Fri, Jan 18, 2013 at 06:11:06AM -0500, wingoo wrote: > my environment is nginx(1.3.11) + php-fpm > > curl -I -H "Accept-Encoding: gzip,deflate" http://www.ihezhu.com/ > > HTTP/1.1 200 OK > Server: nginx > Date: Fri, 18 Jan 2013 05:18:31 GMT > Content-Type: text/html; charset=utf-8 > Connection: keep-alive > Vary: Accept-Encoding > Set-Cookie: PHPSESSID=2quqa651uglt62ku49re2nt1n4; path=/ > Expires: Thu, 19 Nov 1981 08:52:00 GMT > Cache-Control: no-store, no-cache, must-revalidate, post-check=0, > pre-check=0 > Pragma: no-cache > Content-Encoding: gzip > > but when use browser like chrome and it's response not contain > Content-Encoding, what's wrong? > > my nginx setting is > > gzip on; > gzip_buffers 4 16k; > gzip_comp_level 3; > gzip_http_version 1.1; > gzip_min_length 1k; > gzip_proxied any; > gzip_types text/plain text/css application/json application/x-javascript > text/xml application/xml application/xml+rss text/javascript; > gzip_vary on; > gzip_disable msie6; > > thanks It looks like something on your host interferes with normal request handling. Probably some firewall with content inspection and/or antivirus software. 
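The LF-vs-CRLF probing described below can also be reproduced offline; a minimal Python sketch that builds the two raw requests byte-for-byte (the Host header is taken from the thread; nothing here is sent over the network):

```python
# Build the same raw HTTP/1.1 request with two different line endings,
# as in the nc-based test: a content-inspecting middlebox may only
# recognize (and act on) the RFC-conformant CRLF variant.
def raw_request(line_ending: bytes) -> bytes:
    lines = [
        b"GET / HTTP/1.1",
        b"Host: www.ihezhu.com",
        b"Accept-Encoding: gzip",
        b"",  # empty element so the header block ends with a blank line
    ]
    return line_ending.join(lines) + line_ending

crlf = raw_request(b"\r\n")  # what a real browser sends
lf = raw_request(b"\n")      # bare-LF variant that slips past inspection

print(crlf)
print(lf)
```

Feeding each of these byte strings to a raw TCP connection, as the printf-and-nc pipelines below do, is what isolates the middlebox behaviour from nginx itself.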
I was able to simplify test case down to the LF vs CRLF difference in the following requests: $ (printf "GET / HTTP/1.1\r\nHost: www.ihezhu.com\r\nAccept-Encoding: gzip\r\n\r\n"; sleep 1) | nc 210.51.54.180 80 | grep Content-Encoding $ (printf "GET / HTTP/1.1\nHost: www.ihezhu.com\nAccept-Encoding: gzip\n\n"; sleep 1) | nc 210.51.54.180 80 | grep -a Content-Encoding Content-Encoding: gzip No gzip is returned with proper CRLF newlines, while with just bare LFs gzip is returned (likely because it bypasses content inspection in question). Try disabling your antivirus software to see if it helps. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Mon Jan 21 00:55:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jan 2013 04:55:10 +0400 Subject: How to not 'expose' directory tree by default In-Reply-To: <50F93E58.1030408@googlemail.com> References: <50F93E58.1030408@googlemail.com> Message-ID: <20130121005510.GH99404@mdounin.ru> Hello! On Fri, Jan 18, 2013 at 01:21:44PM +0100, Jan-Philip Gehrcke wrote: > Hello, > > error 403 means that the location exists and access is not allowed > while 404 means that the location does not exist. > > Based on this, with mostly default settings, it is (in theory) > possible to determine the directory structure below the document > root via guessing or dictionary attack. This may or may not be > considered a security risk (what do you think?). It is always possible to determine all files available under document root as long as you have enough time or luck. Directories are just special case of files which return directory listing if they are requested with traling slash and listing is allowed. > I know that there are ways to make nginx return 404 for specific > locations, including directories. 
I am wondering, however, if there is a neat approach making nginx return 404 generally for each directory that - has not explicitly enabled autoindex and - contains no 'index' file (HttpIndexModule) A simple solution would be to redefine 403 to be 404, something like error_page 403 = /error/403; location = /error/403 { return 404; } Note, though, that it will still be possible to find out there is a directory, as on a request without a trailing slash a 301 redirect will be returned with the trailing slash added. (You may use a similar approach to override 301 redirects as well, but it will also affect directories with autoindex enabled/index files present, resulting in bad user experience.) -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Mon Jan 21 03:14:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jan 2013 07:14:16 +0400 Subject: Custom error_page basic config problem. In-Reply-To: References: Message-ID: <20130121031415.GN99404@mdounin.ru> Hello! On Mon, Jan 14, 2013 at 07:02:41PM -0300, Agus wrote: > Hi fellows, > > I was having trouble creating a custom error_page. Here's the simple test > config i did: > > server_name www.test1.com.ar; > > error_log logs/www.test1.com.ar.http.error.log debug; > access_log logs/www.test1.com.ar.http.access.log main; > > root /usr/local/www/www.test1; > > location / { > # This is to simulate geoip with an if. > if ( $remote_addr = "10.24.18.2" ) { > error_page 401 /custom/404b.html; > return 401; > } > } > > > With that, i only got the nginx default error page. After turning on debug > i saw that when nginx goes to fetch the error_page mentioned it searches in > location / so it denies and sends me the default error. This is expected behaviour - you return 401 once again during error_page handling, and hence get the builtin error. > Now i added a location like this > > location = /custom/404b.html { > internal; > } > > > Which made it work. 
This is how it's usually handled - via separate location without a check to deny access. > My question is is this is OK. If my solution is the correct one or perhaps > theres a better one. Also, this test is easy cause its local, but i want to > implemtn this in a proxy_pass situation. Probably the intercept_error.. > > Thanks for any hints you can give. Separate location for handling errors is fine. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Mon Jan 21 03:54:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jan 2013 07:54:28 +0400 Subject: Logging errors via error_page + post_action? In-Reply-To: <1358397719.10080.140661178656361.3DD44B9B@webmail.messagingengine.com> References: <1355370282.17475.140661165396829.04E700D9@webmail.messagingengine.com> <1358397719.10080.140661178656361.3DD44B9B@webmail.messagingengine.com> Message-ID: <20130121035427.GO99404@mdounin.ru> Hello! On Thu, Jan 17, 2013 at 03:41:59PM +1100, Robert Mueller wrote: > I posted this about a month ago and didn't hear anything, so I'm > reposting again to hopefully catch some new eyes and see if anyone has > any ideas. > > --- > > Hi > > In our nginx setup we use proxy_pass to pass most requests to backend > servers. > > We like to monitor our logs regularly for any errors to see that > everything is working as expected. We can grep the nginx logs, but: > > a) That's not real time > b) We can't get extended information about the request, like if it's a > POST, what the POST body actually was The "tail -F /path/to/log" is actually recommended solution. It is realtime and doesn't introduce additional point of failure in client request processing. 
> So what we wanted to do was use an error_page handler in nginx so if any > backend returned an error, we resent the request details to an error > handler script, something like: > > location / { > proxy_pass http://backend/; > } > > error_page 500 /internal_error_page_500; > location /internal_error_page_500 { > internal; > proxy_set_header X-URL "$host$request_uri"; > proxy_set_header X-Post $request_body; > proxy_set_header X-Method $request_method; > proxy_set_header X-Real-IP $remote_addr; > proxy_pass http://local/cgi-bin/error.pl; > } > > The problem is that this replaces any result content from the main / > proxy_pass with the content that error.pl generates. We don't want that, > we want to keep the original result, but just use the error_page handler > to effectively "log" the error for later. > > I thought maybe we could replace: > > proxy_pass http://local/cgi-bin/error.pl; > > With: > > post_action http://local/cgi-bin/error.pl; > > But that just causes nginx to return a "404 Not Found" error instead. > > Is there any way to do this? Return the original result content of a > proxy_pass directive, but if that proxy_pass returns an error code (eg > 500, etc), do a request to another URL with "logging" information (eg > URL, method, POST body content, etc) The error_page is executed to replace content returned to client. It can't be used to do something in addition to normal request processing. If you are brave enough to do post_action and understand consequences, you may do something like location / { proxy_pass http://backend/; post_action /post; } location = /post { if ($status != 500) { return 204; } # do something } Though I wouldn't recommend using post_action unless you understand what it implies. It's left undocumented on purpose. 
-- Maxim Dounin http://nginx.com/support.html From someukdeveloper at gmail.com Mon Jan 21 07:07:46 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Mon, 21 Jan 2013 07:07:46 +0000 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? In-Reply-To: <20130120151019.GA14521@external.screwed.box> References: <50F98A29.8060105@googlemail.com> <20130120151019.GA14521@external.screwed.box> Message-ID: <50FCE942.8060508@googlemail.com> On 20/01/13 15:10, Peter Vereshagin wrote: > Hello. > > 2013/01/18 17:45:13 +0000 Some Developer => To nginx at nginx.org : > SD> be unencrypted. Of course you can use something like stunnel (which > SD> someone on this list told me about helpfully a while ago) to encrypt the > SD> communications but that seems a bit messy. If Nginx supported TLS > > *CGI interfaces look to be the previous century demand. HTTP(S) seems to be > the trend, even for the newly developed databases. Unfortunately there really isn't much that you can use instead of FastCGI or SCGI if you want to be able to host multiple applications using different languages in a consistent manner. > What's messy with your 'stunnel'? Why shouldn't you use the 'nginx' on the > backend side with https as an uplink protocol? The your 'fastcgi client' nginx > should use then the 'nginx on a backend side' as an https upstream. I'm not sure I completely understand your point here. Are you suggesting that you just run a simple Nginx server on the application so that the front end Nginx server can just pass the requests to the Nginx on the application server via HTTPS and then the local Nginx server just passes the requests on to the application server on 127.0.0.1? > SD> I can't be the only person who wants a 100% encrypted connection from > SD> the browser to Nginx to the FastCGI application to the database. > > Are there any other web server(s) having this feature implemented then? Not that I am aware of. 
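For scale, the stunnel arrangement discussed earlier in the thread amounts to a few lines of client-mode tunnel config on the nginx host; a rough sketch (the service name, ports, hostname, and certificate path are all illustrative):

```ini
; stunnel.conf on the nginx frontend (client mode): nginx speaks plain
; FastCGI to 127.0.0.1:9001, and stunnel carries it to the backend
; over TLS. All names and ports below are illustrative.
[fastcgi-tls]
client = yes
accept = 127.0.0.1:9001
connect = backend.example.com:9001
verify = 2
CAfile = /etc/stunnel/backend-ca.pem
```

nginx then points `fastcgi_pass 127.0.0.1:9001;` at the local end of the tunnel, so neither nginx nor the FastCGI application needs to know TLS is involved.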
From peter at vereshagin.org Mon Jan 21 07:31:41 2013 From: peter at vereshagin.org (Peter Vereshagin) Date: Mon, 21 Jan 2013 11:31:41 +0400 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? In-Reply-To: <50FCE942.8060508@googlemail.com> References: <50F98A29.8060105@googlemail.com> <20130120151019.GA14521@external.screwed.box> <50FCE942.8060508@googlemail.com> Message-ID: <20130121073140.GA15072@external.screwed.box> Hello. 2013/01/21 07:07:46 +0000 Some Developer => To nginx at nginx.org : SD> On 20/01/13 15:10, Peter Vereshagin wrote: SD> > 2013/01/18 17:45:13 +0000 Some Developer => To nginx at nginx.org : SD> > What's messy with your 'stunnel'? Why shouldn't you use the 'nginx' on the SD> > backend side with https as an uplink protocol? The your 'fastcgi client' nginx SD> > should use then the 'nginx on a backend side' as an https upstream. SD> SD> I'm not sure I completely understand your point here. Are you suggesting SD> that you just run a simple Nginx server on the application so that the SD> front end Nginx server can just pass the requests to the Nginx on the SD> application server via HTTPS and then the local Nginx server just passes SD> the requests on to the application server on 127.0.0.1? Short answer: yes. 127.0.0.1 or local socket or DMZ neighbor (the whatever). What's wrong with stunnel then? I have my interest as an author of 'fcgi_spawn' for perl 'cgi alike' apps: http://search.cpan.org/dist/FCGI-Spawn/bin/fcgi_spawn Had never mind about SSL'ing the socket to listen for... Thank you. -- Peter Vereshagin (http://vereshagin.org) pgp: 1754B9C1 From yaoweibin at gmail.com Mon Jan 21 08:04:22 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Mon, 21 Jan 2013 16:04:22 +0800 Subject: [Announce] Tengine-1.4.3 is released Message-ID: Hi folks, We are glad to announce that Tengine-1.4.3 development version has been released. 
You can either checkout the source code from GitHub: https://github.com/taobao/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-1.4.3.tar.gz In this release, we have added the TFS module, which provides a RESTful API to work with TFS (Taobao File System). TFS is an open source distributed file system similar to GFS. It has been proved very stable and efficient and stores about 10 petabytes of data at Taobao. More information about TFS can be found at https://github.com/taobao/tfs. The full changelog is as follows: *) Feature: added the TFS module which provides a RESTful API to Taobao File System. (zhcn381, monadbobo) *) Feature: added a $sent_cookie_XXX variable which could be used to get the value of cookie XXX from the Set-Cookie headers. (skoo87) *) Feature: now the syslog logging supports host name and domain name as its destination address. (cfsego) *) Change: added an attribute 'id' for the server directive in the upstream block. (yaoweibin) *) Bugfix: fixed a bug of DSO module which might stop Tengine from reloading. (monadbobo) *) Bugfix: fixed a segmentation fault bug of upstream_check module when the check timeout was larger than the check interval. (yaoweibin) *) Bugfix: fixed a segmentation fault bug of user_agent module when there was no User-Agent header existed in a request. (dinic) *) Bugfix: fixed the bug that sysguard module didn't work on Mac OS. (lizi) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! Regards, -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Jan 21 10:07:01 2013 From: nginx-forum at nginx.us (dmee) Date: Mon, 21 Jan 2013 05:07:01 -0500 Subject: =?UTF-8?B?UmU6IGNuZ2lueC0tLSBuZ2lueCDkuK3mloforrrlnZsgbmdpbnggQ2hpbmVzZSBm?= =?UTF-8?B?b3J1bQ==?= In-Reply-To: References: Message-ID: <346644c87dd6706521b3cde580ed2bb6.NginxMailingListEnglish@forum.nginx.org> I set up a nginx forum for chinese user recently? SIte url is www.inginx.org, welcome to sign in and discuss. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,1966,235373#msg-235373 From someukdeveloper at gmail.com Mon Jan 21 11:15:51 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Mon, 21 Jan 2013 11:15:51 +0000 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? In-Reply-To: <20130121073140.GA15072@external.screwed.box> References: <50F98A29.8060105@googlemail.com> <20130120151019.GA14521@external.screwed.box> <50FCE942.8060508@googlemail.com> <20130121073140.GA15072@external.screwed.box> Message-ID: <50FD2367.6060000@googlemail.com> On 21/01/13 07:31, Peter Vereshagin wrote: > Hello. > > 2013/01/21 07:07:46 +0000 Some Developer => To nginx at nginx.org : > SD> On 20/01/13 15:10, Peter Vereshagin wrote: > SD> > 2013/01/18 17:45:13 +0000 Some Developer => To nginx at nginx.org : > SD> > What's messy with your 'stunnel'? Why shouldn't you use the 'nginx' on the > SD> > backend side with https as an uplink protocol? The your 'fastcgi client' nginx > SD> > should use then the 'nginx on a backend side' as an https upstream. > SD> > SD> I'm not sure I completely understand your point here. Are you suggesting > SD> that you just run a simple Nginx server on the application so that the > SD> front end Nginx server can just pass the requests to the Nginx on the > SD> application server via HTTPS and then the local Nginx server just passes > SD> the requests on to the application server on 127.0.0.1? > > Short answer: yes. 
> > 127.0.0.1 or local socket or DMZ neighbor (the whatever). > > What's wrong with stunnel then? Nothing is wrong with stunnel other than it adds extra complexity to your deployment. It would be nice if Nginx could handle this on its own. It clearly already can due to its support of HTTPS on the browser side so I can't imagine it would be very hard to add support on the FastCGI or SCGI side. From manlio.perillo at gmail.com Mon Jan 21 11:53:29 2013 From: manlio.perillo at gmail.com (Manlio Perillo) Date: Mon, 21 Jan 2013 12:53:29 +0100 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? In-Reply-To: <50F98A29.8060105@googlemail.com> References: <50F98A29.8060105@googlemail.com> Message-ID: <50FD2C39.4020801@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Il 18/01/2013 18:45, Some Developer ha scritto: > Hi, > > I was wondering if there was any interest in extending the FastCGI and > SCGI implementations in Nginx to allow a TLS encryption to the > application backend? > > Currently if you have Nginx on one machine and your FastCGI / SCGI > application on another machine then communications between the two will > be unencrypted. Use Nginx on machine A and Nginx on machine B. Then, on machine B, use FastCGI/SCGI/uWSGI to talk with your applications. > [...] Manlio Perillo -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAlD9LDkACgkQscQJ24LbaUTm9ACdFY/VuIiAMjqbfEwaRZn8zofp pykAn3oX5mPy6rkT5N/wk2liwkaKI6ZY =+S3y -----END PGP SIGNATURE----- From someukdeveloper at gmail.com Mon Jan 21 12:39:10 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Mon, 21 Jan 2013 12:39:10 +0000 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? 
In-Reply-To: <50FD2C39.4020801@gmail.com> References: <50F98A29.8060105@googlemail.com> <50FD2C39.4020801@gmail.com> Message-ID: <50FD36EE.4060406@googlemail.com> On 21/01/13 11:53, Manlio Perillo wrote: > Il 18/01/2013 18:45, Some Developer ha scritto: >> Hi, >> >> I was wondering if there was any interest in extending the FastCGI and >> SCGI implementations in Nginx to allow a TLS encryption to the >> application backend? >> >> Currently if you have Nginx on one machine and your FastCGI / SCGI >> application on another machine then communications between the two will >> be unencrypted. > > Use Nginx on machine A and Nginx on machine B. > Then, on machine B, use FastCGI/SCGI/uWSGI to talk with your applications. OK. This sounds like the best option. Thank you both for your help. From peter at vereshagin.org Mon Jan 21 14:20:09 2013 From: peter at vereshagin.org (Peter Vereshagin) Date: Mon, 21 Jan 2013 18:20:09 +0400 Subject: Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end? In-Reply-To: <50FD2367.6060000@googlemail.com> References: <50F98A29.8060105@googlemail.com> <20130120151019.GA14521@external.screwed.box> <50FCE942.8060508@googlemail.com> <20130121073140.GA15072@external.screwed.box> <50FD2367.6060000@googlemail.com> Message-ID: <20130121142008.GA64602@external.screwed.box> Hello. 2013/01/21 11:15:51 +0000 Some Developer => To nginx at nginx.org : SD> On 21/01/13 07:31, Peter Vereshagin wrote: SD> > 2013/01/21 07:07:46 +0000 Some Developer => To nginx at nginx.org : SD> > SD> On 20/01/13 15:10, Peter Vereshagin wrote: SD> > SD> > 2013/01/18 17:45:13 +0000 Some Developer => To nginx at nginx.org : SD> > SD> > What's messy with your 'stunnel'? Why shouldn't you use the 'nginx' on the SD> > SD> > backend side with https as an uplink protocol? The your 'fastcgi client' nginx SD> > SD> > should use then the 'nginx on a backend side' as an https upstream. 
SD> > SD> SD> > SD> I'm not sure I completely understand your point here. Are you suggesting SD> > SD> that you just run a simple Nginx server on the application so that the SD> > SD> front end Nginx server can just pass the requests to the Nginx on the SD> > SD> application server via HTTPS and then the local Nginx server just passes SD> > SD> the requests on to the application server on 127.0.0.1? SD> > SD> > Short answer: yes. SD> > SD> > 127.0.0.1 or local socket or DMZ neighbor (the whatever). SD> > SD> > What's wrong with stunnel then? SD> SD> Nothing is wrong with stunnel other than it adds extra complexity to SD> your deployment. It would be nice if Nginx could handle this on its own. SD> It clearly already can due to its support of HTTPS on the browser side SD> so I can't imagine it would be very hard to add support on the FastCGI SD> or SCGI side. It's fine only for the smaller half of cases when the backend has only one application (fcgi or scgi) server per host. Back in time when one application server was handling several application(s) this did more sense. But this just doesn't seem to be a web applications architecture trend any more. Adding more application to the typical nginx consumer's backend means adding more application servers therefore more ports/sockets to listen. The more ports to listen on the outer network means more complication(s) e. g., firewall and encryption on each of them, from both frontend and a backend sides. At the same time being backed by nginx (or backing nginx) those daemons should feel better with outer network instabilities, e. g., avoiding 'slow client problem' that may happen between frontend and backend hosts keeping from use of the full potential of the application servers and so on. I believe it's not hard to implement encryption in the nginx fcgi/scgi client, just think it's not a future targeting and can decrease the growth of installations number, on backends particularly. ;-) Thank you. 
-- Peter Vereshagin (http://vereshagin.org) pgp: 1754B9C1 From siefke_listen at web.de Mon Jan 21 17:54:00 2013 From: siefke_listen at web.de (Silvio Siefke) Date: Mon, 21 Jan 2013 18:54:00 +0100 Subject: Nginx and Python Message-ID: <20130121185400.e16f4a588ed5aaaa0d079809@web.de> Hello, i try for a tutorial Python / Django and Nginx unite. http://www.collabspot.com/2012/08/14/setting-up-nginx-uwsgi-python-ubuntu-12-04/ That sounds like a multi-hosting, and was not particularly difficult. But something is wrong with the system. When i want on the website, i become a 502 Bad Gateway. The Socket would not be create. Configuration: gentoo-mobile conf # cat nginx.conf server { listen 80; server_name python.silviosiefke.de; root /var/www/python.silviosiefke.de/src/silviosiefke; access_log /var/www/python.silviosiefke.de/logs/access.log; error_log /var/www/python.silviosiefke.de/logs/error.log; location / { include /etc/nginx/configuration/uwsgi_params; uwsgi_pass unix:///tmp/python.silviosiefke.de.sock; } } gentoo-mobile conf # cat uwsgi.ini [uwsgi] # variables projectname = silviosiefke projectdomain = python.silviosiefke.de base = /var/www/python.silviosiefke.de # config protocol = uwsgi venv = /var/www/python.silviosiefke.de/venv pythonpath = /var/www/python.silviosiefke.de/src/silviosiefke module = %(projectname).wsgi socket = /tmp/python.silviosiefke.de.sock logto = /var/www/python.silviosiefke.de/logs/uwsgi.log 2013/01/21 18:50:17 [crit] 4539#0: *1 connect() to unix:///tmp/python.silviosiefke.de.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.2.20, server: python.silviosiefke.de, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///tmp/python.silviosiefke.de.sock:", : "python.silviosiefke.de" The socket is not present in /tmp. Whereis the mistake? Is there no way to make the Vhost configuration so that it does not matter what the user is in? PHP, Perl or Python? Really Thank you for help. 
Greetings
Silvio

From contact at jpluscplusm.com Mon Jan 21 19:15:21 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Mon, 21 Jan 2013 19:15:21 +0000
Subject: Nginx and Python
In-Reply-To: <20130121185400.e16f4a588ed5aaaa0d079809@web.de>
References: <20130121185400.e16f4a588ed5aaaa0d079809@web.de>
Message-ID:

On 21 January 2013 17:54, Silvio Siefke wrote:
>
> The socket is not present in /tmp. Whereis the mistake?

It looks like uWSGI hasn't created it. Perhaps you didn't run it in a way such that it had permission to do so. I'm afraid I can't help you troubleshoot that program and, just FYI, this isn't the mailing list on which to do that.

> Is there no way to make the Vhost configuration so that it does not
> matter what the user is in? PHP, Perl or Python?

If your code/container can listen on a socket, and you know what protocol it speaks, and can deterministically configure nginx to use that protocol, then there's no reason why you can't abstract the nginx config away from having per-app knowledge. I suggest that it's probably more bother than it's worth unless you have very specific mass-hosting or NoOps requirements, however.

Jonathan
--
Jonathan Matthews // Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From kirpit at gmail.com Mon Jan 21 19:25:43 2013
From: kirpit at gmail.com (kirpit)
Date: Mon, 21 Jan 2013 21:25:43 +0200
Subject: Nginx and Python
In-Reply-To: <20130121185400.e16f4a588ed5aaaa0d079809@web.de>
References: <20130121185400.e16f4a588ed5aaaa0d079809@web.de>
Message-ID:

Yeah, your uwsgi instance is not working, or somehow the socket is simply not there. And for the application-agnostic configuration: you basically have to pass some parameters to your upstreams about the current request and then handle these params from them.
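The parameter-passing idea can be sketched like this; the server name and socket path below are made up for illustration:

```nginx
server {
    listen 80;
    server_name app.example.com;

    location / {
        # standard request parameters for the uWSGI protocol
        include uwsgi_params;
        # an extra parameter the application can inspect per request
        uwsgi_param UWSGI_SCHEME $scheme;
        uwsgi_pass unix:/tmp/app.sock;
    }
}
```

The server block itself stays generic; only the included params file and the `*_pass` directive change per upstream protocol.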
see there is a similar example on nginx documentation: http://wiki.nginx.org/Configuration#Python_via_uWSGI go dig nginx and uwsgi config files from my multi-app scenario: https://github.com/kirpit/webstack cheers. On Mon, Jan 21, 2013 at 7:54 PM, Silvio Siefke wrote: > Hello, > > > i try for a tutorial Python / Django and Nginx unite. > > > http://www.collabspot.com/2012/08/14/setting-up-nginx-uwsgi-python-ubuntu-12-04/ > > That sounds like a multi-hosting, and was not particularly difficult. But > something is wrong with the system. > > When i want on the website, i become a 502 Bad Gateway. The Socket > would not be create. > > Configuration: > > > gentoo-mobile conf # cat nginx.conf > server { > listen 80; > server_name python.silviosiefke.de; > root /var/www/python.silviosiefke.de/src/silviosiefke; > access_log /var/www/python.silviosiefke.de/logs/access.log; > error_log /var/www/python.silviosiefke.de/logs/error.log; > > location / { > include /etc/nginx/configuration/uwsgi_params; > uwsgi_pass unix:///tmp/python.silviosiefke.de.sock; > } > } > > > gentoo-mobile conf # cat uwsgi.ini > [uwsgi] > # variables > projectname = silviosiefke > projectdomain = python.silviosiefke.de > base = /var/www/python.silviosiefke.de > # config > protocol = uwsgi > venv = /var/www/python.silviosiefke.de/venv > pythonpath = /var/www/python.silviosiefke.de/src/silviosiefke > module = %(projectname).wsgi > socket = /tmp/python.silviosiefke.de.sock > logto = /var/www/python.silviosiefke.de/logs/uwsgi.log > > > 2013/01/21 18:50:17 [crit] 4539#0: *1 connect() to > unix:///tmp/python.silviosiefke.de.sock failed (2: No such file or > directory) > while connecting to upstream, client: 192.168.2.20, > server: python.silviosiefke.de, request: "GET /favicon.ico HTTP/1.1", > upstream: "uwsgi://unix:///tmp/python.silviosiefke.de.sock:", > : "python.silviosiefke.de" > > The socket is not present in /tmp. Whereis the mistake? 
>
> Is there no way to make the Vhost configuration so that it does not
> matter what the user is in? PHP, Perl or Python?
>
>
> Really Thank you for help. Greetings
> Silvio
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Mon Jan 21 19:48:16 2013
From: nginx-forum at nginx.us (PascalTurbo)
Date: Mon, 21 Jan 2013 14:48:16 -0500
Subject: Internet Explorer won't show my website
Message-ID: <20fd016f498b52d74e3af070e8a64f0d.NginxMailingListEnglish@forum.nginx.org>

Hi There,

I'm using nginx for serving a lot of webpages.

That works fine on nearly every installation - except for one:

None of the websites on this server can be accessed by Internet Explorer. Firefox, Chrome, Safari etc. work fine. Only IE doesn't show anything. Not on the screen, not in the logs. Even the server logs don't get touched.

Is this a known issue or does anyone have an idea what went wrong with this?

Kind Regards
Pascal

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235387,235387#msg-235387

From nunomagalhaes at eu.ipp.pt Mon Jan 21 20:05:42 2013
From: nunomagalhaes at eu.ipp.pt (Nuno Magalhães)
Date: Mon, 21 Jan 2013 20:05:42 +0000
Subject: Internet Explorer won't show my website
In-Reply-To: <20fd016f498b52d74e3af070e8a64f0d.NginxMailingListEnglish@forum.nginx.org>
References: <20fd016f498b52d74e3af070e8a64f0d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On Mon, Jan 21, 2013 at 7:48 PM, PascalTurbo wrote:
> Is this a known issue or does anyone have an idea what went wrong with
> this?

IE? Yeah it's a known issue ;)

If you have multiple sites try comparing all their configs. If all browsers are working but one, I don't see how this could be (directly) related to nginx. Without the config it's hard to debug, though.
-- "On the internet, nobody knows you're a dog." From luky-37 at hotmail.com Mon Jan 21 20:31:00 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 21 Jan 2013 21:31:00 +0100 Subject: Internet Explorer won't show my website In-Reply-To: <20fd016f498b52d74e3af070e8a64f0d.NginxMailingListEnglish@forum.nginx.org> References: <20fd016f498b52d74e3af070e8a64f0d.NginxMailingListEnglish@forum.nginx.org> Message-ID: You will have to do a little troubleshooting on your own. Did you analyze (tcpdump, wireshark) the traffic on the client and on the server side? What does it show? Is the TCP connection established? What does it transfer? Did you check your error logs (not the access logs)? Regards, Lukas > To: nginx at nginx.org > Subject: Internet Explorer won't show my website > From: nginx-forum at nginx.us > Date: Mon, 21 Jan 2013 14:48:16 -0500 > > Hi There, > > I'm using nginx for serving a lot of webpages. > > That work's fine on nearly every installation - exept of one: > > All websites on this server couldn't get accessed by Internet Explorer. > Firefox, Chrome, Safari etc.. works fine. Only IE doesn't show anything. Not > on the screen, not in the logs. Even the Server-Logs doesn't get touched. > > Is this a known issue or does anyone have an idea what went wrong with > this? > > Kind Regards > Pascal > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235387,235387#msg-235387 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From andrew at andrewloe.com Mon Jan 21 22:20:12 2013 From: andrew at andrewloe.com (W. Andrew Loe III) Date: Mon, 21 Jan 2013 14:20:12 -0800 Subject: SPDY patch and mod_zip crashing workers In-Reply-To: References: Message-ID: Is there more data I should provide? On Fri, Jan 18, 2013 at 2:43 PM, W. Andrew Loe III wrote: > I'm trying to get nginx 1.3.8 to play well with the SPDY patch and > mod_zip. 
I don't want to move to 1.3.9+ because I rely on the upload > modules that have not yet been modified to handle the chunked uploads. > > When initiating a download with mod_zip, things appear to go ok until > mod_zip starts to feed the content of the subrequest into the output > handler (spdy in this case). I get this in my debug log: > > [debug] 5244#0: *1 http init upstream, client timer: 0 > [notice] 5230#0: signal 20 (SIGCHLD) received > > nginx version: nginx/1.3.8 > built by clang 4.0 (tags/Apple/clang-421.0.60) (based on LLVM 3.1svn) > TLS SNI support enabled > configure arguments: --prefix=/usr/local/Cellar/nginx/1.3.8 > --conf-path=/usr/local/etc/nginx/nginx.conf > --error-log-path=/usr/local/var/nginx/error.log > --http-log-path=/usr/local/var/nginx/access.log > --http-client-body-temp-path=/usr/local/var/cache/nginx/client_temp > --http-proxy-temp-path=/usr/local/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/usr/local/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/usr/local/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/usr/local/var/cache/nginx/scgi_temp > --lock-path=/usr/local/var/lock/nginx.lock > --pid-path=/usr/local/var/run/nginx.pid --with-pcre-jit --with-debug > --with-ipv6 --without-http_browser_module > --without-http_empty_gif_module --without-http_fastcgi_module > --without-http_geo_module --without-http_memcached_module > --without-http_referer_module --without-http_scgi_module > --without-http_split_clients_module --without-http_ssi_module > --without-http_userid_module --without-http_uwsgi_module > --without-mail_pop3_module --without-mail_imap_module > --without-mail_smtp_module --with-http_gzip_static_module > --with-http_realip_module --with-http_ssl_module > --with-http_stub_status_module > --add-module=nginx_upload_module-2.0.12c --add-module=mod_zip-1.1.6 > --add-module=headers-more-nginx-module-0.19 > > I've attached the complete debug log. 
From siefke_listen at web.de Mon Jan 21 22:26:32 2013
From: siefke_listen at web.de (Silvio Siefke)
Date: Mon, 21 Jan 2013 23:26:32 +0100
Subject: Nginx and Python
In-Reply-To:
References: <20130121185400.e16f4a588ed5aaaa0d079809@web.de>
Message-ID: <20130121232632.f347e7a8685d7221ec285878@web.de>

Hello,

On Mon, 21 Jan 2013 19:15:21 +0000 Jonathan Matthews wrote:
> It looks like uWSGI hasn't created it. Perhaps you didn't run it in a
> way such that it had permission to do so.
> I'm afraid I can't help you troubleshoot that program and, just FYI,
> this isn't the mailing list on which to do that.

No, it is created. There was something wrong in my reading. :) When I don't start uwsgi, the socket cannot be created. Now I get this log:

2013/01/21 23:06:52 [crit] 7472#0: *5 connect() to
unix:///tmp/python.silviosiefke.de.sock failed (13: Permission denied)
while connecting to upstream, client: 192.168.2.20,
server: python.silviosiefke.de, request: "GET /favicon.ico HTTP/1.1",
upstream: "uwsgi://unix:///tmp/python.silviosiefke.de.sock:",
host: "python.silviosiefke.de:92"

> If your code/container can listen on a socket, and you know what
> protocol it speaks, and can deterministically configure nginx to use
> that protocol, then there's no reason why you can't abstract the nginx
> config away from having per-app knowledge. I suggest that it's
> probably more bother than it's worth, unless you have very specific
> mass-hosting or NoOps requirements, however.

I'm not a mass hoster and won't be one. There are customers whose computers and networks I take care of. Years ago I was asked about hosting once, and now I host a few more sites. But before I can offer this to my customers, I need to test and understand it myself. I would not start with Apache again; Nginx has been a good companion and should stay. I will try the way from the nginx wiki; maybe it will run. Then I can switch the clients to an extra server for Python hosting.
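A common way to resolve the "(13: Permission denied)" error in the log above is to make the uWSGI socket accessible to the nginx worker user. A hypothetical uwsgi.ini fragment — the user/group and mode here are illustrative, not taken from the thread:

```ini
[uwsgi]
; let the nginx workers connect to the unix socket
chown-socket = nginx:nginx
chmod-socket = 660
```

Alternatively, running uwsgi under the same user as the nginx workers avoids the ownership question entirely.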
Regards & Thank you Silvio From scott_ribe at elevated-dev.com Mon Jan 21 22:29:54 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Mon, 21 Jan 2013 15:29:54 -0700 Subject: Nginx and Python In-Reply-To: <20130121232632.f347e7a8685d7221ec285878@web.de> References: <20130121185400.e16f4a588ed5aaaa0d079809@web.de> <20130121232632.f347e7a8685d7221ec285878@web.de> Message-ID: <2E0FEC3B-D344-4FB0-8FED-5D7F4512AA1C@elevated-dev.com> On Jan 21, 2013, at 3:26 PM, Silvio Siefke wrote: > unix:///tmp/python.silviosiefke.de.sock failed (13: Permission denied) Well, there's your problem... -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From farseas at gmail.com Tue Jan 22 00:09:25 2013 From: farseas at gmail.com (Bob S.) Date: Mon, 21 Jan 2013 19:09:25 -0500 Subject: Internet Explorer won't show my website In-Reply-To: References: <20fd016f498b52d74e3af070e8a64f0d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Which version(s) of IE are you talking about? We have given up trying to support IE6 or below, ever, for anything. It makes no sense to spend an inordinate amount of time for a browser that is used by so few people. My guess is that IE does not detect your network connection. bob s. On Mon, Jan 21, 2013 at 3:31 PM, Lukas Tribus wrote: > > You will have to do a little troubleshooting on your own. > Did you analyze (tcpdump, wireshark) the traffic on the > client and on the server side? > What does it show? Is the TCP connection established? > What does it transfer? > > Did you check your error logs (not the access logs)? > > > > Regards, > > Lukas > > > > To: nginx at nginx.org > > Subject: Internet Explorer won't show my website > > From: nginx-forum at nginx.us > > Date: Mon, 21 Jan 2013 14:48:16 -0500 > > > > Hi There, > > > > I'm using nginx for serving a lot of webpages. 
> >
> > That work's fine on nearly every installation - exept of one:
> >
> > All websites on this server couldn't get accessed by Internet Explorer.
> > Firefox, Chrome, Safari etc.. works fine. Only IE doesn't show anything. Not
> > on the screen, not in the logs. Even the Server-Logs doesn't get touched.
> >
> > Is this a known issue or does anyone have an idea what went wrong with
> > this?
> >
> > Kind Regards
> > Pascal
> >
> > Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,235387,235387#msg-235387
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mac_man2008 at yahoo.it Tue Jan 22 12:14:14 2013
From: mac_man2008 at yahoo.it (mac_man2008 at yahoo.it)
Date: Tue, 22 Jan 2013 20:14:14 +0800
Subject: Failed to install
Message-ID: <50FE8296.7060305@yahoo.it>

Hi to all,
I am in China and I just followed the instructions reported here:
http://www.nginx.org/en/download.html
but after running the "sudo apt-get update" command I get the following error:

W: Failed to fetch http://nginx.org/packages/debian/dists/quantal/nginx/source/Sources 404 Not Found

W: Failed to fetch http://nginx.org/packages/debian/dists/quantal/nginx/binary-i386/Packages 404 Not Found

E: Some index files failed to download. They have been ignored, or old ones used instead.

What can I do?
Thanks.

Kubuntu 12.10

From haifeng.813 at gmail.com Tue Jan 22 12:16:36 2013
From: haifeng.813 at gmail.com (Liu Haifeng)
Date: Tue, 22 Jan 2013 20:16:36 +0800
Subject: How to bind variable with connection in a customized module?
Message-ID:

hi all,

I want to save state during a long connection (keep-alive) for performance optimization in my own HTTP handler module, so that I can pick up the saved state when handling requests, to speed up the response. Actually it's quite like the session concept. I saw the request struct has a member ctx, but what I want is a ctx on the connection. There seems to be no way to save any customized variable to the ngx_connection_t structure. What's the suggested way to do this if I don't want the client to hold something like a session_id?

From sb at waeme.net Tue Jan 22 12:19:24 2013
From: sb at waeme.net (Sergey Budnevitch)
Date: Tue, 22 Jan 2013 16:19:24 +0400
Subject: Failed to install
In-Reply-To: <50FE8296.7060305@yahoo.it>
References: <50FE8296.7060305@yahoo.it>
Message-ID:

On 22 Jan 2013, at 16:14, mac_man2008 at yahoo.it wrote:

> Hi to all,
> I am in China and I just followed the instructions reported here:
> http://www.nginx.org/en/download.html
> but after running the "sudo apt-get update" command I get the following error:
>
> W: Failed to fetch http://nginx.org/packages/debian/dists/quantal/nginx/source/Sources 404 Not Found
>
> W: Failed to fetch http://nginx.org/packages/debian/dists/quantal/nginx/binary-i386/Packages 404 Not Found
>
> E: Some index files failed to download. They have been ignored, or old ones used instead.
>
>
> What can I do?

You have added an incorrect line to /etc/apt/sources.list. Replace

'deb http://nginx.org/packages/debian/ quantal nginx'

with

'deb http://nginx.org/packages/ubuntu/ quantal nginx'

and run apt-get update

From mdounin at mdounin.ru Tue Jan 22 12:21:31 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 Jan 2013 16:21:31 +0400
Subject: SPDY patch and mod_zip crashing workers
In-Reply-To:
References:
Message-ID: <20130122122130.GE9787@mdounin.ru>

Hello!

On Mon, Jan 21, 2013 at 02:20:12PM -0800, W. Andrew Loe III wrote:

> Is there more data I should provide?
It's hardly possible to get help with this, given the fact that you are using non-latest nginx with non-latest experimental spdy patch and have problems with 3rd party module. On the other hand, in any case it's good idea to provide details as recommended on http://wiki.nginx.org/Debugging, in particular full config to reproduce the problem, and a backtrace. > On Fri, Jan 18, 2013 at 2:43 PM, W. Andrew Loe III wrote: > > I'm trying to get nginx 1.3.8 to play well with the SPDY patch and > > mod_zip. I don't want to move to 1.3.9+ because I rely on the upload > > modules that have not yet been modified to handle the chunked uploads. > > > > When initiating a download with mod_zip, things appear to go ok until > > mod_zip starts to feed the content of the subrequest into the output > > handler (spdy in this case). I get this in my debug log: > > > > [debug] 5244#0: *1 http init upstream, client timer: 0 > > [notice] 5230#0: signal 20 (SIGCHLD) received > > > > nginx version: nginx/1.3.8 > > built by clang 4.0 (tags/Apple/clang-421.0.60) (based on LLVM 3.1svn) > > TLS SNI support enabled > > configure arguments: --prefix=/usr/local/Cellar/nginx/1.3.8 > > --conf-path=/usr/local/etc/nginx/nginx.conf > > --error-log-path=/usr/local/var/nginx/error.log > > --http-log-path=/usr/local/var/nginx/access.log > > --http-client-body-temp-path=/usr/local/var/cache/nginx/client_temp > > --http-proxy-temp-path=/usr/local/var/cache/nginx/proxy_temp > > --http-fastcgi-temp-path=/usr/local/var/cache/nginx/fastcgi_temp > > --http-uwsgi-temp-path=/usr/local/var/cache/nginx/uwsgi_temp > > --http-scgi-temp-path=/usr/local/var/cache/nginx/scgi_temp > > --lock-path=/usr/local/var/lock/nginx.lock > > --pid-path=/usr/local/var/run/nginx.pid --with-pcre-jit --with-debug > > --with-ipv6 --without-http_browser_module > > --without-http_empty_gif_module --without-http_fastcgi_module > > --without-http_geo_module --without-http_memcached_module > > --without-http_referer_module 
--without-http_scgi_module > > --without-http_split_clients_module --without-http_ssi_module > > --without-http_userid_module --without-http_uwsgi_module > > --without-mail_pop3_module --without-mail_imap_module > > --without-mail_smtp_module --with-http_gzip_static_module > > --with-http_realip_module --with-http_ssl_module > > --with-http_stub_status_module > > --add-module=nginx_upload_module-2.0.12c --add-module=mod_zip-1.1.6 > > --add-module=headers-more-nginx-module-0.19 > > > > I've attached the complete debug log. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Tue Jan 22 12:31:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Jan 2013 16:31:04 +0400 Subject: How to bind variable with connection in a customized module? In-Reply-To: References: Message-ID: <20130122123104.GF9787@mdounin.ru> Hello! On Tue, Jan 22, 2013 at 08:16:36PM +0800, Liu Haifeng wrote: > hi all, > > I want save state during a long connection (keep-alive) for > performance optimization in my own HTTP handler module, thus I > can peek up the saved state when handling requests, to speed up > response. Actually it's quite like session concept. I saw > request struct has a member ctx, but what I want is a ctx on the > connection. It seems no way to save any customized variable to > the ngx_connection_t structure. What's the suggested way to make > this if I don't want the client hold something like session_id? This was recently discussed on nginx-devel@ mailing list, and probably the best way currently available is to install connection pool cleanup handler with custom data and then iterate over connection pool cleanup handlers to find your data. 
It is relatively costly, but allows to keep memory footprint from keepalive connections low and still allows modules to keep their per-connection data in rare cases when they really need to.

See here:
http://mailman.nginx.org/pipermail/nginx-devel/2012-December/003049.html

Note well though, that HTTP is stateless protocol, and the fact that request came from the same connection means mostly nothing: it might be a request from a completely different user.

--
Maxim Dounin
http://nginx.com/support.html

From haifeng.813 at gmail.com Tue Jan 22 13:33:31 2013
From: haifeng.813 at gmail.com (Haifeng Liu)
Date: Tue, 22 Jan 2013 05:33:31 -0800
Subject: Reply: Re: How to bind variable with connection in a customized module?
Message-ID: <-2518085127904266875@unknownmsgid>

Thank you very much. And for your note, I know the connections are reusable, it's not a problem for me. To be sure, during a client is alive, the connection won't serve other clients, is it correct?

Maxim Dounin wrote:

Hello!

On Tue, Jan 22, 2013 at 08:16:36PM +0800, Liu Haifeng wrote:

> hi all,
>
> I want save state during a long connection (keep-alive) for
> performance optimization in my own HTTP handler module, thus I
> can peek up the saved state when handling requests, to speed up
> response. Actually it's quite like session concept. I saw
> request struct has a member ctx, but what I want is a ctx on the
> connection. It seems no way to save any customized variable to
> the ngx_connection_t structure. What's the suggested way to make
> this if I don't want the client hold something like session_id?

This was recently discussed on nginx-devel@ mailing list, and probably the best way currently available is to install connection pool cleanup handler with custom data and then iterate over connection pool cleanup handlers to find your data. It is relatively costly, but allows to keep memory footprint from keepalive connections low and still allows modules to keep their per-connection data in rare cases when they really need to.

See here:
http://mailman.nginx.org/pipermail/nginx-devel/2012-December/003049.html

Note well though, that HTTP is stateless protocol, and the fact that request came from the same connection means mostly nothing: it might be a request from a completely different user.

--
Maxim Dounin
http://nginx.com/support.html

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Tue Jan 22 14:16:34 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 Jan 2013 18:16:34 +0400
Subject: Re: Reply: Re: How to bind variable with connection in a customized module?
In-Reply-To: <-2518085127904266875@unknownmsgid>
References: <-2518085127904266875@unknownmsgid>
Message-ID: <20130122141633.GJ9787@mdounin.ru>

Hello!

(Just a side note: it looks like your mail client has problems with proper mail encoding. I had to recover the message headers by hand, as the Subject header was split across multiple non-consecutive lines, breaking other headers as well.)

On Tue, Jan 22, 2013 at 05:33:31AM -0800, Haifeng Liu wrote:

> Thank you very much. And for your note, I know the connections
> are reusable, it's not a problem for me. To be sure, during a
> client is alive, the connection won't serve other clients, is it
> correct?

This depends on what do you mean by "client". If next hop http client - true, the connection is always with one client.
But on the same connection requests from different users might appear, e.g. if requests are proxied via the same proxy server ("client" from nginx point of view).

It might be helpful to read this section of RFC 2616 for better understanding:
http://tools.ietf.org/html/rfc2616#section-1.4

>
> Maxim Dounin wrote:
>
>
> Hello!
>
> On Tue, Jan 22, 2013 at 08:16:36PM +0800, Liu Haifeng wrote:
>
> > hi all,
> >
> > I want save state during a long connection (keep-alive) for
> > performance optimization in my own HTTP handler module, thus I
> > can peek up the saved state when handling requests, to speed up
> > response. Actually it's quite like session concept. I saw
> > request struct has a member ctx, but what I want is a ctx on the
> > connection. It seems no way to save any customized variable to
> > the ngx_connection_t structure. What's the suggested way to make
> > this if I don't want the client hold something like session_id?
> > --
> Maxim Dounin
> http://nginx.com/support.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Maxim Dounin
http://nginx.com/support.html

From nginx-forum at nginx.us Tue Jan 22 14:19:27 2013
From: nginx-forum at nginx.us (automatix)
Date: Tue, 22 Jan 2013 09:19:27 -0500
Subject: issue with default vhost
Message-ID: <07a72206a782a019ffe7a13b032e7ccd.NginxMailingListEnglish@forum.nginx.org>

Hello!

There is an Nginx instance on one of my VMs (192.168.56.101) and three (enabled) vhosts: "default" and two further vhosts I created. When I enter the IP in my host browser or "localhost" in my guest browser, in both cases the default page (/usr/share/nginx/html/index.html) is displayed:

It works! This is the default web page for this server. The web server software is running but no content has been added, yet.

OK. Now I've edited the default page /usr/share/nginx/html/index.html. But I can only see the changes in the guest browser. So no changes are visible when I access the default vhost via IP from the host system.

Furthermore I created a simple PHP file index.php and added it to the default vhost directory (/usr/share/nginx/html). When I access it from the guest (localhost/index.php), it works fine. But when I try it from the host, only the message

File not found.

is displayed. I only have this issue with the default vhost. The other two are working fine. What can be the cause? What am I doing wrong?
thx

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235411,235411#msg-235411

From nginx-forum at nginx.us Tue Jan 22 15:09:05 2013
From: nginx-forum at nginx.us (automatix)
Date: Tue, 22 Jan 2013 10:09:05 -0500
Subject: issue with default vhost
In-Reply-To: <07a72206a782a019ffe7a13b032e7ccd.NginxMailingListEnglish@forum.nginx.org>
References: <07a72206a782a019ffe7a13b032e7ccd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <026d391af7ca41ce47c6710ffaf6032f.NginxMailingListEnglish@forum.nginx.org>

EDIT: The same issue occurs when I try to reach the server from the guest but over the domain of the machine (devvm.loc).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235411,235413#msg-235413

From nginx-forum at nginx.us Tue Jan 22 15:13:06 2013
From: nginx-forum at nginx.us (jayaraj.k)
Date: Tue, 22 Jan 2013 10:13:06 -0500
Subject: Optimizing Nginx for serving 1GB files - Finding values for 'directio' & 'output_buffers'
Message-ID: <211a73395c1840714287511c9663325b.NginxMailingListEnglish@forum.nginx.org>

Hi,

We have an Nginx web server which is serving files whose size is almost 1GB. We were trying to optimize the configuration with the directio & output_buffers directives, but we couldn't find any calculation/formula with which we can identify suitable values for the above-mentioned directives.

Server Spec
Processor: Intel E5-2600 Xeon Family (2 CPUs, 16 cores each)
RAM: 32GB

Nginx config
Nginx Version: 1.3.9 (dev)
worker_processes 33;
worker_connections 1024;
use epoll;
worker_rlimit_nofile 33792;
aio on;

Could you please explain how we can find values for 'directio' & 'output_buffers' specific to a server.

Thanks
Jayaraj

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235414,235414#msg-235414

From nginx-forum at nginx.us Tue Jan 22 16:55:05 2013
From: nginx-forum at nginx.us (sdeancos)
Date: Tue, 22 Jan 2013 11:55:05 -0500
Subject: About ignore_invalid_headers directive in SSL
Message-ID:

Hi! I have a problem.
I have tried to ignore invalid headers with the directive ignore_invalid_headers off in my configuration with SSL and don't get it working; however, without SSL it works perfectly.

What could be the problem?

Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235422,235422#msg-235422

From luky-37 at hotmail.com Tue Jan 22 17:26:10 2013
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Tue, 22 Jan 2013 18:26:10 +0100
Subject: About ignore_invalid_headers directive in SSL
In-Reply-To:
References:
Message-ID:

Please be more specific about SSL "not working". What does actually happen? Do you see errors in the browser? Do you see errors in the access or error logs? Would you post your relevant configuration please?

Also, since ignore_invalid_headers has nothing to do with SSL at all [2], why are you trying to fix an SSL-related problem with it? What do you expect from that directive?

[2] http://nginx.org/en/docs/http/ngx_http_core_module.html#ignore_invalid_headers

----------------------------------------
> To: nginx at nginx.org
> Subject: About ignore_invalid_headers directive in SSL
> From: nginx-forum at nginx.us
> Date: Tue, 22 Jan 2013 11:55:05 -0500
>
> Hi!
>
> I have a problem.
>
> I have try ignore invalid headers with directive ignore_invalid_headers off
> in my configuration with SSL and dont get it working, however without SSL
> perfect work.
>
> What could be the problem?
>
> Thanks!
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235422,235422#msg-235422
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Tue Jan 22 22:15:54 2013
From: nginx-forum at nginx.us (middleforkgis)
Date: Tue, 22 Jan 2013 17:15:54 -0500
Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on?
Message-ID:

I'm having a difficult time understanding why I'm unable to limit the IP address to which nginx binds.
nginx 1.2.6-1 Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.35-2 x86_64 GNU/Linux ================ root at skokomish:/etc/nginx# netstat -pant |grep nginx tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 7768/nginx tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN 7768/nginx shows that it is binding on port 80 of all IP addresses. ================= Yet each of my hosts is explicitly listening on a single IP address: root at skokomish:/etc/nginx/sites-available# more default { listen 127.0.0.1:80; } ================= root at skokomish:/etc/nginx/sites-available# more example server { server_name example.com www.example.com; listen 66.113.100.140:80; access_log /var/log/ngnix/example.log; error_log /var/log/nginx/example.error.log; location /site { alias /data/www/content/site/example; } location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 10; proxy_read_timeout 10; proxy_pass http://10.15.20.10:8107/; } } ========================== There is no 'listen' statement in nginx.conf itself: root at skokomish:/etc/nginx# grep listen nginx.conf # listen localhost:110; # listen localhost:143; ========================== Grepping for 'listen' shows: root at skokomish:/etc/nginx# for i in `find .`; do grep listen $i; done|sort|uniq grep: .: Is a directory grep: ./sites-enabled: Is a directory grep: ./conf.d: Is a directory grep: ./sites-available: Is a directory { listen 127.0.0.1:80; } listen 66.113.100.140:80; listen 66.113.100.140:81; # listen localhost:110; # listen localhost:143; root at skokomish:/etc/nginx# ls conf.d root at skokomish:/etc/nginx# ================================== Thanks in advance. -Steve W. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235428,235428#msg-235428
From luky-37 at hotmail.com Wed Jan 23 00:27:13 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 23 Jan 2013 01:27:13 +0100 Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on? In-Reply-To: References: Message-ID:
Can you please try the official nginx.org binary [1] for Debian, or even better, compile it from source? Debian patches its packages heavily, and you are even running the package from Debian unstable. You may report your issue to Debian if it works with the nginx.org build. [1] http://nginx.org/en/download.html
----------------------------------------
> To: nginx at nginx.org > Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on? > From: nginx-forum at nginx.us > Date: Tue, 22 Jan 2013 17:15:54 -0500 > > I'm having a difficult time understanding why I'm unable to limit the IP > address to which nginx binds. > > nginx 1.2.6-1 > Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.35-2 x86_64 GNU/Linux > > > ================ > root at skokomish:/etc/nginx# netstat -pant |grep nginx > tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN > 7768/nginx > tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN > 7768/nginx > > shows that it is binding on port 80 of all IP addresses.
> ================= > Yet each of my hosts is explicitly listening on a single IP address: > > root at skokomish:/etc/nginx/sites-available# more default > { listen 127.0.0.1:80; } > ================= > root at skokomish:/etc/nginx/sites-available# more example > server > { > server_name example.com www.example.com; > listen 66.113.100.140:80; > access_log /var/log/ngnix/example.log; > error_log /var/log/nginx/example.error.log; > > location /site { > alias /data/www/content/site/example; > } > location / { > proxy_pass_header Server; > proxy_set_header Host $http_host; > proxy_redirect off; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Scheme $scheme; > proxy_connect_timeout 10; > proxy_read_timeout 10; > proxy_pass http://10.15.20.10:8107/; > } > } > > > ========================== > There is no 'listen' statement in nginx.conf itself: > root at skokomish:/etc/nginx# grep listen nginx.conf > # listen localhost:110; > # listen localhost:143; > ========================== > Grepping for 'listen' shows: > > root at skokomish:/etc/nginx# for i in `find .`; do grep listen $i; > done|sort|uniq > grep: .: Is a directory > grep: ./sites-enabled: Is a directory > grep: ./conf.d: Is a directory > grep: ./sites-available: Is a directory > { listen 127.0.0.1:80; } > listen 66.113.100.140:80; > listen 66.113.100.140:81; > # listen localhost:110; > # listen localhost:143; > root at skokomish:/etc/nginx# ls conf.d > root at skokomish:/etc/nginx# > > ================================== > Thanks in advance. > > -Steve W. 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235428,235428#msg-235428 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx
From nginx-forum at nginx.us Wed Jan 23 02:43:33 2013 From: nginx-forum at nginx.us (middleforkgis) Date: Tue, 22 Jan 2013 21:43:33 -0500 Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on? In-Reply-To: References: Message-ID:
Thanks for the advice. As per your suggestion, I've installed the 1.2.6-1 squeeze package from nginx.org and the problem persists. I will follow up with the results of a source build. -s
=======================================================================================
wget http://nginx.org/packages/debian/pool/nginx/n/nginx/nginx_1.2.6-1~squeeze_amd64.deb
--2013-01-22 18:38:49-- http://nginx.org/packages/debian/pool/nginx/n/nginx/nginx_1.2.6-1~squeeze_amd64.deb
Resolving nginx.org (nginx.org)... 206.251.255.63
Connecting to nginx.org (nginx.org)|206.251.255.63|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 485238 (474K) [application/octet-stream]
Saving to: 'nginx_1.2.6-1~squeeze_amd64.deb'
100%[========================================================================================================>] 485,238 614KB/s in 0.8s
2013-01-22 18:38:50 (614 KB/s) - 'nginx_1.2.6-1~squeeze_amd64.deb' saved [485238/485238]
==============================
root at skokomish:/tmp# !dpkg
dpkg -i `ls *deb`
(Reading database ... 136211 files and directories currently installed.)
Preparing to replace nginx 1.2.6-1~squeeze (using nginx_1.2.6-1~squeeze_amd64.deb) ...
Unpacking replacement nginx ...
Setting up nginx (1.2.6-1~squeeze) ...
===============================
root at skokomish:/tmp# /etc/init.d/nginx start
root at skokomish:/tmp# !netstat
netstat -pant |grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15578/nginx.conf
tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN 15578/nginx.conf
root at skokomish:/tmp#
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235428,235432#msg-235432
From nginx-forum at nginx.us Wed Jan 23 02:59:01 2013 From: nginx-forum at nginx.us (middleforkgis) Date: Tue, 22 Jan 2013 21:59:01 -0500 Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on? In-Reply-To: References: Message-ID: <24801f89acd50afb557832b7f92ffba5.NginxMailingListEnglish@forum.nginx.org>
http://nginx.org/download/nginx-1.2.6.tar.gz

./configure
make
make install

/usr/local/sbin/nginx -c /etc/nginx/nginx.conf

netstat -pant |grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 16609/nginx.conf
tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN 16609/nginx.conf

The problem persists, so it does not appear to be with the Debian package but rather with my config. I'll try a plain vanilla off-the-shelf nginx.conf next. -S
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235428,235433#msg-235433
From steve at greengecko.co.nz Wed Jan 23 03:05:53 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 23 Jan 2013 16:05:53 +1300 Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on?
In-Reply-To: <24801f89acd50afb557832b7f92ffba5.NginxMailingListEnglish@forum.nginx.org> References: <24801f89acd50afb557832b7f92ffba5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1358910353.32451.185.camel@steve-new>
On Tue, 2013-01-22 at 21:59 -0500, middleforkgis wrote: > http://nginx.org/download/nginx-1.2.6.tar.gz > > ./configure > ./make > ./make install > > /usr/local/sbin/nginx -c /etc/nginx/nginx.conf > > netstat -pant |grep nginx > tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN > 16609/nginx.conf > tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN > 16609/nginx.conf > > > Problem persists, the problem does not appear to be with the Debian > package but in my config. > > I'll try a plain vanilla off-the-shelf nginx.conf next. > > > > > -S
have a looksee what's under /etc/nginx/conf.d... there's often config for the default 'it works' splash screen in there. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa
-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL:
From haifeng.813 at gmail.com Wed Jan 23 03:29:26 2013 From: haifeng.813 at gmail.com (Liu Haifeng) Date: Wed, 23 Jan 2013 11:29:26 +0800 Subject: Is there any other way to trigger log reopen besides kill -USR1? Message-ID: <890591AB-E772-4852-ADA1-F5FD0BEF76AB@gmail.com>
Hi all, In the common case, people rotate the access log like this:

mv access.log access.XXX.log
kill -USR1

In my case, I have to do something like this:

if [ -f "access.log" ]; then
    mv access.log access.20130121.log
fi
kill -USR1
mv access.log access.20130122.log

My goal is to have the "current" log file named with the date pattern immediately, not only after one day. My script seems OK, but for a production script I still worry: is there any "unexpected" trigger that can make nginx reopen the log file (I mean inside nginx, in the core and other modules)?
Will any internal reopen action be added in the future?
From weiyue at taobao.com Wed Jan 23 03:34:46 2013 From: weiyue at taobao.com (Wei Yue) Date: Wed, 23 Jan 2013 03:34:46 +0000 Subject: Re: Not logging access to favicon.ico In-Reply-To: <1358496054.44479.YahooMailNeo@web171806.mail.ir2.yahoo.com> References: <1358458552.16395.YahooMailNeo@web171801.mail.ir2.yahoo.com> <1358496054.44479.YahooMailNeo@web171806.mail.ir2.yahoo.com> Message-ID:
You can try the ngx_log_if module: https://github.com/cfsego/ngx_log_if
-----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Mik J Sent: 18 January 2013 16:01 To: nginx at nginx.org Subject: Re: Not logging access to favicon.ico
----- Original Message ----- > From: Jonathan Matthews > > On 17 January 2013 22:53, Agus wrote: >> location is only available in a server block. Though you could create a file >> with the location /favicon... and then include it in every server block, >> which will save you typing. > ... I use this, and include it in every server{} stanza: > > location /favico { > access_log off; > error_log /dev/null crit; > empty_gif; > }
Thank you guys, so I'll put a similar configuration as below in all my virtual hosts:

# cat /etc/nginx/sites-available/default
server {
listen 80 default_server;
server_name _;
index index.html;
root /var/nginx/html;
access_log /var/log/nginx/default.access.log;
location /favico {
access_log off;
error_log /dev/null crit;
empty_gif;
}
}

I was wondering, since this "location" block (and probably other settings) is going to be repeated in every one of my virtual hosts, whether there is a way to configure this globally. I don't have any server stanza here since I use include /etc/nginx/sites-enabled/*; I'll do as Jonathan says unless someone has another suggestion.
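The per-server stanza Jonathan describes can at least be kept in one shared file and pulled into every server block with `include`, which is as close to "global" as location configuration gets, since nginx has no http-level location inheritance; a sketch (the snippet path and file name are assumptions, not anything from the thread):

```nginx
# /etc/nginx/snippets/favicon.conf -- hypothetical shared file
location /favico {
    access_log off;
    error_log /dev/null crit;
    empty_gif;    # 1x1 GIF response; requires ngx_http_empty_gif_module
}
```

Each virtual host then needs a single line inside its server block: `include /etc/nginx/snippets/favicon.conf;`.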
Bye
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you.
From haifeng.813 at gmail.com Wed Jan 23 03:08:18 2013 From: haifeng.813 at gmail.com (Liu Haifeng) Date: Wed, 23 Jan 2013 11:08:18 +0800 Subject: Re: Re: How to bind variable with connection in a customized module? In-Reply-To: <20130122141633.GJ9787@mdounin.ru> References: <-2518085127904266875@unknownmsgid> <20130122141633.GJ9787@mdounin.ru> Message-ID:
Thank you again. The last mail was sent from my mobile phone (Android); sorry for the trouble. Proxied connections are not a big problem for me for now, but good note, I'll think about it in the future.
On Jan 22, 2013, at 10:16 PM, Maxim Dounin wrote: > Hello! > > (Just a side note: it looks like your mail client has problems > with proper mail encoding. I had to recover the message headers by > hand, as the Subject header was split across multiple > non-consecutive lines, breaking other headers as well.) > > On Tue, Jan 22, 2013 at 05:33:31AM -0800, Haifeng Liu wrote: > >> Thank you very much. And for your note, I know the connections >> are reusable; that's not a problem for me. To be sure, while a >> client is alive, the connection won't serve other clients, is that >> correct? > > This depends on what you mean by "client". If you mean the next-hop HTTP > client, then true, the connection is always with one client. But > requests from different users might appear on the same connection, > e.g. if requests are proxied via the same proxy server (the "client" > from nginx's point of view).
> > It might be helpful to read this section of RFC 2616 for better > understanding: > > http://tools.ietf.org/html/rfc2616#section-1.4 > >> >> Maxim Dounin wrote: >> >> >> Hello! >> >> On Tue, Jan 22, 2013 at 08:16:36PM +0800, Liu Haifeng wrote: >> >>> hi all, >>> >>> I want to save state during a long (keep-alive) connection for >>> performance optimization in my own HTTP handler module, so that I >>> can pick up the saved state when handling requests, to speed up >>> the response. It's quite like the session concept. I saw the >>> request struct has a member ctx, but what I want is a ctx on the >>> connection. There seems to be no way to save a custom variable in >>> the ngx_connection_t structure. What's the suggested way to do >>> this if I don't want the client to hold something like a session_id? >> >> This was recently discussed on the nginx-devel@ mailing list, and >> probably the best way currently available is to install a connection >> pool cleanup handler with custom data and then iterate over the >> connection pool cleanup handlers to find your data. It is >> relatively costly, but it keeps the memory footprint of >> keepalive connections low and still allows modules to keep their >> per-connection data in the rare cases when they really need to. >> >> See here: >> http://mailman.nginx.org/pipermail/nginx-devel/2012-December/003049.html >> >> Note well, though, that HTTP is a stateless protocol, and the fact >> that a request came in on the same connection means mostly nothing: >> it might be a request from a completely different user.
>> >> -- >> Maxim Dounin >> http://nginx.com/support.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Maxim Dounin > http://nginx.com/support.html
From nginx-forum at nginx.us Wed Jan 23 04:19:14 2013 From: nginx-forum at nginx.us (middleforkgis) Date: Tue, 22 Jan 2013 23:19:14 -0500 Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on? In-Reply-To: <1358910353.32451.185.camel@steve-new> References: <1358910353.32451.185.camel@steve-new> Message-ID:
That's really good advice - in general - if not in this case. Via the standard Debian install, conf.d is empty:
#ls -l /etc/nginx/conf.d
total 0
===
Conversely, downloading the squeeze package from nginx does indeed create a file /etc/nginx/conf.d/default.conf which includes the following:
server {
listen 80;
server_name localhost;
which could have been the culprit - but wasn't in my original case (my first post does show an empty conf.d, and I've verified that on a separate machine). But again, that's important advice you gave.
===
During my debugging I removed the package from the Debian repository and installed the package from nginx.org, so /etc/nginx/conf.d/default.conf was on my system during the subsequent tests I documented above. I now need to repeat the entire process and report back on it. -S
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235428,235438#msg-235438
From igor at sysoev.ru Wed Jan 23 05:09:03 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 23 Jan 2013 09:09:03 +0400 Subject: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on? In-Reply-To: References: Message-ID: <4C205300-DB8D-4E09-BBDF-603B6C362E9B@sysoev.ru>
On Jan 23, 2013, at 2:15 , middleforkgis wrote: > I'm having a difficult time understanding why I'm unable to limit the IP > address to which nginx binds.
> > nginx 1.2.6-1 > Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.35-2 x86_64 GNU/Linux > > > ================ > root at skokomish:/etc/nginx# netstat -pant |grep nginx > tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN > 7768/nginx > tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN > 7768/nginx > > shows that it is binding on port 80 of all IP addresses. > ================= > Yet each of my hosts is explicitly listening on a single IP address: > > root at skokomish:/etc/nginx/sites-available# more default > { listen 127.0.0.1:80; } > ================= > root at skokomish:/etc/nginx/sites-available# more example > server > { > server_name example.com www.example.com; > listen 66.113.100.140:80; > access_log /var/log/ngnix/example.log; > error_log /var/log/nginx/example.error.log; > > location /site { > alias /data/www/content/site/example; > } > location / { > proxy_pass_header Server; > proxy_set_header Host $http_host; > proxy_redirect off; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Scheme $scheme; > proxy_connect_timeout 10; > proxy_read_timeout 10; > proxy_pass http://10.15.20.10:8107/; > } > } > > > ========================== > There is no 'listen' statement in nginx.conf itself: > root at skokomish:/etc/nginx# grep listen nginx.conf > # listen localhost:110; > # listen localhost:143; > ========================== > Grepping for 'listen' shows: > > root at skokomish:/etc/nginx# for i in `find .`; do grep listen $i; > done|sort|uniq > grep: .: Is a directory > grep: ./sites-enabled: Is a directory > grep: ./conf.d: Is a directory > grep: ./sites-available: Is a directory > { listen 127.0.0.1:80; } > listen 66.113.100.140:80; > listen 66.113.100.140:81; > # listen localhost:110; > # listen localhost:143; > root at skokomish:/etc/nginx# ls conf.d > root at skokomish:/etc/nginx# > > ================================== > Thanks in advance.
If there is a server block without a listen directive, nginx will listen on *:80.
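A minimal sketch of the failure mode Igor describes (the hostname is a placeholder): the first server block below has no listen directive, so nginx gives it an implicit listen *:80, which shows up as 0.0.0.0:80 in netstat; giving every server block an explicit listen keeps the bind on one address:

```nginx
# Implicit bind: no listen directive here means "listen *:80"
# for this server, producing the wildcard 0.0.0.0:80 socket.
server {
    server_name orphan.example.com;
}

# Explicit bind: with a listen in every server block, no
# wildcard socket is created for port 80.
server {
    listen 66.113.100.140:80;
    server_name orphan.example.com;
}
```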
-- Igor Sysoev http://nginx.com/support.html
From yaoweibin at gmail.com Wed Jan 23 06:25:10 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Wed, 23 Jan 2013 14:25:10 +0800 Subject: A problem with the keepalive module and the directive proxy_next_upstream In-Reply-To: References: <20130114105124.GI25043@mdounin.ru> Message-ID:
Hi, Maxim, I have removed the above code. It seems to work for us, and there are no side effects. We have put it on our busy production boxes for a week. This patch makes nginx honor tries and the u->conf->next_upstream variable. The patch is here:

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
index 842b634..0a0dfc3 100644
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2845,6 +2845,12 @@ ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u,
                       "upstream timed out");
     }

+#if 0
     if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
         status = 0;
@@ -2853,6 +2859,7 @@ ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u,
         u->peer.tries++;
     } else {
+#endif
         switch(ft_type) {
         case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
@@ -2875,7 +2882,9 @@ ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u,
         default:
             status = NGX_HTTP_BAD_GATEWAY;
         }
+#if 0
     }
+#endif
     if (r->connection->error) {
         ngx_http_upstream_finalize_request(r, u,

2013/1/14 Weibin Yao > The nginx end will close the cached connection actively in this next > upstream function. I don't know why it should always *retry* other servers > and not honor the tries and u->conf->next_upstream variable. > > Thanks. > > > 2013/1/14 Maxim Dounin > >> Hello! >> >> On Mon, Jan 14, 2013 at 04:11:20PM +0800, Weibin Yao wrote: >> >> > Hi, folks, >> > >> > We have found a bug with the keepalive module. When we used the >> keepalive >> > module, the directive proxy_next_upstream seems disabled. >> > >> > We use Nginx as a reverse proxy server.
Our backend servers simply close >> connection >> > when read some abnormal packets. Nginx will call the function >> > ngx_http_upstream_next() and try to use the next server. The ft_type >> > is NGX_HTTP_UPSTREAM_FT_ERROR. We want to turn off the try mechanism >> with >> > such packets. Otherwise, it will try all the servers every time. We use >> > directive proxy_next_upstream off. If it's not keepalive connection, >> > everything is fine. If it's keepalive connection, it will run such code: >> > >> > 2858 if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { >> > 2859 status = 0; >> > 2860 >> > 2861 /* TODO: inform balancer instead */ >> > 2862 >> > 2863 u->peer.tries++; >> > 2864 >> > >> > The status is cleared to be 0. The below code will never be touched: >> > >> > 2896 if (status) { >> > 2897 u->state->status = status; >> > 2898 >> > 2899 if (u->peer.tries == 0 || !(u->conf->next_upstream & >> ft_type)) >> > { >> > >> > The variable of tries and u->conf->next_upstream become useless. >> > >> > I don't know why the cached connection should clear the status, Can we >> just >> > remove the code from line 2858 to 2864? Is there any side effect? >> >> Cached connection might be (legitimately) closed by an upstream >> server at any time, so the code always retries if sending request >> failed. >> >> >> -- >> Maxim Dounin >> http://nginx.com/support.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From shahzaib.cb at gmail.com Wed Jan 23 09:43:39 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 14:43:39 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections Message-ID:
Hello, We are using nginx to serve large static files, i.e. jpg, flv and mp4. Nginx streaming works very well at 1000~1500 concurrent connections, but whenever connections exceed 2000~2200 the stream gets too slow. We have five content servers with the following specification:

Dual Quad Core (8 cores/16 threads)
RAM = 32G
HDD = SAS hardware RAID 10

My nginx.conf config is given below:

user nginx;
worker_processes 16;
worker_rlimit_nofile 300000; #2 filehandlers for each connection
#pid logs/nginx.pid;

events {
worker_connections 6000;
use epoll;
}
http {
include mime.types;
default_type application/octet-stream;
limit_rate 180k;
client_body_buffer_size 128K;
sendfile_max_chunk 128k;
server_tokens off; #Conceals nginx version
access_log off;
sendfile on;
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
keepalive_timeout 0;

If somebody can help me improve the nginx config, it would be much appreciated. I apologize for my bad English :D
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From nginx-forum at nginx.us Wed Jan 23 11:16:52 2013 From: nginx-forum at nginx.us (amozz) Date: Wed, 23 Jan 2013 06:16:52 -0500 Subject: Using HttpMapModule with proxy_pass got 502 Message-ID:
Hi everyone, My project needs to route HTTP requests to different hosts by domain name. I deployed HttpMapModule and proxy_pass; the related nginx.conf segment is as follows:

http {
map $http_host $backend_servers {
app.example.com localdomain1; #if changed to an ip, it's ok
default localdomain2;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://$backend_servers;
}
.......
}

When I send the request URI http://app.example.com, the result is a 502 error.
But if I change the mapping value to an IP address instead of a domain name, it works with no problem. And if I change the proxy_pass item directly to http://localdomain1, it also works as normal. Does proxy_pass + http_map_module not perform a DNS lookup, or is there some other mistake in the nginx.conf? Thanks for any suggestion.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235450,235450#msg-235450
From nginx-forum at nginx.us Wed Jan 23 11:29:18 2013 From: nginx-forum at nginx.us (sdeancos) Date: Wed, 23 Jan 2013 06:29:18 -0500 Subject: About ignore_invalid_headers directive in SSL In-Reply-To: References: Message-ID:
Hi! Sorry... When I say "not working" I mean that the ignore_invalid_headers off directive does not work: my custom headers are not propagated. My example:

server {
listen 443;
ssl on;
ssl_certificate my_public.crt;
ssl_certificate_key my_server.key;
server_name myservername;
ignore_invalid_headers off;
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_pass http://192.168.1.82;
}
}

Thanks!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235422,235452#msg-235452
From shahzaib.cb at gmail.com Wed Jan 23 11:31:29 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 16:31:29 +0500 Subject: Optimizing Nginx for serving 1GB files - Finding values for 'directio' & 'output_buffers' In-Reply-To: <211a73395c1840714287511c9663325b.NginxMailingListEnglish@forum.nginx.org> References: <211a73395c1840714287511c9663325b.NginxMailingListEnglish@forum.nginx.org> Message-ID:
I have the same problem serving large files with .flv and .mp4 extensions. Can somebody help? Thanks
On Tue, Jan 22, 2013 at 8:13 PM, jayaraj.k wrote: > Hi, > > We have a Nginx web server which is serving files whose size is almost 1GB. > We were trying to optimize the configuration with directio & output_buffers > directives.
but, we couldn't find any calculation/formula with which we can > identify suitable values for above mentioned directives. > > Server Spec > > Processor: Intel E5-2600 Xeon Family (2cpus,16 cores each) > RAM: 32GB > > Nginx config > > Nginx Version: 1.3.9 (dev) > worker_processes 33; > worker_connections 1024; > use epoll; > worker_rlimit_nofile 33792; > aio on; > Could you plz explain how we can find values for 'directio' & > 'output_buffers' specific to a server. > > Thanks > Jayaraj > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,235414,235414#msg-235414 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m6rkalan at gmail.com Wed Jan 23 11:46:23 2013 From: m6rkalan at gmail.com (Mark Alan) Date: Wed, 23 Jan 2013 11:46:23 +0000 Subject: Problem with return 302 redirection, with Nginx 1.3.11 + Drupal 7 Message-ID: <50ffcd91.424cb40a.72a3.42e1@mx.google.com> Hello, In order to redirect certain Drupal 7 functions to https I have setup Nginx 1.3.11 as follows: location ~* ^/(\?q=)?(?:user|admin|contact$) { return 302 https://$host$request_uri; } # and then the usual: location / { try_files $uri $uri/ /index.php?$args; } location = /index.php { ... fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass ... } While this works if called as: http://example.com/user -> becomes https://example.com/user This fails if called as: http://example.com/?q=user -> stays http://example.com/?q=user What should I do to get http://example.com/?q=user redirected to https://example.com/user or, if that is not possible, to https://example.com/?q=user ? Thank you, M. 
From m6rkalan at gmail.com Wed Jan 23 11:56:32 2013 From: m6rkalan at gmail.com (Mark Alan) Date: Wed, 23 Jan 2013 11:56:32 +0000 Subject: Should it be "if ( $request_method !~ ^(GET...)" or "limit_except GET..." ? Message-ID: <50ffcff3.4252b40a.6bd4.3376@mx.google.com>
Hello, Can anybody tell me if the constructs below are equivalent? With a dozen or so 'location' blocks, is it still better to have a limit_except in each location, or would it be better to have a single server-level 'if ( $request_method...'?

server {
...
if ( $request_method !~ ^(?:GET|HEAD|POST)$ ) { return 444; }
...

server {
...
location / {
limit_except GET HEAD POST { deny all; }
...

Thank you, Mark
From nginx-forum at nginx.us Wed Jan 23 11:58:49 2013 From: nginx-forum at nginx.us (skechboy) Date: Wed, 23 Jan 2013 06:58:49 -0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID:
Have you checked HDD performance on the server during these periods with atop or iostat 1? It's very likely to be related, since I guess there's a lot of random reading with 2000 connections.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235447,235456#msg-235456
From francis at daoine.org Wed Jan 23 12:03:36 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 23 Jan 2013 12:03:36 +0000 Subject: Problem with return 302 redirection, with Nginx 1.3.11 + Drupal 7 In-Reply-To: <50ffcd91.424cb40a.72a3.42e1@mx.google.com> References: <50ffcd91.424cb40a.72a3.42e1@mx.google.com> Message-ID: <20130123120336.GL4332@craic.sysops.org>
On Wed, Jan 23, 2013 at 11:46:23AM +0000, Mark Alan wrote: Hi there, > location ~* ^/(\?q=)?(?:user|admin|contact$) { > return 302 https://$host$request_uri; > } That probably won't match the request that you want it to match. > What should I do to get http://example.com/?q=user redirected to > https://example.com/user or, if that is not possible, to > https://example.com/?q=user ?
The request http://example.com/?q=user has location = /, and $query_string = q=user, and $arg_q = user. So you should use some combination of those variables within location = / {} to do the redirection. Use $arg_q if you don't care about any other parts of the query string. If you have many things to compare, creating a "map" is probably worthwhile. And you'll also want to consider what to do in that location if $arg_q is not one that you want to redirect -- possibly just letting it fall through to the "index" value will do. f -- Francis Daly francis at daoine.org
From shahzaib.cb at gmail.com Wed Jan 23 12:11:30 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 17:11:30 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID:
Hello, Following is the output of vmstat 1 at 1000+ concurrent connections:

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b swpd   free  buff    cache si so   bi bo    in   cs us sy  id wa st
 0  0    0 438364 43668 31548164  0  0   62 23     3    0  5  0  95  0  0
 0  0    0 437052 43668 31548520  0  0 1292  0 19763 1570  0  0  99  1  0
 0  1    0 435316 43668 31549940  0  0 1644  0 20034 1537  0  0  99  1  0
 1  0    0 434688 43676 31551388  0  0 1104 12 19816 1612  0  0 100  0  0
 0  0    0 434068 43676 31552304  0  0  512 24 20253 1541  0  0  99  0  0
 1  0    0 430844 43676 31553156  0  0 1304  0 19322 1636  0  0  99  1  0
 0  1    0 429480 43676 31554256  0  0  884  0 19993 1585  0  0  99  0  0
 0  0    0 428988 43676 31555020  0  0 1008  0 19244 1558  0  0  99  0  0
 0  0    0 416472 43676 31556368  0  0 1244  0 18752 1611  0  0  99  0  0
 2  0    0 425344 43676 31557552  0  0 1120  0 19039 1639  0  0  99  0  0
 0  0    0 421308 43676 31558212  0  0 1012  0 19921 1595  0  0  99  0  0

This might be a stupid question, but which columns of the above output should I focus on to tell whether the I/O is performing well or under heavy load? Thanks
On Wed, Jan 23, 2013 at 4:58 PM, skechboy wrote: > Have you checked HDD performance on the server during these periods with atop or
> It's very likely to be related, since I guess there's a lot of > random reading on 2000 connections. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,235447,235456#msg-235456 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From shahzaib.cb at gmail.com Wed Jan 23 12:13:59 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 17:13:59 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID:
Sorry, my previous reply used the wrong command. Following is the output of iostat 1:

Linux 2.6.32-279.19.1.el6.x86_64 (DNTX005.local) 01/23/2013 _x86_64_ (16 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
          1.72  2.94    0.46    0.11   0.00 94.77

Device:   tps Blk_read/s Blk_wrtn/s  Blk_read  Blk_wrtn
sda     20.53    1958.91     719.38 477854182 175484870

avg-cpu: %user %nice %system %iowait %steal %idle
          0.06  0.00    0.13    0.19   0.00 99.62

Device:   tps Blk_read/s Blk_wrtn/s  Blk_read  Blk_wrtn
sda     30.00    1040.00    5392.00      1040      5392

avg-cpu: %user %nice %system %iowait %steal %idle
          0.00  0.00    0.19    0.25   0.00 99.56

Device:   tps Blk_read/s Blk_wrtn/s  Blk_read  Blk_wrtn
sda     24.00    1368.00     104.00      1368       104

Thanks
On Wed, Jan 23, 2013 at 5:11 PM, shahzaib shahzaib wrote: > Hello, > > Following is the output of vmstat 1 on 1000+ concurrent connections > :- > > procs -----------memory---------- ---swap-- -----io---- --system-- > -----cpu----- > r b swpd free buff cache si so bi bo in cs us sy id > wa st > 0 0 0 438364 43668 31548164 0 0 62 23 3 0 5 0 > 95 0 0 > 0 0 0 437052 43668 31548520 0 0 1292 0 19763 1570 0 0 > 99 1 0 > 0 1 0 435316 43668 31549940 0 0 1644 0 20034 1537 0 0 > 99 1 0 > 1 0 0 434688 43676 31551388 0 0 1104 12 19816 1612 0 0 > 100 0 0 > 0 0 0 434068 43676 31552304 0 0 512 24 20253 1541 0 0 > 99 0 0 > 1 0 0 430844 43676
31553156 0 0 1304 0 19322 1636 0 0 > 99 1 0 > 0 1 0 429480 43676 31554256 0 0 884 0 19993 1585 0 0 > 99 0 0 > 0 0 0 428988 43676 31555020 0 0 1008 0 19244 1558 0 0 > 99 0 0 > 0 0 0 416472 43676 31556368 0 0 1244 0 18752 1611 0 0 > 99 0 0 > 2 0 0 425344 43676 31557552 0 0 1120 0 19039 1639 0 0 > 99 0 0 > 0 0 0 421308 43676 31558212 0 0 1012 0 19921 1595 0 0 > 99 0 0 > > > This might be a stupid question ,which section should i focus from above > output to analyze if I/O is performing well or in heavy load? > > Thanks > > > On Wed, Jan 23, 2013 at 4:58 PM, skechboy wrote: > >> Have you checked HDD performance on the server in this periods with atop >> or >> iostat 1 ? >> It's very likely to be related with this since I guess there's a lot's of >> random reading on 2000 connections. >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,235447,235456#msg-235456 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennisml at conversis.de Wed Jan 23 12:39:51 2013 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Wed, 23 Jan 2013 13:39:51 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID: <50FFDA17.6070200@conversis.de> On 01/23/2013 10:43 AM, shahzaib shahzaib wrote: > Hello, > > We are using nginx to serve large size of static files i.e jpg,flv > and mp4 . Nginx stream works very well on 1000~1500 concurrent connections > but whenever connections exceeded to 2000~2200, stream gets too slow. 
We've > five content server with following specification:- > > Dual Quard Core (8cores/16threads) > RAM = 32G > HDD = Sas Hard-Raid 10 > > > My nginx.conf config is given below : > > user nginx; > worker_processes 16; > worker_rlimit_nofile 300000; #2 filehandlers for each connection; > > #pid logs/nginx.pid; > > > events { > worker_connections 6000; > use epoll; > } > http { > include mime.types; > default_type application/octet-stream; > limit_rate 180k; > client_body_buffer_size 128K; > sendfile_max_chunk 128k; > server_tokens off; #Conceals nginx version > access_log off; > sendfile on; > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > keepalive_timeout 0; > > If somebody can help me improving nginx config will be helpful to him. I > apologize for bad engish :D What's the required bandwidth for the flv files? What is the bandwidth of the connection of the system? What is the bandwidth of the uplink to the Internet? Regards, Dennis From appa at perusio.net Wed Jan 23 13:13:10 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 23 Jan 2013 14:13:10 +0100 Subject: Problem with return 302 redirection, with Nginx 1.3.11 + Drupal 7 In-Reply-To: <50ffcd91.424cb40a.72a3.42e1@mx.google.com> References: <50ffcd91.424cb40a.72a3.42e1@mx.google.com> Message-ID: <87ham83uvt.wl%appa@perusio.net> On 23 Jan 2013 12h46 CET, m6rkalan at gmail.com wrote: > Hello, > > In order to redirect certain Drupal 7 functions to https I have > setup Nginx 1.3.11 as follows: > > location ~* ^/(\?q=)?(?:user|admin|contact$) { > return 302 https://$host$request_uri; > } Locations don't match the query string part. At the http level: map $arg_q $q_secure { default 0; ~(?:user|admin|contact) 1; } map $uri $u_secure { default 0; ~^/(?:user|admin|contact) 1; } map $q_secure$u_secure $secure { default 0; 10 1; 01 1; } At the server level: if ($secure) { return 302 https://$host$request_uri; } Try it. 
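[Editorial note] For reference, the map-based redirect above assembled into one sketch. The server block and hostname are placeholders, and one case has been added: as written, the combined map leaves the value 11 (both the URI and the q= argument matching) at the default of 0, so it would not redirect.

```nginx
# Sketch only: redirect Drupal's user/admin/contact paths to https,
# whether they appear in the URI or in the ?q= argument.
# These maps go at the http level; "example.com" is a placeholder.
map $arg_q $q_secure {
    default 0;
    ~(?:user|admin|contact) 1;
}
map $uri $u_secure {
    default 0;
    ~^/(?:user|admin|contact) 1;
}
map $q_secure$u_secure $secure {
    default 0;
    10 1;
    01 1;
    11 1;   # added: both the URI and the argument matched at once
}

server {
    listen 80;
    server_name example.com;
    if ($secure) {
        return 302 https://$host$request_uri;
    }
    # ... rest of the Drupal configuration goes here
}
```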
--- appa From shahzaib.cb at gmail.com Wed Jan 23 13:13:31 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 18:13:31 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: <50FFDA17.6070200@conversis.de> References: <50FFDA17.6070200@conversis.de> Message-ID: The average size of each flv video is 60Mb+. We've five content servers (nginx-1.2.1), each with 1Gbps port and 100TB bandwidth per month. Right now each server is consuming 10~12 bandwidth per day and we're going to run out of bandwidth on coming last days of month. However we limited every connection to 180k, you can see limit_rate 180k; in nginx.conf file. I am newbie to this field. Please correct me if i didn't satisfy your question regarding bandwidth. :) On Wed, Jan 23, 2013 at 5:39 PM, Dennis Jacobfeuerborn < dennisml at conversis.de> wrote: > On 01/23/2013 10:43 AM, shahzaib shahzaib wrote: > > Hello, > > > > We are using nginx to serve large size of static files i.e > jpg,flv > > and mp4 . Nginx stream works very well on 1000~1500 concurrent > connections > > but whenever connections exceeded to 2000~2200, stream gets too slow. 
> We've > > five content server with following specification:- > > > > Dual Quard Core (8cores/16threads) > > RAM = 32G > > HDD = Sas Hard-Raid 10 > > > > > > My nginx.conf config is given below : > > > > user nginx; > > worker_processes 16; > > worker_rlimit_nofile 300000; #2 filehandlers for each connection; > > > > #pid logs/nginx.pid; > > > > > > events { > > worker_connections 6000; > > use epoll; > > } > > http { > > include mime.types; > > default_type application/octet-stream; > > limit_rate 180k; > > client_body_buffer_size 128K; > > sendfile_max_chunk 128k; > > server_tokens off; #Conceals nginx version > > access_log off; > > sendfile on; > > client_header_timeout 3m; > > client_body_timeout 3m; > > send_timeout 3m; > > keepalive_timeout 0; > > > > If somebody can help me improving nginx config will be helpful to him. I > > apologize for bad engish :D > > What's the required bandwidth for the flv files? What is the bandwidth of > the connection of the system? What is the bandwidth of the uplink to the > Internet? > > Regards, > Dennis > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hartz.geoffrey at gmail.com Wed Jan 23 13:33:54 2013 From: hartz.geoffrey at gmail.com (Geoffrey Hartz) Date: Wed, 23 Jan 2013 14:33:54 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <50FFDA17.6070200@conversis.de> Message-ID: It will not help you actualy but I had a similar experience. My issue was due to the system event handle (epoll, kqueue...) I noticed poor speed when hitting 2000 connections with haproxy. So I switch to nginx + tcp module proxy. Same results.. But using haproxy + nginx (with two different event handler, I avoid the speed problem). 
At the end, I prefered use ESXi and 2/3 VM and split connections with DNS load balancing Maybe you should take a look at this event handler problem and do some tunning on kernel/OS. Nginx (maybe) isn't the actual issue. 2013/1/23 shahzaib shahzaib : > The average size of each flv video is 60Mb+. We've five content servers > (nginx-1.2.1), each with 1Gbps port and 100TB bandwidth per month. Right now > each server is consuming 10~12 bandwidth per day and we're going to run out > of bandwidth on coming last days of month. However we limited every > connection to 180k, you can see limit_rate 180k; in nginx.conf file. > > I am newbie to this field. Please correct me if i didn't satisfy your > question regarding bandwidth. :) > > > On Wed, Jan 23, 2013 at 5:39 PM, Dennis Jacobfeuerborn > wrote: >> >> On 01/23/2013 10:43 AM, shahzaib shahzaib wrote: >> > Hello, >> > >> > We are using nginx to serve large size of static files i.e >> > jpg,flv >> > and mp4 . Nginx stream works very well on 1000~1500 concurrent >> > connections >> > but whenever connections exceeded to 2000~2200, stream gets too slow. >> > We've >> > five content server with following specification:- >> > >> > Dual Quard Core (8cores/16threads) >> > RAM = 32G >> > HDD = Sas Hard-Raid 10 >> > >> > >> > My nginx.conf config is given below : >> > >> > user nginx; >> > worker_processes 16; >> > worker_rlimit_nofile 300000; #2 filehandlers for each connection; >> > >> > #pid logs/nginx.pid; >> > >> > >> > events { >> > worker_connections 6000; >> > use epoll; >> > } >> > http { >> > include mime.types; >> > default_type application/octet-stream; >> > limit_rate 180k; >> > client_body_buffer_size 128K; >> > sendfile_max_chunk 128k; >> > server_tokens off; #Conceals nginx version >> > access_log off; >> > sendfile on; >> > client_header_timeout 3m; >> > client_body_timeout 3m; >> > send_timeout 3m; >> > keepalive_timeout 0; >> > >> > If somebody can help me improving nginx config will be helpful to him. 
I >> > apologize for bad engish :D >> >> What's the required bandwidth for the flv files? What is the bandwidth of >> the connection of the system? What is the bandwidth of the uplink to the >> Internet? >> >> Regards, >> Dennis >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Geoffrey HARTZ From shahzaib.cb at gmail.com Wed Jan 23 14:14:03 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 19:14:03 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <50FFDA17.6070200@conversis.de> Message-ID: Thanks for helping me out guyz but w're already dealing with 3-app servers clustering with haproxy load balancer (For this video streaming site), and can't afford another clustering type things. You're talking about Haproxy load-balancer to split required flv files requests to different servers. We'll have to buy 5 more content servers for mirroring data between every two servers to split load-balancer requests but the problem is we're running out of budget. If you can provide me some alternate to this problem? On Wed, Jan 23, 2013 at 6:33 PM, Geoffrey Hartz wrote: > It will not help you actualy but I had a similar experience. > > My issue was due to the system event handle (epoll, kqueue...) > > I noticed poor speed when hitting 2000 connections with haproxy. So I > switch to nginx + tcp module proxy. Same results.. > > But using haproxy + nginx (with two different event handler, I avoid > the speed problem). At the end, I prefered use ESXi and 2/3 VM and > split connections with DNS load balancing > > Maybe you should take a look at this event handler problem and do some > tunning on kernel/OS. Nginx (maybe) isn't the actual issue. 
> > 2013/1/23 shahzaib shahzaib : > > The average size of each flv video is 60Mb+. We've five content servers > > (nginx-1.2.1), each with 1Gbps port and 100TB bandwidth per month. Right > now > > each server is consuming 10~12 bandwidth per day and we're going to run > out > > of bandwidth on coming last days of month. However we limited every > > connection to 180k, you can see limit_rate 180k; in nginx.conf file. > > > > I am newbie to this field. Please correct me if i didn't satisfy your > > question regarding bandwidth. :) > > > > > > On Wed, Jan 23, 2013 at 5:39 PM, Dennis Jacobfeuerborn > > wrote: > >> > >> On 01/23/2013 10:43 AM, shahzaib shahzaib wrote: > >> > Hello, > >> > > >> > We are using nginx to serve large size of static files i.e > >> > jpg,flv > >> > and mp4 . Nginx stream works very well on 1000~1500 concurrent > >> > connections > >> > but whenever connections exceeded to 2000~2200, stream gets too slow. > >> > We've > >> > five content server with following specification:- > >> > > >> > Dual Quard Core (8cores/16threads) > >> > RAM = 32G > >> > HDD = Sas Hard-Raid 10 > >> > > >> > > >> > My nginx.conf config is given below : > >> > > >> > user nginx; > >> > worker_processes 16; > >> > worker_rlimit_nofile 300000; #2 filehandlers for each connection; > >> > > >> > #pid logs/nginx.pid; > >> > > >> > > >> > events { > >> > worker_connections 6000; > >> > use epoll; > >> > } > >> > http { > >> > include mime.types; > >> > default_type application/octet-stream; > >> > limit_rate 180k; > >> > client_body_buffer_size 128K; > >> > sendfile_max_chunk 128k; > >> > server_tokens off; #Conceals nginx version > >> > access_log off; > >> > sendfile on; > >> > client_header_timeout 3m; > >> > client_body_timeout 3m; > >> > send_timeout 3m; > >> > keepalive_timeout 0; > >> > > >> > If somebody can help me improving nginx config will be helpful to > him. I > >> > apologize for bad engish :D > >> > >> What's the required bandwidth for the flv files? 
What is the bandwidth > of > >> the connection of the system? What is the bandwidth of the uplink to the > >> Internet? > >> > >> Regards, > >> Dennis > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Geoffrey HARTZ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jan 23 14:21:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Jan 2013 18:21:13 +0400 Subject: Using HttpMapModule with proxy_pass got 502 In-Reply-To: References: Message-ID: <20130123142113.GC27423@mdounin.ru> Hello! On Wed, Jan 23, 2013 at 06:16:52AM -0500, amozz wrote: > Hi everyone, > > My project needs to route http request to different host with different > domain name. I deployed HttpMapModule and proxy_pass, and the related > nginx.conf segment is following: > > http { > map $http_host $backend_servers { > app.example.com localdomain1; #if changed to ip, it's ok > default localdomain2; > } > > server { > listen 80; > server_name localhost; > location /{ > proxy_pass http://$backend_servers; > } > ....... > } > > When I send the requet uri as http://app.example.com, the result is 502 > error. But if I changed the maping value as IP address not domain name, it > worked with no problem. And if I change the proxy_pass item directly to > http://localdomain1, it also worked as normal. > > Does the proxy_pass + http_map_module not refer to DNS look? or is there any > other point wrong in the nginx.conf ? > > Thanks for any suggestion. 
For dynamic name resolution of upstream servers to work, you have to configure resolver, see http://nginx.org/r/resolver. You may also start looking into error log if something goes wrong. In this case it should have "no resolver defined" errors logged. -- Maxim Dounin http://nginx.com/support.html From luky-37 at hotmail.com Wed Jan 23 14:29:02 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 23 Jan 2013 15:29:02 +0100 Subject: About ignore_invalid_headers directive in SSL In-Reply-To: References: , Message-ID: Reread the documentation and what the flag is actually about. You are disabling (off) a feature which IGNORES invalid header names. If you rely on invalid header names, you need to enable this feature, not disable it. And btw, the feature is already on by default, so why don't you just remove it from the configuration? That being said, you should absolutely not rely on invalid headers, since that may break in certain browsers. Are you perhaps confusing CUSTOM (X-blabla: asdasd) with INVALID header names (broken-?$%&\/()-header-name: asdasd)? ---------------------------------------- > To: nginx at nginx.org > Subject: Re: RE: About ignore_invalid_headers directive in SSL > From: nginx-forum at nginx.us > Date: Wed, 23 Jan 2013 06:29:18 -0500 > > Hi! > > Sorry... > > When i say "not working" meant that not working ignore_invalid_headers off > directive.. not propage my customs headers. > > My example: > > server { > listen 443; > ssl on; > ssl_certificate my_public.crt; > ssl_certificate_key my_server.key; > server_name myservername; > ignore_invalid_headers off; > location / { > proxy_pass_header Server; > proxy_set_header Host $http_host; > proxy_redirect off; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Scheme $scheme; > proxy_pass http://192.168.1.82; > } > } > > > Thanks! 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235422,235452#msg-235452 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From vbart at nginx.com Wed Jan 23 14:35:47 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 23 Jan 2013 18:35:47 +0400 Subject: About ignore_invalid_headers directive in SSL In-Reply-To: References: Message-ID: <201301231835.47587.vbart@nginx.com> On Wednesday 23 January 2013 15:29:18 sdeancos wrote: > Hi! > > Sorry... > > When i say "not working" meant that not working ignore_invalid_headers off > directive.. not propage my customs headers. > > My example: > > server { > listen 443; > ssl on; > ssl_certificate my_public.crt; > ssl_certificate_key my_server.key; > server_name myservername; > ignore_invalid_headers off; > location / { > proxy_pass_header Server; > proxy_set_header Host $http_host; > proxy_redirect off; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Scheme $scheme; > proxy_pass http://192.168.1.82; > } > } > > > Thanks! > Is this the default server? Do you have other server blocks that listen on 443? Please note from the documentation: "A directive can be specified on the server level in a default server. In this case, its value will cover all virtual servers listening on the same address and port." @ http://nginx.org/r/ignore_invalid_headers wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From r at roze.lv Wed Jan 23 14:42:50 2013 From: r at roze.lv (Reinis Rozitis) Date: Wed, 23 Jan 2013 16:42:50 +0200 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID: > If somebody can help me improving nginx config will be helpful to him. I > apologize for bad engish :D It is not always the nginx that needs tuning. What about OS? Have you changed the default file descriptor limits? 
Something like "Nginx stream works very well on 1000~1500 concurrent connections" might indicate you are hitting the default 1024 limit. While since some kernel versions linux does autotuning still sometimes tweaking it a bit helps a lot: http://www.cyberciti.biz/faq/linux-tcp-tuning/ etc .. rr From shahzaib.cb at gmail.com Wed Jan 23 14:58:09 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 19:58:09 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID: File descriptors are tweaked to 700000 in sysctl.conf. However I'll follow the above guide to tweak sysctl for better performance and will see if it works. Thanks for guiding me, guys. On Wed, Jan 23, 2013 at 7:42 PM, Reinis Rozitis wrote: > If somebody can help me improving nginx config will be helpful to him. I >> apologize for bad engish :D >> > > It is not always the nginx that needs tuning. What about OS? > > Have you changed the default file descriptor limits? Something like "Nginx > stream works very well on 1000~1500 concurrent connections" might indicate > you are hitting the default 1024 limit. > While since some kernel versions linux does autotuning still sometimes > tweaking it a bit helps a lot: http://www.cyberciti.biz/faq/ > linux-tcp-tuning/ etc > .. > > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Jan 23 15:08:01 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 20:08:01 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID: 
On Wed, Jan 23, 2013 at 7:58 PM, shahzaib shahzaib wrote: > File-discriptors are tweaked to 700000 in sysctl.conf. However i'll follow > above guide to tweak sysctl to better performance and will see if it works. > Thanks for guiding me guyz. > > > On Wed, Jan 23, 2013 at 7:42 PM, Reinis Rozitis wrote: > >> If somebody can help me improving nginx config will be helpful to him. I >>> apologize for bad engish :D >>> >> >> It is not always the nginx that needs tuning. What about OS? >> >> Have you changed the default file descriptor limits? Something like >> ?Nginx stream works very well on 1000~1500 concurrent connections? might >> indicate you are hitting the default 1024 limit. >> While since some kernel versions linux does autotuning still sometimes >> tweaking it a bit helps a lot: http://www.cyberciti.biz/faq/** >> linux-tcp-tuning/ etc >> .. >> >> >> rr >> ______________________________**_________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/**mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jan 23 15:12:37 2013 From: nginx-forum at nginx.us (sdeancos) Date: Wed, 23 Jan 2013 10:12:37 -0500 Subject: About ignore_invalid_headers directive in SSL In-Reply-To: <201301231835.47587.vbart@nginx.com> References: <201301231835.47587.vbart@nginx.com> Message-ID: <6231d8bbddcdc5e9a6c63caa41daa96c.NginxMailingListEnglish@forum.nginx.org> Thanks very much for all! Effectively, it was because I had another virtualhost and was not putting the flag. Now it working! Thanks!! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235422,235474#msg-235474 From nginx-forum at nginx.us Wed Jan 23 15:21:40 2013 From: nginx-forum at nginx.us (skechboy) Date: Wed, 23 Jan 2013 10:21:40 -0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> >From your output I can see that it isn't IO issue, I wish I could help you more. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235447,235476#msg-235476 From shahzaib.cb at gmail.com Wed Jan 23 15:30:09 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 20:30:09 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sketchboy, i sent you the output of only 1000 concurrent connections because it wasn't peak hours of traffic. I'll send you the output of iostat 1 when concurrent connections will hit to 2000+ in next hour. Please keep in touch cause i need to resolve this issue :( On Wed, Jan 23, 2013 at 8:21 PM, skechboy wrote: > From your output I can see that it isn't IO issue, I wish I could help you > more. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,235447,235476#msg-235476 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jan 23 16:05:47 2013 From: nginx-forum at nginx.us (middleforkgis) Date: Wed, 23 Jan 2013 11:05:47 -0500 Subject: SOLVED Re: why is nginx binding to 0.0.0.0:80 when I specify explicit IPs to listen on? 
In-Reply-To: <4C205300-DB8D-4E09-BBDF-603B6C362E9B@sysoev.ru> References: <4C205300-DB8D-4E09-BBDF-603B6C362E9B@sysoev.ru> Message-ID: SOLVED Thank you Igor, you solved the issue for me. I had one non-standard entry in my sites-available: This is how I found it: #for i in `ls`; do echo $i; grep listen $i; done site1 listen 66.113.100.140:80; site2 listen 66.113.100.140:80; site3 listen 66.113.100.140:80; site4 listen 66.113.100.140:80; site5 listen 66.113.100.140:80; site6 listen 66.113.100.140:80; site7 site8 listen 66.113.100.140:80; site9 listen 66.113.100.140:80; site10 listen 66.113.100.140:80; everybody has a 'listen' directive except the entry for 'site7' #netstat -pant |grep nginx tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 19915/nginx tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN 19915/nginx #rm site7 #/etc/init.d/nginx restart netstat -pant |grep nginx tcp 0 0 66.113.100.140:80 0.0.0.0:* LISTEN 21884/nginx tcp 0 0 66.113.100.140:81 0.0.0.0:* LISTEN 21884/nginx SOLVED! Thank you for your help! 
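[Editorial note] The root cause is worth spelling out: a server block with no listen directive listens on *:80 by default (or *:8000 when nginx runs without superuser privileges), so the one site file lacking a listen line is what produced the 0.0.0.0:80 socket. A sketch of the offending pattern and the fix; the server_name and root are hypothetical, only the IP is from the post:

```nginx
# Offending pattern: no listen directive, so nginx binds *:80
# (0.0.0.0:80) for this server block.
server {
    server_name site7.example.com;   # hypothetical name
    root /var/www/site7;             # hypothetical root
}

# Fix: bind explicitly, like the other site files do.
server {
    listen 66.113.100.140:80;
    server_name site7.example.com;
    root /var/www/site7;
}
```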
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235428,235479#msg-235479 From shahzaib.cb at gmail.com Wed Jan 23 16:51:43 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 23 Jan 2013 21:51:43 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Following is the output of 3000+ concurrent connections on iostat 1 command :- avg-cpu: %user %nice %system %iowait %steal %idle 1.72 2.96 0.47 0.12 0.00 94.73 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 22.47 1988.92 733.04 518332350 191037238 avg-cpu: %user %nice %system %iowait %steal %idle 0.39 0.00 0.91 0.20 0.00 98.50 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 22.00 2272.00 0.00 2272 0 avg-cpu: %user %nice %system %iowait %steal %idle 0.46 0.00 0.91 0.07 0.00 98.57 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 23.00 864.00 48.00 864 48 avg-cpu: %user %nice %system %iowait %steal %idle 0.39 0.00 0.72 0.33 0.00 98.56 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 60.00 3368.00 104.00 3368 104 avg-cpu: %user %nice %system %iowait %steal %idle 0.20 0.00 0.65 0.20 0.00 98.95 On Wed, Jan 23, 2013 at 8:30 PM, shahzaib shahzaib wrote: > Sketchboy, i sent you the output of only 1000 concurrent connections > because it wasn't peak hours of traffic. I'll send you the output of iostat > 1 when concurrent connections will hit to 2000+ in next hour. Please keep > in touch cause i need to resolve this issue :( > > > On Wed, Jan 23, 2013 at 8:21 PM, skechboy wrote: > >> From your output I can see that it isn't IO issue, I wish I could help you >> more. 
>> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,235447,235476#msg-235476 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Wed Jan 23 18:07:53 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 23 Jan 2013 19:07:53 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: , <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org>, , Message-ID: Can you send us a 20+ lines of output from "vmstat 1" under this load? Also, what exact linux kernel are you running ("cat /proc/version")? ________________________________ > Date: Wed, 23 Jan 2013 21:51:43 +0500 > Subject: Re: Nginx flv stream gets too slow on 2000 concurrent connections > From: shahzaib.cb at gmail.com > To: nginx at nginx.org > > Following is the output of 3000+ concurrent connections on iostat 1 > command :- > > avg-cpu: %user %nice %system %iowait %steal %idle > 1.72 2.96 0.47 0.12 0.00 94.73 > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > sda 22.47 1988.92 733.04 518332350 191037238 > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.39 0.00 0.91 0.20 0.00 98.50 > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > sda 22.00 2272.00 0.00 2272 0 > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.46 0.00 0.91 0.07 0.00 98.57 > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > sda 23.00 864.00 48.00 864 48 > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.39 0.00 0.72 0.33 0.00 98.56 > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > sda 60.00 3368.00 104.00 3368 104 > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.20 0.00 0.65 0.20 0.00 98.95 > > > > On Wed, Jan 23, 2013 at 8:30 PM, shahzaib shahzaib > > wrote: > Sketchboy, i sent you the output of 
only 1000 concurrent connections > because it wasn't peak hours of traffic. I'll send you the output of > iostat 1 when concurrent connections will hit to 2000+ in next hour. > Please keep in touch cause i need to resolve this issue :( > > > On Wed, Jan 23, 2013 at 8:21 PM, skechboy > > wrote: > From your output I can see that it isn't IO issue, I wish I could help > you > more. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,235447,235476#msg-235476 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jan 23 18:37:08 2013 From: nginx-forum at nginx.us (automatix) Date: Wed, 23 Jan 2013 13:37:08 -0500 Subject: RegEx VHost name and the default VHost Message-ID: <571d808ff2acc6f97d62438494354442.NginxMailingListEnglish@forum.nginx.org> Hello! VHost names with RegEx are an absolutely amazing feature of nginx. I love it! :) But now I've got an issue with it. On my VM I have a VHost template ax-common-vhost and use it for VHosts like project.area.loc, so the server_name rule is: server_name ~^(.*)\.(.*)\.loc$; It works fine and has already saved me much time. But the default VHost seems to have trouble with my template. When a site that uses it is active, the default VHost also tries to use the template. 
Example: /etc/nginx/sites-available: > default -- root /usr/share/nginx/html -- server_name localhost; > test.sandbox.loc -- include /etc/nginx/sites-available/ax-common-vhost; > ax-common-vhost -- server_name ~^(.*)\.(.*)\.loc$; -- if ($host ~ ^(.*)\.(.*)\.loc$) { set $project $1; set $area $2; set $folder "$area/$project"; set $domain "$project.$area.loc"; } -- root /var/www/$folder/; -- test.sandbox.loc (based on the ax-common-vhost) /etc/nginx/sites-enabled: > default > test.sandbox.loc When the default VHost is accessed (via the VM's IP), an error occurs: 2013/01/23 18:41:19 [error] 4051#0: *1 directory index of "/var/www//" is forbidden, client: 192.168.56.1, server: ~^(.*)\.(.*)\.loc$, request: "GET / HTTP/1.1", host: "192.168.56.101" When I change the server root rule in the template, e.g. to /var/www/, and place a test file (index.html) into my webroot folder, it is displayed. That means: Nginx uses my template for the default host. But "192.168.56.101" cannot be matched by "^(.*)\.(.*)\.loc$"! Is it a bug? 
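[Editorial note] This is default-server selection rather than a bug: when the Host header (here, the bare IP 192.168.56.101) matches no server_name, nginx serves the request from the default server for that listen socket, which, absent an explicit default_server, is simply the first server block loaded. server_name is only used to choose a server, never to refuse a request. A sketch of the usual fix; the second block uses named captures as an alternative to the original if-based template, not a verbatim copy:

```nginx
# Make the fallback vhost the explicit default, so requests whose Host
# matches no server_name (e.g. the VM's bare IP) land here instead of
# falling into the regex template.
server {
    listen 80 default_server;   # "default" in nginx before 0.8.21
    server_name localhost;
    root /usr/share/nginx/html;
}

# The regex template, using named captures (supported in modern
# nginx/PCRE) instead of an "if" block to set the variables.
server {
    listen 80;
    server_name ~^(?<project>.*)\.(?<area>.*)\.loc$;
    root /var/www/$area/$project/;
}
```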
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235490,235490#msg-235490 From shahzaib.cb at gmail.com Wed Jan 23 19:00:36 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 00:00:36 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Following is the output of 2200+ concurrent connections and kernel version is 2.6.32 :- Linux 2.6.32-279.19.1.el6.x86_64 (DNTX005.local) 01/23/2013 _x86_64_ (16 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 1.75 3.01 0.49 0.13 0.00 94.63 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 23.27 2008.64 747.29 538482374 200334422 avg-cpu: %user %nice %system %iowait %steal %idle 0.97 0.00 1.10 0.19 0.00 97.74 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 30.00 2384.00 112.00 2384 112 avg-cpu: %user %nice %system %iowait %steal %idle 0.13 0.00 0.52 0.13 0.00 99.22 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 21.00 1600.00 8.00 1600 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.19 0.00 0.45 0.26 0.00 99.10 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 37.00 2176.00 8.00 2176 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.45 0.00 0.58 0.19 0.00 98.77 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 24.00 1192.00 8.00 1192 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.32 0.00 0.45 0.19 0.00 99.03 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 29.00 2560.00 8.00 2560 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.32 0.00 0.65 0.19 0.00 98.84 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 35.00 2584.00 152.00 2584 152 avg-cpu: %user %nice %system %iowait %steal %idle 0.26 0.00 0.39 0.39 0.00 98.96 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 25.00 1976.00 8.00 1976 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.32 0.00 0.52 0.39 0.00 98.77 Device: tps 
Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 33.00 1352.00 8.00 1352 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.26 0.00 0.58 0.26 0.00 98.90 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 28.00 2408.00 8.00 2408 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.45 0.00 0.65 0.06 0.00 98.84 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 37.00 1896.00 8.00 1896 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.71 0.00 0.97 0.13 0.00 98.19 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 33.00 2600.00 64.00 2600 64 avg-cpu: %user %nice %system %iowait %steal %idle 0.32 0.00 0.65 0.26 0.00 98.77 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 20.00 1520.00 8.00 1520 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.19 0.00 0.39 0.19 0.00 99.22 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 3088.00 80.00 3088 80 avg-cpu: %user %nice %system %iowait %steal %idle 0.26 0.00 0.91 0.26 0.00 98.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 48.00 1328.00 8.00 1328 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.32 0.00 0.32 0.26 0.00 99.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 32.00 1528.00 8.00 1528 8 avg-cpu: %user %nice %system %iowait %steal %idle 0.45 0.00 0.58 0.39 0.00 98.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 35.00 1624.00 72.00 1624 72 avg-cpu: %user %nice %system %iowait %steal %idle 0.39 0.00 0.58 0.19 0.00 98.84 On Wed, Jan 23, 2013 at 11:07 PM, Lukas Tribus wrote: > > Can you send us a 20+ lines of output from "vmstat 1" under this load? > Also, what exact linux kernel are you running ("cat /proc/version")? 
> > > ________________________________ > > Date: Wed, 23 Jan 2013 21:51:43 +0500 > > Subject: Re: Nginx flv stream gets too slow on 2000 concurrent > connections > > From: shahzaib.cb at gmail.com > > To: nginx at nginx.org > > > > Following is the output of 3000+ concurrent connections on iostat 1 > > command :- > > > > avg-cpu: %user %nice %system %iowait %steal %idle > > 1.72 2.96 0.47 0.12 0.00 94.73 > > > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > > sda 22.47 1988.92 733.04 518332350 191037238 > > > > avg-cpu: %user %nice %system %iowait %steal %idle > > 0.39 0.00 0.91 0.20 0.00 98.50 > > > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > > sda 22.00 2272.00 0.00 2272 0 > > > > avg-cpu: %user %nice %system %iowait %steal %idle > > 0.46 0.00 0.91 0.07 0.00 98.57 > > > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > > sda 23.00 864.00 48.00 864 48 > > > > avg-cpu: %user %nice %system %iowait %steal %idle > > 0.39 0.00 0.72 0.33 0.00 98.56 > > > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > > sda 60.00 3368.00 104.00 3368 104 > > > > avg-cpu: %user %nice %system %iowait %steal %idle > > 0.20 0.00 0.65 0.20 0.00 98.95 > > > > > > > > On Wed, Jan 23, 2013 at 8:30 PM, shahzaib shahzaib > > > wrote: > > Sketchboy, i sent you the output of only 1000 concurrent connections > > because it wasn't peak hours of traffic. I'll send you the output of > > iostat 1 when concurrent connections will hit to 2000+ in next hour. > > Please keep in touch cause i need to resolve this issue :( > > > > > > On Wed, Jan 23, 2013 at 8:21 PM, skechboy > > > wrote: > > From your output I can see that it isn't IO issue, I wish I could help > you > > more. 
> > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,235447,235476#msg-235476< > http://forum.nginx.org/read.php?2%2c235447%2c235476#msg-235476> > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ nginx mailing list > > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Jan 23 19:03:59 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 00:03:59 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: And also the 20+ lines of vmstat are given below with 2.6.32 kernal :- procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 259020 49356 31418328 0 0 64 24 0 4 5 0 95 0 0 1 0 0 248100 49356 31418564 0 0 704 4 35809 3159 0 1 99 0 0 0 0 0 245248 49364 31419856 0 0 1340 48 35114 3217 0 0 99 0 0 1 0 0 243884 49364 31421084 0 0 940 4 35176 3106 0 0 99 0 0 0 0 0 243512 49364 31422152 0 0 812 4 35837 3204 0 0 99 0 0 0 0 0 241608 49364 31423056 0 0 1304 4 35585 3177 1 1 98 0 0 1 0 0 241076 49364 31424132 0 0 1004 4 35774 3199 0 0 99 0 0 0 0 0 241332 49372 31424644 0 0 724 76 35526 3203 0 0 99 0 0 0 0 0 240464 49372 31425376 0 0 776 4 35968 3162 0 0 99 0 0 0 1 0 238236 49372 31426244 0 0 652 4 35705 3131 0 0 99 0 0 0 0 0 234632 49372 31426924 0 0 1088 4 36220 3309 0 1 99 0 0 0 0 0 233640 49372 31428492 0 0 872 4 35663 3235 0 1 99 0 0 0 0 0 232896 49376 31429016 0 0 1272 44 35403 
3179 0 0 99 0 0 1 0 0 231024 49376 31430064 0 0 528 4 34713 3238 0 0 99 0 0 0 0 0 239644 49376 31430564 0 0 808 4 35493 3143 0 1 99 0 0 3 0 0 241704 49376 31431372 0 0 612 4 35610 3400 1 1 97 0 0 1 0 0 244092 49376 31432028 0 0 280 4 35787 3333 1 1 99 0 0 2 0 0 244348 49376 31433232 0 0 1260 8 34700 3072 0 0 99 0 0 0 0 0 243908 49384 31433728 0 0 512 32 35019 3145 0 1 99 0 0 1 0 0 241104 49384 31435004 0 0 1440 4 35586 3211 0 1 99 0 0 0 0 0 234600 49384 31435476 0 0 868 4 35240 3235 0 1 99 0 0 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 1 0 0 233656 49384 31436376 0 0 704 4 35297 3126 0 1 99 0 0 0 0 0 233284 49384 31437176 0 0 192 4 35022 3202 0 0 99 0 0 0 0 0 228952 49392 31437336 0 0 868 32 34986 3211 0 1 99 0 0 0 0 0 232176 49392 31438124 0 0 448 4 35785 3294 0 1 99 0 0 0 0 0 230076 49392 31438664 0 0 1052 4 35532 3297 1 1 98 0 0 1 0 0 231184 49392 31439608 0 0 436 4 34967 3177 0 1 99 0 0 1 0 0 224300 49392 31440044 0 0 624 4 34577 3216 0 1 99 0 0 0 0 0 223748 49396 31440664 0 0 460 44 34415 3155 0 0 99 0 0 1 0 0 223260 49396 31441612 0 0 768 4 35287 3194 0 1 99 0 0 0 0 0 230464 49396 31441996 0 0 772 4 35140 3208 0 0 99 0 0 1 0 0 225504 49396 31442668 0 0 564 4 35316 3133 0 0 99 0 0 On Thu, Jan 24, 2013 at 12:00 AM, shahzaib shahzaib wrote:
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Jan 23 19:17:01 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 23 Jan 2013 23:17:01 +0400 Subject: RegEx VHost name and the default VHost In-Reply-To: <571d808ff2acc6f97d62438494354442.NginxMailingListEnglish@forum.nginx.org> References: <571d808ff2acc6f97d62438494354442.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301232317.01411.vbart@nginx.com> On Wednesday 23 January 2013 22:37:08 automatix wrote: > Hello! > > VHost names with RegEx is an absolutely amazing feature of nginx. I > llllllllllllllove it! :) But now I've got an issue with it. 
> > On my VM I have a VHost template, ax-common-vhost, and use it for VHosts > like project.area.loc, so the server_name rule is: > > server_name ~^(.*)\.(.*)\.loc$; > > It works fine and has already saved me much time. But the default VHost > seems to get in trouble with my template. When a site that uses it is > active, the default VHost also tries to use the template. > > Example: > > /etc/nginx/sites-available: > > default > > -- root /usr/share/nginx/html > -- server_name localhost; > > > test.sandbox.loc > > -- include /etc/nginx/sites-available/ax-common-vhost; > > > ax-common-vhost > > -- server_name ~^(.*)\.(.*)\.loc$; > -- if ($host ~ ^(.*)\.(.*)\.loc$) { > set $project $1; > set $area $2; This is an ugly equivalent of: server_name ~^(?<project>.+)\.(?<area>.+)\.loc$; > > set $folder "$area/$project"; > set $domain "$project.$area.loc"; And the $domain variable is effectively equal to $host. What's the point? > } > -- root /var/www/$folder/; root /var/www/$area/$project; > -- test.sandbox.loc (based on the ax-common-vhost) > > /etc/nginx/sites-enabled: > > default > > test.sandbox.loc > > When I access the server's default VHost (via the IP of the VM), an error occurs: > > 2013/01/23 18:41:19 [error] 4051#0: *1 directory index of "/var/www//" is > forbidden, client: 192.168.56.1, server: ~^(.*)\.(.*)\.loc$, request: "GET > / HTTP/1.1", host: "192.168.56.101" > > When I change the server root rule in the template, e.g. to /var/www/, and > place a test file (index.html) into my webroot folder, it is displayed. > > That means: Nginx uses my template for the default host. But > "192.168.56.101" cannot be matched by "^(.*)\.(.*)\.loc$"! Is it a bug? > Are you sure that your "default" is actually the default server configuration for the listening address:port? wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From luky-37 at hotmail.com Wed Jan 23 19:19:11 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 23 Jan 2013 20:19:11 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: , <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org>, , , , , Message-ID: The box doesn't seem to have problems with that kind of load, not even the IO side is struggling, I guess the 31GB of page cache is your life saver here. I would check for recent events in dmesg output. Then I would analyze the network side. Like how much is you eth0 loaded (nload)? Perhaps you are simply saturating your Gbit/s pipe? what do you see with "ifconfig eth0"? Did you talk to the network operators if this kind of load can cause drops/packet loss? From rainer at ultra-secure.de Wed Jan 23 19:29:22 2013 From: rainer at ultra-secure.de (Rainer Duffner) Date: Wed, 23 Jan 2013 20:29:22 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Am 23.01.2013 um 20:03 schrieb shahzaib shahzaib : > And also the 20+ lines of vmstat are given below with 2.6.32 kernal :- There was a thread recently (well, last year sometimes) with a link to a blog in Chinese with sysctl-settings etc. It had tunings for 2k concurrent connections. Maybe somebody can dig it out? From nginx-forum at nginx.us Wed Jan 23 19:41:36 2013 From: nginx-forum at nginx.us (automatix) Date: Wed, 23 Jan 2013 14:41:36 -0500 Subject: RegEx VHost name and the default VHost In-Reply-To: <201301232317.01411.vbart@nginx.com> References: <201301232317.01411.vbart@nginx.com> Message-ID: <525a30b8c043c90ac60ae112b5873c0d.NginxMailingListEnglish@forum.nginx.org> Thank you for your reply! 
> server_name ~^(?.+)\.(?.+)\.loc$; Yes, you are right, it's the better way to extract values from a RegEx into vars. > Are you sure that your "default" is actually the default server configuration for the listening address:port? No, I'm not. How can I check it? Ilya Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235490,235497#msg-235497 From shahzaib.cb at gmail.com Wed Jan 23 20:21:47 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 01:21:47 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: The load(nload) of 1500+ concurrent connections with 1Gbps port is : Curr: 988.95 MBit/s ## ## ## ## ## ## ## ## # Avg: 510.84 MBit/s ## ## ## ## ## ## ## ## # Min: 0.00 Bit/s ## ## ## ## ## ## ## ## # Max: 1005.17 MBit/s ## ## ## ## ## ## ## ## # Ttl: 10017.30 GByte What should i see into dmesg to analyse the problem ? I'll also send you the nload when the traffic will hit to its peak, at this time its average traffic. The following is ifconfig eth0 output :- eth0 Link encap:Ethernet HWaddr X:X:X:X:X:X inet addr:X.X.X.X Bcast:X.X.X.X Mask:255.255.255.192 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3713630148 errors:0 dropped:0 overruns:0 frame:0 TX packets:7281199166 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:260499010337 (242.6 GiB) TX bytes:10767156835559 (9.7 TiB) Memory:fbe60000-fbe80000 On Thu, Jan 24, 2013 at 12:29 AM, Rainer Duffner wrote: > > Am 23.01.2013 um 20:03 schrieb shahzaib shahzaib : > > > And also the 20+ lines of vmstat are given below with 2.6.32 kernal :- > > > There was a thread recently (well, last year sometimes) with a link to a > blog in Chinese with sysctl-settings etc. > It had tunings for 2k concurrent connections. > > Maybe somebody can dig it out? 
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Jan 23 20:22:03 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 24 Jan 2013 00:22:03 +0400 Subject: RegEx VHost name and the default VHost In-Reply-To: <525a30b8c043c90ac60ae112b5873c0d.NginxMailingListEnglish@forum.nginx.org> References: <201301232317.01411.vbart@nginx.com> <525a30b8c043c90ac60ae112b5873c0d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301240022.04071.vbart@nginx.com> On Wednesday 23 January 2013 23:41:36 automatix wrote: > Thank you for your reply! > > > server_name ~^(?.+)\.(?.+)\.loc$; > > Yes, you are right, it's the better way to extract values from a RegEx into > vars. > > > Are you sure that your "default" is actually the default server > > configuration for the listening address:port? > No, I'm not. How can I check it? > You should check the listen directive. Please see this article: http://nginx.org/en/docs/http/request_processing.html and also the documentation: http://nginx.org/r/listen wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From shahzaib.cb at gmail.com Wed Jan 23 20:27:29 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 01:27:29 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: No i didn't concerned with network operators yet. And if someone can get me that chinese blog for setting 2k concurrent connections using sysctl-settings. So far i used this guide to tune kernal. 
http://www.cyberciti.biz/faq/linux-tcp-tuning/ On Thu, Jan 24, 2013 at 1:21 AM, shahzaib shahzaib wrote:
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m6rkalan at gmail.com Wed Jan 23 20:30:13 2013 From: m6rkalan at gmail.com (Mark Alan) Date: Wed, 23 Jan 2013 20:30:13 +0000 Subject: SOLVED Re: Problem with return 302 redirection, with Nginx 1.3.11 + Drupal 7 In-Reply-To: <87ham83uvt.wl%appa@perusio.net> References: <50ffcd91.424cb40a.72a3.42e1@mx.google.com> <87ham83uvt.wl%appa@perusio.net> Message-ID: <51004857.6267b40a.4540.ffffdd6f@mx.google.com> On Wed, 23 Jan 2013 14:13:10 +0100, António P. P. Almeida wrote: > > location ~* ^/(\?q=)?(?:user|admin|contact$) { > > return 302 https://$host$request_uri; > > } > > Locations don't match the query string part. Oh no... bitten again by that characteristic of Location. One of those (rare) cases where we must use an IF: # SOLVED: to remove '?q=' from a query use: if ($args ~ "q=(?<q>.*)?") { return 302 $scheme://$host/$q; } Thank you António. M. From shahzaib.cb at gmail.com Wed Jan 23 20:39:12 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 01:39:12 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I am seeing the following messages in the dmesg output :- TCP: Peer 79.211.64.145:54649/80 unexpectedly shrunk window 347253187:347272955 (repaired) TCP: Peer 79.211.64.145:54649/80 unexpectedly shrunk window 347253187:347272955 (repaired) TCP: Peer 79.211.64.145:54649/80 unexpectedly shrunk window 347253187:347272955 (repaired) TCP: Peer 81.155.221.33:53075/80 unexpectedly shrunk window 1986341072:1986342532 (repaired) TCP: Peer 81.155.221.33:53075/80 unexpectedly shrunk window 1986341072:1986342532 (repaired) TCP: Peer 81.155.221.33:53075/80 unexpectedly shrunk window 1986341072:1986342532 (repaired) TCP: Peer 79.211.64.145:54709/80 unexpectedly shrunk window 1128744179:1128773611 (repaired) TCP: Peer 79.211.64.145:54709/80 unexpectedly shrunk window 1128744179:1128773611 (repaired) TCP: Peer 
79.211.64.145:54709/80 unexpectedly shrunk window 1128744179:1128773611 (repaired) TCP: Peer 79.211.64.145:54709/80 unexpectedly shrunk window 1128744179:1128773611 (repaired) TCP: Peer 79.211.64.145:54709/80 unexpectedly shrunk window 1128744179:1128773611 (repaired) TCP: Peer 79.211.64.145:54709/80 unexpectedly shrunk window 1128744179:1128773611 (repaired) Can somebody explain what this "shrunk window" thing is? On Thu, Jan 24, 2013 at 1:27 AM, shahzaib shahzaib wrote:
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From hartz.geoffrey at gmail.com Wed Jan 23 20:42:11 2013 From: hartz.geoffrey at gmail.com (Geoffrey Hartz) Date: Wed, 23 Jan 2013 21:42:11 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Just to say... you are already hitting 1 Gbit/s on a 1 Gbit port... it's normal that it is slow, no? 
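Geoffrey's point can be checked with quick arithmetic against the nload figures quoted earlier in the thread (the connection count is taken from the subject line; per-stream bitrate needs are an assumption, but a watchable FLV stream typically wants several hundred kbit/s or more):

```python
# Per-connection bandwidth share at the figures reported in this thread.
link_mbit = 1000.0    # 1 Gbit/s port
curr_mbit = 988.95    # "Curr" reading from the nload output in the thread
connections = 2000    # concurrent connections from the subject line

utilization = curr_mbit / link_mbit
per_conn_kbit = curr_mbit * 1000 / connections

print(f"link utilization: {utilization:.1%}")          # ~98.9%
print(f"per connection:   {per_conn_kbit:.0f} kbit/s") # ~494 kbit/s
```

At roughly half a megabit per viewer, the uplink itself is the bottleneck, not nginx or the disks.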
2013/1/23 shahzaib shahzaib : > I am seeing the following messages on dmesg output :-
> [...]
-- Geoffrey HARTZ From nginx-forum at nginx.us Wed Jan 23 20:54:22 2013 From: nginx-forum at nginx.us (automatix) Date: Wed, 23 Jan 2013 15:54:22 -0500 Subject: RegEx VHost name and the default VHost In-Reply-To: <201301240022.04071.vbart@nginx.com> References: <201301240022.04071.vbart@nginx.com> Message-ID: <569e8f8cf856427aa6aaf9bdf34d7e1a.NginxMailingListEnglish@forum.nginx.org> Thank you very much for the useful links and the tip. Yes, my "default" vhost was not actually the default. Now I have made it the default explicitly with the default_server flag, and everything works fine! Thank you, Valentin! Since "the default server is the first one" (http://nginx.org/en/docs/http/request_processing.html), the problem must have been that the server couldn't find a vhost for the request and just took the first vhost. But what is "the first" vhost when all vhosts are stored in different files? In which order are the vhost files processed? 
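The fix described in this thread boils down to a one-line change; a sketch (paths and names follow the poster's Debian-style layout and are illustrative):

```nginx
# /etc/nginx/sites-available/default (illustrative)
server {
    # default_server makes this block the fallback for requests whose
    # Host header matches no other server_name (e.g. a bare IP address).
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;
}
```

Without default_server, nginx falls back to the first server block defined for that address:port, which is why the regex template was being used for the bare-IP request.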
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235490,235505#msg-235505 From nginx-forum at nginx.us Wed Jan 23 21:00:59 2013 From: nginx-forum at nginx.us (automatix) Date: Wed, 23 Jan 2013 16:00:59 -0500 Subject: [resolved] Re: issue with default vhost In-Reply-To: <07a72206a782a019ffe7a13b032e7ccd.NginxMailingListEnglish@forum.nginx.org> References: <07a72206a782a019ffe7a13b032e7ccd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3f53d694c94f2f6e486a69ecc0b8b5b7.NginxMailingListEnglish@forum.nginx.org> This issue doesn't appear anymore, since the problem I described in my thread "RegEx VHost name and the default VHost" (http://forum.nginx.org/read.php?2,235490) is resolved (solution: http://forum.nginx.org/read.php?2,235490,235500#msg-235500). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235411,235506#msg-235506 From nginx-forum at nginx.us Wed Jan 23 21:50:00 2013 From: nginx-forum at nginx.us (richardm) Date: Wed, 23 Jan 2013 16:50:00 -0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: Message-ID: <48b9c6895f0543716b76c51ce868ae86.NginxMailingListEnglish@forum.nginx.org> Was it this one? It refers to 2M connections and claimed success. http://rdc.taobao.com/blog/cs/?p=1062 The blog is in Chinese. I used Chrome and clicked on "Translate" to read it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235447,235507#msg-235507 From nginx-forum at nginx.us Wed Jan 23 21:50:48 2013 From: nginx-forum at nginx.us (double) Date: Wed, 23 Jan 2013 16:50:48 -0500 Subject: proxy immediately Message-ID: <48e040efb0fa3d313e1b5dd6acda6b1f.NginxMailingListEnglish@forum.nginx.org> Hello, Is there a chance to pass a large POST-request immediately to the upstream? The POST-request is cached by nginx. The Upstream starts parsing the request as soon as the request is fully uploaded. It is impossible to parse a large POST-request within a couple of seconds. 
Thanks a lot Marcus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235508,235508#msg-235508 From rainer at ultra-secure.de Wed Jan 23 21:51:44 2013 From: rainer at ultra-secure.de (Rainer Duffner) Date: Wed, 23 Jan 2013 22:51:44 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: <48b9c6895f0543716b76c51ce868ae86.NginxMailingListEnglish@forum.nginx.org> References: <48b9c6895f0543716b76c51ce868ae86.NginxMailingListEnglish@forum.nginx.org> Message-ID: Am 23.01.2013 um 22:50 schrieb "richardm" : > connections using sysctl-settings.> > > Was it this one? It refers to 2M connections and claimed success. > http://rdc.taobao.com/blog/cs/?p=1062 Yes, that's it. 2000k, not 2k. From contact at jpluscplusm.com Wed Jan 23 22:21:10 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 23 Jan 2013 22:21:10 +0000 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 23 January 2013 20:42, Geoffrey Hartz wrote: > Just to say... You are already hitting 1Gbits with a 1Gbit port... > It's normal that is slow... no? +1. I'm not seeing the problem here. Jonathan From jefftk at google.com Wed Jan 23 22:27:54 2013 From: jefftk at google.com (Jeff Kaufman) Date: Wed, 23 Jan 2013 17:27:54 -0500 Subject: ngx_pagespeed is now alpha Message-ID: Have you wanted to use mod_pagespeed to speed up your site, but been unable to because you don't use Apache? We've now finished the initial version of an Nginx port. You can see it in action, with examples of each of our optimizations, on our demonstration site: http://ngxpagespeed.com Keep in mind that this is alpha (early stage) software and most sites should wait until we release a beta version. 
When we have that ready we'll announce it here: https://groups.google.com/group/ngx-pagespeed-announce To try out the current version, see: https://github.com/pagespeed/ngx_pagespeed#how-to-build https://github.com/pagespeed/ngx_pagespeed#how-to-use If you run into problems, please let us know. We're happy to look at why ngx_pagespeed isn't working in your situation, and problems you run into probably affect more people than just you. Thanks to Otto van der Schaaf, Ben Noordhuis, Yao Weibin, Junmin Xiong, Jan-Willem Maessen, Jud Porter, Maksim Orlovich, Shawn Ligocki, and Joshua Marantz for their contributions and assistance. If you'd like to help out, send an email to ngx-pagespeed-discuss at googlegroups.com; there's lots to do! Jeff Kaufman From vbart at nginx.com Wed Jan 23 22:30:21 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 24 Jan 2013 02:30:21 +0400 Subject: RegEx VHost name and the default VHost In-Reply-To: <569e8f8cf856427aa6aaf9bdf34d7e1a.NginxMailingListEnglish@forum.nginx.org> References: <201301240022.04071.vbart@nginx.com> <569e8f8cf856427aa6aaf9bdf34d7e1a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301240230.22003.vbart@nginx.com> On Thursday 24 January 2013 00:54:22 automatix wrote: > Thank you very much for the usefull links and the tip. Yes, the vhost > default was not default. Now I've set it to default explicitly with the > flaf default_server and everything works fine! > > Thank you, Valentin! > > Since "default server is the first one" > (http://nginx.org/en/docs/http/request_processing.html), the problem must > have been, that the server couldn't find a vhost for the request and just > took the first vhost. But what ist "the first" vhost, when all vhosts are > stored in different files? > > In which order are the vhost files processed? > Actually there is no such thing like "the vhost files" in nginx. 
You probably mean those files included from nginx.conf by the "include" directive (see: http://nginx.org/r/include ). Before nginx 1.3.10 the order was arbitrary. Since version 1.3.10 they are sorted alphabetically on Unix systems. Please note, directories like "sites-enabled" and "sites-available" are not something common for nginx. In fact, they are created by the nginx package on some Linux systems because the maintainers of these packages find it convenient. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Jan 23 22:41:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Jan 2013 02:41:28 +0400 Subject: A problem with the keepalive module and the directive proxy_next_upstream In-Reply-To: References: <20130114105124.GI25043@mdounin.ru> Message-ID: <20130123224128.GK27423@mdounin.ru> Hello! On Wed, Jan 23, 2013 at 02:25:10PM +0800, ??? wrote: > I have removed the above code. It seems to work for us and there is no side > effect. And we have put it on our busy production boxes for a week. Just removing the code is the wrong thing to do, as nginx will no longer be able to retry if an upstream server closes the connection at the same time nginx decides to send a request into it, and there is only one upstream server configured in the given upstream block. [...] -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Jan 23 23:05:31 2013 From: nginx-forum at nginx.us (automatix) Date: Wed, 23 Jan 2013 18:05:31 -0500 Subject: [resolved] Re: RegEx VHost name and the default VHost In-Reply-To: <201301240230.22003.vbart@nginx.com> References: <201301240230.22003.vbart@nginx.com> Message-ID: <580e587aa68687f4d716c2dff37b32f9.NginxMailingListEnglish@forum.nginx.org> > directories like "sites-enabled" and > "sites-available" are not something common for nginx.
> In fact, they are created by the nginx package on some Linux systems > because the maintainers of these packages find it convenient. Good to know, thanks for the info! > Actually there is no such thing like "the vhost files" in nginx. > You probably mean those files included from nginx.conf by the > "include" directive (see: http://nginx.org/r/include ). I mean the files in "sites-available". I know that they are just included (via the links in "sites-enabled") into nginx.conf, but I find it clearer and more maintainable to use one file per vhost. OK, now everything is clear. Thanks a lot for your help! :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235490,235514#msg-235514 From luky-37 at hotmail.com Thu Jan 24 00:04:09 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 24 Jan 2013 01:04:09 +0100 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: >>> The load (nload) of 1500+ concurrent connections with a 1Gbps port is: Curr: 988.95 MBit/s >> Just to say... You are already hitting 1Gbits with a 1Gbit port... >> It's normal that is slow... no? >+1. I'm not seeing the problem here. Exactly, the issue is crystal clear. You are already hitting your max bandwidth with 1500+ concurrent connections; of course with 2000+ concurrent connections users will notice severe slowdowns. You don't have enough bandwidth to serve your clients. I suggest monitoring your eth0 links carefully and upgrading to multiple bonded 1Gig links, 10Gig links, or more servers.
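The arithmetic behind the advice above is easy to check: with the ~1 Gbit/s link and 2000 concurrent connections mentioned in the thread, each client gets roughly half a megabit. A quick sketch (the figures come from the thread; nothing else is assumed):

```shell
# Per-connection throughput when a single saturated link is shared evenly.
# Figures from the thread: ~1000 Mbit/s of capacity, 2000 concurrent viewers.
link_mbit=1000
connections=2000

# Mbit/s available to each client, and the same figure in KB/s.
per_conn_mbit=$(awk -v l="$link_mbit" -v c="$connections" 'BEGIN { printf "%.2f", l / c }')
per_conn_kbyte=$(awk -v l="$link_mbit" -v c="$connections" 'BEGIN { printf "%.1f", l / c * 1000 / 8 }')

# prints: per-connection: 0.50 Mbit/s (~62.5 KB/s)
echo "per-connection: ${per_conn_mbit} Mbit/s (~${per_conn_kbyte} KB/s)"
```

At roughly 0.5 Mbit/s per viewer, FLV playback at typical web bitrates will stall, which matches the reported symptoms.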
From scott_ribe at elevated-dev.com Thu Jan 24 00:11:55 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Wed, 23 Jan 2013 17:11:55 -0700 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Jan 23, 2013, at 5:04 PM, Lukas Tribus wrote: > I suggest to monitor your eth0 links carefully and upgrade to either multiple bonded 1Gig links, 10gig links or more servers. Or do what the cable companies do: compress your video until it fits, regardless of how bad the quality gets ;-) -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From stef at scaleengine.com Thu Jan 24 03:49:53 2013 From: stef at scaleengine.com (Stefan Caunter) Date: Wed, 23 Jan 2013 22:49:53 -0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Jan 23, 2013 at 7:04 PM, Lukas Tribus wrote: > >>>> The load(nload) of 1500+ concurrent connections with 1Gbps port is : Curr: 988.95 MBit/s >>> Just to say... You are already hitting 1Gbits with a 1Gbit port... >>> It's normal that is slow... no? >>+1. I'm not seeing the problem here. > > Exactly, the issue is crystal clear. You are already hitting your max bandwidth with 1500+ concurrent connections, of course with 2000+ concurrent connections users will notice severe slowdowns. You have not enough bandwidth to serve your clients. > > I suggest to monitor your eth0 links carefully and upgrade to either multiple bonded 1Gig links, 10gig links or more servers. bah, you need a CDN.
From shahzaib.cb at gmail.com Thu Jan 24 06:57:38 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 11:57:38 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks for helping me out, guys. I'll tune my content server according to that Chinese guide. Please keep in mind I had only sent the output of one of five content servers. The other servers' load (nload) is not that high; they just hit 500Mbit/s on 2000 concurrent connections. However, I'll monitor the eth0 port more closely at peak time and will let you know the status. Thanks :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 24 07:05:47 2013 From: nginx-forum at nginx.us (amozz) Date: Thu, 24 Jan 2013 02:05:47 -0500 Subject: Using HttpMapModule with proxy_pass got 502 In-Reply-To: <20130123142113.GC27423@mdounin.ru> References: <20130123142113.GC27423@mdounin.ru> Message-ID: Thank you Maxim. I added the resolver, and it works now! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235450,235522#msg-235522 From shahzaib.cb at gmail.com Thu Jan 24 11:30:13 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 16:30:13 +0500 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have got an idea for preventing users from downloading videos from our site, so they can only stream videos, which will save our bandwidth. We have used the nginx directive "limit_conn 1" so nobody will be able to download the stream.
But this has a major drawback for streaming: if four users behind a LAN share the same IP, the other three won't be able to stream because of the first user who is already streaming; only when he finishes will streaming resume for the second user, and so on. Can someone guide me on whether we can prevent downloading while keeping streaming working? On Thu, Jan 24, 2013 at 11:57 AM, shahzaib shahzaib wrote: > Thanks for helping me out guyz. I'll tune my content server according to > that chinese guide. Please keep in mind i had only sent the output of one > of Five content servers. Other servers load(nload) is not that high and > they just hit 500Mbit/s on 2000 concurrent connections. However i'll > monitor eth0 port more closely on peak time and will let you know the > status. Thanks :) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haifeng.813 at gmail.com Thu Jan 24 12:28:06 2013 From: haifeng.813 at gmail.com (Liu Haifeng) Date: Thu, 24 Jan 2013 20:28:06 +0800 Subject: Is there any other way to trigger log reopen beside kill -USR1? Message-ID: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> Hi all, In the common case, people rotate the access log like this:

mv access.log access.XXX.log
kill -USR1

In my case, I have to do something like this:

if [ -f "access.log" ]; then
    mv access.log access.20130121.log
fi
kill -USR1
{wait until access.log was generated}
mv access.log access.20130122.log

My goal is to have the "current" log file renamed with the date pattern immediately, not after one day or some other period. Well, my script seems OK, but for a production script I still worry: is there any "unexpected" trigger (other than sending the USR1 signal externally) that can make nginx reopen the log file? Will there be any internal reopen action in the future?
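The steps in the script above can be wrapped into a small shell function that derives the date pattern automatically. This is only a sketch: the log directory and pid-file path are assumptions, not taken from the thread, and should be adjusted to the actual install.

```shell
# Rotate nginx's access log by date, then ask nginx to reopen its logs.
# LOGDIR and PIDFILE are hypothetical defaults -- adjust for your setup.
LOGDIR=${LOGDIR:-/var/log/nginx}
PIDFILE=${PIDFILE:-/var/run/nginx.pid}

rotate_access_log() {
    today=$(date +%Y%m%d)
    if [ -f "$LOGDIR/access.log" ]; then
        mv "$LOGDIR/access.log" "$LOGDIR/access.$today.log"
    fi
    # USR1 tells the nginx master process to reopen its log files,
    # recreating access.log; skip the signal if nginx isn't running.
    if [ -f "$PIDFILE" ]; then
        kill -USR1 "$(cat "$PIDFILE")"
    fi
}
```

Because the rename happens before the signal, the dated file is complete the moment USR1 is delivered, which is the "immediately" behaviour asked for above.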
From nginx-forum at nginx.us Thu Jan 24 12:31:22 2013 From: nginx-forum at nginx.us (pieter@lxnex.com) Date: Thu, 24 Jan 2013 07:31:22 -0500 Subject: Server won't start AND Nginx as reverse proxy In-Reply-To: <91c46b03d0ec7b4093603089f372fc7f.NginxMailingListEnglish@forum.nginx.org> References: <91c46b03d0ec7b4093603089f372fc7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi guys, Concerning the first issue I had, I managed to get Nginx to start up when I reboot the system, but the only way I could get it to work was by adding a sleep of a couple of seconds to the /etc/init.d/nginx startup script. Is this acceptable practice? Thanks, Pieter Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234664,235536#msg-235536 From appa at perusio.net Thu Jan 24 12:56:37 2013 From: appa at perusio.net (António P. P. Almeida) Date: Thu, 24 Jan 2013 13:56:37 +0100 Subject: Server won't start AND Nginx as reverse proxy In-Reply-To: References: <91c46b03d0ec7b4093603089f372fc7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87d2wu4u4a.wl%appa@perusio.net> On 24 Jan 2013 13h31 CET, nginx-forum at nginx.us wrote: > Hi guys, > > Concerning the first issue I had, I managed to get Nginx to now > startup when I reboot the system, but the only way I could get it to > work was by adding a sleep of a couple of seconds to the > /etc/init.d/nginx startup script. > > Is this acceptable practice? No. Something is wrong with your setup. --- appa From andrejaenisch at googlemail.com Thu Jan 24 13:00:10 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Thu, 24 Jan 2013 14:00:10 +0100 Subject: Is there any other way to trigger log reopen beside kill -USR1? In-Reply-To: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> References: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> Message-ID: 2013/1/24 Liu Haifeng : > My goal is make the "current" log file renamed with the date pattern immediately, not after one day or other period.
My first thought would be creating a symbolic link (see "man ln") from the current log to the log-with-date-within-filename. You would just have to change the symlink then? For example like this: http://www.unix.com/302239409-post5.html So let access.log point to access.$(date "+%Y%m%d").log (see https://en.wikipedia.org/wiki/Date_%28Unix%29 ). But I have no nginx running here at the moment. From farseas at gmail.com Thu Jan 24 13:44:24 2013 From: farseas at gmail.com (Bob S.) Date: Thu, 24 Jan 2013 08:44:24 -0500 Subject: nginx.conf size limit Message-ID: Is there a size limit for nginx.conf? -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Thu Jan 24 13:47:10 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 24 Jan 2013 17:47:10 +0400 Subject: nginx.conf size limit In-Reply-To: References: Message-ID: <71481F68-04CA-4408-8254-83614D5B1A50@sysoev.ru> On Jan 24, 2013, at 17:44 , Bob S. wrote: > Is there a size limit for nginx.conf? There is no predefined size limit. -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From haifeng.813 at gmail.com Thu Jan 24 14:42:27 2013 From: haifeng.813 at gmail.com (Liu Haifeng) Date: Thu, 24 Jan 2013 22:42:27 +0800 Subject: Is there any other way to trigger log reopen beside kill -USR1? In-Reply-To: References: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> Message-ID: Sorry, I see that I didn't describe it clearly. I am not looking for another way of rotating logs, but am worried that some other 'trigger' could make nginx reopen the log file unexpectedly, which would break my logic. If it is certain that there is no hidden trigger other than my script, then my design is OK. I think there won't be such a trigger, but I have to make sure.
On 2013-1-24, at 21:00, Andre Jaenisch wrote: > 2013/1/24 Liu Haifeng : >> My goal is make the "current" log file renamed with the date pattern immediately, not after one day or other period. > > My first thought would be creating a symbolic (see "man ln") from the > current log to the log-with-date-within-filename. > You would just have to change the symlink then ? For example like > this: http://www.unix.com/302239409-post5.html > So let access.log point to access.$(date "+%Y%m%d").log (see > https://en.wikipedia.org/wiki/Date_%28Unix%29 ). > > But I have no nginx running here at the moment. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From scott_ribe at elevated-dev.com Thu Jan 24 14:55:45 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Thu, 24 Jan 2013 07:55:45 -0700 Subject: Is there any other way to trigger log reopen beside kill -USR1? In-Reply-To: References: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> Message-ID: <0DF7330C-BC88-4B9E-887E-8D0925C46262@elevated-dev.com> On Jan 24, 2013, at 7:42 AM, Liu Haifeng wrote: > Sorry I note that I didn't describe it clearly. I am not looking for another way of log rotating, but afraid of any other 'trigger' make nginx reopen log file unexpectedly, which can break my logic. If its sure that no any hidden trigger other than my script, then my design is ok. I think there won't be such kind of trigger, and I have to make it sure. I think the suggestion about the symlink was to make sure that if/when nginx re-opens the file, it's actually opening the file you want it to--an alternative approach to what you're doing, which does not care when the file is re-opened. 
-- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From jgehrcke at googlemail.com Thu Jan 24 14:55:43 2013 From: jgehrcke at googlemail.com (Jan-Philip Gehrcke) Date: Thu, 24 Jan 2013 15:55:43 +0100 Subject: Is there any other way to trigger log reopen beside kill -USR1? In-Reply-To: References: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> Message-ID: <51014B6F.1010601@googlemail.com> Maybe I did not read carefully enough, but instead of relying on the $(date "+%Y%m%d") in the exact moment of renaming the file, you could use the last modification time of the file via. e.g. $ stat -c %y ~/.bashrc | awk '{print $1}' 2012-12-18 On a busy web server this should almost always correspond to the day when -- in your words -- the trigger was triggered :) HTH, Jan-Philip On 01/24/2013 03:42 PM, Liu Haifeng wrote: > Sorry I note that I didn't describe it clearly. I am not looking for another way of log rotating, but afraid of any other 'trigger' make nginx reopen log file unexpectedly, which can break my logic. If its sure that no any hidden trigger other than my script, then my design is ok. I think there won't be such kind of trigger, and I have to make it sure. > > Regards. > > On 2013-1-24, at 21:00, Andre Jaenisch wrote: > >> 2013/1/24 Liu Haifeng : >>> My goal is make the "current" log file renamed with the date pattern immediately, not after one day or other period. >> >> My first thought would be creating a symbolic (see "man ln") from the >> current log to the log-with-date-within-filename. >> You would just have to change the symlink then ? For example like >> this: http://www.unix.com/302239409-post5.html >> So let access.log point to access.$(date "+%Y%m%d").log (see >> https://en.wikipedia.org/wiki/Date_%28Unix%29 ). >> >> But I have no nginx running here at the moment. 
From r at roze.lv Thu Jan 24 17:28:03 2013 From: r at roze.lv (Reinis Rozitis) Date: Thu, 24 Jan 2013 19:28:03 +0200 Subject: Server won't start AND Nginx as reverse proxy In-Reply-To: <91c46b03d0ec7b4093603089f372fc7f.NginxMailingListEnglish@forum.nginx.org> References: <91c46b03d0ec7b4093603089f372fc7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: > 2- Nginx does not reverse proxy new incoming requests to one of the other > Swazoo web servers and the site appears to be 'hanging'. Any help on this? The default proxy module timeouts are pretty high (like 60s): http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout .. so naturally, if the backend doesn't respond in a timely manner, it can take up to a minute for nginx to decide what to do next. Also, rather than using (with DNS resolution):

location / {
    proxy_pass https://some.site.com;
}

I would choose the upstream module ( http://nginx.org/en/docs/http/ngx_http_upstream_module.html ) and define all the backends in an upstream {} block:

upstream someservers {
    server your.backend1.ip:80;
    server your.backend2.ip:80;
    server your.backend3.ip:80;
}

location / {
    proxy_pass http://someservers;
}

rr From andrejaenisch at googlemail.com Thu Jan 24 17:48:16 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Thu, 24 Jan 2013 18:48:16 +0100 Subject: Is there any other way to trigger log reopen beside kill -USR1?
In-Reply-To: <0DF7330C-BC88-4B9E-887E-8D0925C46262@elevated-dev.com> References: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> <0DF7330C-BC88-4B9E-887E-8D0925C46262@elevated-dev.com> Message-ID: 2013/1/24 Scott Ribe : > I think the suggestion about the symlink was to make sure that if/when nginx re-opens the file, it's actually opening the file you want it to--an alternative approach to what you're doing, which does not care when the file is re-opened. Exactly. There's only one file to be opened (the one the symlink points to) and thus less danger of race conditions. 2013/1/24 Jan-Philip Gehrcke : > Maybe I did not read carefully enough, but instead of relying on the $(date "+%Y%m%d") in the exact moment of renaming the file, you could use the last modification time of the file via, e.g. > > $ stat -c %y ~/.bashrc | awk '{print $1}' > 2012-12-18 My version has no dashes :) And you could adjust the modification to not take place at midnight, couldn't you? Regards, André From shahzaib.cb at gmail.com Thu Jan 24 17:52:17 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 22:52:17 +0500 Subject: Limit download speed per ip Message-ID: Hello, Is there a way to limit speed per IP? It doesn't matter how many connections the single IP is consuming, but it shouldn't be able to get more than the assigned download limit. The limit_rate directive is per connection, not per IP. Looking forward to your help. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From jabberuser at gmail.com Thu Jan 24 17:59:02 2013 From: jabberuser at gmail.com (Piotr Karbowski) Date: Thu, 24 Jan 2013 18:59:02 +0100 Subject: Limit download speed per ip In-Reply-To: References: Message-ID: <51017666.1080504@gmail.com> On 01/24/2013 06:52 PM, shahzaib shahzaib wrote: > Hello, > > Is there a way to limit speed per ip ?
Doesn't matter how many > connections the single ip is consuming but he shouldn't be able to get more > than assigned download limit. > > limit_rate directive is for per connection but not for per ip. > > Looking forward to your help. Thanks You need to do it at the system level, with tc on Linux or ipfw on FreeBSD and so on. -- Piotr. From shahzaib.cb at gmail.com Thu Jan 24 18:02:47 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 24 Jan 2013 23:02:47 +0500 Subject: Limit download speed per ip In-Reply-To: <51017666.1080504@gmail.com> References: <51017666.1080504@gmail.com> Message-ID: Can you explain a bit? We are using CentOS 6.3. On Thu, Jan 24, 2013 at 10:59 PM, Piotr Karbowski wrote: > On 01/24/2013 06:52 PM, shahzaib shahzaib wrote: > >> Hello, >> >> Is there a way to limit speed per ip ? Doesn't matter how many >> connections the single ip is consuming but he shouldn't be able to get >> more >> than assigned download limit. >> >> Looking forward to your help. Thanks >> > > You need to do it on system level, like tc on linux or ipfw on freebsd and > so on. > > -- Piotr. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 24 18:23:41 2013 From: nginx-forum at nginx.us (coolbhushans@gmail.com) Date: Thu, 24 Jan 2013 13:23:41 -0500 Subject: Need help on fast cgi c++ configuration on windows Message-ID: <4546b814627aea18e15f2ee31880e722.NginxMailingListEnglish@forum.nginx.org> Hi everyone, I am new to nginx so I really need your help. My problem is that I set up an nginx server on Windows and it runs very well. But I want to run C++ programs, i.e. (.exe), so I came across the FastCGI lib, compiled its source successfully, and now I have a FastCGI wrapper for the .exe, i.e.
cgi-fcgi.exe. Now I changed nginx.conf as follows:

location ~ /fcgi/$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:3000;
}

Running http://localhost/fcgi/ in the browser does not work, so I checked the error log and found "upstream prematurely closed connection". Any guesses? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235556,235556#msg-235556 From nginx-list at puzzled.xs4all.nl Thu Jan 24 19:11:50 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Thu, 24 Jan 2013 20:11:50 +0100 Subject: Limit download speed per ip In-Reply-To: References: <51017666.1080504@gmail.com> Message-ID: <51018776.4080204@puzzled.xs4all.nl> On 01/24/2013 07:02 PM, shahzaib shahzaib wrote: > Can you explain a bit ? We are using Centos 6.3. $ man tc http://linux-ip.net/articles/Traffic-Control-HOWTO/ Regards, Patrick From linux at bionix-it.de Thu Jan 24 20:53:38 2013 From: linux at bionix-it.de (Markus "bionix(-it)") Date: Thu, 24 Jan 2013 21:53:38 +0100 Subject: Give it a solution for nginx to support cronolog like apache2 Message-ID: <51019F52.4060606@bionix-it.de> Hello guys, I'm looking for a good (maybe native) solution to use cronolog with nginx.
> I found only these two links [1]+[2], but I'm looking for a better > solution like a module or a code upgrade. > > Is a native solution in development or maybe give it a nginx patch? > > Regards, > > Markus "bionix" > > [1] http://pjkh.com/articles/nginx-and-cronolog/ > [2] https://gist.github.com/2891895 Well, you could try making a log (error/access) format with the vhost/domain/whatever in it and set up a single access_log/error_log location, which would be a FIFO with cronolog connected to it. Note that I have no idea whether nginx can write to a FIFO (or will replace it with a regular file instead), but it's worth trying. To avoid slowing nginx down because of the read-FIFO flow, you could use ftee (find ftee.c somewhere on the internet) and open the FIFO with it, forwarding stdout to cronolog. The above solution sounds like overkill, but if you want realtime cronolog... -- Piotr. From yaoweibin at gmail.com Fri Jan 25 03:29:28 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Fri, 25 Jan 2013 11:29:28 +0800 Subject: Give it a solution for nginx to support cronolog like apache2 In-Reply-To: <5101A3D0.8090802@gmail.com> References: <51019F52.4060606@bionix-it.de> <5101A3D0.8090802@gmail.com> Message-ID: Maybe Tengine is a solution. It supports pipes natively. The documentation is here: http://tengine.taobao.org/document/http_log.html You can configure cronolog like this:

error_log "pipe:/path/to/sbin/cronolog /path/to/logs/%Y/%m/%Y-%m-%d-error_log" warn;
access_log "pipe:/path/to/sbin/cronolog /path/to/logs/%Y/%m/%Y-%m-%d-access_log" format;

2013/1/25 Piotr Karbowski > On 01/24/2013 09:53 PM, Markus "bionix(-it)" wrote: >> Hello guys, >> >> I'm looking for a good (maybe native) solution to use cronolog with nginx.
>> >> Regards, >> >> Markus "bionix" >> >> [1] http://pjkh.com/articles/nginx-and-cronolog/ >> [2] https://gist.github.com/2891895 >> > > Well you could try make a log (error/access) format with > vhost/domain/whatever there and setup single access_log/error_log location, > which would be a fifo with cronolog connected to it, note that I > have no idea whatever nginx can write to fifo (or will replace it with a > file instead) but worth trying, to not slow down nginx because of the > read-fifo flow you could use ftee (find ftee.c somewhere on the internet) > and open fifo with it forwarding stdout to cronolog. > > The above solution sounds like a overkill but if you want realtime > cronolog... > > -- Piotr. -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaoweibin at gmail.com Fri Jan 25 03:40:01 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Fri, 25 Jan 2013 11:40:01 +0800 Subject: Limit download speed per ip In-Reply-To: <51018776.4080204@puzzled.xs4all.nl> References: <51017666.1080504@gmail.com> <51018776.4080204@puzzled.xs4all.nl> Message-ID: I wrote a simple module for this purpose a while ago. It should work; have a try: https://github.com/yaoweibin/nginx_limit_speed_module 2013/1/25 Patrick Lists > On 01/24/2013 07:02 PM, shahzaib shahzaib wrote: >> Can you explain a bit ? We are using Centos 6.3. >> > > $ man tc > > http://linux-ip.net/articles/Traffic-Control-HOWTO/ > > Regards, > Patrick -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed...
URL: From yaoweibin at gmail.com Fri Jan 25 03:55:50 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Fri, 25 Jan 2013 11:55:50 +0800 Subject: Nginx flv stream gets too slow on 2000 concurrent connections In-Reply-To: References: <34c8e44ba337bde44eed73cb034b510f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Maybe it's not the box hitting the 1Gbit bandwidth limit; your switch could also be hitting its limit. 2013/1/24 shahzaib shahzaib > Thanks for helping me out guyz. I'll tune my content server according to > that chinese guide. Please keep in mind i had only sent the output of one > of Five content servers. Other servers load(nload) is not that high and > they just hit 500Mbit/s on 2000 concurrent connections. However i'll > monitor eth0 port more closely on peak time and will let you know the > status. Thanks :) -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From faskiri.devel at gmail.com Fri Jan 25 06:32:31 2013 From: faskiri.devel at gmail.com (Fasih) Date: Thu, 24 Jan 2013 22:32:31 -0800 Subject: Need help on fast cgi c++ configuration on windows In-Reply-To: <4546b814627aea18e15f2ee31880e722.NginxMailingListEnglish@forum.nginx.org> References: <4546b814627aea18e15f2ee31880e722.NginxMailingListEnglish@forum.nginx.org> Message-ID: Check your FastCGI code and ensure that it is running properly. Have you checked the logs of that process? On Thu, Jan 24, 2013 at 10:23 AM, coolbhushans at gmail.com < nginx-forum at nginx.us> wrote: > Hi everyone > I am new to nginx so really need u r help > > > My problem is that i set up niginx server on windows and it runs very well > . But i want to run c++ programs i.e(.exe) > so i come accross fast-cgi lib then i compiled source successfully now i > have fast cgi wraper for .exe ie.
cgi-fcgi.exe > > now i changed in nginx.conf as > > location ~ /fcgi/$ { > include fastcgi_params; > fastcgi_pass 127.0.0.1:3000; > } > > and running it as http://localhost/fcgi/ in browser > > it not working so i checked error log and i found > upstream prematurely closed connection > > so any guess > thanks in advance > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,235556,235556#msg-235556 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 25 06:47:27 2013 From: nginx-forum at nginx.us (paragshrvagi) Date: Fri, 25 Jan 2013 01:47:27 -0500 Subject: Compiling NGINX modules on Windows... Message-ID: Hello, Following instructions given on NGINX site, I could compile NGINX default source code for WIndows using MSYS bash and nmake. However, when I tried compiling HttpEchoModule and HttpAuthDigest module, I have started getting compilation errors. I am assuming that since these modules are published and stable, they are compiling on LINUX/UNIX. Can someone suggest me the typical changes I need to do in a NGINX module source/configuration file so as to make it working on Windows? Best Regards, ParagS. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235566,235566#msg-235566 From nginx-forum at nginx.us Fri Jan 25 08:26:40 2013 From: nginx-forum at nginx.us (runesoerensen) Date: Fri, 25 Jan 2013 03:26:40 -0500 Subject: Certificate on HTTPS upstream is not verified Message-ID: <75d977d8be66b7696a1e9192a160d1e9.NginxMailingListEnglish@forum.nginx.org> I need to send data to some backend servers using HTTPS, but it seems like nginx doesn't verify the certificate on the backend server. For instance, if I specify `proxy_pass https://example.com` and the certificate on example.com is invalid, nginx still completes the request without any warning. 
I'd prefer it if nginx checked whether the certificate could be verified during the SSL handshake, and abort the request if the certificate isn't valid. Is it possible to somehow enable certificate verification of the proxied server's certificate? And if it's not possible to verify the certificate, what's the point in using (or being able to use) an HTTPS backend then? The reason I need SSL encryption is that traffic from my nginx server will be passed via public networks to the backend servers. Thanks, Rune Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235567,235567#msg-235567 From nginx-forum at nginx.us Fri Jan 25 08:47:02 2013 From: nginx-forum at nginx.us (runesoerensen) Date: Fri, 25 Jan 2013 03:47:02 -0500 Subject: Certificate on HTTPS upstream is not verified In-Reply-To: <75d977d8be66b7696a1e9192a160d1e9.NginxMailingListEnglish@forum.nginx.org> References: <75d977d8be66b7696a1e9192a160d1e9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <811f839c994039658b5fe861a345acfd.NginxMailingListEnglish@forum.nginx.org> It's possible to do what I want with the mod_ssl module for Apache. The relevant directive is called `SSLProxyVerify` http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#SSLProxyVerify. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235567,235568#msg-235568 From ru at nginx.com Fri Jan 25 08:58:47 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 25 Jan 2013 12:58:47 +0400 Subject: Compiling NGINX modules on Windows... In-Reply-To: References: Message-ID: <20130125085847.GD52101@lo0.su> On Fri, Jan 25, 2013 at 01:47:27AM -0500, paragshrvagi wrote: > Following instructions given on NGINX site, I could compile NGINX default > source code for WIndows using MSYS bash and nmake. However, when I tried > compiling HttpEchoModule and HttpAuthDigest module, I have started getting > compilation errors. I am assuming that since these modules are published and > stable, they are compiling on LINUX/UNIX. 
> Can someone suggest me the typical changes I need to do in a NGINX module > source/configuration file so as to make it working on Windows? Well, typical steps usually involve fixing the reported compilation errors. :) From nginx-forum at nginx.us Fri Jan 25 09:10:54 2013 From: nginx-forum at nginx.us (runesoerensen) Date: Fri, 25 Jan 2013 04:10:54 -0500 Subject: Certificate on HTTPS upstream is not verified In-Reply-To: <75d977d8be66b7696a1e9192a160d1e9.NginxMailingListEnglish@forum.nginx.org> References: <75d977d8be66b7696a1e9192a160d1e9.NginxMailingListEnglish@forum.nginx.org> Message-ID: I just found this ticket which appears to describe (and solve) the same issue http://trac.nginx.org/nginx/ticket/13 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235567,235570#msg-235570 From haifeng.813 at gmail.com Fri Jan 25 10:23:56 2013 From: haifeng.813 at gmail.com (Liu Haifeng) Date: Fri, 25 Jan 2013 18:23:56 +0800 Subject: Is there any other way to trigger log reopen beside kill -USR1? In-Reply-To: References: <7A6209E0-C036-4CBD-BE4F-193CE03B56FF@gmail.com> <0DF7330C-BC88-4B9E-887E-8D0925C46262@elevated-dev.com> Message-ID: <091CC4B4-A154-4946-A3A0-1BBB2B1F7C5D@gmail.com> I did a test, seams cool, thank you all. On Jan 25, 2013, at 1:48 AM, Andre Jaenisch wrote: > 2013/1/24 Scott Ribe : >> I think the suggestion about the symlink was to make sure that if/when nginx re-opens the file, it's actually opening the file you want it to--an alternative approach to what you're doing, which does not care when the file is re-opened. > > Exactly. There's only one file to be opened (this one, which the > symlink points to) and thus less danger of race conditions. > > 2013/1/24 Jan-Philip Gehrcke : >> Maybe I did not read carefully enough, but instead of relying on the $(date "+%Y%m%d") in the exact moment of renaming the file, you could use the last modification time of the file via. e.g. 
>> >> $ stat -c %y ~/.bashrc | awk '{print $1}' >> 2012-12-18 > > My version has no dashes :) > And you could adjust the modification to not take place at midnight, > couldn't you? > > Regards, André > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From coolbhushans at gmail.com Fri Jan 25 10:28:14 2013 From: coolbhushans at gmail.com (Bhushan Sonawane) Date: Fri, 25 Jan 2013 15:58:14 +0530 Subject: Need help on fast cgi c++ configuration on windows In-Reply-To: References: <4546b814627aea18e15f2ee31880e722.NginxMailingListEnglish@forum.nginx.org> Message-ID: Yes, I checked it, and now I am able to run it, but not using cgi-fcgi.exe; I am using another wrapper. There is a problem in that code. Do you have any idea what the problem in cgi-fcgi.exe could be? On Fri, Jan 25, 2013 at 12:02 PM, Fasih wrote: > Check your fastcgi code. Ensure that it is running properly. Have you > checked the logs of that process? > > > On Thu, Jan 24, 2013 at 10:23 AM, coolbhushans at gmail.com < > nginx-forum at nginx.us> wrote: > >> Hi everyone >> I am new to nginx so I really need your help >> >> >> My problem is that I set up an nginx server on Windows and it runs very >> well. But I want to run C++ programs, i.e. (.exe), >> so I came across the fast-cgi lib, compiled the source successfully, and now I >> have a fast cgi wrapper for .exe, i.e. 
cgi-fcgi.exe >> >> now i changed in nginx.conf as >> >> location ~ /fcgi/$ { >> include fastcgi_params; >> fastcgi_pass 127.0.0.1:3000; >> } >> >> and running it as http://localhost/fcgi/ in browser >> >> it not working so i checked error log and i found >> upstream prematurely closed connection >> >> so any guess >> thanks in advance >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,235556,235556#msg-235556 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondanomala_albertelli at yahoo.it Fri Jan 25 11:51:10 2013 From: ondanomala_albertelli at yahoo.it (OndanomalA) Date: Fri, 25 Jan 2013 12:51:10 +0100 Subject: Parameter not taken into account on a specific link structure Message-ID: I have WP installed on my VPS (with nginx 1.3.12 and php5-fpm 5.4.11). The first page of search results (/?s=test) is loaded properly, but /page/2/?s=test displays the same content of /page/2/ (so ?s=test isn't taken into account). It's probably something wrong with my nginx config: location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } location / { # if you're just using wordpress and don't want extra rewrites # then replace the word @rewrites with /index.php try_files $uri $uri/ /index.php; } Articles work fine anyway.. the permalink structure (/%year%/%monthnum%/%day%/%postname%/) works fine.. so I should find a fix that doesn't break that (but "fixes" the search parameter problem). "DEMO" Page 1 (/?s=test): http://goo.gl/HigKa Page 2 (/page/2/?s=test): http://goo.gl/ujftR Thanks in advance. 
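The symptom above (the query string vanishing on /page/2/?s=test) is commonly caused by the try_files fallback dropping the original arguments. A minimal sketch of the usual WordPress-style fix, assuming the same location block as in the post (appending $args is the only change):

```nginx
location / {
    # Fall back to index.php but keep the original query string;
    # plain "/index.php" drops "?s=test", "/index.php?$args" preserves it.
    try_files $uri $uri/ /index.php?$args;
}
```

Pretty permalinks like /%year%/%monthnum%/%day%/%postname%/ still resolve through the same fallback, so the date-based structure should be unaffected.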
-------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Fri Jan 25 16:05:14 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 25 Jan 2013 21:05:14 +0500 Subject: Limit downlaod speed per ip In-Reply-To: References: <51017666.1080504@gmail.com> <51018776.4080204@puzzled.xs4all.nl> Message-ID: Thanks guyz, i'll look into your module, if it works for me and will let you know about that :) On Fri, Jan 25, 2013 at 8:40 AM, ??? wrote: > I used to write a simple module for such purpose, It should work. Have a > try: https://github.com/yaoweibin/nginx_limit_speed_module > > > 2013/1/25 Patrick Lists > >> On 01/24/2013 07:02 PM, shahzaib shahzaib wrote: >> >>> Can you explain a bit ? We are using Centos 6.3. >>> >> >> $ man tc >> >> http://linux-ip.net/articles/**Traffic-Control-HOWTO/ >> >> Regards, >> Patrick >> >> >> >> ______________________________**_________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/**mailman/listinfo/nginx >> > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre.cruz at co.sapo.pt Fri Jan 25 16:17:31 2013 From: andre.cruz at co.sapo.pt (=?iso-8859-1?Q?Andr=E9_Cruz?=) Date: Fri, 25 Jan 2013 16:17:31 +0000 Subject: X-Accel-Redirect and encodings Message-ID: <5CEE3495-7CC5-42DC-BBA7-E1DAB3970DBB@co.sapo.pt> Hello. I've just been bitten by the bug described previously here (http://mailman.nginx.org/pipermail/nginx/2010-September/022384.html). X-Accel-Redirect expects a decoded URL, which it will encode, but when non-ascii characters are present in the URI this process fails to produce the correct URL. 
According to this 2010 thread (http://nginx.2469901.n2.nabble.com/Bug-X-Accel-Redirect-td5510716.html) the correct way to do this would be for the encoding to be done by the application, and then nginx would just use the URL in its provided form, without trying to divine the correct way to encode it. There have been attempts to do this (http://forum.nginx.org/read.php?29,221834,221834) but they were not accepted. This one in particular tried to guess if the URL was already encoded or not. Since I'm not familiar with nginx's code, I would like to know how hard it would be to accomplish this task. Namely, to stop trying to encode the URL received in the X-Accel-Redirect header. It seems that I'm not alone in this and old threads exist about this issue, but since no fix was produced I'm tempted to try it myself. Best regards, André Cruz From cmfileds at gmail.com Fri Jan 25 19:09:21 2013 From: cmfileds at gmail.com (CM Fields) Date: Fri, 25 Jan 2013 14:09:21 -0500 Subject: Delayed 503 limit_req response - feature request Message-ID: Nginx Feature Request: Delay 503 limit_req responses We would like to delay the 503 response to clients which exceed the limit_req value rather than send an immediate 503. The delay would be in milliseconds or in tenths-of-a-second increments, for example. We are not trying to slow down those clients who are obviously abusive or malicious. Those IPs are blocked at the firewall with custom scripts. We are trying to slow down valid clients who use overly aggressive programs, but have legitimate access to our servers. The nginx server in question is used to serve large data downloads. We want to serve as many people as we can without using excessive CPU time or sending billions of 503 error network packets. According to our logs we spend approximately 1.3% of our outgoing bandwidth on 503 responses to valid clients. 
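A figure like the 1.3% above can be rechecked straight from an access log. A rough sketch, assuming the default "combined" log format (field 9 is the status code, field 10 is $body_bytes_sent); the two log lines are made up for illustration:

```shell
# Two made-up lines in the default "combined" log format:
# field 9 = status, field 10 = $body_bytes_sent
printf '%s\n' \
  '1.2.3.4 - - [25/Jan/2013:14:09:21 -0500] "GET /a HTTP/1.1" 200 900 "-" "curl"' \
  '1.2.3.4 - - [25/Jan/2013:14:09:22 -0500] "GET /b HTTP/1.1" 503 100 "-" "curl"' > access.log

# Share of logged response bytes spent on 503 responses
awk '{ total += $10; if ($9 == 503) bad += $10 }
     END { if (total) printf "503 share: %d%%\n", 100 * bad / total }' access.log
```

On the sample lines this prints `503 share: 10%` (100 of 1000 logged bytes). Note that $body_bytes_sent excludes response headers, so the true wire cost of header-only 503s is somewhat higher than this estimate.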
I believe simply by adding in a 200ms delay we could reduce outgoing 503 bandwidth from 1.3% to less than 0.2%, saving us time and money. Our current setup: The following directives allow a client to make 100 requests with no delay. After 100 requests the client is limited to 61 requests per minute. Client requests over 61 per 60 seconds are returned an error 503 immediately for every subsequent request. http { limit_req_zone $binary_remote_addr zone=one:10m rate=61r/m; server { limit_req zone=one burst=100 nodelay; } } What does everyone think of slightly delaying the 503 response? Using the same limit_req code above, the client could exceed our limit of 61 requests per 60 seconds and we would still send back a 503, but now we delay the response by 200ms. Notice the delay at the end of the following directive. The delay would slow the next request by the client by slowing our response. limit_req_zone $binary_remote_addr zone=one:10m rate=61r/m delay=200ms; What does delaying the response solve? The problem we see is over-aggressive download accelerators, HTTP scanners or broken scripts. A client application which gets an immediate 503 will just send another request right away, and we will send them another 503 right away. This looping behavior uses CPU resources and network time and can be a significant drain on resources. If nginx could delay the 503 by a user-specified time, the excessive client requests and 503 error loops would be reduced. The client would wait till it gets a response from the server before sending another request, believing the packet is still in the network. The side effect of the delay is the client would be slowed and perhaps lower its request rate below our limit_req value. Problems with delaying the 503 response? One issue we see with delaying the response to the client is holding open the connection. Not sure if this is a real problem because the client will probably use the same connection for their next immediate request anyway. 
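For comparison, stock nginx already has one delaying behavior in this area, though it is not quite what is being requested: omitting "nodelay" makes excess requests wait in the burst queue instead of failing immediately, and only requests beyond the burst get an instant 503. A sketch using the same zone as above:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=61r/m;

    server {
        # Without "nodelay": up to 100 excess requests are queued and
        # released at 61r/m; only requests past the burst are answered
        # with an immediate 503.
        limit_req zone=one burst=100;
    }
}
```

That throttles the requests themselves rather than the 503 response, so the proposed delay= parameter would still be new behavior.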
There may be other problems with the delay we have not thought of yet. Thanks for everyone's time. We just wanted to post this request in case anyone else also thought the idea was valid and a delayed 503 was a decent solution. From nginx-forum at nginx.us Sat Jan 26 00:36:08 2013 From: nginx-forum at nginx.us (tulumvinh@gmail.com) Date: Fri, 25 Jan 2013 19:36:08 -0500 Subject: Nginx temp file being deleted errenously Message-ID: <2200278a7c45e3b141fd03ec4a4b8133.NginxMailingListEnglish@forum.nginx.org> Hello everyone, My company processes millions of requests for file uploads/downloads and we are looking at Nginx to replace Apache. We are running into a problem that I hope you can help with -- I have searched the web and even read the Nginx HTTP Server book by Clement Nedelcu. At a high level, the flow of a request through our system is as follows: user agent posts the data to upload --> Nginx acting as a reverse proxy --> object store --> post action passing the request to uWSGI. Problem: when uploading a large file (~100MB), the post action is failing. The error message seen in error.log is an "8 sendfile() failed (9: Bad file descriptor) while sending request to upstream" message. This occurs even with no load on the system. When I enable "debug", I see that after Nginx streams the bytes to the object store, the temp file is deleted. When the post action is executed it fails, as the temp file is gone. *Note: this error is NOT seen when uploading small files. Details: - Nginx version 1.1.17 - uWSGI version 1.0 - CentOS 2.6.32-131.0.15.el6.x86_64 - The relevant nginx configuration is: ## # uri block to upload to object store. 
## location '/upload/objecststorage' { proxy_pass https:/objectstore_host/valut post_action /os/postback/ospostback.py; } location = '/os/postback/ospostback.py' { root html/uwsgi; set $app ospostback; uwsgi_pass unix:/tmp/uwsgi.sock; include uwsgi_params; uwsgi_param SERVER_ADDR $server_addr; uwsgi_param SCRIPT_NAME $app; uwsgi_param UWSGI_MODULE $app; uwsgi_param UWSGI_CALLABLE "${app}_handler"; uwsgi_param UWSGI_PYHOME $document_root; uwsgi_param UWSGI_CHDIR $document_root; uwsgi_param UWSGI_BYTES $body_bytes_sent; uwsgi_param UWSGI_PATH $path; uwsgi_param UWSGI_PID $proc_id; uwsgi_param UWSGI_ID $token; uwsgi_param UWSGI_KEY $arg_id; uwsgi_modifier1 30; } Any help would be great. I really want to use Nginx but this is now a blocker issue for my company. Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235586,235586#msg-235586 From nginx-forum at nginx.us Sat Jan 26 00:57:10 2013 From: nginx-forum at nginx.us (bluesmoon) Date: Fri, 25 Jan 2013 19:57:10 -0500 Subject: Alias with variable sometimes works and sometimes returns 404 Message-ID: <2483445c57ce538beefc1b70458468dd.NginxMailingListEnglish@forum.nginx.org> I have the following in my server block: set $filename wizard-min; if($cookie_B) { set $filename $cookie_B; } location ~* "^/b/([a-f0-9]{10})$" { default_type application/javascript; alias /var/www/b/b-$filename.js; sub_filter_types application/javascript; sub_filter '%replace%' '$1'; } I request it using curl: curl -v 'http://server/b/0123456789' sometimes it returns the file correctly, and sometimes it returns a 404. When it does return a 404, the error log says: "/var/www/b/0123456789" failed (2: No such file or directory) Note that I'm testing this on a single server, and every request goes into the access log file, but some of them (20-30%) return a 404. If I take out the variable from the alias line, the file returns correctly every time. Any idea? PS: Is there any way to get my code formatted correctly on this forum? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235587,235587#msg-235587 From info at pkern.at Sat Jan 26 01:02:58 2013 From: info at pkern.at (Patrik Kernstock) Date: Sat, 26 Jan 2013 02:02:58 +0100 Subject: AW: AW: AW: Webserver crashes sometimes - don't know why In-Reply-To: References: <003801cdeca5$5bfeb2f0$13fc18d0$@pkern.at>, , <004601cdecb7$f3790f60$da6b2e20$@pkern.at>, , <005601cdeccc$8bd97420$a38c5c60$@pkern.at>, <20130108034846.GC68127@mdounin.ru>, <00fc01cdee21$1a11d480$4e357d80$@pkern.at> Message-ID: <015c01cdfb60$e24ef530$a6ecdf90$@pkern.at> Since weeks no crash. I don't know why the crash happened sometimes... Working perfekt as usual :) If a next crash happens, I know what to do - Thank you all! _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Sat Jan 26 13:40:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 26 Jan 2013 17:40:24 +0400 Subject: Delayed 503 limit_req response - feature request In-Reply-To: References: Message-ID: <20130126134024.GK40753@mdounin.ru> Hello! On Fri, Jan 25, 2013 at 02:09:21PM -0500, CM Fields wrote: > Nginx Feature Request: Delay 503 limit_req responses > > We would like to delay the 503 response to clients which exceed the > limit_req value rather then send an immediate 503. The delay would be > in milliseconds or in tenths of a second increments for example. Try the following trivial module: http://mdounin.ru/hg/ngx_http_delay_module/ With the config like error_page 503 /errors/503.html; location = /errors/503.html { delay 200ms; } -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Sat Jan 26 19:20:01 2013 From: nginx-forum at nginx.us (automatix) Date: Sat, 26 Jan 2013 14:20:01 -0500 Subject: cache configuration Message-ID: <762d6355aaffc5eb8fe62639a728ee09.NginxMailingListEnglish@forum.nginx.org> Hello everyone! A cache issue... 
I created an XML file order.xml with this simple content: John Doe
Foo Str. 3, 10117 Berlin
and saved it on my VM in /var/www/sandbox/test/. As you can see, it's not valid XML (see the address tag). So the browser (XML parser) threw an error. Then I repaired the file. But the browser is still throwing the same error. It's definitely the server(-side) cache, since I've already tried it out in several browsers. How can/should I set up the cache behavior of nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235601#msg-235601 From nginx-forum at nginx.us Sun Jan 27 12:10:29 2013 From: nginx-forum at nginx.us (Daniel15) Date: Sun, 27 Jan 2013 07:10:29 -0500 Subject: cache configuration In-Reply-To: <762d6355aaffc5eb8fe62639a728ee09.NginxMailingListEnglish@forum.nginx.org> References: <762d6355aaffc5eb8fe62639a728ee09.NginxMailingListEnglish@forum.nginx.org> Message-ID: <13a811d3cdade910820d870caa088e1a.NginxMailingListEnglish@forum.nginx.org> It's probably browser caching - Did you try pressing Ctrl+F5 or clearing your browser's cache? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235608#msg-235608 From nginx-forum at nginx.us Sun Jan 27 12:12:59 2013 From: nginx-forum at nginx.us (Daniel15) Date: Sun, 27 Jan 2013 07:12:59 -0500 Subject: Debugging FastCGI caching Message-ID: <018730a2e892e5f091abcf2517d175c3.NginxMailingListEnglish@forum.nginx.org> I'm trying to use Nginx FastCGI caching to cache my site's home page and blog home page. However, caching doesn't seem to be working on these pages, and $upstream_cache_status always seems to be "MISS". Site URL: http://dan.cx/ Relevant bits of the config (and the full Nginx server{} config is available at https://github.com/Daniel15/Website/blob/master/nginx.conf): _________________________________________________ # Handles only caching certain URLs map $uri $dan_no_cache { default 1; ~^/bundles/ 0; / 0; /blog 0; } ... 
# Caching parameters fastcgi_cache_key "$scheme$host$request_uri"; fastcgi_cache DANIEL15; fastcgi_cache_valid 60m; # Don't cache if $dan_no_cache map evaluates to 1 fastcgi_cache_bypass $dan_no_cache; fastcgi_no_cache $dan_no_cache; # Add cache status as X-Cache header add_header X-Cache $upstream_cache_status; _________________________________________________ Files under /bundles/ are correctly cached (see e.g. the CSS and JavaScript on the site). I've noticed that $upstream_cache_status is "BYPASS" on pages that shouldn't be cached, which makes me think it's correctly picking up those URIs as being cacheable. What is the best way to debug this issue? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235609,235609#msg-235609 From francis at daoine.org Sun Jan 27 12:32:11 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 27 Jan 2013 12:32:11 +0000 Subject: Debugging FastCGI caching In-Reply-To: <018730a2e892e5f091abcf2517d175c3.NginxMailingListEnglish@forum.nginx.org> References: <018730a2e892e5f091abcf2517d175c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130127123211.GM4332@craic.sysops.org> On Sun, Jan 27, 2013 at 07:12:59AM -0500, Daniel15 wrote: Hi there, > I'm trying to use Nginx FastCGI caching to cache my site's home page and > blog home page. However, caching doesn't seem to be working on these pages, > and $upstream_cache_status always seems to be "MISS". > > Site URL: http://dan.cx/ $ curl -i -I http://dan.cx/ HTTP/1.1 200 OK ... Cache-Control: private ... I don't see anything in your config which adds that header, so presumably it is coming from upstream. See http://nginx.org/r/fastcgi_cache_valid and http://nginx.org/r/fastcgi_ignore_headers. > What is the best way to debug this issue? Examine the traffic between each client/server pair (in this case nginx and the fastcgi server). Then protocol-specific knowledge might reveal the problem. 
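The header check Francis describes can also be scripted once a response has been captured. A hypothetical sketch (the cacheable() helper is illustrative, not an nginx API) applying the usual rule that private, no-cache, or no-store in Cache-Control makes a response uncacheable by a shared cache:

```shell
# Classify a Cache-Control value the way a shared cache would:
# private / no-cache / no-store all disable caching.
cacheable() {
  case "$(printf '%s' "$1" | tr 'A-Z' 'a-z')" in
    *private*|*no-cache*|*no-store*) echo "not cacheable" ;;
    *) echo "cacheable" ;;
  esac
}

cacheable "private"             # prints: not cacheable (the header seen on dan.cx)
cacheable "public, max-age=60"  # prints: cacheable
```

The value to feed in is the Cache-Control line from `curl -i -I` against the upstream, as shown above.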
f -- Francis Daly francis at daoine.org From mak at ultimateserv.com Sun Jan 27 14:12:36 2013 From: mak at ultimateserv.com (Mohammad Khalaf) Date: Sun, 27 Jan 2013 16:12:36 +0200 Subject: progress bar Message-ID: Hello, I'd like to add progress bar module to nginx. I'm using "nginx admin" script that is auto installer for cpanel. Please advise. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sun Jan 27 15:22:01 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 27 Jan 2013 19:22:01 +0400 Subject: 1.3.11 Issues? In-Reply-To: <3a941aa89f0f5db9eb60fb6b0a25f98f.NginxMailingListEnglish@forum.nginx.org> References: <201301150717.49576.vbart@nginx.com> <3a941aa89f0f5db9eb60fb6b0a25f98f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301271922.01491.vbart@nginx.com> On Tuesday 15 January 2013 07:50:30 digitalpoint wrote: > Yeah... the problem is that while it might not be part of the SPDY 2 draft > to share connections across multiple hosts, Chrome most certainly is doing > it (and probably other browsers) as you can see from the previous > screenshot. > > Either way, you guys are doing a crazy awesome job... keep it up. :) > Please, try the new patch: http://nginx.org/patches/spdy/patch.spdy-59_1.3.11.txt The problem should be fixed now. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Sun Jan 27 22:01:17 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Sun, 27 Jan 2013 17:01:17 -0500 Subject: 1.3.11 Issues? In-Reply-To: <5baee6252154c74ad98ee13545bf738d.NginxMailingListEnglish@forum.nginx.org> References: <5baee6252154c74ad98ee13545bf738d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <354ea223bb0a07a836263998d67c8b72.NginxMailingListEnglish@forum.nginx.org> So far so good... seems to be working fine. 
:) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235624#msg-235624 From nginx-forum at nginx.us Mon Jan 28 06:28:36 2013 From: nginx-forum at nginx.us (Daniel15) Date: Mon, 28 Jan 2013 01:28:36 -0500 Subject: Debugging FastCGI caching In-Reply-To: <20130127123211.GM4332@craic.sysops.org> References: <20130127123211.GM4332@craic.sysops.org> Message-ID: <1789d7ce658f85065384a932036da0f9.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > $ curl -i -I http://dan.cx/ > HTTP/1.1 200 OK > .... > Cache-Control: private > .... > > I don't see anything in your config which adds that header, so > presumably it is coming from upstream. Thanks, Mono / ASP.NET must be adding that by default (as I have not explicitly added any caching headers). I'll set the correct caching headers for this page. Based on this, I assume that Nginx only caches responses with Cache-Control: public (or similar) headers? I couldn't find this documented anywhere on the Nginx site so I wasn't sure. Does Nginx use the Expires or Max-Age headers, and how do these work with fastcgi_cache_valid? If the Max-Age is 1 hour and Expires is 1 hour in the future, but fastcgi_cache_valid is set to 2 hours, how long would Nginx cache the response for? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235609,235630#msg-235630 From crirus at gmail.com Mon Jan 28 06:53:52 2013 From: crirus at gmail.com (Cristian Rusu) Date: Mon, 28 Jan 2013 08:53:52 +0200 Subject: HDD util is 100% - aio questions Message-ID: Hello Right now nginx manages to put hdds in the server to high util rate. I try to run Nginx 1.2.3 with aio support to deliver mp4 videos with the streaming module. I compiled the server with aio and it starts fine. In config I set it like this sendfile off; output_buffers 1 2m; #sndbuf=32K; aio on; directio 512; I read that sendfile should be off, but it won't send video unless I turn it on. 
In this case does aio work at all? How can I tell, before I wait a week and see that maybe HDD util is not 100% all the time anymore :P --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From djdarkbeat at gmail.com Mon Jan 28 07:25:24 2013 From: djdarkbeat at gmail.com (Brian Loomis) Date: Mon, 28 Jan 2013 00:25:24 -0700 Subject: progress bar In-Reply-To: References: Message-ID: Try using the pv command on Linux On Sunday, January 27, 2013, Mohammad Khalaf wrote: > Hello, > > I'd like to add progress bar module to nginx. > > I'm using "nginx admin" script that is auto installer for cpanel. > > Please advise. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 28 08:51:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Jan 2013 12:51:46 +0400 Subject: Debugging FastCGI caching In-Reply-To: <1789d7ce658f85065384a932036da0f9.NginxMailingListEnglish@forum.nginx.org> References: <20130127123211.GM4332@craic.sysops.org> <1789d7ce658f85065384a932036da0f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130128085146.GL40753@mdounin.ru> Hello! On Mon, Jan 28, 2013 at 01:28:36AM -0500, Daniel15 wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > $ curl -i -I http://dan.cx/ > > HTTP/1.1 200 OK > > .... > > Cache-Control: private > > .... > > > > I don't see anything in your config which adds that header, so > > presumably it is coming from upstream. > > Thanks, Mono / ASP.NET must be adding that by default (as I have not > explicitly added any caching headers). I'll set the correct caching headers > for this page. > > Based on this, I assume that Nginx only caches responses with Cache-Control: > public (or similar) headers? 
I couldn't find this documented anywhere on the > Nginx site so I wasn't sure. Yes, any of the "private", "no-cache" and "no-store" in Cache-Control disables cache. > Does Nginx use the Expires or Max-Age headers, and how do these work with > fastcgi_cache_valid? If the Max-Age is 1 hour and Expires is 1 hour in the > future, but fastcgi_cache_valid is set to 2 hours, how long would Nginx > cache the response for? The "Expires" header takes precedence over fastcgi_cache_valid (unless ignored with fastcgi_ignore_headers; and, BTW, this is documented at http://nginx.org/r/fastcgi_cache_valid). There is no "Max-Age" header, but the "max-age" directive of the "Cache-Control" header, and it takes precedence over fastcgi_cache_valid as well. -- Maxim Dounin http://nginx.com/support.html From luky-37 at hotmail.com Mon Jan 28 09:34:37 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 28 Jan 2013 10:34:37 +0100 Subject: HDD util is 100% - aio questions In-Reply-To: References: Message-ID: AIO will not work with the streaming module: http://nginx.org/en/docs/http/ngx_http_core_module.html#aio On Linux, directio can only be used for reading blocks that are aligned on 512-byte boundaries (or 4K for XFS). Reading of unaligned file's end is still made in blocking mode. The same holds true for byte range requests, and for FLV requests not from the beginning of a file: reading of unaligned data at the beginning and end of a file will be blocking. There is no need to turn off sendfile explicitly as it is turned off automatically when directio is used. What is your exact configuration? What OS do you use, what load and what disk and RAM configuration do you have? ________________________________ > From: crirus at gmail.com > Date: Mon, 28 Jan 2013 08:53:52 +0200 > Subject: HDD util is 100% - aio questions > To: nginx at nginx.org > > Hello > > Right now nginx manages to put hdds in the server to high util rate. 
> > I try to run Nginx 1.2.3 with aio support to deliver mp4 videos with > the streaming module. > I compiled the server with aio and it starts fine. > In config I set it like this > > sendfile off; > output_buffers 1 2m; > #sndbuf=32K; > aio on; > directio 512; > > I read that sendfile should be off, but it won't send video unless I > turn it on. > In this case does aio work at all? How can I tell, before I wait a week > and see that maybe HDD util is not 100% all the time anymore :P > > > --------------------------------------------------------------- > Cristian Rusu > Web Development & Electronic Publishing > > ====== > Crilance.com > Crilance.blogspot.com > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From appa at perusio.net Mon Jan 28 13:44:32 2013 From: appa at perusio.net (António P. P. Almeida) Date: Mon, 28 Jan 2013 14:44:32 +0100 Subject: Better stats. Message-ID: <874ni1bewv.wl%appa@perusio.net> Hello, Thinking of this post by Andrew: http://mailman.nginx.org/pipermail/nginx/2012-March/032935.html > Yes, the stub one is quite limited. We've been working on a much > better version, which is going to appear soon. How's that going? I browsed the repo and I see nothing new in the stub status module. Any delays for that? Thank you, --- appa From rkearsley at blueyonder.co.uk Mon Jan 28 13:56:46 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 28 Jan 2013 13:56:46 +0000 Subject: $request_length outside of log module Message-ID: <5106839E.30004@blueyonder.co.uk> Hi Over time you have been adding functionality to use log variables outside of log module, e.g. Changes with nginx 1.3.8 30 Oct 2012 *) Feature: the $bytes_sent, $connection, and $connection_requests variables can now be used not only in the "log_format" directive.
Changes with nginx 1.3.9 27 Nov 2012 *) Feature: the $request_time and $msec variables can now be used not only in the "log_format" directive. Are you going to add the $request_length variable to this list too? Many thanks From contact at jpluscplusm.com Mon Jan 28 14:05:01 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 28 Jan 2013 14:05:01 +0000 Subject: $request_length outside of log module In-Reply-To: <5106839E.30004@blueyonder.co.uk> References: <5106839E.30004@blueyonder.co.uk> Message-ID: On 28 January 2013 13:56, Richard Kearsley wrote: > Hi > Over time you have been adding functionality to use log variables outside of > log module, e.g. > > Changes with nginx 1.3.8 30 Oct 2012 > *) Feature: the $bytes_sent, $connection, and $connection_requests variables > can now be used not only in the "log_format" directive. > > Changes with nginx 1.3.9 27 Nov 2012 > *) Feature: the $request_time and $msec variables can now be used not only > in the "log_format" directive. > > Are you going to add the $request_length variable to this list too? I would think you could get something equivalent and useful out of $upstream_http_content_length, as per http://wiki.nginx.org/HttpUpstreamModule#.24upstream_http_.24HEADER . YMMV. -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From rkearsley at blueyonder.co.uk Mon Jan 28 14:15:11 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 28 Jan 2013 14:15:11 +0000 Subject: $request_length outside of log module In-Reply-To: References: <5106839E.30004@blueyonder.co.uk> Message-ID: <510687EF.50300@blueyonder.co.uk> Hi, It's not the same, $request_length is the length of what the client (browser) sent in its request for a file, e.g. request headers, request body I suppose I could loop through the headers and body and count their lengths..
but not ideal On 28/01/13 14:05, Jonathan Matthews wrote: > I would think you could get something equivalent and useful out of > $upstream_http_content_length, as per > http://wiki.nginx.org/HttpUpstreamModule#.24upstream_http_.24HEADER . > YMMV. > From mdounin at mdounin.ru Mon Jan 28 14:20:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Jan 2013 18:20:54 +0400 Subject: $request_length outside of log module In-Reply-To: <5106839E.30004@blueyonder.co.uk> References: <5106839E.30004@blueyonder.co.uk> Message-ID: <20130128142054.GU40753@mdounin.ru> Hello! On Mon, Jan 28, 2013 at 01:56:46PM +0000, Richard Kearsley wrote: > Hi > Over time you have been adding functionality to use log variables > outside of log module, e.g. > > Changes with nginx 1.3.8 30 Oct 2012 > *) Feature: the $bytes_sent, $connection, and $connection_requests > variables can now be used not only in the "log_format" directive. > > Changes with nginx 1.3.9 27 Nov 2012 > *) Feature: the $request_time and $msec variables can now be used > not only in the "log_format" directive. > > Are you going to add the $request_length variable to this list too? It's already done: http://trac.nginx.org/nginx/changeset/5011/nginx -- Maxim Dounin http://nginx.com/support.html From contact at jpluscplusm.com Mon Jan 28 14:24:17 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 28 Jan 2013 14:24:17 +0000 Subject: $request_length outside of log module In-Reply-To: <510687EF.50300@blueyonder.co.uk> References: <5106839E.30004@blueyonder.co.uk> <510687EF.50300@blueyonder.co.uk> Message-ID: On 28 January 2013 14:15, Richard Kearsley wrote: > On 28/01/13 14:05, Jonathan Matthews wrote: >> I would think you could get something equivalent and useful out of >> $upstream_http_content_length, as per >> http://wiki.nginx.org/HttpUpstreamModule#.24upstream_http_.24HEADER . >> YMMV. 
> > Hi, > It's not the same, $request_length is the length of what the client > (browser) sent in its request for a file, e.g. request headers, request body > I suppose I could loop through the headers and body and count their > lengths.. but not ideal My bad - I misread your original email. $request_length is actually defined here (http://wiki.nginx.org/HttpLogModule#log_format) as the request /body/'s length, i.e. not counting headers. You might get something useful out of $http_content_length instead. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From rkearsley at blueyonder.co.uk Mon Jan 28 14:38:58 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 28 Jan 2013 14:38:58 +0000 Subject: $request_length outside of log module In-Reply-To: References: <5106839E.30004@blueyonder.co.uk> <510687EF.50300@blueyonder.co.uk> Message-ID: <51068D82.4020509@blueyonder.co.uk> yeah, it's different (correct) definition here: http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format should probably update the wiki to reflect it I think $http_content_length will give the length of the response also (not the request) As Maxim says, they added it to the trunk a few days ago and I don't mind waiting for the next release :) Thanks all On 28/01/13 14:24, Jonathan Matthews wrote: > My bad - I misread your original email. $request_length is actually > defined here (http://wiki.nginx.org/HttpLogModule#log_format) as the > request /body/'s length, i.e. not counting headers. You might get > something useful out of $http_content_length instead. Jonathan From mak at ultimateserv.com Mon Jan 28 14:43:58 2013 From: mak at ultimateserv.com (Mohammad Khalaf) Date: Mon, 28 Jan 2013 16:43:58 +0200 Subject: upload speed Message-ID: Hello, I've installed nginx with Apache for a file sharing site. After installing nginx the upload speed became very slow. Can I increase the upload speed from nginx? Thank you.
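The size-related variables discussed in the $request_length thread above can be logged side by side to see exactly what each one measures; a minimal sketch (the format name "sizes" and the log path are hypothetical, the directives and variables are real):

```nginx
# Hypothetical log format comparing the request-size variables.
log_format sizes '$remote_addr "$request" '
                 'req_len=$request_length '       # request line + headers + body
                 'hdr_len=$http_content_length '  # the request's Content-Length header, if any
                 'sent=$bytes_sent';              # bytes sent back to the client

access_log /var/log/nginx/sizes.log sizes;
```

$request_length counts the request line, headers, and body together, which is why $http_content_length (the declared body length only) cannot substitute for it.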
-------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Jan 28 14:46:10 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 28 Jan 2013 14:46:10 +0000 Subject: $request_length outside of log module In-Reply-To: <51068D82.4020509@blueyonder.co.uk> References: <5106839E.30004@blueyonder.co.uk> <510687EF.50300@blueyonder.co.uk> <51068D82.4020509@blueyonder.co.uk> Message-ID: On 28 January 2013 14:38, Richard Kearsley wrote: > yeah, it's different (correct) definition here: > http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format > should probably update wiki to reflect it Ah yes, you're right. Haven't seen a doc/wiki mismatch like that before! > I think $http_content_length will give the length of the response also (not > the request) I don't think it will. http://wiki.nginx.org/HttpCoreModule#.24http_HEADER says it'll be the request header. I hadn't noticed $sent_http_* before - handy! Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Mon Jan 28 16:27:18 2013 From: nginx-forum at nginx.us (automatix) Date: Mon, 28 Jan 2013 11:27:18 -0500 Subject: cache configuration In-Reply-To: <13a811d3cdade910820d870caa088e1a.NginxMailingListEnglish@forum.nginx.org> References: <762d6355aaffc5eb8fe62639a728ee09.NginxMailingListEnglish@forum.nginx.org> <13a811d3cdade910820d870caa088e1a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <74bf75895475171c8080858eda09d878.NginxMailingListEnglish@forum.nginx.org> As I've already said: it can only be server(-side) caching, since clearing the browser cache and Ctrl+F5 don't help. But now I think it might be an issue with the virtual machine, because when I edit the files directly in the VM* and not in my host system, sometimes I can see the changes in the browser.
* The files are saved in a shared folder, so I can access them both -- from the host and also from the guest system. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235661#msg-235661 From nginx-forum at nginx.us Mon Jan 28 16:48:12 2013 From: nginx-forum at nginx.us (kolbyjack) Date: Mon, 28 Jan 2013 11:48:12 -0500 Subject: cache configuration In-Reply-To: <74bf75895475171c8080858eda09d878.NginxMailingListEnglish@forum.nginx.org> References: <762d6355aaffc5eb8fe62639a728ee09.NginxMailingListEnglish@forum.nginx.org> <13a811d3cdade910820d870caa088e1a.NginxMailingListEnglish@forum.nginx.org> <74bf75895475171c8080858eda09d878.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4684ae0b9fd979e16c5cfde79ab3d6c5.NginxMailingListEnglish@forum.nginx.org> When using virtualbox shared folders, you need to turn off sendfile in nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235662#msg-235662 From vbart at nginx.com Mon Jan 28 17:46:38 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 28 Jan 2013 21:46:38 +0400 Subject: HDD util is 100% - aio questions In-Reply-To: References: Message-ID: <201301282146.38266.vbart@nginx.com> On Monday 28 January 2013 10:53:52 Cristian Rusu wrote: > Hello > > Right now nginx manages to put hdds in the server to high util rate. > > I try to run Nginx 1.2.3 with aio support to deliver mp4 videos with the > streaming module. > I compiled the server with aio and it starts fine. > In config I set it like this [...] > directio 512; > So, you effectively switched off the page cache for any response longer than 512 bytes. > I read that sendfile should be off, but it won't send video unless I turn > it on. No, on Linux it should not. > In this case does aio work at all? How can I tell, before I wait a week and > see that maybe HDD util is not 100% all the time anymore :P > It seems so, and you have almost all the data read directly from the drive, which results in 100% disk utilization.
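Valentin's point is that "directio 512" routes essentially every response around the page cache, since almost any file exceeds 512 bytes. A common alternative is to reserve direct I/O for large files only; a hedged sketch (the location path and the 4m threshold are illustrative assumptions, not a recommendation):

```nginx
# Serve small files from the page cache via sendfile; read large
# files with aio + O_DIRECT so they don't evict the cache.
location /videos/ {
    sendfile on;          # used for files below the directio threshold
    aio on;               # asynchronous reads for files opened with directio
    directio 4m;          # only files larger than 4 MB bypass the page cache
    output_buffers 1 2m;
}
```

With both sendfile and a directio size set, nginx picks sendfile for files under the threshold and aio with direct I/O for the rest, so the cutover point controls how much of the working set stays cached.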
wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html > > --------------------------------------------------------------- > Cristian Rusu > Web Development & Electronic Publishing > > ====== > Crilance.com > Crilance.blogspot.com From nginx-forum at nginx.us Mon Jan 28 17:48:59 2013 From: nginx-forum at nginx.us (automatix) Date: Mon, 28 Jan 2013 12:48:59 -0500 Subject: cache configuration In-Reply-To: <4684ae0b9fd979e16c5cfde79ab3d6c5.NginxMailingListEnglish@forum.nginx.org> References: <762d6355aaffc5eb8fe62639a728ee09.NginxMailingListEnglish@forum.nginx.org> <13a811d3cdade910820d870caa088e1a.NginxMailingListEnglish@forum.nginx.org> <74bf75895475171c8080858eda09d878.NginxMailingListEnglish@forum.nginx.org> <4684ae0b9fd979e16c5cfde79ab3d6c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0b0788f7b06df832d383b2cfbaa4e7f2.NginxMailingListEnglish@forum.nginx.org> Great, it works! Thank you very much! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235666#msg-235666 From vbart at nginx.com Mon Jan 28 17:55:14 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 28 Jan 2013 21:55:14 +0400 Subject: cache configuration In-Reply-To: <0b0788f7b06df832d383b2cfbaa4e7f2.NginxMailingListEnglish@forum.nginx.org> References: <762d6355aaffc5eb8fe62639a728ee09.NginxMailingListEnglish@forum.nginx.org> <4684ae0b9fd979e16c5cfde79ab3d6c5.NginxMailingListEnglish@forum.nginx.org> <0b0788f7b06df832d383b2cfbaa4e7f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301282155.14117.vbart@nginx.com> On Monday 28 January 2013 21:48:59 automatix wrote: > Great, it works! Thank you very much! > That is an old well-known bug in VirtualBox. See: https://www.virtualbox.org/ticket/819 https://www.virtualbox.org/ticket/9069 http://wiki.nginx.org/Pitfalls#Config_Changes_Not_Reflected wbr, Valentin V.
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Jan 28 18:10:00 2013 From: nginx-forum at nginx.us (automatix) Date: Mon, 28 Jan 2013 13:10:00 -0500 Subject: cache configuration In-Reply-To: <201301282155.14117.vbart@nginx.com> References: <201301282155.14117.vbart@nginx.com> Message-ID: <9873d775e28382d525f0c8e7b75ccf50.NginxMailingListEnglish@forum.nginx.org> Didn't know it. Thank you for the info! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235668#msg-235668 From counterveil at gmail.com Mon Jan 28 18:37:51 2013 From: counterveil at gmail.com (Christopher Opena) Date: Mon, 28 Jan 2013 10:37:51 -0800 Subject: Buffer messages in log even with buffering turned off Message-ID: Howdy folks, We've been seeing some recurring entries in our log that look like this: 2013/01/28 18:30:36 [warn] 2657#0: *210608 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000772, client: , server: , request: "POST /upload/publish_thumbnail HTTP/1.1", host: "" Based on searching through the mailing list's previous questions we found that we could set the following directives in order to attempt to disable it: client_max_body_size 0; proxy_max_temp_file_size 0; At first we only had this in the 'http' context but then also copied those same configurations into the 'server' and 'location' contexts to see if that would help since the messages continued to appear in the log. Even after adding it to 'server' and 'location', the messages continue - is there another configuration directive that we might be missing? Our config is pretty simple and doesn't have any location or server contexts that I could have missed (also verified that the context that this message appears under is covered by the two directives that we set to 0). Thanks in advance, Chris. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Mon Jan 28 18:58:17 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 28 Jan 2013 18:58:17 +0000 Subject: Buffer messages in log even with buffering turned off In-Reply-To: References: Message-ID: <20130128185817.GN4332@craic.sysops.org> On Mon, Jan 28, 2013 at 10:37:51AM -0800, Christopher Opena wrote: Hi there, > We've been seeing some recurring logs in our log that look like this: > > 2013/01/28 18:30:36 [warn] 2657#0: *210608 a client request body is > buffered to a temporary file /var/cache/nginx/client_temp/0000000772, > client: , server: , request: "POST > /upload/publish_thumbnail HTTP/1.1", host: "" > > Based on searching through the mailing list's previous questions we found > that we could set the following directives in order to attempt to disable > it: If you want to ensure that the client request body is not buffered to disk, you want to make sure that your client_body_buffer_size is larger than your client_max_body_size. And be willing to refuse any client request body bigger than that. > client_max_body_size 0; http://nginx.org/r/client_max_body_size Sets the maximum allowed size of the client request body. Setting size to 0 disables client request body size checking. > proxy_max_temp_file_size 0; http://nginx.org/r/proxy_max_temp_file_size For responses from the proxied server. Look at http://nginx.org/en/docs/http/ngx_http_core_module.html You probably want directives which start "client_body_". f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Jan 28 19:20:27 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Mon, 28 Jan 2013 14:20:27 -0500 Subject: 1.3.11 Issues? In-Reply-To: <201301271922.01491.vbart@nginx.com> References: <201301271922.01491.vbart@nginx.com> Message-ID: <48365f5c904ed1d9dd8ca7eb020de6f9.NginxMailingListEnglish@forum.nginx.org> Valentin V. 
Bartenev Wrote: ------------------------------------------------------- > Please, try the new patch: > http://nginx.org/patches/spdy/patch.spdy-59_1.3.11.txt > > The problem should be fixed now. > > wbr, Valentin V. Bartenev Is there a possibility the patch introduced an issue where connections don't expire (like ever)? Our load balancer in front of web server shows 1,511 connections, Nginx is reporting 10,810 connections, and the number of connections as reported by Nginx is just growing and growing and does not even remotely coincide with actual traffic/users/connections. This only seemed to start after the patch. http://f.cl.ly/items/0u2E2U0P3L280l3X033s/Image%202013.01.28%2011:19:46%20AM.png Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235671#msg-235671 From counterveil at gmail.com Mon Jan 28 19:25:12 2013 From: counterveil at gmail.com (Christopher Opena) Date: Mon, 28 Jan 2013 11:25:12 -0800 Subject: Buffer messages in log even with buffering turned off In-Reply-To: <20130128185817.GN4332@craic.sysops.org> References: <20130128185817.GN4332@craic.sysops.org> Message-ID: On Mon, Jan 28, 2013 at 10:58 AM, Francis Daly wrote: > > If you want to ensure that the client request body is not buffered to > disk, you want to make sure that your client_body_buffer_size is larger > than your client_max_body_size. And be willing to refuse any client > request body bigger than that. > > > client_max_body_size 0; > > http://nginx.org/r/client_max_body_size > > Sets the maximum allowed size of the client request body. Setting size > to 0 disables client request body size checking. > > > proxy_max_temp_file_size 0; > > http://nginx.org/r/proxy_max_temp_file_size > > For responses from the proxied server. > > > Look at http://nginx.org/en/docs/http/ngx_http_core_module.html > > You probably want directives which start "client_body_". Thanks for the rapid reply, Francis. 
So it seems that even if we disable client request body size checking altogether (setting to 0), we still have to set all the other client_body_ checks? The primary aim is to just let Nginx pass through any traffic regardless of size because we are mostly using Nginx as a proxy / pass-through / load-balancing mechanism doing a hand-off to Apache until we can finally get our app off Apache and fully convert to Nginx. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Jan 28 19:35:25 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 28 Jan 2013 23:35:25 +0400 Subject: 1.3.11 Issues? In-Reply-To: <48365f5c904ed1d9dd8ca7eb020de6f9.NginxMailingListEnglish@forum.nginx.org> References: <201301271922.01491.vbart@nginx.com> <48365f5c904ed1d9dd8ca7eb020de6f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201301282335.26061.vbart@nginx.com> On Monday 28 January 2013 23:20:27 digitalpoint wrote: > Valentin V. Bartenev Wrote: > ------------------------------------------------------- > > > Please, try the new patch: > > http://nginx.org/patches/spdy/patch.spdy-59_1.3.11.txt > > > > The problem should be fixed now. > > > > wbr, Valentin V. Bartenev > > Is there a possibility the patch introduced an issue where connections > don't expire (like ever)? > > Our load balancer in front of web server shows 1,511 connections, Nginx is > reporting 10,810 connections, and the number of connections as reported by > Nginx is just growing and growing and does not even remotely coincide with > actual traffic/users/connections. This only seemed to start after the > patch. Do you mean numbers from stub_status? Then please check the error log. It may also indicate periodically crashing worker processes. wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html > > http://f.cl.ly/items/0u2E2U0P3L280l3X033s/Image%202013.01.28%2011:19:46%20A > M.png > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,235090,235671#msg-235671 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Jan 28 19:43:03 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 28 Jan 2013 19:43:03 +0000 Subject: Buffer messages in log even with buffering turned off In-Reply-To: References: <20130128185817.GN4332@craic.sysops.org> Message-ID: <20130128194303.GO4332@craic.sysops.org> On Mon, Jan 28, 2013 at 11:25:12AM -0800, Christopher Opena wrote: > On Mon, Jan 28, 2013 at 10:58 AM, Francis Daly wrote: > > If you want to ensure that the client request body is not buffered to > > disk, you want to make sure that your client_body_buffer_size is larger > > than your client_max_body_size. And be willing to refuse any client > > request body bigger than that. > Thanks for the rapid reply, Francis. So it seems that even if we disable > client request body size checking altogether (setting to 0), we still have > to set all the other client_body_ checks? The only client body checks are client_max_body_size and client_body_timeout. The other client_body_* directives are (mostly) independent of those. When the client sends something to nginx, nginx buffers it somewhere before sending it upstream. That "somewhere" can be disk or ram. If it is bigger than your configured client_body_buffer_size, it goes to disk, and lets you know that that happened. > The primary aim is to just let > Nginx pass through any traffic regardless of size because we are mostly > using Nginx as a proxy / pass-through / load-balancing mechanism doing a > hand-off to Apache until we can finally get our app off Apache and fully > convert to Nginx. That's what nginx does. 
Except that it buffers input before sending upstream. Occasional buffering to disk isn't a bad thing. f -- Francis Daly francis at daoine.org From cmfileds at gmail.com Mon Jan 28 20:08:31 2013 From: cmfileds at gmail.com (CM Fields) Date: Mon, 28 Jan 2013 15:08:31 -0500 Subject: Delayed 503 limit_req response - feature request In-Reply-To: <20130126134024.GK40753@mdounin.ru> References: <20130126134024.GK40753@mdounin.ru> Message-ID: On Sat, Jan 26, 2013 at 8:40 AM, Maxim Dounin wrote: > Hello! > > On Fri, Jan 25, 2013 at 02:09:21PM -0500, CM Fields wrote: > >> Nginx Feature Request: Delay 503 limit_req responses >> >> We would like to delay the 503 response to clients which exceed the >> limit_req value rather then send an immediate 503. The delay would be >> in milliseconds or in tenths of a second increments for example. > > Try the following trivial module: > > http://mdounin.ru/hg/ngx_http_delay_module/ > > With the config like > > error_page 503 /errors/503.html; > > location = /errors/503.html { > delay 200ms; > } > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Maxim, Thank you! The ngx_http_delay_module works perfectly. From nginx-forum at nginx.us Mon Jan 28 20:32:18 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Mon, 28 Jan 2013 15:32:18 -0500 Subject: 1.3.11 Issues? In-Reply-To: <201301282335.26061.vbart@nginx.com> References: <201301282335.26061.vbart@nginx.com> Message-ID: <98af163cff46e6bf3540cf746942150d.NginxMailingListEnglish@forum.nginx.org> Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Monday 28 January 2013 23:20:27 digitalpoint wrote: > > Valentin V. 
Bartenev Wrote: > > ------------------------------------------------------- > > > > > Please, try the new patch: > > > http://nginx.org/patches/spdy/patch.spdy-59_1.3.11.txt > > > > > > The problem should be fixed now. > > > > > > wbr, Valentin V. Bartenev > > > > Is there a possibility the patch introduced an issue where > connections > > don't expire (like ever)? > > > > Our load balancer in front of web server shows 1,511 connections, > Nginx is > > reporting 10,810 connections, and the number of connections as > reported by > > Nginx is just growing and growing and does not even remotely > coincide with > > actual traffic/users/connections. This only seemed to start after > the > > patch. > > Do you mean numbers from stub_status? Then please check the error log. > It may also indicate periodically crashing worker processes. > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > > > > > http://f.cl.ly/items/0u2E2U0P3L280l3X033s/Image%202013.01.28%2011:19:4 > 6%20A > > M.png > > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,235090,235671#msg-235671 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx That does look like it's the case... 
2013/01/28 00:38:19 [alert] 30235#0: worker process 12408 exited on signal 11 2013/01/28 02:00:24 [alert] 30235#0: worker process 15737 exited on signal 11 2013/01/28 02:24:31 [alert] 30235#0: worker process 14897 exited on signal 11 2013/01/28 02:25:19 [alert] 30235#0: worker process 22628 exited on signal 11 2013/01/28 03:03:59 [alert] 30235#0: worker process 23528 exited on signal 11 2013/01/28 03:17:58 [alert] 30235#0: worker process 7916 exited on signal 11 2013/01/28 03:37:03 [alert] 30235#0: worker process 24767 exited on signal 11 2013/01/28 04:02:07 [alert] 30235#0: worker process 23483 exited on signal 11 2013/01/28 04:27:26 [alert] 30235#0: worker process 25164 exited on signal 11 2013/01/28 05:41:34 [alert] 30235#0: worker process 19980 exited on signal 11 2013/01/28 05:47:18 [alert] 30235#0: worker process 26566 exited on signal 11 2013/01/28 08:39:35 [alert] 30235#0: worker process 30184 exited on signal 11 2013/01/28 08:46:40 [alert] 30235#0: worker process 3482 exited on signal 11 2013/01/28 09:42:16 [alert] 30235#0: worker process 27395 exited on signal 11 2013/01/28 10:14:01 [alert] 30235#0: worker process 5439 exited on signal 11 2013/01/28 10:28:36 [alert] 30235#0: worker process 29917 exited on signal 11 2013/01/28 11:17:04 [alert] 30235#0: worker process 6869 exited on signal 11 I guess I should roll back to the old SPDY patch... :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235090,235680#msg-235680 From lists at ruby-forum.com Tue Jan 29 05:23:37 2013 From: lists at ruby-forum.com (Kibret T.)
Date: Tue, 29 Jan 2013 06:23:37 +0100 Subject: Euro Truck Simulator 2 Free Full Version Download In-Reply-To: <157deeef37b1b847d7d3df96f9b8259c.NginxMailingListEnglish@forum.nginx.org> References: <157deeef37b1b847d7d3df96f9b8259c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <08cdc470f4805a175b58d857c26f5bba@ruby-forum.com> -- Posted via http://www.ruby-forum.com/. From dmiller at amfes.com Tue Jan 29 13:32:07 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Tue, 29 Jan 2013 05:32:07 -0800 Subject: VirtualBox Linux Host & Guests Message-ID: Not directly on topic - but since I think there are people here who use VirtualBox with Nginx I'll ask. At first I thought I had nginx configuration problems - but even if I do, I know my core issue is VirtualBox/Linux networking. I have a Linux host (AMD Opteron, 6-core, 16GB) with multiple VirtualBox VM's. After tests and experiments, I'm now using the Intel T Server network interface for the guests (I WAS using the virtio - but performance was horrible). The guests are bridged. I'm using Ubuntu "Precise" for both host & guests and "Guest Extensions" is installed on all guests. My first question - is anyone using the virtio interface successfully? And have you measured your performance to know it's working? But the real question - what, if any, adjustments have you made, either to VirtualBox or to Linux, to achieve optimum network performance? Having switched to the Intel interface my Linux guests have gotten much better - but still not where I think they should be (which to me means the virtualization should be transparent - guests should have the same speed as the host). I have Windows guests that are working perfectly - it's only the Linux guests that have issues.
-- Daniel From nginx-forum at nginx.us Tue Jan 29 19:21:28 2013 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 29 Jan 2013 14:21:28 -0500 Subject: VirtualBox Linux Host & Guests In-Reply-To: References: Message-ID: Stick to 'Intel PRO/1000 MT Server (82545EM)' for every guest that can use it, even debian/nginx (6.0.6) blasts with it, I was using virtio for some time which does do what it's supposed to do, lower cpu use and less virtualization overhead, but the performance s*cks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235687,235702#msg-235702 From dmiller at amfes.com Tue Jan 29 19:29:28 2013 From: dmiller at amfes.com (Daniel L. Miller) Date: Tue, 29 Jan 2013 11:29:28 -0800 Subject: VirtualBox Linux Host & Guests In-Reply-To: References: <5107CF57.6010802@amfes.com> Message-ID: On 1/29/2013 11:21 AM, itpp2012 wrote: > Stick to 'Intel PRO/1000 MT Server (82545EM)' for every guest that can use > it, even debian/nginx (6.0.6) blasts with it, I was using virtio for some > time which does do what it's supposed to do, lower cpu use and less > virtualization overhead, but the performance s*cks. > Regarding the virtio - that's exactly what I found. Is there a reason to use the 'MT' vs the 'T' if I only need one interface? -- Daniel From nginx-forum at nginx.us Tue Jan 29 21:47:33 2013 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 29 Jan 2013 16:47:33 -0500 Subject: VirtualBox Linux Host & Guests In-Reply-To: References: Message-ID: <989a00b218ad717e48942b1594344db8.NginxMailingListEnglish@forum.nginx.org> Daniel L. Miller Wrote: ------------------------------------------------------- > Is there a reason to use the 'MT' vs the 'T' if I only need one > interface? Other than better performance and a wider range of support by default, no.
You're not going to see 2 nics anyway, the MT has a few more tuning options than the T, I've done some extensive testing with all of them and their several drivers with the MT having better use of the bandwidth and best throughput (near realtime) even on a cluster with 120 vm's. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235687,235708#msg-235708 From nginx-forum at nginx.us Tue Jan 29 22:09:22 2013 From: nginx-forum at nginx.us (iLinux85) Date: Tue, 29 Jan 2013 17:09:22 -0500 Subject: nginx high load average Message-ID: Hello, I have a server running as a shared server: Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz, 4 cores and 16 GB RAM, running on CentOS release 5.9 (Final), with upload and download via PHP scripts like Rapidshare. The problem is I have a high number of connections on this server and I want to know how to configure nginx correctly. Here is my current nginx.conf: user nobody; # no need for more workers in the proxy mode worker_processes 1; error_log logs/error.log info; worker_rlimit_nofile 8192; events { worker_connections 51200; # you might need to increase this setting for busy servers use epoll; # Linux kernels 2.4.x change to rtsig } http { server_names_hash_max_size 2048; include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 10; gzip on; gzip_min_length 1100; gzip_buffers 4 32k; gzip_types text/plain application/x-javascript text/xml text/css; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; connection_pool_size 1024; client_header_buffer_size 4k; large_client_header_buffers 4 32k; request_pool_size 4k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /home/proxy_temp; include "/usr/local/nginx/conf/vhost.conf"; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235709,235709#msg-235709 From dmiller at amfes.com Wed Jan 30 00:12:35 2013 From: dmiller at amfes.com (Daniel L.
Miller) Date: Tue, 29 Jan 2013 16:12:35 -0800 Subject: VirtualBox Linux Host & Guests In-Reply-To: <989a00b218ad717e48942b1594344db8.NginxMailingListEnglish@forum.nginx.org> References: <51082318.3060703@amfes.com> <989a00b218ad717e48942b1594344db8.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 1/29/2013 1:47 PM, itpp2012 wrote: > Daniel L. Miller Wrote: > ------------------------------------------------------- >> Is there a reason to use the 'MT' vs the 'T' if I only need one >> interface? > Other then better performance and a wider range of support by default, no. > You're not going to see 2 nics anyway, the MT has a few more tuning options > then the T, I've done some extensive testing with all of them and their > several drivers with the MT having better use of the bandwidth and best > throughput (near realtime) even on a cluster with 120 vm's. > And here I thought I was being clever by using the single-NIC model. Let's see how it works... Hmm...don't really see any difference but I'll take your word for it and leave it set with the MT. But I'm still left with wondering why my Linux guest (now using the MT) is slower than the Windows guest (still using T). It's acceptable speed...just not full throttle and it leaves me wanting more. Are there any tweaks you've done to either the host or guest? -- Daniel From liulantao at gmail.com Wed Jan 30 02:50:40 2013 From: liulantao at gmail.com (Liu Lantao) Date: Wed, 30 Jan 2013 10:50:40 +0800 Subject: nginx high load average In-Reply-To: References: Message-ID: 'worker_processes' should be equal to total numbers of CPU cores. What does 'high connections' mean? Please show the result of 'cat /proc/net/sockstat', and the content of ' /usr/local/nginx/conf/vhost.conf'. 
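[Editor's note: the advice above — one nginx worker per CPU core — can be checked from a script. A minimal, hedged sketch; `worker_processes` is the real nginx directive, the Python is only illustrative:]

```python
import os

# Number of logical CPU cores visible to the OS; the usual starting
# point for nginx is one worker process per core.
cores = os.cpu_count() or 1

# Emit the matching nginx directive. Note that nginx 1.3.8+/1.2.5+
# can do this automatically with "worker_processes auto;".
directive = f"worker_processes {cores};"
print(directive)
```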
On Wed, Jan 30, 2013 at 6:09 AM, iLinux85 wrote: > hello > > i have server running as shared server Intel(R) Xeon(R) CPU E3-1225 V2 @ > 3.20GHz 4 core and 16 g.b ram running onCentOS release 5.9 (Final) upload > and download via php scripts like rapidshare > > the problem is i have high connections on this server and i want to know > how > could i configure the nginx correctly > > here is my current nginx.conf > > > user nobody; > # no need for more workers in the proxy mode > worker_processes 1; > > error_log logs/error.log info; > > worker_rlimit_nofile 8192; > > events { > worker_connections 51200; # you might need to increase this setting for > busy servers > use epoll; # Linux kernels 2.4.x change to rtsig > } > > http { > server_names_hash_max_size 2048; > > include mime.types; > default_type application/octet-stream; > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > > keepalive_timeout 10; > > gzip on; > gzip_min_length 1100; > gzip_buffers 4 32k; > gzip_types text/plain application/x-javascript text/xml text/css; > ignore_invalid_headers on; > > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > connection_pool_size 1024; > client_header_buffer_size 4k; > large_client_header_buffers 4 32k; > request_pool_size 4k; > output_buffers 4 32k; > postpone_output 1460; > proxy_temp_path /home/proxy_temp; > include "/usr/local/nginx/conf/vhost.conf"; > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,235709,235709#msg-235709 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Liu Lantao EMAIL: liulantao ( at ) gmail ( dot ) com ; WEBSITE: http://www.liulantao.com/portal . -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haifeng.813 at gmail.com Wed Jan 30 03:03:15 2013 From: haifeng.813 at gmail.com (Liu Haifeng) Date: Wed, 30 Jan 2013 11:03:15 +0800 Subject: Does zero buffer allowed in the output buffer chain? Message-ID: Hi, I am writing an HTTP handler module. I found that when the last buffer in the output buffer chain has zero size, I get an alert in the error log about a zero-size output buffer, and the browser gets no response after waiting a while. This is not what I expect, since the buffer chain has data; only the last buffer has zero size. Is this a feature, or did I miss something? PS: nginx version: 1.2.4, gzip option is enabled. From ruilue.zengrl at alibaba-inc.com Wed Jan 30 05:20:59 2013 From: ruilue.zengrl at alibaba-inc.com (=?gb2312?B?1PjI8MLU?=) Date: Wed, 30 Jan 2013 05:20:59 +0000 Subject: nginx 411 error Message-ID: <0EF79984908C3844AA4E8E15EFDB335B2F4EC422@CNHZ-EXMAIL-07.ali.com> Hello, when i change my webserver from apache to tengine,sometimes occurs [411 Length Required] error. the request method is post ,but post body is empty. but when i use apache ,this error never occurs.Is there any solution ? Thank you! ________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you. From steve at greengecko.co.nz Wed Jan 30 06:09:20 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 30 Jan 2013 19:09:20 +1300 Subject: nginx high load average In-Reply-To: References: Message-ID: <1359526160.6060.1558.camel@steve-new> It's very unlikely that nginx is causing a high load average. If I separate it off from the php/database processing, I can run it on a low-power, single-threaded VPS with 128MB memory. What process is really using your CPU?
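[Editor's note: before tuning nginx.conf it is worth answering Steve's question — what is actually burning the CPU. A hedged sketch of the load-versus-cores rule of thumb; the `ps` invocation in the comment is a standard way to rank processes by CPU usage:]

```python
import os

# 1-, 5- and 15-minute load averages (Unix only).
one_min, five_min, fifteen_min = os.getloadavg()
cores = os.cpu_count() or 1

# Rule of thumb: sustained load above the core count means processes are
# queueing for CPU (or stuck in I/O wait). On a proxy box nginx itself is
# rarely the culprit -- rank the real consumers with, e.g.:
#   ps -eo pid,comm,%cpu --sort=-%cpu | head
overloaded = one_min > cores
print(f"load {one_min:.2f} across {cores} cores -> "
      f"{'investigate' if overloaded else 'ok'}")
```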
Steve On Tue, 2013-01-29 at 17:09 -0500, iLinux85 wrote: > hello > > i have server running as shared server Intel(R) Xeon(R) CPU E3-1225 V2 @ > 3.20GHz 4 core and 16 g.b ram running onCentOS release 5.9 (Final) upload > and download via php scripts like rapidshare > > the problem is i have high connections on this server and i want to know how > could i configure the nginx correctly > > here is my current nginx.conf > > > user nobody; > # no need for more workers in the proxy mode > worker_processes 1; > > error_log logs/error.log info; > > worker_rlimit_nofile 8192; > > events { > worker_connections 51200; # you might need to increase this setting for > busy servers > use epoll; # Linux kernels 2.4.x change to rtsig > } > > http { > server_names_hash_max_size 2048; > > include mime.types; > default_type application/octet-stream; > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > > keepalive_timeout 10; > > gzip on; > gzip_min_length 1100; > gzip_buffers 4 32k; > gzip_types text/plain application/x-javascript text/xml text/css; > ignore_invalid_headers on; > > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > connection_pool_size 1024; > client_header_buffer_size 4k; > large_client_header_buffers 4 32k; > request_pool_size 4k; > output_buffers 4 32k; > postpone_output 1460; > proxy_temp_path /home/proxy_temp; > include "/usr/local/nginx/conf/vhost.conf"; > } > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235709,235709#msg-235709 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Jan 30 11:04:28 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 30 Jan 2013 06:04:28 -0500 Subject: VirtualBox Linux Host & Guests In-Reply-To: References: Message-ID: <379434f0c9fc3e43eca51aa02f680fdb.NginxMailingListEnglish@forum.nginx.org> It also depends on what the Host has as a nic and how its settings are set, ea. QoS and LLTP are useless protocols but take resources away, force VM's to use 1gb FD, another thing is to stick to VBox 3.2, force the GA to use timesync only. Ea. use IPerf to push settings and boundaries. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235687,235728#msg-235728 From nginx-forum at nginx.us Wed Jan 30 11:57:01 2013 From: nginx-forum at nginx.us (pricne5) Date: Wed, 30 Jan 2013 06:57:01 -0500 Subject: Proxing webservices (Webservices, WSDL, SOAP) Message-ID: <98f2207d4940a759786c6fc8ed53579b.NginxMailingListEnglish@forum.nginx.org> What is the correct way to proxing to any remote webservice? How to use nginx in front of IIS or other web server, who serves webservices? As an example we have any remote SOAP webservice at http://B:8089/getClientService/getClientService?wsdl. In SOAP document of these webservice we have endpoint location: ..... ..... 
If we use proxy_pass: server { listen 80; server_name A; location / { proxy_pass http://B:8089/; } nginx won't rewrite (or change) the SOAP endpoint address to point to itself, so any further SOAP requests will fail, because the requesting side sends them directly to the host named in the SOAP endpoint location :( Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235730,235730#msg-235730 From mdounin at mdounin.ru Wed Jan 30 12:37:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Jan 2013 16:37:57 +0400 Subject: nginx 411 error In-Reply-To: <0EF79984908C3844AA4E8E15EFDB335B2F4EC422@CNHZ-EXMAIL-07.ali.com> References: <0EF79984908C3844AA4E8E15EFDB335B2F4EC422@CNHZ-EXMAIL-07.ali.com> Message-ID: <20130130123757.GF40753@mdounin.ru> Hello! On Wed, Jan 30, 2013 at 05:20:59AM +0000, ??? wrote: > Hello, > > when i change my webserver from apache to tengine,sometimes occurs [411 Length Required] error. > > the request method is post ,but post body is empty. > > but when i use apache ,this error never occurs.Is there any solution ? The 411 Length Required used to be returned by nginx if client tried to use chunked Transfer-Encoding. Support for chunked Transfer-Encoding was introduced in nginx 1.3.9+, you have to upgrade to recent nginx 1.3.x if you need it. -- Maxim Dounin http://nginx.com/support.html From ar at xlrs.de Wed Jan 30 12:57:27 2013 From: ar at xlrs.de (Axel) Date: Wed, 30 Jan 2013 13:57:27 +0100 Subject: nginx 411 error In-Reply-To: <20130130123757.GF40753@mdounin.ru> References: <0EF79984908C3844AA4E8E15EFDB335B2F4EC422@CNHZ-EXMAIL-07.ali.com> <20130130123757.GF40753@mdounin.ru> Message-ID: <05d116ac6a1a6edc55343247816035c8@webmail.xlrs.de> Hi Maxim, are you sure that an upgrade to nginx 1.3.x is required? I had this issue a while ago and I solved it by adding chunkin on; error_page 411 = @my_411_error; location @my_411_error { chunkin_resume; } to my vHost configuration. I never had this error again.
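[Editor's note: for context on the 411 discussion above — a chunked request carries no Content-Length; each chunk is prefixed with its size in hex and the body ends with a zero-length chunk. A minimal, illustrative sketch of the wire format (this is what pre-1.3.9 nginx, without the third-party chunkin module, rejects with 411):]

```python
def encode_chunked(parts):
    """Encode an iterable of byte strings as an HTTP/1.1 chunked body."""
    out = b""
    for part in parts:
        if part:  # zero-length chunks are reserved for the terminator
            out += b"%x\r\n%s\r\n" % (len(part), part)
    return out + b"0\r\n\r\n"  # the last-chunk marker ends the body

body = encode_chunked([b"hello, ", b"world"])
print(body)  # -> b'7\r\nhello, \r\n5\r\nworld\r\n0\r\n\r\n'
```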
rgds, Axel Am 30.01.2013 13:37, schrieb Maxim Dounin: > Hello! > > On Wed, Jan 30, 2013 at 05:20:59AM +0000, ??? wrote: > >> Hello, >> >> when i change my webserver from apache to tengine,sometimes occurs >> [411 Length Required] error. >> >> the request method is post ,but post body is empty. >> >> but when i use apache ,this error never occurs.Is there any solution >> ? > > The 411 Length Required used to be returned by nginx if client > tried to use chunked Transfer-Encoding. Support for chunked > Transfer-Encoding was introduced in nginx 1.3.9+, you have to > upgrade to recent nginx 1.3.x if you need it. -- Never argue with an idiot; people watching may not tell the difference From edigarov at qarea.com Wed Jan 30 13:25:00 2013 From: edigarov at qarea.com (Gregory Edigarov) Date: Wed, 30 Jan 2013 15:25:00 +0200 Subject: Nginx temp file being deleted errenously In-Reply-To: <2200278a7c45e3b141fd03ec4a4b8133.NginxMailingListEnglish@forum.nginx.org> References: <2200278a7c45e3b141fd03ec4a4b8133.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51091F2C.4050808@qarea.com> On 01/26/2013 02:36 AM, tulumvinh at gmail.com wrote: > Hello everyone, > > My company processes millions of requests for file uploads/downloads and we > are looking at Nginx to replace Apache. We are running into a problem that > I hope you can help -- I have searched the web and even read Nginx HTTP > Server book by Clement Nedelcu. > > At a high level, the flow of a request through our system is as follows: > user agent posts the data to upload --> Ngixn acting as a reverse proxy > --> Object store -> post action of passing request to Uwsgi. > > Problem: when uploading a large file (~ 100MB) , the post action is > failing. The error message seen in error.log is "8 sendfile() failed (9: > Bad file descriptor) while sending request to upstream," message. 
This occurs even when there is no load on the system. > > When I enable "debug", I see that after Nginx streams the bytes to the object store, the temp file is deleted. When the post action is executed it fails as the temp file is gone. > > > *Note: this error is NOT seen when uploading small files. I would store the file in some temporary space on the frontend, and then forward it to the storage backend. -- With best regards, Gregory Edigarov From mdounin at mdounin.ru Wed Jan 30 15:09:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Jan 2013 19:09:52 +0400 Subject: nginx 411 error In-Reply-To: <05d116ac6a1a6edc55343247816035c8@webmail.xlrs.de> References: <0EF79984908C3844AA4E8E15EFDB335B2F4EC422@CNHZ-EXMAIL-07.ali.com> <20130130123757.GF40753@mdounin.ru> <05d116ac6a1a6edc55343247816035c8@webmail.xlrs.de> Message-ID: <20130130150952.GG40753@mdounin.ru> Hello! On Wed, Jan 30, 2013 at 01:57:27PM +0100, Axel wrote: > Hi Maxim, > > are you sure that an upgrade to nginx 1.3.x is required? > > I had this issue a while ago and I solved it by adding > > chunkin on; > error_page 411 = @my_411_error; > location @my_411_error { > chunkin_resume; > } > > to my vHost configuration. > I never had this error again. This uses agentzh's chunkin module, which is probably good enough if you have no other options, but a) not something officially supported and b) known to have limitations (e.g., AFAIR it doesn't work with the DAV module). With support for chunked Transfer-Encoding available in 1.3.9+ I would recommend using nginx 1.3.x instead. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Jan 30 15:33:25 2013 From: nginx-forum at nginx.us (revirii) Date: Wed, 30 Jan 2013 10:33:25 -0500 Subject: proxy_pass to backend (varnish): delivered ip? Message-ID: <3afc55e89350ff798f7a5f1e73567cc4.NginxMailingListEnglish@forum.nginx.org> Hey, maybe the solution is really simple, but i can't find it.
nginx (1.2.1) handles ssl and proxies the traffic to the backend (varnish, which also handles http): location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:80; } The requests are sent to varnish, in the varnish logs i see them, they look like (shortened): 127.0.0.1 - - [30/Jan/2013:16:06:54 +0100] ......... So varnish receives 127.0.0.1 from nginx, but i want varnish to receive the external ip. For testing reasons i switched the ip to the lan ip (which varnish also listens to): location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://192.168.0.1:80; } And now the requests in varnish log look like: 192.168.0.1 - - [30/Jan/2013:16:16:25 +0100] ......... Well, where's the error? How can it be done that varnish receives the external ip (smth. like 84.156.23.145) from nginx? I suppose it's quite simple but i can't see it... :-/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235737,235737#msg-235737 From francis at daoine.org Wed Jan 30 15:53:17 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 30 Jan 2013 15:53:17 +0000 Subject: proxy_pass to backend (varnish): delivered ip? In-Reply-To: <3afc55e89350ff798f7a5f1e73567cc4.NginxMailingListEnglish@forum.nginx.org> References: <3afc55e89350ff798f7a5f1e73567cc4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130130155317.GT4332@craic.sysops.org> On Wed, Jan 30, 2013 at 10:33:25AM -0500, revirii wrote: Hi there, > The requests are sent to varnish, in the varnish logs i see them, they look > like (shortened): > > 127.0.0.1 - - [30/Jan/2013:16:06:54 +0100] ......... > > So varnish receives 127.0.0.1 from nginx, but i want varnish to receive the > external ip. The connection to varnish comes from the address 127.0.0.1. That's what it logs here. 
> And now the requests in varnish log look like: > > 192.168.0.1 - - [30/Jan/2013:16:16:25 +0100] ......... The connection to varnish comes from the address 192.168.0.1. That's what it logs here. > Well, where's the error? How can it be done that varnish receives the > external ip (smth. like 84.156.23.145) from nginx? I suppose it's quite > simple but i can't see it... :-/ varnish could see the "true" remote_addr in the X-Real-IP: http header that you send, if it looked there. If you want varnish to log the contents of that header, or to log that instead of what it sees as the connecting ip address, you'll be better off to ask in a place where people know varnish. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Jan 30 16:13:38 2013 From: nginx-forum at nginx.us (revirii) Date: Wed, 30 Jan 2013 11:13:38 -0500 Subject: proxy_pass to backend (varnish): delivered ip? In-Reply-To: <20130130155317.GT4332@craic.sysops.org> References: <20130130155317.GT4332@craic.sysops.org> Message-ID: <20a09ca4c02f27639503f5a18d281f1b.NginxMailingListEnglish@forum.nginx.org> Hey, thx for the fast answer :-) > The connection to varnish comes from the address 127.0.0.1. That's > what it logs here. > The connection to varnish comes from the address 192.168.0.1. That's > what it logs here. But why? The only difference is the proxy_pass statement: proxy_pass http://127.0.0.1:80; vs. proxy_pass http://192.168.0.1:80; No other changes were done, and no changes in varnish config. > varnish could see the "true" remote_addr in the X-Real-IP: http header > that you send, if it looked there. > > If you want varnish to log the contents of that header, or to log that > instead of what it sees as the connecting ip address, you'll be better > off to ask in a place where people know varnish. Hm, would be interesting which param varnish checks. It can't be $remote_addr, so it has to be the address nginx proxies to (127.0.01 or 192.168.0.1). Very strange. 
So it seems to be a varnish problem? :-/ revirii Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235737,235739#msg-235739 From ar at xlrs.de Wed Jan 30 16:15:59 2013 From: ar at xlrs.de (Axel) Date: Wed, 30 Jan 2013 17:15:59 +0100 Subject: nginx 411 error In-Reply-To: <20130130150952.GG40753@mdounin.ru> References: <0EF79984908C3844AA4E8E15EFDB335B2F4EC422@CNHZ-EXMAIL-07.ali.com> <20130130123757.GF40753@mdounin.ru> <05d116ac6a1a6edc55343247816035c8@webmail.xlrs.de> <20130130150952.GG40753@mdounin.ru> Message-ID: <0da0d257a3a6c0c7212e9a4c83d4e36d@webmail.xlrs.de> Thanks for clarification! rgds, Axel Am 30.01.2013 16:09, schrieb Maxim Dounin: > Hello! > > On Wed, Jan 30, 2013 at 01:57:27PM +0100, Axel wrote: > >> Hi Maxim, >> >> are you sure that an upgrade to nginx 1.3.x is required? >> >> I had this issue a while ago and I solved it by adding >> >> chunkin on; >> error_page 411 = @my_411_error; >> location @my_411_error { >> chunkin_resume; >> } >> >> to my vHost configuration. >> I never had this error again. > > This uses agentzh chunkin module, which is probably good enough if > you have no other options, but a) not something officially > supported and b) known to have limitations (e.g., AFAIR it doesn't > work with DAV module). > > With support for chunked Transfer-Encoding available in 1.3.9+ I > would recommend using nginx 1.3.x instead. -- Never argue with an idiot; people watching may not tell the difference From francis at daoine.org Wed Jan 30 16:20:03 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 30 Jan 2013 16:20:03 +0000 Subject: proxy_pass to backend (varnish): delivered ip? 
In-Reply-To: <20a09ca4c02f27639503f5a18d281f1b.NginxMailingListEnglish@forum.nginx.org> References: <20130130155317.GT4332@craic.sysops.org> <20a09ca4c02f27639503f5a18d281f1b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130130162003.GU4332@craic.sysops.org> On Wed, Jan 30, 2013 at 11:13:38AM -0500, revirii wrote: Hi there, > > The connection to varnish comes from the address 127.0.0.1. That's > > what it logs here. > > > The connection to varnish comes from the address 192.168.0.1. That's > > what it logs here. > > But why? The only difference is the proxy_pass statement: > > proxy_pass http://127.0.0.1:80; > vs. > proxy_pass http://192.168.0.1:80; > > No other changes were done, and no changes in varnish config. Look at the routing table on your nginx server. If it connects *to* 127.0.0.1, it will connect *from* 127.0.0.1 (which is one of the nginx server's addresses). If it connects *to* 192.168.0.1, it will connect *from* 192.168.0.1 (which is one of the nginx server's addresses). (Probably, if it connects *to* 192.168.0.2 (which is on a different machine), it will connect *from* 192.168.0.1.) > Hm, would be interesting which param varnish checks. It can't be > $remote_addr, so it has to be the address nginx proxies to (127.0.01 or > 192.168.0.1). No, it is the address that the connection to varnish comes *from*. Because of your specific setup, that happens to match the address that nginx connects to. But try connecting to varnish from some other machine and you'll see the difference. > Very strange. So it seems to be a varnish problem? :-/ It's usually considered a feature that the source address of a connection is logged. There is nothing nginx can do to hide its source address. What you want is something non-standard. Possibly there's a varnish configuration to allow it. 
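[Editor's note: Francis's point — the *source* address of a proxied connection follows the route to the destination — can be seen without nginx at all. Connecting a UDP socket only makes the kernel pick a route and source address; no packet is sent. A hedged sketch, also mimicking the append logic behind nginx's real `$proxy_add_x_forwarded_for` variable (the 84.156.23.145 address is the example from the thread):]

```python
import socket

def source_addr_for(dest_ip):
    """Ask the kernel which local address it would use to reach dest_ip.
    connect() on a UDP socket only selects a route; nothing is sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, 80))
        return s.getsockname()[0]
    finally:
        s.close()

# proxy_pass to 127.0.0.1 means the connection *comes from* 127.0.0.1,
# and that source address is exactly what varnish logs as the client.
print(source_addr_for("127.0.0.1"))  # -> 127.0.0.1

def proxy_add_x_forwarded_for(existing_header, remote_addr):
    """Mimic nginx's $proxy_add_x_forwarded_for: append the client address
    to any X-Forwarded-For value already present, comma-separated."""
    if existing_header:
        return existing_header + ", " + remote_addr
    return remote_addr

# First hop: no header yet, so the value is just the client address.
print(proxy_add_x_forwarded_for("", "84.156.23.145"))
# A second proxy hop appends its own view of the client.
print(proxy_add_x_forwarded_for("84.156.23.145", "127.0.0.1"))
```

So the fix is not to change the connecting address but to make the backend read the forwarded header, as Francis suggests.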
f -- Francis Daly francis at daoine.org From ar at xlrs.de Wed Jan 30 16:23:05 2013 From: ar at xlrs.de (Axel) Date: Wed, 30 Jan 2013 17:23:05 +0100 Subject: proxy_pass to backend (varnish): delivered ip? In-Reply-To: <20a09ca4c02f27639503f5a18d281f1b.NginxMailingListEnglish@forum.nginx.org> References: <20130130155317.GT4332@craic.sysops.org> <20a09ca4c02f27639503f5a18d281f1b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, Am 30.01.2013 17:13, schrieb revirii: >> varnish could see the "true" remote_addr in the X-Real-IP: http >> header >> that you send, if it looked there. > > Hm, would be interesting which param varnish checks. It can't be > $remote_addr, so it has to be the address nginx proxies to (127.0.01 > or > 192.168.0.1). Very strange. So it seems to be a varnish problem? :-/ You can see the right X-Forwarded-For header with tcpdump. Take a look at the log format varnish uses. I don't know how to configure varnish, but with Apache you have to change your LogFormat from LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined to something like LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\"\"%{User-Agent}i\"" nginx rgds, Axel -- Never argue with an idiot; people watching may not tell the difference From nilshar at gmail.com Wed Jan 30 16:50:51 2013 From: nilshar at gmail.com (Nilshar) Date: Wed, 30 Jan 2013 17:50:51 +0100 Subject: log failed try_files Message-ID: Hello list, I got this setting : location ~ ^(/.*)/(.*)$ { root /some/path/; try_files $1/$2 $1/../$2 =404; } it works perfectly, but I would like to log when $1/$2 does not exist, is it possible ? Thanks Nilshar. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Jan 30 17:43:46 2013 From: nginx-forum at nginx.us (iLinux85) Date: Wed, 30 Jan 2013 12:43:46 -0500 Subject: nginx high load average In-Reply-To: <1359526160.6060.1558.camel@steve-new> References: <1359526160.6060.1558.camel@steve-new> Message-ID: <295ca82a5c11e96293e4abc6e24f677d.NginxMailingListEnglish@forum.nginx.org> this is a shared server like any other shared sites support download and upload files like movies and programs , stuff over 1 and 2 gigabyte for download and upload , i am trying to adjust the connections in the server to work good with nginx , and this is my vhost.conf high connection mean alot of visitors established connection in my server 4 TIME_WAIT 9 SYN_RECV 2 LISTEN 40 LAST_ACK 988 ESTABLISHED and this is vhost.conf server { access_log off; error_log logs/vhost-error_log warn; listen 80; server_name site.com www.site.com; # uncomment location below to make nginx serve static files instead of Apache # NOTE this will cause issues with bandwidth accounting as files wont be logged location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { root /home3/s9/public_html; } location / { proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; # you can increase proxy_buffers here to suppress "an upstream response # is buffered to a temporary file" warning proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_connect_timeout 30s; proxy_redirect http://www.site.com:81 http://www.site.com; proxy_redirect http://site.com:81 http://site.com; proxy_pass http://192.168.0.1:81/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # track uploads in the 'proxied' zone # remember connections for 30s after they finished track_uploads proxied 30s; } location ^~ /progress { # report 
uploads tracked in the 'proxied' zone report_uploads proxied; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235709,235745#msg-235745 From nginx-forum at nginx.us Wed Jan 30 17:58:35 2013 From: nginx-forum at nginx.us (chowdhurykingdom) Date: Wed, 30 Jan 2013 12:58:35 -0500 Subject: Superb Umrah package or similar Message-ID: <78a6f8dafe9ff4c210078b0a9998418c.NginxMailingListEnglish@forum.nginx.org> Islam mandates that all Muslims who are financially and physically capable perform a pilgrimage to Mecca (Hajj). Our Prophet (SAW) advised to financially prepare for it, and delineated the specific way in which each of the rituals involved was to be performed. The Sunnah raised the religious and ethical value of pilgrimage to such an extent that it has become the ultimate worldly hope of a Muslim's life.s %b \"%{Referer}i\" > \"%{User-Agent}i\"" combined > > to something like > > LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b > \"%{Referer}i\"\"%{User-Agent}i\"" nginx Good idea. I'll check if varnish is able to do that. thx revirii Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235737,235761#msg-235761 From nginx-forum at nginx.us Thu Jan 31 07:13:39 2013 From: nginx-forum at nginx.us (revirii) Date: Thu, 31 Jan 2013 02:13:39 -0500 Subject: proxy_pass to backend (varnish): delivered ip? In-Reply-To: <20130130162003.GU4332@craic.sysops.org> References: <20130130162003.GU4332@craic.sysops.org> Message-ID: <41f1041b59aa1c2ecec232419bafb164.NginxMailingListEnglish@forum.nginx.org> Good Morning, > If it connects *to* 127.0.0.1, it will connect *from* 127.0.0.1 (which > is one of the nginx server's addresses). > > If it connects *to* 192.168.0.1, it will connect *from* 192.168.0.1 > (which is one of the nginx server's addresses). Hm, the said nginx vhost doesn't use 127.0.0.1 or a lan address in combination with port 80 - varnish listens on *.80. the nginx ssl vhost listens on an "real" ip and passes the requets to varnish on 127.0.0.1:80. 
Or is there a misunderstanding on my side? > No, it is the address that the connection to varnish comes *from*. > Because > of your specific setup, that happens to match the address that nginx > connects to. But try connecting to varnish from some other machine and > you'll see the difference. Well, it's not that special, i think. http: request -> varnish on real ip -> nginx backend on 127.0.0.1:81 https: request -> nginx on real ip:443 -> proxy_pass to varnish on 127.0.0.1:80 -> nginx backend on 127.0.0.1:81 (If necessary i could draw a small picture) > It's usually considered a feature that the source address of a > connection > is logged. There is nothing nginx can do to hide its source address. > > What you want is something non-standard. Possibly there's a varnish > configuration to allow it. Ok, so what you're trying to tell me is: if i proxy_pass to varnish with "proxy_pass http://127.0.0.1:80;" nginx uses some localhost address (although it can't be 127.0.0.1:80, which is used by varnish) to connect to varnish, and varnish sees this localhost address if i proxy_pass to varnish with "proxy_pass http://192.168.0.1:80;" nginx uses some lan address (although it can't be 192.168.0.1:80, which is used by varnish) to connect to varnish, and varnish sees this lan address if i proxy_pass to varnish with "proxy_pass http://real_ip:80;" nginx uses the real ip (although it can't be real_ip:80, which is used by varnish) to connect to varnish, and varnish sees this real_ip address If so, there's nothing i can do within nginx config, as it's simply not possible. And varnish config or log is the (only) place where i can achieve this.
thx a lot revirii Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235737,235762#msg-235762 From nginx-forum at nginx.us Thu Jan 31 08:48:31 2013 From: nginx-forum at nginx.us (rg00) Date: Thu, 31 Jan 2013 03:48:31 -0500 Subject: Nginx randomly crashes Message-ID: <64c3500dc9fcb12f7605ee4e1e14dc52.NginxMailingListEnglish@forum.nginx.org> I've got a problem with Nginx running on Ubuntu 12.10. I'm running it mainly as a reverse proxy and there is no high load on the machine. It randomly crashes without any helpful log info (or at least I think so). Here's the error log *************************************************************************************************** 2013/01/31 09:19:03 [debug] 15238#0: *10555 event timer del: 68: 1359620363778 2013/01/31 09:19:03 [debug] 15238#0: *10555 generic phase: 0 2013/01/31 09:19:03 [debug] 15238#0: *10555 rewrite phase: 1 2013/01/31 09:19:03 [debug] 15238#0: *10555 http script regex: "^(.*)" 2013/01/31 09:19:03 [notice] 15238#0: *10555 "^(.*)" matches "/", client: 192.168.2.42, server: abc.def.com, request: "HEAD / HTTP/1.1", host: " abc.def.com" 2013/01/31 09:19:03 [debug] 15238#0: *10555 http script copy: "https://abc.def.com" 2013/01/31 09:19:03 [debug] 15238#0: *10555 http script regex end 2013/01/31 09:19:03 [notice] 15238#0: *10555 rewritten redirect: "https://abc.def.com", client: 192.168.2.42, server: abc.def.com, reques t: "HEAD / HTTP/1.1", host: "abc.def.com" 2013/01/31 09:19:03 [debug] 15238#0: *10555 http finalize request: 301, "/?" a:1, c:1 2013/01/31 09:19:03 [debug] 15238#0: *10555 http special response: 301, "/?" 
2013/01/31 09:19:03 [debug] 15238#0: *10555 http set discard body
2013/01/31 09:19:03 [debug] 15238#0: *10555 xslt filter header
2013/01/31 09:19:03 [debug] 15238#0: *10555 HTTP/1.1 301 Moved Permanently
Server: nginx/1.2.6
Date: Thu, 31 Jan 2013 08:19:03 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
Location: https://abc.def.com

2013/01/31 09:19:03 [debug] 15238#0: *10555 write new buf t:1 f:0 00000000027A0868, pos 00000000027A0868, size: 200 file: 0, size: 0
2013/01/31 09:19:03 [debug] 15238#0: *10555 http write filter: l:1 f:0 s:200
2013/01/31 09:19:03 [debug] 15238#0: *10555 http write filter limit 0
2013/01/31 09:19:03 [debug] 15238#0: *10555 writev: 200
2013/01/31 09:19:03 [debug] 15238#0: *10555 http write filter 0000000000000000
2013/01/31 09:19:03 [debug] 15238#0: *10555 http finalize request: 0, "/?" a:1, c:1
2013/01/31 09:19:03 [debug] 15238#0: *10555 set http keepalive handler
2013/01/31 09:19:03 [debug] 15238#0: *10555 http close request
2013/01/31 09:19:03 [debug] 15238#0: *10555 http log handler
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 000000000279FBA0, unused: 360
2013/01/31 09:19:03 [debug] 15238#0: *10555 event timer add: 68: 30000:1359620373779
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 00000000027DE460
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 0000000002652060
2013/01/31 09:19:03 [debug] 15238#0: *10555 hc free: 0000000000000000 0
2013/01/31 09:19:03 [debug] 15238#0: *10555 hc busy: 0000000000000000 0
2013/01/31 09:19:03 [debug] 15238#0: *10555 tcp_nodelay
2013/01/31 09:19:03 [debug] 15238#0: *10555 reusable connection: 1
2013/01/31 09:19:03 [debug] 15238#0: *10555 post event 00000000025B5210
2013/01/31 09:19:03 [debug] 15238#0: *10555 delete posted event 00000000025B5210
2013/01/31 09:19:03 [debug] 15238#0: *10555 http keepalive handler
2013/01/31 09:19:03 [debug] 15238#0: *10555 malloc: 0000000002652060:8192
2013/01/31 09:19:03 [debug] 15238#0: *10555 recv: fd:68 -1 of 8192
2013/01/31 09:19:03 [debug] 15238#0: *10555 recv() not ready (11: Resource temporarily unavailable)
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 0000000002652060
2013/01/31 09:19:03 [debug] 15238#0: *10555 http keepalive handler
2013/01/31 09:19:03 [debug] 15238#0: *10555 malloc: 0000000002652060:8192
2013/01/31 09:19:03 [debug] 15238#0: *10555 recv: fd:68 0 of 8192
2013/01/31 09:19:03 [info] 15238#0: *10555 client 192.168.2.42 closed keepalive connection
2013/01/31 09:19:03 [debug] 15238#0: *10555 close http connection: 68
2013/01/31 09:19:03 [debug] 15238#0: *10555 event timer del: 68: 1359620373779
2013/01/31 09:19:03 [debug] 15238#0: *10555 reusable connection: 0
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 0000000002652060
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 0000000000000000
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 000000000283EBF0, unused: 8
2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 00000000027F9CF0, unused: 128
2013/01/31 09:24:49 [alert] 15237#0: worker process 15238 exited on signal 9
*************************************************************************************************************************

At 09:24 I restarted the service because it was hanging.
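Per-state TCP connection counts like the snapshots below can be produced with a short awk pipeline; this is only a sketch, shown here against hand-written sample lines (on a live system you would pipe `netstat -ant` into the same awk/sort stage instead):

```shell
# Count connections per TCP state; the printf supplies sample
# netstat-style lines (state is in column 6) in place of `netstat -ant`.
printf 'tcp 0 0 10.0.0.1:80 10.0.0.2:5000 ESTABLISHED\ntcp 0 0 10.0.0.1:80 10.0.0.3:5001 ESTABLISHED\ntcp 0 0 10.0.0.1:80 10.0.0.4:5002 TIME_WAIT\n' \
  | awk '{ counts[$6]++ } END { for (s in counts) print counts[s], s }' \
  | sort -k2
```

A steadily growing CLOSE_WAIT count in such output, as in the data below, usually means the application side is not closing sockets that the peer has already shut down.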
This is some system info about tcp connections:

Thu Jan 31 09:18:01 CET 2013
     14 ESTABLISHED
      3 FIN_WAIT2
      6 LISTEN
     21 TIME_WAIT
Thu Jan 31 09:19:01 CET 2013
     28 ESTABLISHED
      6 LISTEN
     12 TIME_WAIT
Thu Jan 31 09:20:01 CET 2013
     13 CLOSE_WAIT
     28 ESTABLISHED
      6 LISTEN
Thu Jan 31 09:21:01 CET 2013
     35 CLOSE_WAIT
     36 ESTABLISHED
      8 LISTEN
Thu Jan 31 09:22:01 CET 2013
     46 CLOSE_WAIT
     35 ESTABLISHED
      7 LISTEN
Thu Jan 31 09:23:01 CET 2013
     80 CLOSE_WAIT
     33 ESTABLISHED
      7 LISTEN
Thu Jan 31 09:24:01 CET 2013
    128 CLOSE_WAIT
     26 ESTABLISHED
      7 LISTEN
      8 SYN_RECV
Thu Jan 31 09:25:02 CET 2013
     26 ESTABLISHED
      1 FIN_WAIT1
      7 LISTEN
      4 TIME_WAIT

During that time I tried to connect with curl and this is the output:

roberto at t500:~> curl -I -L -k -m 30 http://abc.def.com
curl: (28) Operation timed out after 30001 milliseconds with 0 bytes received

I upgraded the Ubuntu package from 1.2.1 to 1.2.6, with no result. In /var/log/syslog there is no info about this. What can I do?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235763,235763#msg-235763

From nginx-forum at nginx.us Thu Jan 31 09:22:48 2013
From: nginx-forum at nginx.us (Ondra42)
Date: Thu, 31 Jan 2013 04:22:48 -0500
Subject: Incomplete page by nginx -> fcgi -> php-fpm with keepalive
Message-ID: <0021ec6f8aca71d797889129b98fa44c.NginxMailingListEnglish@forum.nginx.org>

I have a problem somewhere between nginx and php-fpm. When I use a keep-alive connection for FastCGI to php-fpm, pages often fail to load completely. I debugged with Wireshark and saw that php-fpm sends nginx a complete reply, but nginx sends the client an incomplete page; the size of the incomplete page is determined by the chunk size. For a test, download http://kinohled.cz/bugreport.tar.gz and run ./showBug.sh . You will see:

Run Nginx on port 9876
Run php-fpm on port 6789
Correct size of answer is 16172;
keepalive on, size: 8002
keepalive off, size: 16172
Next test (1 of 5)- ...

Tested on Debian Squeeze and many versions of Nginx/PHP-FPM.
The error shows with the default Nginx/PHP-FPM versions for Debian Squeeze from dotdeb.org:

nginx/1.2.6
PHP 5.3.21-1~dotdeb.0

In the Nginx error log:

upstream sent unsupported FastCGI protocol version: 0 while reading upstream

Any suggestion?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235764,235764#msg-235764

From nginx-forum at nginx.us Thu Jan 31 11:22:35 2013
From: nginx-forum at nginx.us (m.desantis)
Date: Thu, 31 Jan 2013 06:22:35 -0500
Subject: nginx - php-fpm: access restrictions for some php pages
Message-ID: <6ec05dcb6cd3b0bac16886dfb37a976b.NginxMailingListEnglish@forum.nginx.org>

Hello, I have a folder containing some PHP files served with php-fpm (fastcgi); inside this folder, I have a file which I want to be allowed for internal IPs and denied for external ones. The problem I have is that with this configuration...

# PHP
location ~ ^\/some\/path\/(.*\.php)$ {
    alias /some/path/;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    #
    # With php5-cgi alone:
    # fastcgi_pass 127.0.0.1:9000;
    # With php5-fpm:
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    # Changes due to the alias declaration
    fastcgi_param SCRIPT_FILENAME $document_root/$1;
    fastcgi_param SCRIPT_NAME /$1;
}

# PHP: phpinfo() access restrictions
location = /some/path/phpinfo.php {
    allow 10.0.0.0/24;
    deny all;
}

...access to /some/path/phpinfo.php is managed correctly, but the fastcgi rules are not applied (I download the phpinfo.php file); while with this configuration...
# PHP
location ~ ^\/some\/path\/(.*\.php)$ {
    alias /some/path/;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    #
    # With php5-cgi alone:
    # fastcgi_pass 127.0.0.1:9000;
    # With php5-fpm:
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    # Changes due to the alias declaration
    fastcgi_param SCRIPT_FILENAME $document_root/$1;
    fastcgi_param SCRIPT_NAME /$1;
}

# PHP: phpinfo() access restrictions
location ~ ^\/some\/path\/phpinfo\.php$ {
    allow 10.0.0.0/24;
    deny all;
}

.../some/path/phpinfo.php is interpreted correctly, but the access restrictions are not applied. How can I fix the configuration so that /some/path/phpinfo.php is interpreted and the access restrictions are applied?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235769,235769#msg-235769

From mdounin at mdounin.ru Thu Jan 31 11:28:23 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 31 Jan 2013 15:28:23 +0400
Subject: Nginx randomly crashes
In-Reply-To: <64c3500dc9fcb12f7605ee4e1e14dc52.NginxMailingListEnglish@forum.nginx.org>
References: <64c3500dc9fcb12f7605ee4e1e14dc52.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130131112823.GM40753@mdounin.ru>

Hello!

On Thu, Jan 31, 2013 at 03:48:31AM -0500, rg00 wrote:

> I've got a problem with Nginx running on Ubuntu 12.10.
> I'm running it mainly as a reverse proxy and there is no high load on the
> machine.
> It randomly crashes without any helpful log info (or at least I think so).

(Note: this looks like a hang, not a crash. There is a difference.)

[...]
> 2013/01/31 09:19:03 [debug] 15238#0: *10555 reusable connection: 0
> 2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 0000000002652060
> 2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 0000000000000000
> 2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 000000000283EBF0, unused: 8
> 2013/01/31 09:19:03 [debug] 15238#0: *10555 free: 00000000027F9CF0, unused: 128
> 2013/01/31 09:24:49 [alert] 15237#0: worker process 15238 exited on signal 9
> *************************************************************************************************************************
>
> At 09:24 I restarted the service because it was hanging.
> This is some system info about tcp connections:

What does nginx -V show?

As a first step I would recommend trying to reproduce the problem without any 3rd party modules/patches compiled in. You may also find some helpful tips about debugging here:

http://wiki.nginx.org/Debugging

-- Maxim Dounin http://nginx.com/support.html

From francis at daoine.org Thu Jan 31 11:38:04 2013
From: francis at daoine.org (Francis Daly)
Date: Thu, 31 Jan 2013 11:38:04 +0000
Subject: nginx - php-fpm: access restrictions for some php pages
In-Reply-To: <6ec05dcb6cd3b0bac16886dfb37a976b.NginxMailingListEnglish@forum.nginx.org>
References: <6ec05dcb6cd3b0bac16886dfb37a976b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130131113804.GW4332@craic.sysops.org>

On Thu, Jan 31, 2013 at 06:22:35AM -0500, m.desantis wrote:

Hi there,

> I have a folder containing some PHP files served with php-fpm (fastcgi);
> inside this folder, I have a file which I want to be allowed for internal
> IPs and denied for external ones.

One request is handled in one location. See http://nginx.org/r/location for details of how the one location is chosen.

> How can I fix the configuration so that /some/path/phpinfo.php is
> interpreted and the access restrictions are applied?

In your location that matches this request, put both your "access control" and your "fastcgi handling" configuration.
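As an untested sketch of that idea, reusing the php5-fpm socket from the earlier configuration (and assuming the script really lives at /some/path/phpinfo.php on disk; adjust SCRIPT_FILENAME to the real path):

```nginx
# One location handles the request, so it carries both the access
# control and the fastcgi handling.
location = /some/path/phpinfo.php {
    allow 10.0.0.0/24;
    deny all;

    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include fastcgi_params;
    # The filename is fixed for an exact-match location, so no
    # captures or split_path_info are needed here.
    fastcgi_param SCRIPT_FILENAME /some/path/phpinfo.php;
    fastcgi_param SCRIPT_NAME /some/path/phpinfo.php;
}
```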
f

-- Francis Daly francis at daoine.org

From nginx-forum at nginx.us Thu Jan 31 11:38:27 2013
From: nginx-forum at nginx.us (rg00)
Date: Thu, 31 Jan 2013 06:38:27 -0500
Subject: Nginx randomly crashes
In-Reply-To: <20130131112823.GM40753@mdounin.ru>
References: <20130131112823.GM40753@mdounin.ru>
Message-ID: <95990669a815d5a3c0300abc8902efc6.NginxMailingListEnglish@forum.nginx.org>

This is the output of nginx -V:

nginx version: nginx/1.2.6
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-dav-ext-module

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235763,235771#msg-235771

From francis at daoine.org Thu Jan 31 11:46:06 2013
From: francis at daoine.org (Francis Daly)
Date: Thu, 31 Jan 2013 11:46:06 +0000
Subject: proxy_pass to backend (varnish): delivered ip?
In-Reply-To: <41f1041b59aa1c2ecec232419bafb164.NginxMailingListEnglish@forum.nginx.org>
References: <20130130162003.GU4332@craic.sysops.org> <41f1041b59aa1c2ecec232419bafb164.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130131114606.GX4332@craic.sysops.org>

On Thu, Jan 31, 2013 at 02:13:39AM -0500, revirii wrote:

Hi there,

> > If it connects *to* 127.0.0.1, it will connect *from* 127.0.0.1 (which
> > is one of the nginx server's addresses).
> >
> > If it connects *to* 192.168.0.1, it will connect *from* 192.168.0.1
> > (which is one of the nginx server's addresses).
>
> Hm, the said nginx vhost doesn't use 127.0.0.1 or a LAN address in
> combination with port 80 - varnish listens on *:80. The nginx ssl vhost
> listens on a "real" IP and passes the requests to varnish on 127.0.0.1:80.
> Or is there a misunderstanding on my side?

Yes. nginx as a *server* listens on whatever ip:port pairs are in the configuration. nginx as a *client* uses the operating system's facilities to set the client connection information. When nginx talks to varnish, nginx is the client and varnish is the server.

> Ok, so what you're trying to tell me is:
> if I proxy_pass to varnish with "proxy_pass http://127.0.0.1:80;" nginx uses
> some localhost address (although it can't be 127.0.0.1:80, which is used by
> varnish) to connect to varnish, and varnish sees this localhost address

The proxy_pass connection comes from the address and port that nginx chooses -- which is usually "whatever the OS chooses", because nginx doesn't care to choose. In this case, the address will be 127.0.0.1, and the port will be something unpredictable you usually don't care about.

> if I proxy_pass to varnish with "proxy_pass http://real_ip:80;" nginx uses
> the real IP (although it can't be real_ip:80, which is used by varnish) to
> connect to varnish, and varnish sees this real_ip address

If real_ip is configured on the machine, then yes.
If not, then the OS will choose the appropriate locally-configured address (plus an unpredictable port) to make the connection.

> If so, there's nothing I can do within the nginx config, as it's simply not
> possible. And the varnish config or log is the (only) place where I can
> achieve this.

That part is correct.

f

-- Francis Daly francis at daoine.org

From nginx-forum at nginx.us Thu Jan 31 11:53:40 2013
From: nginx-forum at nginx.us (m.desantis)
Date: Thu, 31 Jan 2013 06:53:40 -0500
Subject: nginx - php-fpm: access restrictions for some php pages
In-Reply-To: <20130131113804.GW4332@craic.sysops.org>
References: <20130131113804.GW4332@craic.sysops.org>
Message-ID: <1a27d3cbd508bb7e7aec7ecdcef94a1e.NginxMailingListEnglish@forum.nginx.org>

You mean like this?

location ~ ^\/some\/path\/(.*\.php)$ {
    alias /some/path;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    #
    # With php5-cgi alone:
    # fastcgi_pass 127.0.0.1:9000;
    # With php5-fpm:
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    # Changes due to the alias declaration
    fastcgi_param SCRIPT_FILENAME $document_root/$1;
    fastcgi_param SCRIPT_NAME /$1;
    allow 10.0.0.0/24;
    deny all;
}

But now the restriction rules are applied to all PHP files; instead I want to apply them just to some PHP files. Should I use a nested location? Something like this?
location ~ ^\/some\/path\/(.*\.php)$ {
    alias /some/path;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    #
    # With php5-cgi alone:
    # fastcgi_pass 127.0.0.1:9000;
    # With php5-fpm:
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    # Changes due to the alias declaration
    fastcgi_param SCRIPT_FILENAME $document_root/$1;
    fastcgi_param SCRIPT_NAME /$1;

    location ~ \/phpinfo\.php$ {
        allow 10.0.0.0/24;
        deny all;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235769,235775#msg-235775

From mdounin at mdounin.ru Thu Jan 31 11:54:31 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 31 Jan 2013 15:54:31 +0400
Subject: Incomplete page by nginx -> fcgi -> php-fpm with keepalive
In-Reply-To: <0021ec6f8aca71d797889129b98fa44c.NginxMailingListEnglish@forum.nginx.org>
References: <0021ec6f8aca71d797889129b98fa44c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130131115431.GN40753@mdounin.ru>

Hello!

On Thu, Jan 31, 2013 at 04:22:48AM -0500, Ondra42 wrote:

> I have a problem somewhere between nginx and php-fpm. When I use a
> keep-alive connection for FastCGI to php-fpm, pages often fail to load
> completely. I debugged with Wireshark and saw that php-fpm sends nginx a
> complete reply, but nginx sends the client an incomplete page; the size of
> the incomplete page is determined by the chunk size. For a test, download
> http://kinohled.cz/bugreport.tar.gz and run ./showBug.sh . You will see:
>
> Run Nginx on port 9876
> Run php-fpm on port 6789
> Correct size of answer is 16172;
> keepalive on, size: 8002
> keepalive off, size: 16172
> Next test (1 of 5)- ...
>
> Tested on Debian Squeeze and many versions of Nginx/PHP-FPM. The error
> shows with the default Nginx/PHP-FPM versions for Debian Squeeze from
> dotdeb.org.
> nginx/1.2.6
> PHP 5.3.21-1~dotdeb.0
>
> In the Nginx error log:
>
> upstream sent unsupported FastCGI protocol version: 0 while reading
> upstream
>
> Any suggestion?

Please try this patch:

http://mdounin.ru/temp/patch-nginx-fastcgi-keepalive.txt

-- Maxim Dounin http://nginx.com/support.html

From francis at daoine.org Thu Jan 31 12:02:13 2013
From: francis at daoine.org (Francis Daly)
Date: Thu, 31 Jan 2013 12:02:13 +0000
Subject: nginx - php-fpm: access restrictions for some php pages
In-Reply-To: <1a27d3cbd508bb7e7aec7ecdcef94a1e.NginxMailingListEnglish@forum.nginx.org>
References: <20130131113804.GW4332@craic.sysops.org> <1a27d3cbd508bb7e7aec7ecdcef94a1e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130131120213.GY4332@craic.sysops.org>

On Thu, Jan 31, 2013 at 06:53:40AM -0500, m.desantis wrote:

> You mean like this?

No.

Read the mail again.

> But now the restriction rules are applied to all PHP files; instead I want
> to apply them just to some PHP files. Should I use a nested location?
> Something like this?

No.

Your very first example was almost correct. You had

location = /some/path/phpinfo.php {
}

which you said was the one location which matched the request that you cared about.

In *that* location, put all of the configuration that you want for that request.

(It is probably only six lines, since you know exactly what filename you want the fastcgi server to process.)

f

-- Francis Daly francis at daoine.org

From mdounin at mdounin.ru Thu Jan 31 12:18:15 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 31 Jan 2013 16:18:15 +0400
Subject: Nginx randomly crashes
In-Reply-To: <95990669a815d5a3c0300abc8902efc6.NginxMailingListEnglish@forum.nginx.org>
References: <20130131112823.GM40753@mdounin.ru> <95990669a815d5a3c0300abc8902efc6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130131121814.GO40753@mdounin.ru>

Hello!
On Thu, Jan 31, 2013 at 06:38:27AM -0500, rg00 wrote:

> This is the output of nginx -V:
>
> nginx version: nginx/1.2.6
> TLS SNI support enabled
> configure arguments: --prefix=/usr/share/nginx
> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
> --http-client-body-temp-path=/var/lib/nginx/body
> --http-fastcgi-temp-path=/var/lib/nginx/fastcgi
> --http-log-path=/var/log/nginx/access.log
> --http-proxy-temp-path=/var/lib/nginx/proxy
> --http-scgi-temp-path=/var/lib/nginx/scgi
> --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock
> --pid-path=/run/nginx.pid --with-pcre-jit --with-debug
> --with-http_addition_module --with-http_dav_module --with-http_geoip_module
> --with-http_gzip_static_module --with-http_image_filter_module
> --with-http_realip_module --with-http_stub_status_module
> --with-http_ssl_module --with-http_sub_module --with-http_xslt_module
> --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl
> --with-mail --with-mail_ssl_module
> --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-auth-pam
> --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-echo
> --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-upstream-fair
> --add-module=/build/buildd/nginx-1.2.6/debian/modules/nginx-dav-ext-module

Well, the basic suggestion is the same: try to reproduce the problem without any 3rd party modules/patches compiled in. If you use the geoip module, please also make sure its database isn't corrupted (or just try without it), as MaxMind's geoip library is known to do bad things if the database is corrupted.
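A quick way to enumerate the third-party modules such a build carries is to filter the configure arguments that `nginx -V` prints (on stderr). This is only a sketch fed a shortened, hypothetical sample of arguments; on a real system the first stage would be `nginx -V 2>&1 | tr ' ' '\n'` instead:

```shell
# Extract the --add-module flags from a set of configure arguments.
# The sample string stands in for live `nginx -V 2>&1` output.
args='--with-debug --with-ipv6 --add-module=modules/nginx-echo --add-module=modules/nginx-upstream-fair'
echo "$args" | tr ' ' '\n' | grep -e '--add-module='
```

Each line this prints names a module that would have to be dropped from the configure line to get the clean repro build suggested above.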
-- Maxim Dounin http://nginx.com/support.html

From nginx-forum at nginx.us Thu Jan 31 12:31:15 2013
From: nginx-forum at nginx.us (Ondra42)
Date: Thu, 31 Jan 2013 07:31:15 -0500
Subject: Incomplete page by nginx -> fcgi -> php-fpm with keepalive
In-Reply-To: <20130131115431.GN40753@mdounin.ru>
References: <20130131115431.GN40753@mdounin.ru>
Message-ID: <0952102fe3e1bb9d4943f594dbbaba75.NginxMailingListEnglish@forum.nginx.org>

This patch helps!! Nice and fast work. Well done. :o)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235764,235779#msg-235779

From francis at daoine.org Thu Jan 31 12:35:45 2013
From: francis at daoine.org (Francis Daly)
Date: Thu, 31 Jan 2013 12:35:45 +0000
Subject: nginx - php-fpm: access restrictions for some php pages
In-Reply-To: <20130131120213.GY4332@craic.sysops.org>
References: <20130131113804.GW4332@craic.sysops.org> <1a27d3cbd508bb7e7aec7ecdcef94a1e.NginxMailingListEnglish@forum.nginx.org> <20130131120213.GY4332@craic.sysops.org>
Message-ID: <20130131123545.GZ4332@craic.sysops.org>

On Thu, Jan 31, 2013 at 12:02:13PM +0000, Francis Daly wrote:

> On Thu, Jan 31, 2013 at 06:53:40AM -0500, m.desantis wrote:
> > You mean like this?
>
> No.
>
> Read the mail again.

Re-reading that, it comes across as more abrupt than I intended. Sorry about that.

If you can now tell what I meant in the previous mail, can you suggest any phrasing or extra information that would have made it clear on first reading?

(And if you can't tell what I meant in the previous mail, that's useful information too.)
Thanks,

f

-- Francis Daly francis at daoine.org

From nginx-forum at nginx.us Thu Jan 31 14:12:07 2013
From: nginx-forum at nginx.us (m.desantis)
Date: Thu, 31 Jan 2013 09:12:07 -0500
Subject: nginx - php-fpm: access restrictions for some php pages
In-Reply-To: <20130131123545.GZ4332@craic.sysops.org>
References: <20130131123545.GZ4332@craic.sysops.org>
Message-ID: <36f7263126c0554acd4e4fc6cb1b22dc.NginxMailingListEnglish@forum.nginx.org>

> Re-reading that, it comes across as more abrupt than I intended. Sorry
> about that.

No problem; unfortunately my English is poor, so I easily run into misunderstandings when I communicate with other people in English.

> If you can now tell what I meant in the previous mail, can you suggest
> any phrasing or extra information that would have made it clear on
> first reading?

An extra piece of information that would have been useful to me is a configuration code example, because it is less prone to misunderstanding on my part (due to my comprehension skills), or maybe some link about the matter I raised (if one is known), because I couldn't find any information on the web about what I need (maybe just because the solution is trivial).

Anyway, I have a doubt: in the previous reply you sent, you say

> Your very first example was almost correct. You had
>
> location = /some/path/phpinfo.php {
> }
>
> which you said was the one location which matched the request that you
> cared about.
>
> In *that* location, put all of the configuration that you want for
> that request.

So I think you mean something like this:

location = /some/path/phpinfo.php {
    # common configurations...
    # configuration for some children urls...
}

location ~ \/some\/path\/(.*\.php)$ {
    # common configurations...
}

I found that a nested locations configuration works too:

location ~ ^\/some\/path\/(.*\.php)$ {
    # configuration for some children urls...
    location ~ \/phpinfo\.php$ {
        # common configurations...
    }
}

Of the two options I would prefer the latter, because it avoids writing two configurations that duplicate each other, which would imply that every time I change one configuration I have to duplicate the change into the other location (but above all there is a loss of logic). Do you have any considerations, which maybe I am missing, about the difference between the two configurations?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235769,235783#msg-235783

From francis at daoine.org Thu Jan 31 19:52:09 2013
From: francis at daoine.org (Francis Daly)
Date: Thu, 31 Jan 2013 19:52:09 +0000
Subject: nginx - php-fpm: access restrictions for some php pages
In-Reply-To: <36f7263126c0554acd4e4fc6cb1b22dc.NginxMailingListEnglish@forum.nginx.org>
References: <20130131123545.GZ4332@craic.sysops.org> <36f7263126c0554acd4e4fc6cb1b22dc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130131195209.GA4332@craic.sysops.org>

On Thu, Jan 31, 2013 at 09:12:07AM -0500, m.desantis wrote:

> An extra piece of information that would have been useful to me is a
> configuration code example, because it is less prone to misunderstanding
> on my part (due to my comprehension skills), or maybe some link about the
> matter I raised (if one is known), because I couldn't find any information
> on the web about what I need

What documentation did you find your current configuration in? If it is something on the nginx.org domain, then possibly it can be adjusted to be clearer for the next person.

> (maybe just because the solution is trivial).

The solution is straightforward when you know how nginx works.

One request is handled in one location. Whatever configuration you want to apply to a specific request must be available in the location which handles that request. (There are some subtleties where one client http request can become multiple nginx requests.)

I find it easier to keep the locations simple. To me, that means separate locations for separate configurations.
> Anyway, I have a doubt: in the previous reply you sent, you say
>
> > location = /some/path/phpinfo.php {
> > }
>
> > In *that* location, put all of the configuration that you want for
> > that request.
>
> So I think you mean something like this:
>
> location = /some/path/phpinfo.php {
>     # common configurations...
>     # configuration for some children urls...
> }

Yes. Inside that location block, put your allow and deny directives. And also put your fastcgi directives. The fastcgi directives are possibly only something like:

fastcgi_pass unix:/var/run/php5-fpm.sock;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME /some/path/some/path/phpinfo.php;

This location, because of the =, will only handle one type of http request. The query string part, after the ?, can vary. But the rest of the http request is fixed. There are no other urls that will match this location.

> I found that a nested locations configuration works too:
>
> location ~ ^\/some\/path\/(.*\.php)$ {
>     # configuration for some children urls...
>     location ~ \/phpinfo\.php$ {
>         # common configurations...
>     }
> }

There isn't enough actual example configuration there to know for sure, but it looks to me like that will not do what you want.

> Of the two options I would prefer the latter, because it avoids writing two
> configurations that duplicate each other, which would imply that every time
> I change one configuration I have to duplicate the change into the other
> location

For me, doing that is a *good* thing. When you're changing the configuration, you should know why you are doing it. Search-and-replace should allow you to verify that you are changing all and only what you mean to change without too much extra work.

> (but above all there is a loss of logic). Do you have any considerations,
> which maybe I am missing, about the difference between the two
> configurations?

One works; the other doesn't? You can probably make the nested location do what you want by adding a few more lines to it.
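For illustration, an untested sketch of what those extra lines might look like (assuming the same php5-fpm socket as before). Two details worth noting: the content handler, fastcgi_pass, is not inherited into nested locations, and the $1 capture from the outer regex is reset when the inner regex matches, so the fastcgi directives end up repeated explicitly anyway:

```nginx
location ~ ^/some/path/(.*\.php)$ {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /some/path/$1;
    fastcgi_param SCRIPT_NAME /some/path/$1;

    location ~ /phpinfo\.php$ {
        allow 10.0.0.0/24;
        deny all;
        # fastcgi_pass is not inherited from the outer location, and $1
        # would refer to this (capture-less) inner regex, so everything
        # is spelled out again with the fixed filename.
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /some/path/phpinfo.php;
        fastcgi_param SCRIPT_NAME /some/path/phpinfo.php;
    }
}
```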
It might be a useful exercise to do that, and then compare the two working configurations.

(You can possibly tidy the "main" php configuration too -- there aren't many requests which would lead to your "fastcgi_split_path_info" or "fastcgi_index" directives making a difference.)

f

-- Francis Daly francis at daoine.org