From nginx-forum at nginx.us Mon Jul 1 08:33:55 2013 From: nginx-forum at nginx.us (imanenkov) Date: Mon, 01 Jul 2013 04:33:55 -0400 Subject: Cannot totally switch off caching Message-ID: Greetings! For some testing I need to switch off nginx caching (nginx + php-fpm). The trouble is: when I request the server (a PHP app) for the first time, the response takes about 10 sec to generate (that's OK), but when I request the server again (within approx 1-2 mins of the first request) the response is returned within 50-100 msec, as I understand from some cache. I am requesting pages via wget and httperf. My configuration: I created 2 config templates named default and php: default: index index.html index.php; location /status { stub_status on; } location / { try_files $uri $uri/ /index.php?q=$uri&$args; } # deny access to .htaccess and .htpassword files location ~ /\.ht { deny all; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } php (initial variant): location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } host settings in sites-enabled: server{ listen 80; access_log /var/log/nginx/site.access_log; error_log /var/log/nginx/site.error_log; root /var/www/site; include /etc/nginx/templates/default; include /etc/nginx/templates/php; } Test runs: httperf --server site.local --num-conns 1 --verbose >perf.log First request: approx 10 sec. Second request: approx 100 msec. I tried to disable caching with: location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_cache off; fastcgi_no_cache 1; fastcgi_cache_bypass 1; expires off; } plus restarting nginx and php-fpm, but this had no effect. Please help. Best regards, Ilya Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240433#msg-240433 From nginx-forum at nginx.us Mon Jul 1 10:26:14 2013 From: nginx-forum at nginx.us (rootberry) Date: Mon, 01 Jul 2013 06:26:14 -0400 Subject: Rewriting pages to re-rendered snapshots Message-ID: <7172abadcf4d027b7e247d787428760f.NginxMailingListEnglish@forum.nginx.org> I have a very JavaScript-heavy website which is being served by nginx. It all works great, except that Google is having trouble indexing me. I've made pre-rendered snapshots of all my pages using PhantomJS and these are stored in /snapshots (relative to my website's root). I've been using this rewrite rule to detect the Google bot and serve it a snapshot: location / { if ($args ~ "_escaped_fragment_=") { rewrite ^/(.*)$ /snapshots/$1.html break; } } It works for all pages apart from the homepage. In the snapshots directory, the home page is called index.html; however, the Google bot requests http://mysite.com/?_escaped_fragment_= when it wants the home page, which makes this rewrite rule return a 404. How can I adapt this rule to return the index.html snapshot when / is requested? Thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240435,240435#msg-240435 From contact at jpluscplusm.com Mon Jul 1 10:44:40 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 1 Jul 2013 11:44:40 +0100 Subject: Rewriting pages to re-rendered snapshots In-Reply-To: <7172abadcf4d027b7e247d787428760f.NginxMailingListEnglish@forum.nginx.org> References: <7172abadcf4d027b7e247d787428760f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I'm sure others will give you better methods to sort this but, as a temporary fix, how about a separate stanza like this: location = / { #special index.html UA-sniffing + rewrite logic } Cheers, Jonathan From nginx-forum at nginx.us Mon Jul 1 10:49:40 2013 From: nginx-forum at nginx.us (rootberry) Date: Mon, 01 Jul 2013 06:49:40 -0400 Subject: Rewriting pages to re-rendered snapshots In-Reply-To: References: Message-ID: <8e9ee6919de0d551c6afec7a0db0e912.NginxMailingListEnglish@forum.nginx.org> Not sure what you mean Jonathan, I can already detect the Google bot by using ?_escaped_fragment= which is also Google's preferred way over user agent sniffing. I just need to adapt the rewrite rule to serve index.html to requests for /. Thanks for the reply, though! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240435,240437#msg-240437 From contact at jpluscplusm.com Mon Jul 1 11:38:41 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 1 Jul 2013 12:38:41 +0100 Subject: Rewriting pages to re-rendered snapshots In-Reply-To: <8e9ee6919de0d551c6afec7a0db0e912.NginxMailingListEnglish@forum.nginx.org> References: <8e9ee6919de0d551c6afec7a0db0e912.NginxMailingListEnglish@forum.nginx.org> Message-ID: Duplicate your entire "location / {}" as "location = / {}" (which matches *only* "/"), and when you detect google, serve /snapshots/whatever/index.html. Jonathan From mdounin at mdounin.ru Mon Jul 1 11:40:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jul 2013 15:40:15 +0400 Subject: Cannot totally switch off caching In-Reply-To: References: Message-ID: <20130701114015.GP20717@mdounin.ru> Hello! On Mon, Jul 01, 2013 at 04:33:55AM -0400, imanenkov wrote: > Greeting! > > For some testing I need to switch off a nginx caching (nginx + php-fpm). Now > I have a trouble - when I request a server (PHP app) first time, response > generated about 10 sec (its ok), but when a request a server another time > (during approx 1-2 mins from first request) response is returned within > 50-100 msec, as I understand from some cache. By default nginx doesn't cache anything. You may want to look into your php code to find out what is cached / causes reduced response time in subsequent requests. -- Maxim Dounin http://nginx.org/en/donation.html From igor.sverkos at googlemail.com Mon Jul 1 12:23:37 2013 From: igor.sverkos at googlemail.com (Igor Sverkos) Date: Mon, 1 Jul 2013 14:23:37 +0200 Subject: Cannot totally switch off caching In-Reply-To: <20130701114015.GP20717@mdounin.ru> References: <20130701114015.GP20717@mdounin.ru> Message-ID: Hi, and don't forget system's page cache. From my experience: The files needed to process the first request aren't yet read, so Linux has to read them from disk (slow). Then, for the second and further requests, the files are in the page cache (aka system buffer), so that Linux don't have to read them again from disk. Access is now super fast. 
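To take the OS page cache out of the picture while benchmarking, a common approach on Linux (needs root; purely a testing aid, shown here as a sketch) is to drop it between test runs:

  sync
  echo 3 > /proc/sys/vm/drop_caches

The first command flushes dirty pages to disk; the second frees the page cache plus dentries and inodes, so the next request has to go back to disk.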
-- Regards, Igor From nginx-forum at nginx.us Mon Jul 1 12:32:32 2013 From: nginx-forum at nginx.us (imanenkov) Date: Mon, 01 Jul 2013 08:32:32 -0400 Subject: Cannot totally switch off caching In-Reply-To: <20130701114015.GP20717@mdounin.ru> References: <20130701114015.GP20717@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > > By default nginx doesn't cache anything. You may want to look > into your php code to find out what is cached / causes reduced > response time in subsequent requests. > I think about it, but this is heavy app, and its cannot generate a page with 100-200 msec (unfortunatelly:) ). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240442#msg-240442 From akunz at wishmedia.de Mon Jul 1 12:37:31 2013 From: akunz at wishmedia.de (Alexander Kunz) Date: Mon, 01 Jul 2013 14:37:31 +0200 Subject: Cannot totally switch off caching In-Reply-To: References: Message-ID: <51D1780B.3010107@wishmedia.de> Hi, and don't forget database's cache. You write about PHP, often there is mysql also involved, which have the ability to cache. Regards Alexander Am 01.07.2013 10:33, schrieb imanenkov: > Greeting! > > For some testing I need to switch off a nginx caching (nginx + php-fpm). Now > I have a trouble - when I request a server (PHP app) first time, response > generated about 10 sec (its ok), but when a request a server another time > (during approx 1-2 mins from first request) response is returned within > 50-100 msec, as I understand from some cache. > > I trying to get pages via wget and httperf. > > My configurations: > > I create a 2 config templates named default and php: > > default: > index index.html index.php; > > location /status { > stub_status on; > } > > location / { > try_files $uri $uri/ /index.php?q=$uri&$args; > } > > # ????????? ?????? ? ??????? .htaccess ? .htpassword > location ~ /\.ht { > deny all; > } > > location = /favicon.ico { > log_not_found off; > access_log off; > } > > location = /robots.txt { > allow all; > log_not_found off; > access_log off; > } > > > php (initial variant): > > location ~ \.php$ { > try_files $uri =404; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > } > > host settings in sites-enabled: > > server{ > listen 80; > access_log /var/log/nginx/site.access_log; > error_log /var/log/nginx/site.error_log; > > root /var/www/site; > > include /etc/nginx/templates/default; > include /etc/nginx/templates/php; > } > > Tests running: > httperf --server site.local --num-conns 1 --verbose >perf.log > > First request - approx 10 sec > Second request approx 100 msec. > > I trying to disable caching with: > > location ~ \.php$ { > try_files $uri =404; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > fastcgi_cache off; > fastcgi_no_cache 1; > fastcgi_cache_bypass 1; > expires off; > } > > +restarting nginx ? php-fpm, but this has no effect. > > Please help. 
> > Best regards, > Ilya > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240433#msg-240433 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From ben at indietorrent.org Mon Jul 1 12:49:33 2013 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 01 Jul 2013 08:49:33 -0400 Subject: Cannot totally switch off caching In-Reply-To: References: Message-ID: <51D17ADD.3050000@indietorrent.org> On 7/1/2013 4:33 AM, imanenkov wrote: > For some testing I need to switch off a nginx caching (nginx + php-fpm). Now > I have a trouble - when I request a server (PHP app) first time, response > generated about 10 sec (its ok), but when a request a server another time > (during approx 1-2 mins from first request) response is returned within > 50-100 msec, as I understand from some cache. Are you using some type of opcode-caching software, e.g. APC, memcached, etc.? If you're convinced that PHP is doing the caching, I would disable any opcode-caching software first. -Ben From sb at waeme.net Mon Jul 1 15:29:20 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Mon, 1 Jul 2013 19:29:20 +0400 Subject: GPG error on Nginx repository - NO_PUBKEY In-Reply-To: References: <92184EC0-3F68-4E04-A1D9-325EC0046FAF@waeme.net> <511d6d0264744ee9e24293b32268a695.NginxMailingListEnglish@forum.nginx.org> <20130628061508.GA12850@redoubt.spodhuis.org> Message-ID: <7BDCB11F-6DD9-44CD-B67E-E569A23606C6@waeme.net> On 28 Jun2013, at 21:09 , B.R. wrote: > > We've added short explanation with links to gpg docs about how > and why pgp signatures should be checked: > > http://nginx.org/en/linux_packages.html#signatures > > ?The link to Dewinter's website is broken. > Maybe would you like to replace it with http://www.gnupg.org/documentation/howtos.en.html?? Unfortunately link to gpg minihowto on http://www.gnupg.org/documentation/howtos.en.html also points to dewinter.com. From nginx-forum at nginx.us Mon Jul 1 15:57:21 2013 From: nginx-forum at nginx.us (kgk) Date: Mon, 01 Jul 2013 11:57:21 -0400 Subject: How does FastCGI work under the covers? In-Reply-To: <20130630223457.GN20717@mdounin.ru> References: <20130630223457.GN20717@mdounin.ru> Message-ID: <2a601c989873ae017b0fd51f1ededf9d.NginxMailingListEnglish@forum.nginx.org> Hello Maxim, Thank you so much for answering my previous questions! Now I have a few more: 1. Will Nginx provide a FastCGI request only when it is 100% complete, i.e., the browser has provided every necessary byte? 2. Will Nginx always accept 100% of a FastCGI response and hold it inside internal buffers (or on disk), and not wait for the client (= web browser)? kgk Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > > FastCGI multiplexing isn't used by nginx. That is, within a > single connection to a fastcgi application only one request is > sent, and then nginx will wait for a response. More connections > will be opened if there are multiple simulteneous requests. 
> > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240429,240453#msg-240453 From mdounin at mdounin.ru Mon Jul 1 16:15:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jul 2013 20:15:41 +0400 Subject: Cannot totally switch off caching In-Reply-To: References: <20130701114015.GP20717@mdounin.ru> Message-ID: <20130701161541.GS20717@mdounin.ru> Hello! On Mon, Jul 01, 2013 at 08:32:32AM -0400, imanenkov wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > > > By default nginx doesn't cache anything. You may want to look > > into your php code to find out what is cached / causes reduced > > response time in subsequent requests. > > > I think about it, but this is heavy app, and its cannot generate a page with > 100-200 msec (unfortunatelly:) ). On the other hand, 100-200 msec is way too long for nginx to return a cached response. If you assume the response is cached by nginx somehow, simpliest test is to switch off php-fpm and check if you are still able to request a resource. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Jul 1 16:51:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jul 2013 20:51:19 +0400 Subject: How does FastCGI work under the covers? In-Reply-To: <2a601c989873ae017b0fd51f1ededf9d.NginxMailingListEnglish@forum.nginx.org> References: <20130630223457.GN20717@mdounin.ru> <2a601c989873ae017b0fd51f1ededf9d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130701165119.GU20717@mdounin.ru> Hello! On Mon, Jul 01, 2013 at 11:57:21AM -0400, kgk wrote: > Hello Maxim, > > Thank you so much for answering my previous questions! Now I have a few > more: > > 1. Will Nginx provide a FastCGI request only when it is 100% complete, > i.e., the browser has provided every necessary byte? Yes. > 2. Will Nginx always accept 100% of a FastCGI response and hold it inside > internal buffers (or on disk), and not wait for the client (= web browser)? Depends on configuration, see http://nginx.org/r/fastcgi_max_temp_file_size. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Jul 1 17:08:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jul 2013 21:08:24 +0400 Subject: GPG error on Nginx repository - NO_PUBKEY In-Reply-To: <7BDCB11F-6DD9-44CD-B67E-E569A23606C6@waeme.net> References: <92184EC0-3F68-4E04-A1D9-325EC0046FAF@waeme.net> <511d6d0264744ee9e24293b32268a695.NginxMailingListEnglish@forum.nginx.org> <20130628061508.GA12850@redoubt.spodhuis.org> <7BDCB11F-6DD9-44CD-B67E-E569A23606C6@waeme.net> Message-ID: <20130701170824.GV20717@mdounin.ru> Hello! On Mon, Jul 01, 2013 at 07:29:20PM +0400, Sergey Budnevitch wrote: > > On 28 Jun2013, at 21:09 , B.R. wrote: > > > > We've added short explanation with links to gpg docs about how > > and why pgp signatures should be checked: > > > > http://nginx.org/en/linux_packages.html#signatures > > > > ?The link to Dewinter's website is broken. > > Maybe would you like to replace it with http://www.gnupg.org/documentation/howtos.en.html?? > > Unfortunately link to gpg minihowto on http://www.gnupg.org/documentation/howtos.en.html also > points to dewinter.com. At least some translations seems to be on www.gnupg.org itself, and there are other links as well, so it probably worth the change. 
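For anyone who landed here because of the original NO_PUBKEY error: on a Debian/Ubuntu style system the usual sequence is to import the nginx signing key before updating the package lists. A rough sketch (assumes the key location given on the linux_packages page and a stock apt setup):

  wget http://nginx.org/keys/nginx_signing.key
  sudo apt-key add nginx_signing.key
  sudo apt-get update

After that apt can verify the repository signatures and the NO_PUBKEY warning should go away.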
-- Maxim Dounin http://nginx.org/en/donation.html From nginx at 2xlp.com Mon Jul 1 21:18:47 2013 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Mon, 1 Jul 2013 17:18:47 -0400 Subject: adding query string to redirect ? Message-ID: <0EB5CAEA-2E7E-4EF4-94CF-782BC2FC6F26@2xlp.com> we'd like to add onto the query string an identifier of the nginx server something like: return 301 https://$host$request_uri?source=server1 ; the problem is that we can't figure out how to make this work correctly when the url already contains query strings. Example: return 301 https://$host$request_uri?source=server1 ; Good! in /foo.bar out /foo.bar?source=server1 Bad! in /foo.bar?a=1 out /foo.bar?a=1?source=server1 How can we get this? in /foo.bar out /foo.bar?source=server1 in /foo.bar?a=1 out /foo.bar?a=1&source=server1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Mon Jul 1 21:53:46 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Mon, 1 Jul 2013 23:53:46 +0200 Subject: adding query string to redirect ? In-Reply-To: <0EB5CAEA-2E7E-4EF4-94CF-782BC2FC6F26@2xlp.com> References: <0EB5CAEA-2E7E-4EF4-94CF-782BC2FC6F26@2xlp.com> Message-ID: At the location/server level try: if ($is_args) { return 301 https://$host$request_uri&source=server1; } ## Goes here if the above is not chosen. return 301 https://$host$uri?source=server1 ; ----appa On Mon, Jul 1, 2013 at 11:18 PM, Jonathan Vanasco wrote: > > we'd like to add onto the query string an identifier of the nginx server > > something like: > > return 301 https://$host$request_uri?source=server1 ; > > the problem is that we can't figure out how to make this work correctly > when the url already contains query strings. > > > Example: > return 301 https://$host$request_uri?source=server1 ; > Good! > in /foo.bar > out /foo.bar?source=server1 > Bad! > in /foo.bar?a=1 > out /foo.bar?a=1?source=server1 > > How can we get this? > > in /foo.bar > out /foo.bar?source=server1 > > in /foo.bar?a=1 > out /foo.bar?a=1&source=server1 > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Jul 1 21:56:29 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 1 Jul 2013 22:56:29 +0100 Subject: adding query string to redirect ? In-Reply-To: <0EB5CAEA-2E7E-4EF4-94CF-782BC2FC6F26@2xlp.com> References: <0EB5CAEA-2E7E-4EF4-94CF-782BC2FC6F26@2xlp.com> Message-ID: <20130701215629.GU27406@craic.sysops.org> On Mon, Jul 01, 2013 at 05:18:47PM -0400, Jonathan Vanasco wrote: Hi there, > the problem is that we can't figure out how to make this work correctly when the url already contains query strings. If the format difference between $request_uri and $uri is not important to you in this case, and if the order of the arguments is not important, then replacing $request_uri with $uri?source=server1&$args might be enough. See http://nginx.org/en/docs/http/ngx_http_core_module.html#variables for what the different variables mean. If those conditions don't hold, then you may need to set your extra bit to start with ? or & depending on the existence of $args or of $is_args, which you could do in a map. 
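For instance, a minimal sketch of that map approach (the $sep variable name is just illustrative; the map block goes at http level):

  map $is_args $sep {
      default "?";
      "?"     "&";
  }

and then, wherever the redirect is issued:

  return 301 https://$host$request_uri${sep}source=server1;

Since $request_uri already carries the original query string, the map only has to decide whether the appended parameter needs a leading "?" or an "&".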
f -- Francis Daly francis at daoine.org From ben at indietorrent.org Tue Jul 2 00:28:46 2013 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 01 Jul 2013 20:28:46 -0400 Subject: What is the purpose of this "location {}" block? Message-ID: I'm using ISPConfig3 and the default nginx vhost configuration template includes the following: location ~ \.php$ { try_files /dcc5f1e779623ed233ada555c6142e42.htm @php; } location @php { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/lib/php5-fpm/web2.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; } What is the point of the first location block? Wouldn't the end-result be exactly the same if the content of the second block were to be moved into the first block, and the second block eliminated? For example: location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/lib/php5-fpm/web2.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; } Everything "works" just fine; I'm simply curious if there is some non-obvious reason for this "try_files trick" with a file that will never exist (that HTML file doesn't exist and seems to have a randomly-generated name -- presumably to ensure that it will *never* exist). Perhaps this file is created in "Maintenance Mode", thereby causing all requests to be redirected to the maintenance message page? Thanks! -Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jul 2 05:26:40 2013 From: nginx-forum at nginx.us (imanenkov) Date: Tue, 02 Jul 2013 01:26:40 -0400 Subject: Cannot totally switch off caching In-Reply-To: <51D17ADD.3050000@indietorrent.org> References: <51D17ADD.3050000@indietorrent.org> Message-ID: Ben Johnson Wrote: > Are you using some type of opcode-caching software, e.g. APC, > memcached, > etc.? If you're convinced that PHP is doing the caching, I would > disable > any opcode-caching software first. > > -Ben Yes, initially I use APC, but I switch it off, and problem still exist. 
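For reference, a quick way to be sure APC really is out of the picture (assuming it is enabled through php.ini or a conf.d snippet) is to turn it off there:

  apc.enabled = 0

and then restart the PHP backend, e.g.:

  /etc/init.d/php5-fpm restart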
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240470#msg-240470 From nginx-forum at nginx.us Tue Jul 2 06:08:52 2013 From: nginx-forum at nginx.us (imanenkov) Date: Tue, 02 Jul 2013 02:08:52 -0400 Subject: Cannot totally switch off caching In-Reply-To: <20130701161541.GS20717@mdounin.ru> References: <20130701161541.GS20717@mdounin.ru> Message-ID: <9674eedfc7421308b18e546131e9b387.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: > On the other hand, 100-200 msec is way too long for nginx to > return a cached response. > > If you assume the response is cached by nginx somehow, simpliest > test is to switch off php-fpm and check if you are still able to > request a resource. Thanks for idea! I change a location path in template named "php" from "location ~ \.php$" to "location ~ \.pZp$" (just for excluding *.php processing), restart nginx, and server returned a just content (source) of my index.php file. Then I revert location changes back to \.php, restart nginx, make request, and server return fast response of correct page again (0.01 sec with wget, and 60 msec with httperf). 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240474#msg-240474 From nginx-forum at nginx.us Tue Jul 2 09:56:50 2013 From: nginx-forum at nginx.us (hawkins) Date: Tue, 02 Jul 2013 05:56:50 -0400 Subject: Connection reset by peer and other problems Message-ID: <12a2cf2a344e2ea02f1d92e7c78133f0.NginxMailingListEnglish@forum.nginx.org> Hi guys, I'm getting a lot of errors on logs right now and we've already tried everything to solve the problem without success. Everything was running just fine until last week. We use nginx and php-fpm. nginx version: nginx/1.2.0 The errors that we getting are from random pages and random domains this is not specific. Here is some errors from the logs: [error] 26002#0: *8882 readv() failed (104: Connection reset by peer) while reading upstream, client: XX.XXX.XX.XXX, server: www.DOMAINX.*, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm/site_live.sock:", host: "DOMAINX.de", referrer: "http://www.google.de/url?sa=t&rct=j&q=DOMAINX&source=web&cd=1&sqi=2&ved=0CDsQFjAA&url=https%3A%2F%2FDOMAINX.de%2F&ei=EaDSUd7UK6K84ASlu4DABA&usg=AFQjCNEd-czvmCp5Jji3kkpWZ1MtMdhpMA&bvm=bv.48572450,d.bGE" [error] 26002#0: *8879 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: XX.XX.X.XX, server: www.DOMAINX.*, request: "GET /files/2013/06/Twitter1.png HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm/site_live.sock:", host: "DOMAINX.nl" [error] 26016#0: *6433 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 80.153.156.134, server: www.DOMAINX.*, request: "GET /bezgotowkowo/conversion.php?campaignID=11759&productID=17653&conversionType=sale&descrMerchant=&descrAffiliate=>mcb=1675309277 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm/site_live.sock:", host: "DOMAINX.pl", referrer: "https://DOMAINX.at/order-skip" And here is is our nginx configs: ---------------------------------------------- server { listen 95; server_name www.DOMAINX.th DOMAINX.th www.DOMAINX.se DOMAINX.se www.DOMAINX.be DOMAINX.be; rewrite ^ $scheme://DOMAINX.de$request_uri?; } server { listen 95; server_name www.DOMAINX.com; rewrite ^ $scheme://DOMAINX.com$request_uri?; } server { listen 95; server_name DOMAINX.com; access_log /site/logs/live/nginx/landing.access.log main; error_log /site/logs/live/nginx/landing.error.log; root /site/www/htdocs/landing.live/public/; index index.html index.php; port_in_redirect off; if ($http_x_forwarded_protocol !~* 'https') { return 301 https://$host$request_uri; } error_page 404 /redirect; rewrite ^/redirect $scheme://$host; location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm/site_live.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param PHP_VALUE "max_input_time=1200\n memory_limit=512M\n post_max_size=128M\n upload_max_filesize=128M\n newrelic.enabled=1\n newrelic.appname=DOMAINX.com\n newrelic.framework=zend\n error_log=/site/logs/live/nginx/site.error.log"; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param APPLICATION_ENV live; fastcgi_param HTTPS $thttps; } } server { listen 95; server_name m.DOMAINX.*; access_log /site/logs/live/nginx/mobile.access.log main; error_log /site/logs/live/nginx/mobile.error.log; root /site/www/htdocs/mobile.live/public/; index index.html index.php; port_in_redirect off; location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm/site_live.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param PHP_VALUE "max_input_time=1200\n 
memory_limit=512M\n post_max_size=128M\n upload_max_filesize=128M\n newrelic.enabled=1\n newrelic.appname=mobile-live\n newrelic.framework=zend\n error_log=/site/logs/live/nginx/site.error.log"; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param APPLICATION_ENV live; fastcgi_param HTTPS $thttps; } error_page 401 /error/401.html; error_page 403 /error/403.html; error_page 404 /error/404.html; error_page 500 502 503 504 /error/50x.html; location /error { root /var/www/default/; } } server { listen 95; server_name developer.DOMAINX.com; access_log /data/site/logs/live/nginx/landing.developer.access.log main; error_log /data/site/logs/live/nginx/landing.developer.error.log; root /var/www/developer; index index.html index.php; location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm/developer.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param PHP_VALUE "max_input_time=1200\n memory_limit=512M\n post_max_size=128M\n upload_max_filesize=128M\n newrelic.enabled=1\n newrelic.appname=developer\n error_log=/data/site/logs/live/developer.error.log"; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param APPLICATION_ENV live; fastcgi_param HTTPS $thttps; } error_page 401 /error/401.html; error_page 403 /error/403.html; error_page 404 /error/404.html; error_page 500 502 503 504 /error/50x.html; location /error { root /var/www/default/; } } server { listen 95 default_server; server_name www.DOMAINX.* DOMAINX.* site.DOMAINX.*; access_log /site/logs/live/nginx/site.access.log main; error_log /site/logs/live/nginx/site.error.log; client_max_body_size 20m; client_header_timeout 1200; client_body_timeout 1200; send_timeout 1200; keepalive_timeout 1200; set $thttps $https; set $tscheme $scheme; if ($http_x_forwarded_protocol = https) { set $thttps on; set $tscheme "https"; } if ($http_x_forwarded_protocol = HTTPS) { set $thttps on; set $tscheme "https"; } root /site/www/htdocs/site.live/site/public/; try_files $uri $uri/ /index.php?$args; index index.php; location ^~ /files/ { expires 864000; fastcgi_pass unix:/var/run/php5-fpm/site_live.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_connect_timeout 1200; fastcgi_send_timeout 1200; fastcgi_read_timeout 1200; fastcgi_param PHP_VALUE "max_input_time=1200\n memory_limit=512M\n post_max_size=128M\n upload_max_filesize=128M\n newrelic.enabled=1\n newrelic.appname=site-live\n newrelic.framework=zend\n error_log=/site/logs/live/nginx/site.error.log"; fastcgi_param SCRIPT_FILENAME /site/www/htdocs/site.live/site/public/index.php$args; fastcgi_param APPLICATION_ENV live; fastcgi_param HTTPS $thttps; # allow clients to cache served resources fastcgi_param PHP_ADMIN_VALUE "session.cache_limiter = public"; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm/site_live.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_connect_timeout 1200; fastcgi_send_timeout 1200; fastcgi_read_timeout 1200; fastcgi_param PHP_VALUE "max_input_time=1200\n memory_limit=512M\n post_max_size=128M\n upload_max_filesize=128M\n newrelic.enabled=1\n newrelic.appname=site-live\n newrelic.framework=zend\n error_log=/site/logs/live/nginx/site.error.log"; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param APPLICATION_ENV live; fastcgi_param HTTPS $thttps; # emulate 'session.cache_limiter = nocache' setting in "php.ini": fastcgi_param PHP_ADMIN_VALUE "session.cache_limiter = nocache"; } error_page 401 /error/401.html; error_page 403 /error/403.html; error_page 404 
/error/404.html; error_page 500 502 503 504 /error/50x.html; location /error { root /var/www/default/; } location /rev.txt { access_log off; auth_basic off; } set $redirect off; if ($host ~* "DOMAINX\.(.*)") { set $tld $1; } if ($host ~* "^www\.DOMAINX\.(.*)") { set $redirect on; } if ($host ~* "(zenpay|evopay)\.(.*)") { set $tld $2; rewrite ^ $scheme://DOMAINX.$tld$request_uri? permanent; } if ($request_uri ~* rev.txt) { set $redirect off; } if ($thttps != "on") { set $redirect on; } if ($redirect = on) { rewrite ^ https://DOMAINX.$tld$request_uri? permanent; } if (-f /var/deploy/live.maintenance.lock) { rewrite ^(.*)$ /maintenance.php break; } } Thanks for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240477,240477#msg-240477 From mdounin at mdounin.ru Tue Jul 2 10:12:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jul 2013 14:12:06 +0400 Subject: Cannot totally switch off caching In-Reply-To: <9674eedfc7421308b18e546131e9b387.NginxMailingListEnglish@forum.nginx.org> References: <20130701161541.GS20717@mdounin.ru> <9674eedfc7421308b18e546131e9b387.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130702101206.GX20717@mdounin.ru> Hello! On Tue, Jul 02, 2013 at 02:08:52AM -0400, imanenkov wrote: > Maxim Dounin Wrote: > > On the other hand, 100-200 msec is way too long for nginx to > > return a cached response. > > > > If you assume the response is cached by nginx somehow, simpliest > > test is to switch off php-fpm and check if you are still able to > > request a resource. > Thanks for idea! I change a location path in template named "php" from > "location ~ \.php$" to "location ~ \.pZp$" (just for excluding *.php > processing), restart nginx, and server returned a just content (source) of > my index.php file. Then I revert location changes back to \.php, restart > nginx, make request, and server return fast response of correct page again > (0.01 sec with wget, and 60 msec with httperf). Unfortunately, all the tests you did actually prove nothing. You've been told to switch off php-fpm, not to change nginx configuration. If you want to change nginx configuration - just add $upstream_cache_status variable to a log, it will show if a response was from nginx cache (HIT) or was requested from a backend. Other upstream-related variables may be interesting too, in particular $upstream_response_time. See here for more: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 2 11:49:56 2013 From: nginx-forum at nginx.us (imanenkov) Date: Tue, 02 Jul 2013 07:49:56 -0400 Subject: Cannot totally switch off caching In-Reply-To: <20130702101206.GX20717@mdounin.ru> References: <20130702101206.GX20717@mdounin.ru> Message-ID: <5bc28178b4bbac2c5207ca8e87a65f20.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Unfortunately, all the tests you did actually prove nothing. > You've been told to switch off php-fpm, not to change nginx > configuration. Do you mean stopping php-fpm with "/etc/init.d/php5-fpm stop" command? In this case I receive a "error 502 bad gateway" error. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240481#msg-240481 From mdounin at mdounin.ru Tue Jul 2 12:27:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jul 2013 16:27:51 +0400 Subject: Cannot totally switch off caching In-Reply-To: <5bc28178b4bbac2c5207ca8e87a65f20.NginxMailingListEnglish@forum.nginx.org> References: <20130702101206.GX20717@mdounin.ru> <5bc28178b4bbac2c5207ca8e87a65f20.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130702122751.GA20717@mdounin.ru> Hello! On Tue, Jul 02, 2013 at 07:49:56AM -0400, imanenkov wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Unfortunately, all the tests you did actually prove nothing. > > You've been told to switch off php-fpm, not to change nginx > > configuration. > Do you mean stopping php-fpm with "/etc/init.d/php5-fpm stop" command? Yes. > In this case I receive a "error 502 bad gateway" error. This proves that the response is _not_ cached by nginx, but it's got from php-fpm instead. As already suggested early, you have to look into your php code. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 2 12:37:05 2013 From: nginx-forum at nginx.us (imanenkov) Date: Tue, 02 Jul 2013 08:37:05 -0400 Subject: Cannot totally switch off caching In-Reply-To: <20130702101206.GX20717@mdounin.ru> References: <20130702101206.GX20717@mdounin.ru> Message-ID: Maxim Dounin Wrote: > If you want to change nginx configuration - just add > $upstream_cache_status variable to a log, it will show if a > response was from nginx cache (HIT) or was requested from a > backend. > > Other upstream-related variables may be interesting too, in > particular $upstream_response_time. See here for more: I add new log_format in /etc/nginx/nginx.conf: log_format main '$remote_addr - $remore_user, $upstream_cache_status : $upstream_response_time' and assign this formatter to access.log. On several times running on "caches/fast" site output is: 192.168.111.254 - -, - : 0.090 192.168.111.254 - -, - : 0.044 192.168.111.254 - -, - : 0.054 192.168.111.254 - -, - : 0.057 192.168.111.254 - -, - : 0.047 192.168.111.254 - -, - : 0.049 192.168.111.254 - -, - : 0.053 wait some time (~2 mins), run again, and another string: 192.168.111.254 - -, - : 23.998 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240485#msg-240485 From mdounin at mdounin.ru Tue Jul 2 12:56:42 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jul 2013 16:56:42 +0400 Subject: Cannot totally switch off caching In-Reply-To: References: <20130702101206.GX20717@mdounin.ru> Message-ID: <20130702125642.GC20717@mdounin.ru> Hello! On Tue, Jul 02, 2013 at 08:37:05AM -0400, imanenkov wrote: > Maxim Dounin Wrote: > > If you want to change nginx configuration - just add > > $upstream_cache_status variable to a log, it will show if a > > response was from nginx cache (HIT) or was requested from a > > backend. > > > > Other upstream-related variables may be interesting too, in > > particular $upstream_response_time. See here for more: > > I add new log_format in /etc/nginx/nginx.conf: > log_format main '$remote_addr - $remore_user, $upstream_cache_status : > $upstream_response_time' > > and assign this formatter to access.log. 
> > On several times running on "caches/fast" site output is: > 192.168.111.254 - -, - : 0.090 > 192.168.111.254 - -, - : 0.044 > 192.168.111.254 - -, - : 0.054 > 192.168.111.254 - -, - : 0.057 > 192.168.111.254 - -, - : 0.047 > 192.168.111.254 - -, - : 0.049 > 192.168.111.254 - -, - : 0.053 > > wait some time (~2 mins), run again, and another string: > 192.168.111.254 - -, - : 23.998 And this again proves that nginx doesn't cache anything but passes requests to php-fpm. As previously suggested, you have to look into your php code. -- Maxim Dounin http://nginx.org/en/donation.html From akunz at wishmedia.de Tue Jul 2 13:07:19 2013 From: akunz at wishmedia.de (Alexander Kunz) Date: Tue, 02 Jul 2013 15:07:19 +0200 Subject: Cannot totally switch off caching In-Reply-To: References: <20130702101206.GX20717@mdounin.ru> Message-ID: <51D2D087.3040907@wishmedia.de> Hello, Am 02.07.2013 14:37, schrieb imanenkov: > Maxim Dounin Wrote: >> If you want to change nginx configuration - just add >> $upstream_cache_status variable to a log, it will show if a >> response was from nginx cache (HIT) or was requested from a >> backend. >> >> Other upstream-related variables may be interesting too, in >> particular $upstream_response_time. See here for more: > > I add new log_format in /etc/nginx/nginx.conf: > log_format main '$remote_addr - $remore_user, $upstream_cache_status : > $upstream_response_time' > > and assign this formatter to access.log. > > On several times running on "caches/fast" site output is: > 192.168.111.254 - -, - : 0.090 > 192.168.111.254 - -, - : 0.044 > 192.168.111.254 - -, - : 0.054 > 192.168.111.254 - -, - : 0.057 > 192.168.111.254 - -, - : 0.047 > 192.168.111.254 - -, - : 0.049 > 192.168.111.254 - -, - : 0.053 > > wait some time (~2 mins), run again, and another string: > 192.168.111.254 - -, - : 23.998 > do you use a PHP framework? Most frameworks can cache also. What happens if you request the page with a unique parameter which is not used by PHP? Somethink like a random value at the end of your url test.php?random=xxxxx What happens on your page/script generally? 24 seconds is a long time. Are you querying a database? Or filesystem operations? Perhaps it can help you commenting out step by step in your PHP page/script. What kind of sotware do you use for this load test? Is this "192.168.111.254" your localhost? Or is it possible that there is a proxy between your test server and load generator which is not bypassed? It's not a nginx caching result. Kind regrads Alexander From contact at jpluscplusm.com Tue Jul 2 13:08:47 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 2 Jul 2013 14:08:47 +0100 Subject: What is the purpose of this "location {}" block? In-Reply-To: References: Message-ID: A maint mode switch sounds probable. -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From mdounin at mdounin.ru Tue Jul 2 13:34:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jul 2013 17:34:21 +0400 Subject: nginx-1.5.2 Message-ID: <20130702133421.GD20717@mdounin.ru> Changes with nginx 1.5.2 02 Jul 2013 *) Feature: now several "error_log" directives can be used. *) Bugfix: the $r->header_in() embedded perl method did not return value of the "Cookie" and "X-Forwarded-For" request header lines; the bug had appeared in 1.3.14. *) Bugfix: in the ngx_http_spdy_module. Thanks to Jim Radford. *) Bugfix: nginx could not be built on Linux with x32 ABI. Thanks to Serguei Ivantsov. 
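As an illustration of the first item above: starting with 1.5.2 a configuration may carry more than one error_log directive at the same level, for example (hypothetical paths and levels):

  error_log /var/log/nginx/error.log warn;
  error_log /var/log/nginx/error-notice.log notice;

Earlier versions accepted only a single error_log directive per configuration level.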
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 2 13:46:13 2013 From: nginx-forum at nginx.us (imanenkov) Date: Tue, 02 Jul 2013 09:46:13 -0400 Subject: Cannot totally switch off caching In-Reply-To: <51D2D087.3040907@wishmedia.de> References: <51D2D087.3040907@wishmedia.de> Message-ID: Alexander Kunz Wrote: ------------------------------------------------------- > > do you use a PHP framework? Most frameworks can cache also. Yes, site based on Drupal 7. I trying to switch off drupal cache, all the same. > What > happens > if you request the page with a unique parameter which is not used by > PHP? Somethink like a random value at the end of your url > test.php?random=xxxxx When I add a new parameter (http://site/?qqq), page generate long time. When a request a page with the same parameter again, page return fast. When I change parameter again, page generate long time again. > What happens on your page/script generally? 24 seconds is a long time. This is some sort of information portal, during page render scripts make about 50-75 requests to db. > Are you querying a database? Yes, mysql 5 on anotner VM (in the proxmox virtual local network). >Or filesystem operations? No, filesystem is not using (during start page generation, which I use for this tests). >Perhaps it can > help you commenting out step by step in your PHP page/script. > > What kind of sotware do you use for this load test? I run tests via httperf (and now for single requests trying simple wget). All machines is VM under proxmox with 512mb RAM and 1 virtual core, except nginx+php server - there is 1 Gb. Dictinct VMs for nginx+php, db, and machine for test running. Also there is one proxy/load balancer machine, but now balancer have 1 node in config and redirect all requests to nginx (proxy using nginx too). I trying to exclude proxy totally (with direct access to nginx+php server - all the same). >Is this > "192.168.111.254" your localhost? Or is it possible that there is a > proxy between your test server and load generator which is not > bypassed? This is VM - proxy, but I trying to exclude it and make direct requests to nginx+php machine, this has no effect. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240496#msg-240496 From akunz at wishmedia.de Tue Jul 2 15:05:51 2013 From: akunz at wishmedia.de (Alexander Kunz) Date: Tue, 02 Jul 2013 17:05:51 +0200 Subject: Cannot totally switch off caching In-Reply-To: References: <51D2D087.3040907@wishmedia.de> Message-ID: <51D2EC4F.4020606@wishmedia.de> Hello, Am 02.07.2013 15:46, schrieb imanenkov: > Alexander Kunz Wrote: > ------------------------------------------------------- >> >> do you use a PHP framework? Most frameworks can cache also. > Yes, site based on Drupal 7. I trying to switch off drupal cache, all the > same. i am no drupal user/developer but if i google about drupal cache it sounds, there is not only one cache, and it sounds like its not only one click to disable it complete. Perhaps you can find some informations about the cache here: https://drupal.org/node/797346 or it is a good point so start. If you exclude the frontend proxy, for me it sounds like a cache "problem" in drupal. Perhaps you can post a question in a drupal developer mailinglist? 
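(For what it's worth, a sketch of how this is often checked on a Drupal 7 site, assuming drush is installed; not an nginx matter:

  drush vset cache 0   # turn off page caching for anonymous users
  drush cc all         # flush all Drupal cache tables

The same settings are reachable in the admin UI under Configuration > Development > Performance.)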
good luck, Alexander From nginx-forum at nginx.us Tue Jul 2 15:23:28 2013 From: nginx-forum at nginx.us (crazynuxer) Date: Tue, 02 Jul 2013 11:23:28 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: <20130522152034.GJ69760@mdounin.ru> References: <20130522152034.GJ69760@mdounin.ru> Message-ID: Hello all, I have some problem , I use version nginx/1.2.2 and nginx/1.0.11, should I upgrade my version first or just compile then send you my debug log thanks, Rizki Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,240499#msg-240499 From nginx-forum at nginx.us Tue Jul 2 21:24:15 2013 From: nginx-forum at nginx.us (Peleke) Date: Tue, 02 Jul 2013 17:24:15 -0400 Subject: Disable open_file_cache for a specific location Message-ID: I have set the open_file_cache variable to max=5000 inactive=20s so that it is enabled globally but now I want to disable all caching for a specific virtual location on a domain: - open_file_cache max=5000 inactive=20s; -- location ^~ /gallery { open_file_cache off; } Sadly that rule doesn't work because /gallery doesn't exist as a real folder. It is only a permalink (/%category%/%postname%/) from WordPress. All subpages (like www.domain.tld/gallery/flickr) should have no cache enabled because the plugin for those galleries doesn't work with caching. How can I solve the problem? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240504#msg-240504 From francis at daoine.org Tue Jul 2 21:43:30 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 2 Jul 2013 22:43:30 +0100 Subject: Disable open_file_cache for a specific location In-Reply-To: References: Message-ID: <20130702214330.GV27406@craic.sysops.org> On Tue, Jul 02, 2013 at 05:24:15PM -0400, Peleke wrote: Hi there, > I have set the open_file_cache variable to max=5000 inactive=20s so that it > is enabled globally but now I want to disable all caching for a specific > virtual location on a domain: http://nginx.org/r/open_file_cache describes what open_file_cache does. That description doesn't seem to match what you seem to indicate you think it does. > - open_file_cache max=5000 inactive=20s; > -- location ^~ /gallery { > open_file_cache off; > } > > Sadly that rule doesn't work because /gallery doesn't exist as a real > folder. Why do you think that it doesn't work? What test do you do; what result do you see; and what result do you expect to see? > It is only a permalink (/%category%/%postname%/) from WordPress. All > subpages (like www.domain.tld/gallery/flickr) should have no cache enabled > because the plugin for those galleries doesn't work with caching. nginx tends not to cache unless you configure it to. Why do you think nginx is caching something that you do not want it to? > How can I solve the problem? I suspect that describing the problem in terms of specific things that can be observed or tested will be a good start. Right now, I'm not sure what problem you are reporting. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Jul 3 00:16:00 2013 From: nginx-forum at nginx.us (Peleke) Date: Tue, 02 Jul 2013 20:16:00 -0400 Subject: Disable open_file_cache for a specific location In-Reply-To: <20130702214330.GV27406@craic.sysops.org> References: <20130702214330.GV27406@craic.sysops.org> Message-ID: Sorry, maybe it is only related to the permalink structure which worked with Apache before server move. The gallery script adds it own extensions to the address. 
You can see it live on www.peleke.de/galerie and then click on one of the three gallery sources (Flickr, Facebook or Google+). Can you see what I mean? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240506#msg-240506 From sandeepvreddy at outlook.com Wed Jul 3 04:55:35 2013 From: sandeepvreddy at outlook.com (Sandeep L) Date: Wed, 3 Jul 2013 10:25:35 +0530 Subject: Nginx upstream servers status Message-ID: Hi, I am trying to configure nginx with upstream. We have 3 machines where we run application server and proxy passing all requests from nginx to application serves. I used following configuration in nginx: upstream appcluster { server host1.example.com:8080 max_fails=2 fail_timeout=300s; server host2.example.com:8080 max_fails=2 fail_timeout=300s; } Now issue is if the request comes to nginx when one server is down due to unknown reasons its waiting for a long time getting response or some times its getting connection timeout. Is there any module in nginx to get upstream servers status and forward requests only working upstream server. Can someone suggest me right configuration to get response from appcluster without latency or connection time out whenever a server wont respond. Thanks, Sandeep. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajan at noppix.com Wed Jul 3 05:05:09 2013 From: sajan at noppix.com (Sajan Parikh) Date: Wed, 03 Jul 2013 00:05:09 -0500 Subject: Nginx upstream servers status In-Reply-To: References: Message-ID: <51D3B105.8040007@noppix.com> Have you looked at the proxy_next_upstream configuration? http://wiki.nginx.org/NginxHttpProxyModule#proxy_next_upstream It should do what you want. Also, depending on yourapplication and traffic, it might also be worth looking into lowering fail_timeout. Sajan Parikh /Owner, Noppix LLC/ e: sajan at noppix.com p: (563) 726-0371 Noppix LLC Logo On 07/02/2013 11:55 PM, Sandeep L wrote: > Hi, > > > I am trying to configure nginx with upstream. > > > We have 3 machines where we run application server and proxy passing > all requests from nginx to application serves. > I used following configuration in nginx: > > > *upstream appcluster {* > * server host1.example.com:8080 max_fails=2 fail_timeout=300s;* > * server host2.example.com:8080 max_fails=2 fail_timeout=300s;* > *}* > > > Now issue is if the request comes to nginx when one server is down due > to unknown reasons its waiting for a long time getting response or > some times its getting connection timeout. > > > Is there any module in nginx to get upstream servers status and > forward requests only working upstream server. > > > Can someone suggest me right configuration to get response from > appcluster without latency or connection time out whenever a server > wont respond. > > > Thanks, > Sandeep. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: emailsiglogo.png Type: image/png Size: 6717 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 4473 bytes Desc: S/MIME Cryptographic Signature URL: From nginx-forum at nginx.us Wed Jul 3 05:11:20 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 03 Jul 2013 01:11:20 -0400 Subject: Nginx upstream servers status In-Reply-To: References: Message-ID: <8583b010401aa9d8f2e6a0ceaf0c2745.NginxMailingListEnglish@forum.nginx.org> you allow 600 seconds to pass until you npotice, that your upstream-server is not responsible. ... max_fails=2 fail_timeout=300s; why? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240515#msg-240515 From sandeepvreddy at outlook.com Wed Jul 3 05:37:24 2013 From: sandeepvreddy at outlook.com (Sandeep L) Date: Wed, 3 Jul 2013 11:07:24 +0530 Subject: Nginx upstream servers status In-Reply-To: <8583b010401aa9d8f2e6a0ceaf0c2745.NginxMailingListEnglish@forum.nginx.org> References: , <8583b010401aa9d8f2e6a0ceaf0c2745.NginxMailingListEnglish@forum.nginx.org> Message-ID: @Parikh I tried with proxy_next_upstream also but facing same issue. Regarding fail_timeout I used values from 10s to 30s but facing same issue. I used following configuration, just check and let me know If I am missing any thing. upstream appcluster { server host1.example.com:8080 max_fails=2 fail_timeout=10s; server host2.example.com:8080 max_fails=2 fail_timeout=10s;} server { listen *; location / { proxy_pass http://appcluster; proxy_next_upstream error timeout http_404 http_500 http_502 http_503 http_504 off; proxy_set_header X-Real-IP $remote_addr; }} Thanks,Sandeep. > To: nginx at nginx.org > Subject: Re: Nginx upstream servers status > From: nginx-forum at nginx.us > Date: Wed, 3 Jul 2013 01:11:20 -0400 > > you allow 600 seconds to pass until you npotice, that your upstream-server > is not responsible. > > ... max_fails=2 fail_timeout=300s; > > why? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240515#msg-240515 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajan at noppix.com Wed Jul 3 05:47:31 2013 From: sajan at noppix.com (Sajan Parikh) Date: Wed, 03 Jul 2013 00:47:31 -0500 Subject: Nginx upstream servers status In-Reply-To: References: , <8583b010401aa9d8f2e6a0ceaf0c2745.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51D3BAF3.2010805@noppix.com> In your proxy_next_upstream, why do you have 'off' enabled. off --- it forbids the request transfer to the next server. Remove 'off' from that line. Sajan Parikh /Owner, Noppix LLC/ e: sajan at noppix.com p: (563) 726-0371 Noppix LLC Logo On 07/03/2013 12:37 AM, Sandeep L wrote: > @Parikh I tried with proxy_next_upstream also but facing same issue. > > Regarding fail_timeout I used values from 10s to 30s but facing same > issue. > > I used following configuration, just check and let me know If I am > missing any thing. > > *upstream appcluster { > server host1.example.com:8080 max_fails=2 fail_timeout=10s; > server host2.example.com:8080 max_fails=2 fail_timeout=10s; > } > * > * > * > *server {* > * listen *;* > * location / {* > * proxy_pass http://appcluster;* > * proxy_next_upstream error timeout http_404 > http_500 http_502 http_503 http_504 off;* > * proxy_set_header X-Real-IP $remote_addr;* > * }* > *}* > > > Thanks, > Sandeep. 
> > > > To: nginx at nginx.org > > Subject: Re: Nginx upstream servers status > > From: nginx-forum at nginx.us > > Date: Wed, 3 Jul 2013 01:11:20 -0400 > > > > you allow 600 seconds to pass until you npotice, that your > upstream-server > > is not responsible. > > > > ... max_fails=2 fail_timeout=300s; > > > > why? > > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,240513,240515#msg-240515 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: emailsiglogo.png Type: image/png Size: 6717 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4473 bytes Desc: S/MIME Cryptographic Signature URL: From sandeepvreddy at outlook.com Wed Jul 3 05:55:32 2013 From: sandeepvreddy at outlook.com (Sandeep L) Date: Wed, 3 Jul 2013 11:25:32 +0530 Subject: Nginx upstream servers status In-Reply-To: <51D3BAF3.2010805@noppix.com> References: , , <8583b010401aa9d8f2e6a0ceaf0c2745.NginxMailingListEnglish@forum.nginx.org>, , <51D3BAF3.2010805@noppix.com> Message-ID: Hi Parikh, I used following configuration but still its waiting for long time. server host1.example.com:8080 max_fails=2 fail_timeout=5s; As you suggested removed off proxy_next_upstream error timeout http_404 http_500 http_502 http_503 http_504; Thanks,Sandeep. Date: Wed, 3 Jul 2013 00:47:31 -0500 From: sajan at noppix.com To: nginx at nginx.org Subject: Re: Nginx upstream servers status In your proxy_next_upstream, why do you have 'off' enabled. off ? it forbids the request transfer to the next server. Remove 'off' from that line. Sajan Parikh Owner, Noppix LLC e: sajan at noppix.com p: (563) 726-0371 On 07/03/2013 12:37 AM, Sandeep L wrote: @Parikh I tried with proxy_next_upstream also but facing same issue. Regarding fail_timeout I used values from 10s to 30s but facing same issue. I used following configuration, just check and let me know If I am missing any thing. upstream appcluster { server host1.example.com:8080 max_fails=2 fail_timeout=10s; server host2.example.com:8080 max_fails=2 fail_timeout=10s; } server { listen *; location / { proxy_pass http://appcluster; proxy_next_upstream error timeout http_404 http_500 http_502 http_503 http_504 off; proxy_set_header X-Real-IP $remote_addr; } } Thanks, Sandeep. > To: nginx at nginx.org > Subject: Re: Nginx upstream servers status > From: nginx-forum at nginx.us > Date: Wed, 3 Jul 2013 01:11:20 -0400 > > you allow 600 seconds to pass until you npotice, that your upstream-server > is not responsible. > > ... max_fails=2 fail_timeout=300s; > > why? 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240515#msg-240515 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: emailsiglogo.png Type: image/png Size: 6717 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Jul 3 06:04:46 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 03 Jul 2013 02:04:46 -0400 Subject: Nginx upstream servers status In-Reply-To: References: Message-ID: <5fd9a845f33559d758d101ea9cbb03b9.NginxMailingListEnglish@forum.nginx.org> you are sure, your upstream-servers are not answering on given ports? http://wiki.nginx.org/HttpUpstreamModule#server vs http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout http://wiki.nginx.org/NginxHttpProxyModule#proxy_read_timeout Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240521#msg-240521 From sandeepvreddy at outlook.com Wed Jul 3 06:10:48 2013 From: sandeepvreddy at outlook.com (Sandeep L) Date: Wed, 3 Jul 2013 11:40:48 +0530 Subject: Nginx upstream servers status In-Reply-To: <5fd9a845f33559d758d101ea9cbb03b9.NginxMailingListEnglish@forum.nginx.org> References: , <5fd9a845f33559d758d101ea9cbb03b9.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have 2 upstream servers host1 and host2. host1 is up and running and listening on port 8080host2 is powered off. When I sent a request to nginx, I received response after 2 minutes. Until 2 minutes the request is waiting. Thanks,Sandeep. > To: nginx at nginx.org > Subject: Re: RE: Nginx upstream servers status > From: nginx-forum at nginx.us > Date: Wed, 3 Jul 2013 02:04:46 -0400 > > you are sure, your upstream-servers are not answering on given ports? > > > http://wiki.nginx.org/HttpUpstreamModule#server > > vs > > http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout > http://wiki.nginx.org/NginxHttpProxyModule#proxy_read_timeout > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240521#msg-240521 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jul 3 06:39:33 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 03 Jul 2013 02:39:33 -0400 Subject: Nginx upstream servers status In-Reply-To: References: Message-ID: <9fe6928a30f050fd34f8ac5447ba5ed9.NginxMailingListEnglish@forum.nginx.org> i'd suggest you'll start with low-level-debugging: - goto host1 and make a tcpdump port 8080 / tail -f against access-logs of that server; - make a request - check., what happens to that request, e.g. 
where it "hangs" you could also, just in case, make a "tcpdump port 808 and host host2" onm your nginx, just to make sure that nginx is sending the requests to the right server Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240523#msg-240523 From nginx-forum at nginx.us Wed Jul 3 06:50:29 2013 From: nginx-forum at nginx.us (imanenkov) Date: Wed, 03 Jul 2013 02:50:29 -0400 Subject: Cannot totally switch off caching In-Reply-To: <51D2EC4F.4020606@wishmedia.de> References: <51D2EC4F.4020606@wishmedia.de> Message-ID: <66957bf4dd6add8c57584e62eb882da6.NginxMailingListEnglish@forum.nginx.org> Alexander Kunz Wrote: > i am no drupal user/developer but if i google about drupal cache it > sounds, there is not only one cache, and it sounds like its not only > one > click to disable it complete. Perhaps you can find some informations > about the cache here: > > https://drupal.org/node/797346 > > or it is a good point so start. If you exclude the frontend proxy, for > me it sounds like a cache "problem" in drupal. Perhaps you can post a > question in a drupal developer mailinglist? Thanks a lot! I trying to disable all drupal caches and clear cache tables, and all for as needed (every request process "long" time. I could not believe that drupal cache so fast (in testing environment), so initially I did not think about this case. My fault( Thank to all for the right search direction! Best regards. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240524#msg-240524 From akunz at wishmedia.de Wed Jul 3 06:53:31 2013 From: akunz at wishmedia.de (Alexander Kunz) Date: Wed, 03 Jul 2013 08:53:31 +0200 Subject: Cannot totally switch off caching In-Reply-To: <66957bf4dd6add8c57584e62eb882da6.NginxMailingListEnglish@forum.nginx.org> References: <51D2EC4F.4020606@wishmedia.de> <66957bf4dd6add8c57584e62eb882da6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51D3CA6B.4030408@wishmedia.de> Hello, thanks for your feedback and nice to hear its solved. have a nice day. Alexander Am 03.07.2013 08:50, schrieb imanenkov: > Alexander Kunz Wrote: >> i am no drupal user/developer but if i google about drupal cache it >> sounds, there is not only one cache, and it sounds like its not only >> one >> click to disable it complete. Perhaps you can find some informations >> about the cache here: >> >> https://drupal.org/node/797346 >> >> or it is a good point so start. If you exclude the frontend proxy, for >> me it sounds like a cache "problem" in drupal. Perhaps you can post a >> question in a drupal developer mailinglist? > > Thanks a lot! I trying to disable all drupal caches and clear cache tables, > and all for as needed (every request process "long" time. I could not > believe that drupal cache so fast (in testing environment), so initially I > did not think about this case. My fault( > > Thank to all for the right search direction! > > Best regards. 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240433,240524#msg-240524 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Wed Jul 3 07:55:09 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 3 Jul 2013 08:55:09 +0100 Subject: Disable open_file_cache for a specific location In-Reply-To: References: <20130702214330.GV27406@craic.sysops.org> Message-ID: <20130703075509.GW27406@craic.sysops.org> On Tue, Jul 02, 2013 at 08:16:00PM -0400, Peleke wrote: Hi there, > Sorry, maybe it is only related to the permalink structure which worked with > Apache before server move. Could be; the fact that this is a new nginx deployment which used to be an apache deployment is probably useful information for when it comes to fixing the nginx configuration. > The gallery script adds it own extensions to the address. > You can see it live on www.peleke.de/galerie and then click on one of the > three gallery sources (Flickr, Facebook or Google+). > Can you see what I mean? No. I don't see anything there which is obviously cache-related. I do see the words "Gallery not found!!!". So: when you type a specific curl command -- such as "curl -i http://www.peleke.de/galerie/facebook/" -- what response do you get and what response do you expect? And what nginx configuration do you use that leads you to get and expect those things? f -- Francis Daly francis at daoine.org From sandeepvreddy at outlook.com Wed Jul 3 08:59:23 2013 From: sandeepvreddy at outlook.com (Sandeep L) Date: Wed, 3 Jul 2013 14:29:23 +0530 Subject: Nginx upstream servers status In-Reply-To: <9fe6928a30f050fd34f8ac5447ba5ed9.NginxMailingListEnglish@forum.nginx.org> References: , <9fe6928a30f050fd34f8ac5447ba5ed9.NginxMailingListEnglish@forum.nginx.org> Message-ID: While looking at logs following message appeared: [error] 16488#0: *80 upstream timed out (110: Connection timed out) while connecting to upstream, client: IP, server: , request: "GET /assets/images/transparent.png HTTP/1.1", upstream: "http://host2.example.com:8080/assets/images/transparent.png", host: "hostname", referrer: "http://hostname" Thanks,Sandeep. > To: nginx at nginx.org > Subject: Re: RE: Nginx upstream servers status > From: nginx-forum at nginx.us > Date: Wed, 3 Jul 2013 02:39:33 -0400 > > i'd suggest you'll start with low-level-debugging: > - goto host1 and make a tcpdump port 8080 / tail -f against access-logs > of that server; > - make a request > - check., what happens to that request, e.g. where it "hangs" > > you could also, just in case, make a "tcpdump port 808 and host host2" onm > your nginx, just to make sure that nginx is sending the requests to the > right server > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240523#msg-240523 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
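The "(110: Connection timed out) while connecting to upstream" message is the key detail: with host2 powered off, each connection attempt to it hangs until proxy_connect_timeout expires (60 seconds by default) before nginx records the failure and falls back to host1. Lowering the connect timeout should make the dead peer get skipped almost immediately. A rough sketch with illustrative values (a variant of this appears later in the thread):

    location / {
        proxy_pass http://appcluster;
        # fail fast when a peer is unreachable, then move on to the next one
        proxy_connect_timeout 2s;
        proxy_read_timeout    10s;
        proxy_next_upstream   error timeout;
        proxy_set_header      X-Real-IP $remote_addr;
    }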
URL: From kworthington at gmail.com Wed Jul 3 13:17:02 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 3 Jul 2013 09:17:02 -0400 Subject: nginx-1.5.2 In-Reply-To: <20130702133421.GD20717@mdounin.ru> References: <20130702133421.GD20717@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.2 for Windows http://goo.gl/SO98Q (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Jul 2, 2013 at 9:34 AM, Maxim Dounin wrote: > Changes with nginx 1.5.2 02 Jul > 2013 > > *) Feature: now several "error_log" directives can be used. > > *) Bugfix: the $r->header_in() embedded perl method did not return > value > of the "Cookie" and "X-Forwarded-For" request header lines; the bug > had appeared in 1.3.14. > > *) Bugfix: in the ngx_http_spdy_module. > Thanks to Jim Radford. > > *) Bugfix: nginx could not be built on Linux with x32 ABI. > Thanks to Serguei Ivantsov. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jul 3 13:45:44 2013 From: nginx-forum at nginx.us (Peleke) Date: Wed, 03 Jul 2013 09:45:44 -0400 Subject: Disable open_file_cache for a specific location In-Reply-To: <20130703075509.GW27406@craic.sysops.org> References: <20130703075509.GW27406@craic.sysops.org> Message-ID: <038a121c43a99e2db6c9f67cba99c535.NginxMailingListEnglish@forum.nginx.org> Try Flickr or Google+ instead, that error message isn't related to the problem I mentioned, sorry. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240539#msg-240539 From nginx-forum at nginx.us Wed Jul 3 14:19:28 2013 From: nginx-forum at nginx.us (benseb) Date: Wed, 03 Jul 2013 10:19:28 -0400 Subject: SPDY Installed but not working? Message-ID: <546148a5448a713fc06901e7e11ab03d.NginxMailingListEnglish@forum.nginx.org> We have installed Nginx on CentOS 6. This is a new install using Nginx 1.4.1 and OpenSSL 1.0.1e We then confgured our vhosts to use SPDY, however using a few different tests, it's showing that SPDY is not enabled? There are no messages in the logs and it restarts fine? spdycheck.com: ----------------------- Missing NPN Extension in SSL/TLS Handshake Sorry, but this server is not including an NPN Entension during the SSL/TLS handshake. The NPN Extension is an additional part of the SSL/TLS ServerHello message which allows web servers to tell browsers they support additional protocols, like SPDY. SSL/TLS servers that don't use send the NPN Extension cannot use SPDY because they have no way to tell the browser to use SPDY instead of HTTP. 
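One way to double-check the NPN advertisement from a shell, assuming the client-side OpenSSL is itself 1.0.1 or newer and substituting the real hostname for the placeholder (a diagnostic sketch, not taken from the thread):

    # a SPDY-capable nginx should list spdy/2 among the protocols advertised
    # in the NPN part of the handshake; an empty list matches the spdycheck result
    openssl s_client -connect www.example.com:443 -nextprotoneg spdy/2,http/1.1 < /dev/null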
ssllabs.com: ------------------- Next Protocol Negotiation No Please see config below: [root at lb-3 ~]# nginx -V nginx version: nginx/1.4.1 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-http_mp4_module --with-http_spdy_module --with-http_gunzip_module --with-mail --with-file-aio --with-mail_ssl_module --with-ipv6 --with-cc-opt='-O2 -g' --with-cc-opt='-O2 -g' [root at lb-3 ~]# openssl version OpenSSL 1.0.1e 11 Feb 2013 [root at lb-3 ~]# which openssl /usr/bin/openssl server { listen 443 ssl spdy; spdy_headers_comp 5; ssl_certificate /etc/nginx/certs/xxx; ssl_certificate_key /etc/nginx/certs/xxx; server_name www.xxx.com; ...snip... } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240545,240545#msg-240545 From luky-37 at hotmail.com Wed Jul 3 17:24:32 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 3 Jul 2013 19:24:32 +0200 Subject: SPDY Installed but not working? In-Reply-To: <546148a5448a713fc06901e7e11ab03d.NginxMailingListEnglish@forum.nginx.org> References: <546148a5448a713fc06901e7e11ab03d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi! > Missing NPN Extension in SSL/TLS Handshake Did you compile openssl on your own? Can you post the output of "openssl version -a"? Sounds to me as if OpenSSL was build without TLS extensions. Thanks, Lukas From calin.don at gmail.com Wed Jul 3 19:54:15 2013 From: calin.don at gmail.com (Calin Don) Date: Wed, 3 Jul 2013 22:54:15 +0300 Subject: Disable access log escaping Message-ID: Hi, I'm using a module which sets some data formated as json to a variable. I'm trying to log this variable using the access log, but the content is escaped. I'm getting something like {\x22foo\x22:\x22bar\x22} instead of {'foo':'bar'}. Is there a way to disable the escaping per access_log or per log_format? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mike503 at gmail.com Wed Jul 3 20:05:50 2013 From: mike503 at gmail.com (Michael Shadle) Date: Wed, 3 Jul 2013 13:05:50 -0700 Subject: Congrats to nginx - now the most used webserver in the top 1000 websites Message-ID: http://w3techs.com/blog/entry/nginx_just_became_the_most_used_web_server_among_the_top_1000_websites From francis at daoine.org Wed Jul 3 22:17:52 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 3 Jul 2013 23:17:52 +0100 Subject: Disable open_file_cache for a specific location In-Reply-To: <038a121c43a99e2db6c9f67cba99c535.NginxMailingListEnglish@forum.nginx.org> References: <20130703075509.GW27406@craic.sysops.org> <038a121c43a99e2db6c9f67cba99c535.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130703221752.GX27406@craic.sysops.org> On Wed, Jul 03, 2013 at 09:45:44AM -0400, Peleke wrote: > Try Flickr or Google+ instead, that error message isn't related to the > problem I mentioned, sorry. Good news! I see no evidence of the nginx-related problem that you have described, on either of those two pages, so I presume that you fixed it before I looked. f -- Francis Daly francis at daoine.org From bsdkazakhstan at gmail.com Wed Jul 3 22:27:14 2013 From: bsdkazakhstan at gmail.com (BSD Kazakhstan) Date: Thu, 4 Jul 2013 01:27:14 +0300 Subject: error.log file doesn't log "404 File Not Found" errors Message-ID: My error.log doesn't show any request/log based on .php requests, i.e. when I type /test.php (file doesn't exist), error.log doesn't show anything regarding it. Shouldn't nginx log this request inside error.log as 404 Not Found? Nginx run with php-fpm and everything works ok with php. Any idea? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsdkazakhstan at gmail.com Wed Jul 3 22:30:15 2013 From: bsdkazakhstan at gmail.com (BSD Kazakhstan) Date: Thu, 4 Jul 2013 01:30:15 +0300 Subject: Solution for bandwidth limit on Virtualhosts, Nginx. Message-ID: Hello. I've written a script for having the details of bandwidth usage (100GB per Month, as an example) of hosted websites, for Nginx server. Anyone needs such solution, feel free to contact me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jul 3 22:33:11 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 3 Jul 2013 23:33:11 +0100 Subject: Disable access log escaping In-Reply-To: References: Message-ID: <20130703223311.GY27406@craic.sysops.org> On Wed, Jul 03, 2013 at 10:54:15PM +0300, Calin Don wrote: Hi there, > I'm using a module which sets some data formated as json to a variable. I'm > trying to log this variable using the access log, but the content is > escaped. I'm getting something like {\x22foo\x22:\x22bar\x22} instead of > {'foo':'bar'}. \x22 should correspond to ". ' would correspond to \x27, but I don't think that is escaped (I didn't look at the newest source). Anyway... > Is there a way to disable the escaping per access_log or per log_format? I don't believe so (using only nginx configuration). It will probably be easier for your log-reading code to unescape the four characters starting "\x" before further processing. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jul 4 00:57:43 2013 From: nginx-forum at nginx.us (badtzhou) Date: Wed, 03 Jul 2013 20:57:43 -0400 Subject: nginx cache loader process Message-ID: <0d163d4a4b4d816aca9331919818fa09.NginxMailingListEnglish@forum.nginx.org> We have several hundred Gs of file cached using nginx. 
Every time we restarted nginx, the cache loader process will appear and server load will go super high and respond very slowly. Looks like cache loader process is very I/O intensive and take a long time to finish. Is there anyway to get around the problem? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240564,240564#msg-240564 From sandeepvreddy at outlook.com Thu Jul 4 05:39:13 2013 From: sandeepvreddy at outlook.com (Sandeep L) Date: Thu, 4 Jul 2013 11:09:13 +0530 Subject: Nginx upstream servers status In-Reply-To: References: , , <9fe6928a30f050fd34f8ac5447ba5ed9.NginxMailingListEnglish@forum.nginx.org>, Message-ID: Hi, After experimenting with some parameters I used following configuration. upstream appcluster { server host1.example.com:8080 max_fails=1 fail_timeout=1s; server host2.example.com:8080 max_fails=1 fail_timeout=1s; } server { listen *; location / { proxy_pass http://appcluster; proxy_next_upstream error timeout http_404 http_500 http_502 http_503 http_504; proxy_set_header X-Real-IP $remote_addr; proxy_connect_timeout 2; proxy_send_timeout 2; proxy_read_timeout 5; } } The issue I am facing here is - with similar configuration in lighttpd response time per request 0.3 seconds, where as with nginx it is around 2.5 seconds. Can someone suggest me how to response time with nginx? Thanks,Sandeep. From: sandeepvreddy at outlook.com To: nginx at nginx.org Subject: RE: Nginx upstream servers status Date: Wed, 3 Jul 2013 14:29:23 +0530 While looking at logs following message appeared: [error] 16488#0: *80 upstream timed out (110: Connection timed out) while connecting to upstream, client: IP, server: , request: "GET /assets/images/transparent.png HTTP/1.1", upstream: "http://host2.example.com:8080/assets/images/transparent.png", host: "hostname", referrer: "http://hostname" Thanks,Sandeep. > To: nginx at nginx.org > Subject: Re: RE: Nginx upstream servers status > From: nginx-forum at nginx.us > Date: Wed, 3 Jul 2013 02:39:33 -0400 > > i'd suggest you'll start with low-level-debugging: > - goto host1 and make a tcpdump port 8080 / tail -f against access-logs > of that server; > - make a request > - check., what happens to that request, e.g. where it "hangs" > > you could also, just in case, make a "tcpdump port 808 and host host2" onm > your nginx, just to make sure that nginx is sending the requests to the > right server > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240513,240523#msg-240523 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Thu Jul 4 05:42:55 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Thu, 4 Jul 2013 09:42:55 +0400 Subject: nginx cache loader process In-Reply-To: <0d163d4a4b4d816aca9331919818fa09.NginxMailingListEnglish@forum.nginx.org> References: <0d163d4a4b4d816aca9331919818fa09.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Jul 4, 2013, at 4:57 AM, "badtzhou" wrote: > We have several hundred Gs of file cached using nginx. Every time we > restarted nginx, the cache loader process will appear and server load will > go super high and respond very slowly. > > Looks like cache loader process is very I/O intensive and take a long time > to finish. Is there anyway to get around the problem? 
Did you try loader_* parameters for proxy_cache_path? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path > Thanks > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240564,240564#msg-240564 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Thu Jul 4 06:13:38 2013 From: nginx-forum at nginx.us (drook) Date: Thu, 04 Jul 2013 02:13:38 -0400 Subject: nginx, solaris, eventport Message-ID: <7d7e8ba4ddf34f42defebfea12102ac9.NginxMailingListEnglish@forum.nginx.org> Hi. I'm using nginx on Solaris for years. For years I've been experiencing errors when eventport is on (with /dev/poll everything is fine, but I'm kinda perfectionist and I want to use native solaris features). I realize that screaming "help with eventport" is counter-productive, so how can I localize/debug my troubles to write a comprehensive error report ? Right now the problem looks like the inability to connect to port 80 which nginx is listening on, error log on the "notice" level is silent about reasons of this. Restarting nginx helps. Will switching to "debug" loglevel clarify this or do I need to do something else ? Thanks. Eugene. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240568,240568#msg-240568 From ru at nginx.com Thu Jul 4 06:58:07 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 4 Jul 2013 10:58:07 +0400 Subject: nginx, solaris, eventport In-Reply-To: <7d7e8ba4ddf34f42defebfea12102ac9.NginxMailingListEnglish@forum.nginx.org> References: <7d7e8ba4ddf34f42defebfea12102ac9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130704065807.GQ15373@lo0.su> On Thu, Jul 04, 2013 at 02:13:38AM -0400, drook wrote: > Hi. > > I'm using nginx on Solaris for years. For years I've been experiencing > errors when eventport is on (with /dev/poll everything is fine, but I'm > kinda perfectionist and I want to use native solaris features). > > I realize that screaming "help with eventport" is counter-productive, so how > can I localize/debug my troubles to write a comprehensive error report ? > Right now the problem looks like the inability to connect to port 80 which > nginx is listening on, error log on the "notice" level is silent about > reasons of this. Restarting nginx helps. Will switching to "debug" loglevel > clarify this or do I need to do something else ? > > Thanks. > Eugene. http://nginx.org/en/docs/debugging_log.html http://wiki.nginx.org/Debugging From igor at sysoev.ru Thu Jul 4 08:34:17 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 4 Jul 2013 12:34:17 +0400 Subject: nginx cache loader process In-Reply-To: <0d163d4a4b4d816aca9331919818fa09.NginxMailingListEnglish@forum.nginx.org> References: <0d163d4a4b4d816aca9331919818fa09.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53D87DB8-8106-4505-BDA5-A82EFB69B4E7@sysoev.ru> On Jul 4, 2013, at 4:57 , badtzhou wrote: > We have several hundred Gs of file cached using nginx. Every time we > restarted nginx, the cache loader process will appear and server load will > go super high and respond very slowly. > > Looks like cache loader process is very I/O intensive and take a long time > to finish. Is there anyway to get around the problem? What nginx version do you use? Cache loader runs better since 1.1.0. 
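For reference, the loader_* knobs mentioned above are parameters of proxy_cache_path that throttle how fast the cache loader walks the cache after a restart, which is what drives the I/O spike. A hedged sketch with a placeholder path, zone name and sizes (values are illustrative only):

    # load at most 50 files per iteration, cap each iteration at 100ms of work,
    # and sleep 200ms between iterations to smooth out the startup I/O load
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=big_cache:512m
                     max_size=500g inactive=7d
                     loader_files=50 loader_threshold=100 loader_sleep=200;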
-- Igor Sysoev http://nginx.com/services.html From contact at jpluscplusm.com Thu Jul 4 12:22:16 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 4 Jul 2013 13:22:16 +0100 Subject: Solution for bandwidth limit on Virtualhosts, Nginx. In-Reply-To: References: Message-ID: On 3 July 2013 23:30, BSD Kazakhstan wrote: > Hello. > I've written a script for having the details of bandwidth usage (100GB per > Month, as an example) of hosted websites, for Nginx server. > > Anyone needs such solution, feel free to contact me. Or people could just use AWStats which has been around for years :-) J From contact at jpluscplusm.com Thu Jul 4 12:27:21 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 4 Jul 2013 13:27:21 +0100 Subject: error.log file doesn't log "404 File Not Found" errors In-Reply-To: References: Message-ID: On 3 July 2013 23:27, BSD Kazakhstan wrote: > My error.log doesn't show any request/log based on .php requests, > i.e. when I type /test.php (file doesn't exist), error.log doesn't show > anything regarding it. > Shouldn't nginx log this request inside error.log as 404 Not Found? No. 404s are logged in whatever access_log you have configured. Generally, any new lines in error.log represent a problem that the hoster (sysadmin/app-dev/etc) should look at. A 404 is (or can be) a client-side error, hence should be logged elsewhere. Which it is. J From contact at jpluscplusm.com Thu Jul 4 12:30:28 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 4 Jul 2013 13:30:28 +0100 Subject: Disable open_file_cache for a specific location In-Reply-To: <20130703221752.GX27406@craic.sysops.org> References: <20130703075509.GW27406@craic.sysops.org> <038a121c43a99e2db6c9f67cba99c535.NginxMailingListEnglish@forum.nginx.org> <20130703221752.GX27406@craic.sysops.org> Message-ID: On 3 July 2013 23:17, Francis Daly wrote: > On Wed, Jul 03, 2013 at 09:45:44AM -0400, Peleke wrote: > >> Try Flickr or Google+ instead, that error message isn't related to the >> problem I mentioned, sorry. > > Good news! I see no evidence of the nginx-related problem that you have > described, on either of those two pages, so I presume that you fixed it > before I looked. +1. I also checked Yahoo and LinkedIn just to be sure. No problems there, either. J From nginx-forum at nginx.us Thu Jul 4 16:03:54 2013 From: nginx-forum at nginx.us (ppy) Date: Thu, 04 Jul 2013 12:03:54 -0400 Subject: Possible to have a limit_req "nodelay burst" option? In-Reply-To: References: Message-ID: <75889ab591fdc9f01bb9d5f8c49ff467.NginxMailingListEnglish@forum.nginx.org> I have to agree with this completely. In fact, I thought this was the intended behaviour of the "burst" argument, and it wasn't until further testing that I realised its true meaning. I am looking for the exact same behaviour here ? to allow *actual* burst requests before the delay starts to kick in. The eventual 503 is not necessary. I believe this is a very common scenario and it would likely benefit a lot of others looking for the same kind of thing. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238389,240585#msg-240585 From mdounin at mdounin.ru Thu Jul 4 22:54:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Jul 2013 02:54:14 +0400 Subject: nginx, solaris, eventport In-Reply-To: <7d7e8ba4ddf34f42defebfea12102ac9.NginxMailingListEnglish@forum.nginx.org> References: <7d7e8ba4ddf34f42defebfea12102ac9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130704225414.GX20717@mdounin.ru> Hello! 
On Thu, Jul 04, 2013 at 02:13:38AM -0400, drook wrote: > Hi. > > I'm using nginx on Solaris for years. For years I've been experiencing > errors when eventport is on (with /dev/poll everything is fine, but I'm > kinda perfectionist and I want to use native solaris features). > > I realize that screaming "help with eventport" is counter-productive, so how > can I localize/debug my troubles to write a comprehensive error report ? > Right now the problem looks like the inability to connect to port 80 which > nginx is listening on, error log on the "notice" level is silent about > reasons of this. Restarting nginx helps. Will switching to "debug" loglevel > clarify this or do I need to do something else ? There are number of known problems with eventport implementation, notably related to work with upstream servers. It needs attention. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Jul 5 01:28:28 2013 From: nginx-forum at nginx.us (ctrlbrk) Date: Thu, 04 Jul 2013 21:28:28 -0400 Subject: 1.4.1 SPDY error FIXME: chain too big in spdy filter In-Reply-To: <38422f0d72a15b4b203000c802a5ac3f.NginxMailingListEnglish@forum.nginx.org> References: <201305301916.03455.vbart@nginx.com> <9cf38281e8a60d8aff1f5a591afae5d6.NginxMailingListEnglish@forum.nginx.org> <38422f0d72a15b4b203000c802a5ac3f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <429eda69401bac97c3c96fb7a6afe464.NginxMailingListEnglish@forum.nginx.org> Anyone? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239647,240591#msg-240591 From nginx-forum at nginx.us Fri Jul 5 11:33:47 2013 From: nginx-forum at nginx.us (Peleke) Date: Fri, 05 Jul 2013 07:33:47 -0400 Subject: Disable open_file_cache for a specific location In-Reply-To: <20130703221752.GX27406@craic.sysops.org> References: <20130703221752.GX27406@craic.sysops.org> Message-ID: No, the problem is not solved: You can see the album thumbnails but when you click on them there should be a page with the different pictures from that album which you can view in full screen mode. You can see how it should work on http://wp.oopstouch.com/?page_id=22 and I think the problem is related to the URL and permalink structure. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240571#msg-240571 From nginx-forum at nginx.us Fri Jul 5 11:33:55 2013 From: nginx-forum at nginx.us (Peleke) Date: Fri, 05 Jul 2013 07:33:55 -0400 Subject: Disable open_file_cache for a specific location In-Reply-To: <20130703221752.GX27406@craic.sysops.org> References: <20130703221752.GX27406@craic.sysops.org> Message-ID: <3c52e1ac5d5e6e7a4b93b61d95f51eca.NginxMailingListEnglish@forum.nginx.org> The problem still exists. You can only see the album thumbnails but it should be possible to click on them to see all pictures from an album and then see them in full screen. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240594#msg-240594 From mrvisser at gmail.com Fri Jul 5 12:32:38 2013 From: mrvisser at gmail.com (Branden Visser) Date: Fri, 5 Jul 2013 08:32:38 -0400 Subject: Tweak proxy_next_upstream based on HTTP method Message-ID: Hi all, I was wondering if there is a way to have different proxy_* rules depending on the HTTP method? My use case is that I want to be a little more conservative about what requests I retry for POST requests, as they have an undesired impact if tried twice after a "false" timeout. 
e.g., for GET, I may want to try another when I timeout to the back-end server after 5s, whereas for a POST request, I may want to simply fail the POST request after 15s. I can work around this by doing specific location blocks for each URL, but having separate defaults based on HTTP method should be easier to maintain. Thanks, Branden -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jul 5 13:02:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Jul 2013 17:02:22 +0400 Subject: Tweak proxy_next_upstream based on HTTP method In-Reply-To: References: Message-ID: <20130705130222.GF20717@mdounin.ru> Hello! On Fri, Jul 05, 2013 at 08:32:38AM -0400, Branden Visser wrote: > Hi all, > > I was wondering if there is a way to have different proxy_* rules depending > on the HTTP method? My use case is that I want to be a little more > conservative about what requests I retry for POST requests, as they have an > undesired impact if tried twice after a "false" timeout. > > e.g., for GET, I may want to try another when I timeout to the back-end > server after 5s, whereas for a POST request, I may want to simply fail the > POST request after 15s. > > I can work around this by doing specific location blocks for each URL, but > having separate defaults based on HTTP method should be easier to maintain. There is the limit_except directive which might be helpful, see http://nginx.org/r/limit_except. -- Maxim Dounin http://nginx.org/en/donation.html From kopszak at gmail.com Fri Jul 5 13:13:51 2013 From: kopszak at gmail.com (Piotr Kopszak) Date: Fri, 5 Jul 2013 15:13:51 +0200 Subject: migrating Topincs to nginx In-Reply-To: References: Message-ID: Dear list, This is going to be a long post, but I hope you will bear with me. I am trying to find a way to run Topincs, an agile web application framework based on Topic Maps paradigm, on nginx. Topincs is a typical LAMP app, so I hoped I would be able to figure it out on my own, but unfortunately not. Even if you find this post too long and boring to read, I would be grateful for pointing me to resources concerning migration from apache2 to nginx and descriptions of successful cases. First a description of the situation by Topincs' author: "My main objective in the Apache and Topincs integration was that it is possible to occupy any URL space below a domain without the need for a general 'root', e.g. everything under /topincs. Plus it should be possible to run different stores under different Topincs versions, just in case. So Apache does two things for Topincs: 1. The rewrite rules basically cut off the store path prefix (e.g. /trial/movies) and pass the result on to TOPINCS_HOME/docroot/.topincs. 2. Based on the path prefix it maps the URL to a store (database). This is done by setting an Apache environment variable in the configuration. This variable is read in .topincs only. So once you manage to provide .topincs with the above, you are set." ( http://tech.groups.yahoo.com/group/topincs/message/673 ) So let's start with a working Apache configuration. 
I'm running Debian Wheezy, uname -a gives: Linux box 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux /etc/apache2/sites-enabled/000-default contains: -------------------------------------------------------------- ServerAdmin webmaster at localhost DocumentRoot /var/www/ Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks MultiViews AllowOverride all Order allow,deny allow from all ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined Include "/home/apollo/topincs/conf/httpd.conf" --------------------------------------------------------------- /home/apollo/topincs/conf/httpd.conf : -------------------------------------------------------------- RewriteEngine on Order allow,deny Allow from all DirectoryIndex index.php AddType 'text/html; charset=UTF-8' .html DefaultType application/x-httpd-php php_value include_path "/home/apollo/topincs/php:/home/apollo/topincs/vendor/php" php_value default_charset "UTF-8" php_value magic_quotes_gpc "0" php_value max_execution_time "7200" php_value memory_limit "500M" php_value short_open_tag "0" FileETag none Header set Expires "Fri, 31 Dec 2020 23:59:59 GMT" Header set Cache-control "public" Include "/home/apollo/topincs/conf/*.httpd.conf" -------------------------------------------------------------- /home/apollo/topincs/conf/ contains mercury.httpd.conf: -------------------------------------------------------------------- SetEnv TOPINCS_STORE mercury RewriteRule ^/mercury/([3-9]\.[0-9]\.[0-9].*/(.core-topics|css|images|js|vendor|fonts).*)$ /mercury/$1 [PT,E=TOPINCS_STORE:mercury] RewriteRule ^/mercury((\.|/).*)$ /mercury/.topincs?request=$1 [PT,L,QSA,E=TOPINCS_STORE:mercury] Alias /mercury "/home/apollo/topincs/docroot" ---------------------------------------------------------------------- The (useless) nginx setup I came up with so far is following: /etc/nginx/nginx.conf --------------------------------------------------------------------- user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; gzip on; gzip_disable "msie6"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; include fastcgi_params; } ------------------------------------------------------------------------ /etc/nginx/sites-enabled/topincs : ----------------------------------------------------------------------- server { listen 80; root /home/apollo/topincs/docroot; index index.php; location / { fastcgi_pass localhost:9000; fastcgi_index "/index.php"; allow all; try_files $uri $uri/ /index.php; } location ~ \.php$ { include fastcgi_params; if (-f $request_filename) { fastcgi_pass 127.0.0.1:9000; } } location ~ mercury { fastcgi_param TOPINCS_STORE mercury; } location ~ /\.[0-9]\.[0-9]+\.[0-9](beta\([0-9]+\))? 
{ add_header Expires "Fri, 31 Dec 2020 23:59:59 GMT"; add_header Cache-Control "public"; } location /mercury { rewrite ^/mercury/([3-9]\.[0-9]\.[0-9].*/(.core-topics|css|images|js|vendor|fonts).*)$ /mercury/$1; rewrite ^/mercury((\.|/).*)$ /mercury/.topincs?request=$1; } } -------------------------------------------------------------------- /etc/php5/fpm/php.ini -------------------------------------------------------------------- [PHP] short_open_tag = Off asp_tags = Off precision = 14 output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 17 disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority, disable_classes = zend.enable_gc = On expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = Off display_startup_errors = Off log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = On error_log = /var/log/php_errors.log variables_order = "GPCS" request_order = "GP" register_argc_argv = Off auto_globals_jit = On post_max_size = 8M auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=0 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 request_terminate_timeout = 30s [CLI Server] cli_server.color = On [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [bcmath] bcmath.scale = 0 [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 0 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 
session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [ldap] ldap.max_links = -1 [dba] cgi.fix_pathinfo = 0; -------------------------------------------- /etc/php5/fpm/pool.d/www.conf -------------------------------------------- [www] user = www-data group = www-data listen = 127.0.0.1:9000 pm = dynamic pm.max_children = 5 pm.start_servers = 2 pm.min_spare_servers = 1 pm.max_spare_servers = 3 slowlog = /var/log/php5-fpm.slow.log request_terminate_timeout = 30s chdir = / php_value[include_path] = "/home/apollo/topincs/php:/home/apollo/topincs/vendor/php" php_value[default_charset] = "UTF-8" php_value[magic_quotes_gpc] = "0" php_value[php_value max_execution_time] = "7200" php_value[php_value memory_limit] = "500M" php_value[php_value short_open_tag] = "0" env[TOPINCS_STORE] = "mercury"; ---------------------------------------------- That's all I can think of. If you have any ideas how to make this work it would be GREATLY appreciated, not only by my but also by other Topincs users, who will enjoy using nginx instead of apache. Many thanks in advance Piotr P.S. Topincs installation instructions are here: http://www.cerny-online.com/topincs/manual/installing -- http://okle.pl From mrvisser at gmail.com Fri Jul 5 13:27:18 2013 From: mrvisser at gmail.com (Branden Visser) Date: Fri, 5 Jul 2013 09:27:18 -0400 Subject: Tweak proxy_next_upstream based on HTTP method In-Reply-To: <20130705130222.GF20717@mdounin.ru> References: <20130705130222.GF20717@mdounin.ru> Message-ID: Thanks for the quick reply Maxim. That looks interesting, though in particular proxy_next_upstream and proxy_read_timeout don't report to be valid in that context. I'll give it a try, perhaps it's just an error in the docs. On Fri, Jul 5, 2013 at 9:02 AM, Maxim Dounin wrote: > Hello! > > On Fri, Jul 05, 2013 at 08:32:38AM -0400, Branden Visser wrote: > > > Hi all, > > > > I was wondering if there is a way to have different proxy_* rules > depending > > on the HTTP method? My use case is that I want to be a little more > > conservative about what requests I retry for POST requests, as they have > an > > undesired impact if tried twice after a "false" timeout. > > > > e.g., for GET, I may want to try another when I timeout to the back-end > > server after 5s, whereas for a POST request, I may want to simply fail > the > > POST request after 15s. > > > > I can work around this by doing specific location blocks for each URL, > but > > having separate defaults based on HTTP method should be easier to > maintain. > > There is the limit_except directive which might be helpful, see > http://nginx.org/r/limit_except. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
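Since the proxy_* timeout and retry directives are set per location rather than per method, one commonly cited workaround (a sketch only, with assumed upstream and location names, not something proposed in this thread) is to bounce POSTs through an internal redirect to a named location that carries its own, more conservative settings:

    location / {
        error_page 418 = @post_backend;      # internal redirect target for POSTs
        if ($request_method = POST) {
            return 418;
        }
        proxy_pass http://backend;
        proxy_read_timeout  5s;
        proxy_next_upstream error timeout;   # retrying idempotent requests is safe
    }

    location @post_backend {
        proxy_pass http://backend;
        proxy_read_timeout  15s;
        proxy_next_upstream off;             # never replay a POST on failure
    }

The internal redirect preserves the original method and request body, so the named location still proxies the POST as-is; whether using 418 as an internal marker status is acceptable is a matter of taste.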
URL: From mrvisser at gmail.com Fri Jul 5 13:46:03 2013 From: mrvisser at gmail.com (Branden Visser) Date: Fri, 5 Jul 2013 09:46:03 -0400 Subject: Tweak proxy_next_upstream based on HTTP method In-Reply-To: References: <20130705130222.GF20717@mdounin.ru> Message-ID: No it doesn't look like limit_except will allow me to change behaviour of proxy timeouts or next_upstream based on the HTTP method. Are there any recommendations on how this could be done aside from my workaround? On Fri, Jul 5, 2013 at 9:27 AM, Branden Visser wrote: > Thanks for the quick reply Maxim. That looks interesting, though in > particular proxy_next_upstream and proxy_read_timeout don't report to be > valid in that context. I'll give it a try, perhaps it's just an error in > the docs. > > > On Fri, Jul 5, 2013 at 9:02 AM, Maxim Dounin wrote: > >> Hello! >> >> On Fri, Jul 05, 2013 at 08:32:38AM -0400, Branden Visser wrote: >> >> > Hi all, >> > >> > I was wondering if there is a way to have different proxy_* rules >> depending >> > on the HTTP method? My use case is that I want to be a little more >> > conservative about what requests I retry for POST requests, as they >> have an >> > undesired impact if tried twice after a "false" timeout. >> > >> > e.g., for GET, I may want to try another when I timeout to the back-end >> > server after 5s, whereas for a POST request, I may want to simply fail >> the >> > POST request after 15s. >> > >> > I can work around this by doing specific location blocks for each URL, >> but >> > having separate defaults based on HTTP method should be easier to >> maintain. >> >> There is the limit_except directive which might be helpful, see >> http://nginx.org/r/limit_except. >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jul 5 13:51:16 2013 From: nginx-forum at nginx.us (rahul286) Date: Fri, 05 Jul 2013 09:51:16 -0400 Subject: Congrats to nginx - now the most used webserver in the top 1000 websites In-Reply-To: References: Message-ID: Congrats Igor and Nginx team. :-) It just matter of time before Nginx becomes #1 in Top 10,000 and eventually #1 overall. Most cheap hosting companies are run by non-techie people (using resellers tool) who don't understand benefit of Nginx and/or do not have control on server softwares! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240555,240602#msg-240602 From vbart at nginx.com Fri Jul 5 14:07:30 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 5 Jul 2013 18:07:30 +0400 Subject: 1.4.1 SPDY error FIXME: chain too big in spdy filter In-Reply-To: <38422f0d72a15b4b203000c802a5ac3f.NginxMailingListEnglish@forum.nginx.org> References: <201305301916.03455.vbart@nginx.com> <9cf38281e8a60d8aff1f5a591afae5d6.NginxMailingListEnglish@forum.nginx.org> <38422f0d72a15b4b203000c802a5ac3f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201307051807.30994.vbart@nginx.com> On Sunday 30 June 2013 04:03:08 ctrlbrk wrote: > ctrlbrk Wrote: > ------------------------------------------------------- > > > BTW, I use > > > > fastcgi_buffers 16 2048k; > > fastcgi_buffer_size 2048k; > > fastcgi_busy_buffers_size 2048k; > > > > So I am under the 16MB, unless you are applying it *16? 
> > Bump for suggestions please > 2048k (fastcgi_buffer_size) + 16 * 2048k (fastcgi_buffers) = 34 Mb > 16 wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Jul 5 14:35:04 2013 From: nginx-forum at nginx.us (benseb) Date: Fri, 05 Jul 2013 10:35:04 -0400 Subject: SPDY Installed but not working? In-Reply-To: References: Message-ID: <50c93f6e3eebae6e9e5b646041fccec3.NginxMailingListEnglish@forum.nginx.org> It was installed via yum -(IUS) I installed both OpenSSL10 and openssl10-libs.x86_64 : A general purpose cryptography library with TLS implementation [root at lb-3 ~]# openssl version -a OpenSSL 1.0.1e 11 Feb 2013 built on: Wed Feb 13 11:31:32 EST 2013 platform: linux-x86_64 options: bn(64,64) md2(int) rc4(8x,int) des(idx,cisc,16,int) idea(int) blowfish(idx) compiler: gcc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DKRB5_MIT -m64 -DL_ENDIAN -DTERMIO -Wall -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wa,--noexecstack -DPURIFY -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM OPENSSLDIR: "/etc/pki/tls" engines: rsax dynamic Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240545,240604#msg-240604 From nginx-forum at nginx.us Fri Jul 5 14:38:55 2013 From: nginx-forum at nginx.us (benseb) Date: Fri, 05 Jul 2013 10:38:55 -0400 Subject: SPDY Installed but not working? In-Reply-To: <50c93f6e3eebae6e9e5b646041fccec3.NginxMailingListEnglish@forum.nginx.org> References: <50c93f6e3eebae6e9e5b646041fccec3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Also, does this help? [root at lb-3 ~]# nginx -V nginx version: nginx/1.4.1 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled See 'TLS NSI support enabled'? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240545,240605#msg-240605 From vbart at nginx.com Fri Jul 5 14:47:42 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 5 Jul 2013 18:47:42 +0400 Subject: SPDY Installed but not working? In-Reply-To: <50c93f6e3eebae6e9e5b646041fccec3.NginxMailingListEnglish@forum.nginx.org> References: <50c93f6e3eebae6e9e5b646041fccec3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201307051847.43017.vbart@nginx.com> On Friday 05 July 2013 18:35:04 benseb wrote: > It was installed via yum -(IUS) > > I installed both OpenSSL10 and openssl10-libs.x86_64 : A general purpose > cryptography library with TLS implementation > > > [root at lb-3 ~]# openssl version -a > OpenSSL 1.0.1e 11 Feb 2013 > built on: Wed Feb 13 11:31:32 EST 2013 > platform: linux-x86_64 > options: bn(64,64) md2(int) rc4(8x,int) des(idx,cisc,16,int) idea(int) > blowfish(idx) > compiler: gcc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT > -DDSO_DLFCN -DHAVE_DLFCN_H -DKRB5_MIT -m64 -DL_ENDIAN -DTERMIO -Wall -O2 -g > -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic -Wa,--noexecstack -DPURIFY > -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 > -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM > -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM > OPENSSLDIR: "/etc/pki/tls" > engines: rsax dynamic Have you actually built nginx with this version of library? What's in the error log? wbr, Valentin V. 
Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Jul 5 15:05:58 2013 From: nginx-forum at nginx.us (benseb) Date: Fri, 05 Jul 2013 11:05:58 -0400 Subject: SPDY Installed but not working? In-Reply-To: <201307051847.43017.vbart@nginx.com> References: <201307051847.43017.vbart@nginx.com> Message-ID: The compile command I used was: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-http_mp4_module --with-http_spdy_module --with-http_gunzip_module --with-mail --with-file-aio --with-mail_ssl_module --with-ipv6 --with-cc-opt='-O2 -g' --with-cc-opt='-O2 -g' As far as I know, this is the only version of openssl installed, unless I'm missing something? [ben at lb-3 ~]$ which openssl /usr/bin/openssl [ben at lb-3 ~]$ /usr/bin/openssl version OpenSSL 1.0.1e 11 Feb 2013 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240545,240608#msg-240608 From luky-37 at hotmail.com Fri Jul 5 15:19:54 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 5 Jul 2013 17:19:54 +0200 Subject: SPDY Installed but not working? In-Reply-To: References: <201307051847.43017.vbart@nginx.com>, Message-ID: Can you run ldd against the nginx executable? Lukas From nginx-forum at nginx.us Fri Jul 5 15:33:40 2013 From: nginx-forum at nginx.us (benseb) Date: Fri, 05 Jul 2013 11:33:40 -0400 Subject: SPDY Installed but not working? In-Reply-To: References: Message-ID: <1ba6baa280bdfd58343e41f860d6dc69.NginxMailingListEnglish@forum.nginx.org> Yes - if you tell me how? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240545,240610#msg-240610 From nginx-forum at nginx.us Fri Jul 5 15:38:23 2013 From: nginx-forum at nginx.us (benseb) Date: Fri, 05 Jul 2013 11:38:23 -0400 Subject: SPDY Installed but not working? 
In-Reply-To: <1ba6baa280bdfd58343e41f860d6dc69.NginxMailingListEnglish@forum.nginx.org> References: <1ba6baa280bdfd58343e41f860d6dc69.NginxMailingListEnglish@forum.nginx.org> Message-ID: [ben at lb-3 ~]$ ldd /usr/sbin/nginx linux-vdso.so.1 => (0x00007fffbe7ff000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fe08af81000) libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007fe08ad49000) libpcre.so.0 => /lib64/libpcre.so.0 (0x00007fe08ab1d000) libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007fe08a8b8000) libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x00007fe08a50b000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fe08a307000) libz.so.1 => /lib64/libz.so.1 (0x00007fe08a0f1000) libxml2.so.2 => /usr/lib64/libxml2.so.2 (0x00007fe089d9e000) libxslt.so.1 => /usr/lib64/libxslt.so.1 (0x00007fe089b61000) libexslt.so.0 => /usr/lib64/libexslt.so.0 (0x00007fe08994d000) libgd.so.2 => /usr/lib64/libgd.so.2 (0x00007fe089705000) libGeoIP.so.1 => /usr/lib64/libGeoIP.so.1 (0x00007fe0894cd000) libperl.so => /usr/lib64/perl5/CORE/libperl.so (0x00007fe089162000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fe088f47000) libnsl.so.1 => /lib64/libnsl.so.1 (0x00007fe088d2e000) libm.so.6 => /lib64/libm.so.6 (0x00007fe088aaa000) libutil.so.1 => /lib64/libutil.so.1 (0x00007fe0888a6000) libc.so.6 => /lib64/libc.so.6 (0x00007fe088513000) /lib64/ld-linux-x86-64.so.2 (0x00007fe08b1a4000) libfreebl3.so => /lib64/libfreebl3.so (0x00007fe0882b1000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007fe08806c000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007fe087d86000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007fe087b82000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007fe087955000) libgcrypt.so.11 => /lib64/libgcrypt.so.11 (0x00007fe0876e0000) libgpg-error.so.0 => /lib64/libgpg-error.so.0 (0x00007fe0874db000) libXpm.so.4 => /usr/lib64/libXpm.so.4 (0x00007fe0872ca000) libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007fe086f8d000) libjpeg.so.62 => /usr/lib64/libjpeg.so.62 (0x00007fe086d3c000) libfontconfig.so.1 => /usr/lib64/libfontconfig.so.1 (0x00007fe086b06000) libfreetype.so.6 => /usr/lib64/libfreetype.so.6 (0x00007fe086869000) libpng12.so.0 => /usr/lib64/libpng12.so.0 (0x00007fe086642000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007fe086437000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007fe086233000) libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007fe086015000) libexpat.so.1 => /lib64/libexpat.so.1 (0x00007fe085dec000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fe085bcd000) libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007fe0859ca000) [ben at lb-3 ~]$ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240545,240611#msg-240611 From vbart at nginx.com Fri Jul 5 15:45:47 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 5 Jul 2013 19:45:47 +0400 Subject: SPDY Installed but not working? 
In-Reply-To: References: <201307051847.43017.vbart@nginx.com> Message-ID: <201307051945.47044.vbart@nginx.com> On Friday 05 July 2013 19:05:58 benseb wrote: > The compile command I used was: > > --user=nginx --group=nginx --prefix=/usr/share/nginx > --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/subsys/nginx --with-http_ssl_module > --with-http_realip_module --with-http_addition_module > --with-http_xslt_module --with-http_image_filter_module > --with-http_geoip_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_degradation_module --with-http_stub_status_module > --with-http_perl_module --with-http_mp4_module --with-http_spdy_module > --with-http_gunzip_module --with-mail --with-file-aio > --with-mail_ssl_module --with-ipv6 --with-cc-opt='-O2 -g' > --with-cc-opt='-O2 -g' > > As far as I know, this is the only version of openssl installed, unless I'm > missing something? > > [ben at lb-3 ~]$ which openssl > /usr/bin/openssl > > [ben at lb-3 ~]$ /usr/bin/openssl version > OpenSSL 1.0.1e 11 Feb 2013 > It's binary, but what about header files? You can have compiled binaries from one version of library while header files of another. What about nginx, are you sure that your scripts run the same version, and use the same config file? There are a lot of ways to shoot yourself in the foot when you install something from source on a system that relies on packages. And again, is there something in nginx error log? wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Jul 5 15:58:39 2013 From: nginx-forum at nginx.us (benseb) Date: Fri, 05 Jul 2013 11:58:39 -0400 Subject: SPDY Installed but not working? In-Reply-To: <201307051945.47044.vbart@nginx.com> References: <201307051945.47044.vbart@nginx.com> Message-ID: <570eded4cac50f8b250b7fe20f297735.NginxMailingListEnglish@forum.nginx.org> I did a replace on OpenSSL using YUM which should have removed all of the existing 0.98 version I presume. Nginx was a clean install (from source) so shouldnt have clashed with anything? I'm not sure where to go from here, nothing in the error logs. Previously when I tried to run spdy/nginx and the wrong version was installed, it showed this in the error logs, but this no longer happens on this new install (and new server) so it seems to be loading ok Does it matter that this is setup as a load balancer, with two upstream servers behind (We use proxy in HTTPS mode) - I am under the assumption that SPDY can be installed on this front end without worrying about the backend servers as that's a completely separate handshake, etc? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240545,240613#msg-240613 From nginx-forum at nginx.us Sat Jul 6 17:10:46 2013 From: nginx-forum at nginx.us (prasunb) Date: Sat, 06 Jul 2013 13:10:46 -0400 Subject: new requests are not updated in nginx Message-ID: <661a1ff0ee92eb7d4d83e6f3637c6bf2.NginxMailingListEnglish@forum.nginx.org> Hello, I have installed nginx as proxy server for our project purposes. 
We kept our application in our private lan and made that private url (say http://a.b.c.d:1234/appName) an entry in nginx config file (/etc/nginx/stes-available/web-apps ). Then restarted the nginx service to take effect new changes. nginx is running in 80 port and accessible from internet. In this way nginx had been run for last 4 months. But, suddenly it stopped updating itself. I mean whatever new url I do add in the config file, nginx is not redirecting any request to the real application. I have tracked the ports using tcpdump. nginx receiving the request, but it is not forwarding the request towards real server. I have checked the access log file and the error.log file, but no error is logged there. Please provide me any suggesion. Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240616,240616#msg-240616 From contact at jpluscplusm.com Sat Jul 6 20:23:58 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 6 Jul 2013 21:23:58 +0100 Subject: new requests are not updated in nginx In-Reply-To: <661a1ff0ee92eb7d4d83e6f3637c6bf2.NginxMailingListEnglish@forum.nginx.org> References: <661a1ff0ee92eb7d4d83e6f3637c6bf2.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 6 Jul 2013 18:11, "prasunb" wrote: > > Hello, > I have installed nginx as proxy server for our project purposes. We > kept our application in our private lan and made that private url (say > http://a.b.c.d:1234/appName) an entry in nginx config file > (/etc/nginx/stes-available/web-apps ). Then restarted the nginx service to > take effect new changes. nginx is running in 80 port and accessible from > internet. In this way nginx had been run for last 4 months. But, suddenly it > stopped updating itself. I mean whatever new url I do add in the config > file, nginx is not redirecting any request to the real application. > > I have tracked the ports using tcpdump. nginx receiving the request, but it > is not forwarding the request towards real server. I have checked the access > log file and the error.log file, but no error is logged there. Check that your restart script is REALLY restarting the process. Check the PID age. I've seen this before on certain OSes where a restart with configuration errors present resulted in silent failure and no restart. HTH, Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jul 7 03:31:47 2013 From: nginx-forum at nginx.us (prasunb) Date: Sat, 06 Jul 2013 23:31:47 -0400 Subject: new requests are not updated in nginx In-Reply-To: References: Message-ID: <7438023895d66727c16e238a4951d11c.NginxMailingListEnglish@forum.nginx.org> I have checked the PID status. Its PID is changing with every restart. Even though I am getting two warning messages while restarting, but don't think they are the culprits. warning messages........ Starting nginx: nginx: [warn] conflicting server name "127.0.0.1" on 0.0.0.0:80, ignored nginx: [warn] conflicting server name "127.0.0.1" on 0.0.0.0:80, ignored nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240616,240619#msg-240619 From nginx-forum at nginx.us Sun Jul 7 09:30:40 2013 From: nginx-forum at nginx.us (lennart) Date: Sun, 07 Jul 2013 05:30:40 -0400 Subject: Redirect www to no-www with variable (for multiple domains)? 
Message-ID: According to http://wiki.nginx.org/Pitfalls#Server_Name this is the best solution to redirect www to no-www for one domain: server { server_name www.domain.com; return 301 $scheme://domain.com$request_uri; } server { server_name domain.com; [...] } Is there a way to do this with regex or variables? I've ~15 domains and it would be more convenient to have only one entry "to rule them all" ;-) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240622,240622#msg-240622 From contact at jpluscplusm.com Sun Jul 7 11:43:32 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 7 Jul 2013 12:43:32 +0100 Subject: Redirect www to no-www with variable (for multiple domains)? In-Reply-To: References: Message-ID: On 7 Jul 2013 10:30, "lennart" wrote: > > According to http://wiki.nginx.org/Pitfalls#Server_Name this is the best > solution to redirect www to no-www for one domain: > > server { > server_name www.domain.com; > return 301 $scheme://domain.com$request_uri; > } > server { > server_name domain.com; > [...] > } > > Is there a way to do this with regex or variables? I've ~15 domains and it > would be more convenient to have only one entry "to rule them all" ;-) Absolutely there are ways to do this all in one go. Have a try and let us know where you get stuck :-) Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajan at noppix.com Sun Jul 7 11:47:42 2013 From: sajan at noppix.com (Sajan Parikh) Date: Sun, 07 Jul 2013 06:47:42 -0500 Subject: Redirect www to no-www with variable (for multiple domains)? In-Reply-To: References: Message-ID: <51D9555E.9050601@noppix.com> Haven't testing this, but would you not be able to replace 'domain.com' with $host? server { server_name www.$host; rewrite ^(.*) $scheme://$host$request_uri permanent; } That should work I think. Right now, I do it the way you've descrived individually in all my .conf files for each domain. - Sajan Parikh On 07/07/2013 04:30 AM, lennart wrote: > According to http://wiki.nginx.org/Pitfalls#Server_Name this is the best > solution to redirect www to no-www for one domain: > > server { > server_name www.domain.com; > return 301 $scheme://domain.com$request_uri; > } > server { > server_name domain.com; > [...] > } > > Is there a way to do this with regex or variables? I've ~15 domains and it > would be more convenient to have only one entry "to rule them all" ;-) > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240622,240622#msg-240622 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Jul 7 12:22:15 2013 From: nginx-forum at nginx.us (lennart) Date: Sun, 07 Jul 2013 08:22:15 -0400 Subject: Redirect www to no-www with variable (for multiple domains)? In-Reply-To: References: Message-ID: Jonathan Matthews Wrote: ------------------------------------------------------- > On 7 Jul 2013 10:30, "lennart" wrote: > > > > According to http://wiki.nginx.org/Pitfalls#Server_Name this is the > best > > solution to redirect www to no-www for one domain: > > > > server { > > server_name www.domain.com; > > return 301 $scheme://domain.com$request_uri; > > } > > server { > > server_name domain.com; > > [...] > > } > > > > Is there a way to do this with regex or variables? I've ~15 domains > and it > > would be more convenient to have only one entry "to rule them all" > ;-) > > Absolutely there are ways to do this all in one go. 
Have a try and > let us > know where you get stuck :-) > > Jonathan > _______________________________________________ As a newbie to NGINX (used 10yrs APACHE before) i don't know the exact route on the wiki & forum ;-). I also not found a newbie-section ;-) But off course, i shall post the solution. Sajan tells about $host, i'll give it a try Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240622,240625#msg-240625 From sajan at noppix.com Sun Jul 7 18:47:17 2013 From: sajan at noppix.com (Sajan Parikh) Date: Sun, 7 Jul 2013 13:47:17 -0500 Subject: Redirect www to no-www with variable (for multiple domains)? In-Reply-To: <51D9555E.9050601@noppix.com> References: <51D9555E.9050601@noppix.com> Message-ID: I should correct my server_name line. It should not have the variable, but the actual domain names. So I guess you'll still have to come back and add each domain. Sorry. On phone. Sajan Parikh On Jul 7, 2013, at 6:47 AM, Sajan Parikh wrote: > Haven't testing this, but would you not be able to replace 'domain.com' with $host? > > server { > server_name www.$host; > rewrite ^(.*) $scheme://$host$request_uri permanent; > } > > That should work I think. Right now, I do it the way you've descrived individually in all my .conf files for each domain. > > - Sajan Parikh > > On 07/07/2013 04:30 AM, lennart wrote: >> According to http://wiki.nginx.org/Pitfalls#Server_Name this is the best >> solution to redirect www to no-www for one domain: >> >> server { >> server_name www.domain.com; >> return 301 $scheme://domain.com$request_uri; >> } >> server { >> server_name domain.com; >> [...] >> } >> >> Is there a way to do this with regex or variables? I've ~15 domains and it >> would be more convenient to have only one entry "to rule them all" ;-) >> >> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240622,240622#msg-240622 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Jul 7 20:27:23 2013 From: nginx-forum at nginx.us (Peleke) Date: Sun, 07 Jul 2013 16:27:23 -0400 Subject: Disable open_file_cache for a specific location In-Reply-To: References: Message-ID: <6c96c039e8b6c53006338c12eead952f.NginxMailingListEnglish@forum.nginx.org> Sorry but my last message was delayed, may be you didn't see it but the problem is still not solved and it worked before with Apache. I would be happy if you could help me, thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240630#msg-240630 From edho at myconan.net Mon Jul 8 04:47:08 2013 From: edho at myconan.net (Edho Arief) Date: Mon, 8 Jul 2013 13:47:08 +0900 Subject: Redirect www to no-www with variable (for multiple domains)? In-Reply-To: References: Message-ID: On Sun, Jul 7, 2013 at 6:30 PM, lennart wrote: > > Is there a way to do this with regex or variables? 
I've ~15 domains and it > would be more convenient to have only one entry "to rule them all" ;-) > the regex way, which is supposedly slower: server { server_name ~^www\.(?<domain>.+)$; listen 80; listen [::]:80; return 301 $scheme://$domain$request_uri; } -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From rkearsley at blueyonder.co.uk Mon Jul 8 13:06:48 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 08 Jul 2013 14:06:48 +0100 Subject: spdy per location Message-ID: <51DAB968.4070904@blueyonder.co.uk> Hi I'm trying to set up spdy so that I can choose whether or not to use it based on the server location that's accessed. As I understand it, the underlying protocol (http/https/spdy) is established first, before any request can be sent (e.g. before we know which location it will match). I know this example is totally impossible, but would like to know if there is a real way of doing it: server { listen 80; listen 443 ssl spdy; location / { spdy off; blah; } location /spdy { spdy on; blah; } } Many thanks From appa at perusio.net Mon Jul 8 14:45:32 2013 From: appa at perusio.net (António P. P. Almeida) Date: Mon, 8 Jul 2013 16:45:32 +0200 Subject: spdy per location In-Reply-To: <51DAB968.4070904@blueyonder.co.uk> References: <51DAB968.4070904@blueyonder.co.uk> Message-ID: spdy is a socket directive option. You cannot set it outside of that context AFAICT. What you can do is play with redirects between two hosts, one with spdy and one without. Since certs usually have at least one DNS name besides the CN, you can do it with the same cert. I haven't tested it, though, and don't know if Nginx complains about a duplicated cert in different hosts. It's not nice or clean. It's an ugly hack. ----appa On Mon, Jul 8, 2013 at 3:06 PM, Richard Kearsley wrote: > Hi > I'm trying to set up spdy so that I can choose whether or not to use it > based on the server location that's accessed > As I understand, the underlying protocol (http/https/spdy) is established > first before any request can be sent (e.g. before we know which location it > will match) > > I know this example is totally impossible, but would like to know if there > is a real way of doing it: > > server > { > listen 80; > listen 443 ssl spdy; > > location / > { > spdy off; > blah; > } > > location /spdy > { > spdy on; > blah; > } > } > > Many thanks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jul 8 14:49:47 2013 From: nginx-forum at nginx.us (Sylvia) Date: Mon, 08 Jul 2013 10:49:47 -0400 Subject: spdy per location In-Reply-To: <51DAB968.4070904@blueyonder.co.uk> References: <51DAB968.4070904@blueyonder.co.uk> Message-ID: <81c6915949699b82b8dba49b55b42c18.NginxMailingListEnglish@forum.nginx.org> Hello. It works like this: 1. accept tcp connection 2. establish ssl session a) presenting first certificate b) optional: present 2nd certificate for desired virtual host via SNI extension 3. NPN (next protocol negotiation), enabling SPDY ------- 4. requesting content I don't think there is a way to use spdy per location, only per (virtual) server.
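To put the per-server alternative in config form - this is only an untested sketch, and the addresses, hostnames and certificate paths are placeholders: since spdy (like ssl) is a property of the listen socket and is negotiated before nginx sees the request, the split has to happen between listen sockets rather than between locations:

server {
    # plain HTTPS, no SPDY
    listen 192.0.2.1:443 ssl;
    server_name plain.example.com;
    ssl_certificate     /etc/nginx/example.crt;
    ssl_certificate_key /etc/nginx/example.key;
}

server {
    # HTTPS with SPDY, on a second IP (or a second port)
    listen 192.0.2.2:443 ssl spdy;
    server_name spdy.example.com;
    ssl_certificate     /etc/nginx/example.crt;
    ssl_certificate_key /etc/nginx/example.key;
}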
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240651,240666#msg-240666 From sajan at noppix.com Mon Jul 8 16:40:12 2013 From: sajan at noppix.com (Sajan Parikh) Date: Mon, 08 Jul 2013 11:40:12 -0500 Subject: spdy per location In-Reply-To: References: <51DAB968.4070904@blueyonder.co.uk> Message-ID: <51DAEB6C.8040205@noppix.com> I guess if you cover all your bases when it comes to making sure your redirect where your users want to go, this might be one use of 'www'. DOMAIN.COM can have SPDY and WWW.DOMAIN.COM can have it off. Then you just redirect each location to the other one, or serve it. Sajan Parikh /Owner, Noppix LLC/ e: sajan at noppix.com o: (563) 726-0371 c: (563) 447-0822 Noppix LLC Logo On 07/08/2013 09:45 AM, Ant?nio P. P. Almeida wrote: > spdy is a socket directive option. You cannot set it outside of that > context AFAICT. > > What you can do is play with redirects between two hosts, one with > spdy and one without. > > Since usually certs have at least one DNS name besides the CN you can > do it with the same cert. Probably > I haven't tested and don't know if Nginx complains about a duplicated > cert in different hosts. > > It's not nice or clean. It's an ugly hack. > > ----appa > > > > On Mon, Jul 8, 2013 at 3:06 PM, Richard Kearsley > > wrote: > > Hi > I'm trying to set up spdy so that I can choose weather or not to > use it based on the server location that's accessed > As I understand, the underlying protocol (http/https/spdy) is > established first before any request can be sent (e.g. before we > know which location it will match) > > I know this example is totally impossible, but would like to know > if there is a real way of doing it: > > server > { > listen 80; > listen 443 ssl spdy; > > location / > { > spdy off; > blah; > } > > location /spdy > { > spdy on; > blah; > } > } > > Many thanks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NewNoppixEmailLogo.png Type: image/png Size: 7312 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4473 bytes Desc: S/MIME Cryptographic Signature URL: From bsdkazakhstan at gmail.com Mon Jul 8 16:45:04 2013 From: bsdkazakhstan at gmail.com (BSD Kazakhstan) Date: Mon, 8 Jul 2013 19:45:04 +0300 Subject: Perl/CGI on Nginx? Message-ID: Hello everyone. I'm running Nginx on OpenBSD. PHP works fine, but how can I add support to .pl and .cgi files for Nginx? Many thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.algermissen at nordsc.com Mon Jul 8 16:45:33 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Mon, 8 Jul 2013 18:45:33 +0200 Subject: Removing a request header in an access phase handler Message-ID: <454BBB75-7E96-4815-A387-6E7D85B4D917@nordsc.com> Hi, I developing a handler for the access phase. In this handler I intend to remove a certain header. It seems that this is exceptionally hard to do - the only hint I have is how it is done in the headers_more module. 
However, I wonder, whether there is an easier way, given that it is not an unusual operation. If not, I'd greatly benefit from a documentation of the list and list-part types. Is that available somewhere? Seems hard to figure out all the bits and pieces that one has to go through to cleanly remove an element from a list. Jan From francis at daoine.org Mon Jul 8 16:54:47 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jul 2013 17:54:47 +0100 Subject: Perl/CGI on Nginx? In-Reply-To: References: Message-ID: <20130708165447.GB11600@craic.sysops.org> On Mon, Jul 08, 2013 at 07:45:04PM +0300, BSD Kazakhstan wrote: Hi there, > I'm running Nginx on OpenBSD. PHP works fine, but how can I add support to > .pl and .cgi files for Nginx? nginx doesn't "do" php, pl, or cgi itself. You must set up an external server to handle them, and configure nginx to talk to that external server. Probably you have set up a fastcgi server which knows how to handle the php files, and you have configured nginx to "fastcgi_pass" to that server for the appropriate urls. You can do the same with pl and cgi files. Or perhaps you can set up a http server which knows how to handle the "dynamic" files; in that case you would configure nginx to "proxy_pass" to the external server. f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Jul 8 17:08:23 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jul 2013 18:08:23 +0100 Subject: Disable open_file_cache for a specific location In-Reply-To: <6c96c039e8b6c53006338c12eead952f.NginxMailingListEnglish@forum.nginx.org> References: <6c96c039e8b6c53006338c12eead952f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130708170823.GC11600@craic.sysops.org> On Sun, Jul 07, 2013 at 04:27:23PM -0400, Peleke wrote: Hi there, > Sorry but my last message was delayed, may be you didn't see it but the > problem is still not solved and it worked before with Apache. I would be > happy if you could help me, thanks! >From all of the words you have written so far, I still do not know of any one url which you access which gives you a response that is different in a specific way from what you expect to get. If you choose not to provide information, I fail to be surprised if others aren't inspired to try to help you. My suggestion is that you do whatever it takes to write a short nginx.conf which can be used to demonstrate the failure that you are trying to report. (Probably: begin with your current nginx.conf, and remove lines which are not immediately obviously necessary, testing after each removal to confirm that things that did work do still work, and that the thing that did fail does still fail.) If it turns out that you can't reproduce the failure, then congratulations, you have fixed it. Otherwise, send the the no-more-than-50-line nginx.conf that is sufficient for anyone else to use to build their own failing test system. And include the output of "nginx -V", and include a very clear and specific: * I access *this* url. * I expect to see *this* output. * I actually see *this* output. and, if it is not immediately obvious: * the difference between the last two is *this*. 
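As a purely illustrative sketch (every path and port here is made up), a stripped-down test config often ends up no bigger than this:

events { }

http {
    server {
        listen 8080;
        root /tmp/test-docroot;

        location / {
            # keep only the directive whose behaviour you are reporting
        }
    }
}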
Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Jul 8 17:59:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Jul 2013 21:59:20 +0400 Subject: Removing a request header in an access phase handler In-Reply-To: <454BBB75-7E96-4815-A387-6E7D85B4D917@nordsc.com> References: <454BBB75-7E96-4815-A387-6E7D85B4D917@nordsc.com> Message-ID: <20130708175920.GD38853@mdounin.ru> Hello! On Mon, Jul 08, 2013 at 06:45:33PM +0200, Jan Algermissen wrote: > Hi, > > I developing a handler for the access phase. In this handler I > intend to remove a certain header. > > It seems that this is exceptionally hard to do - the only hint I > have is how it is done in the headers_more module. > > However, I wonder, whether there is an easier way, given that it > is not an unusual operation. Removing request headers from a request isn't something supported by nginx. What is supported is filtering/modification of headers passed to upstream servers with proxy_set_header (fastcgi_param, ...). E.g., this is how proxy module provies a way to add X-Forwarded-For header. It implements the $proxy_add_x_forward_for variable, which is expected to be used in a config like this: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > If not, I'd greatly benefit from a documentation of the list and > list-part types. Is that available somewhere? Seems hard to > figure out all the bits and pieces that one has to go through to > cleanly remove an element from a list. Try looking into src/core/ngx_list.[ch] for a documentation in C. It doesn't really support elements removal though. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Jul 8 19:42:31 2013 From: nginx-forum at nginx.us (Peleke) Date: Mon, 08 Jul 2013 15:42:31 -0400 Subject: Disable open_file_cache for a specific location In-Reply-To: <20130708170823.GC11600@craic.sysops.org> References: <20130708170823.GC11600@craic.sysops.org> Message-ID: Okay: This is the demo gallery of the gallery plugin I (want to) use: http://wp.oopstouch.com/?page_id=22 As you can see the first thumbnail pictures are thumbnails for different albums. When you click on one of them the album will be opened and then you can click on a thumbnail picture again to open it in fullscreen. If you compare that with my gallery (which worked before with Apache) here: http://www.peleke.de/galerie/google/ Nothing happens after a click on an album thumbnail. You will only see a reload of the page and this added to the URL: ?page_id=0&albid=5898003541337462625 (for example) which should show the thumbnail pictures of the album. Maybe it is just a simple URL problem (that was solved in a .htaccess file with Apache before) or expires js problem!? I am no expert and would be happy if you could help me, thanks! 
Different nginx conf files: sendfile on; tcp_nopush on; tcp_nodelay on; types_hash_max_size 2048; send_timeout 2; client_max_body_size 20m; client_body_buffer_size 128k; client_body_timeout 10; client_header_timeout 10; keepalive_timeout 10; open_file_cache max=5000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; gzip on; gzip_http_version 1.1; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js; gzip_buffers 16 8k; gzip_disable "MSIE [1-6]\.(?!.*SV1)" location ~* ^.+.(jpg|jpeg|gif|css|js|png|ico|html|xml|txt)$ { access_log off; expires modified max; } location ~ /\.ht { deny all; } fastcgi_split_path_info ^(.+\.php)(/.+)$; location / { try_files $uri $uri/ /index.php; #include /etc/nginx/proxy_params; } location ~ \.php$ { #limit_req zone=limit burst=5 nodelay; try_files $uri =404; #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_intercept_errors off; fastcgi_read_timeout 120; fastcgi_buffers 256 4k; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240678#msg-240678 From nginx-forum at nginx.us Mon Jul 8 21:47:12 2013 From: nginx-forum at nginx.us (lennart) Date: Mon, 08 Jul 2013 17:47:12 -0400 Subject: Redirect www to no-www with variable (for multiple domains)? In-Reply-To: References: Message-ID: Edho Arief Wrote: ------------------------------------------------------- > On Sun, Jul 7, 2013 at 6:30 PM, lennart wrote: > > > > Is there a way to do this with regex or variables? I've ~15 domains > and it > > would be more convenient to have only one entry "to rule them all" > ;-) > > > > the regex way, which is supposedly slower: > > server { > server_name ~^www\.(?.+)$; > listen 80; listen [::]:80; > return 301 $scheme://$domain$request_uri; > } > > -- > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx 'supposedly slower' is a very good argument to not use RegEx ;-) Do you have a source for that? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240622,240679#msg-240679 From francis at daoine.org Mon Jul 8 22:05:10 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jul 2013 23:05:10 +0100 Subject: Disable open_file_cache for a specific location In-Reply-To: References: <20130708170823.GC11600@craic.sysops.org> Message-ID: <20130708220510.GD11600@craic.sysops.org> On Mon, Jul 08, 2013 at 03:42:31PM -0400, Peleke wrote: Hi there, > http://www.peleke.de/galerie/google/ > Nothing happens after a click on an album thumbnail. You will only see a > reload of the page and this added to the URL: > ?page_id=0&albid=5898003541337462625 (for example) which should show the > thumbnail pictures of the album. I think that you are saying: http://www.peleke.de/galerie/google/ does return the expected content, and http://www.peleke.de/galerie/google/?page_id=0&albid=5898003541337462625 does not return the expected content; instead it returns the same content as http://www.peleke.de/galerie/google/, as if the part after the ? is being ignored. Correct? 
> location / { > try_files $uri $uri/ /index.php; > #include /etc/nginx/proxy_params; > } If you use your favourite web search engine and look for the equivalent of "site:wordpress.org nginx", you'll probably get to http://codex.wordpress.org/Nginx If you search for "site:nginx.org wordpress", you'll probably get to http://wiki.nginx.org/WordPress They both have a "location /" block which is not the same as yours. The difference does seem to relate to the part after the ?. Do things work any better if you use what they suggest? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Jul 9 02:20:08 2013 From: nginx-forum at nginx.us (est) Date: Mon, 08 Jul 2013 22:20:08 -0400 Subject: $request_header_length and $request_body_length? Message-ID: <27b7977db8166ecb346ecd651f84709d.NginxMailingListEnglish@forum.nginx.org> Hello, I am trying to diagnose a weird 408 error problem on nginx. My theory is that the client might be using some kind of crack making the request body too short for Content-Length header, so nginx waits more data and ultimately fails at 60 seconds timeout. I tried to add few more log options, like $content_length and $request_length, however, the $request_length includes all the header and body length, thus can not get actuall body length. Does nginx provide variables like $request_header_length and $request_body_length? Thanks in advance! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240681,240681#msg-240681 From robm at fastmail.fm Tue Jul 9 02:52:24 2013 From: robm at fastmail.fm (Robert Mueller) Date: Tue, 09 Jul 2013 12:52:24 +1000 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> Message-ID: <1373338344.3599.140661253413574.544426D0@webmail.messagingengine.com> > I have the same problem. I really need this feature. how is this going? > > >> Maxim Dounin: > > Valentin is already worked on this, and I believe he'll be able to > > provide a little bit more generic patch. > > does this mean in the future we can use epoll to detect the client > connection's abort? Yes, I haven't heard in a while what the status of this is. I'm currently using our existing patch, but would love to drop it and upgrade when it's included in nginx core... Rob From jan.algermissen at nordsc.com Mon Jul 8 22:13:37 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Tue, 9 Jul 2013 00:13:37 +0200 Subject: Removing a request header in an access phase handler In-Reply-To: <20130708175920.GD38853@mdounin.ru> References: <454BBB75-7E96-4815-A387-6E7D85B4D917@nordsc.com> <20130708175920.GD38853@mdounin.ru> Message-ID: <5F3C8ECB-8678-490E-BED6-33EADAE240D6@nordsc.com> Hi Maxim, thanks, question inline: On 08.07.2013, at 19:59, Maxim Dounin wrote: > Hello! > > On Mon, Jul 08, 2013 at 06:45:33PM +0200, Jan Algermissen wrote: > >> Hi, >> >> I developing a handler for the access phase. In this handler I >> intend to remove a certain header. >> >> It seems that this is exceptionally hard to do - the only hint I >> have is how it is done in the headers_more module. >> >> However, I wonder, whether there is an easier way, given that it >> is not an unusual operation. > > Removing request headers from a request isn't something supported > by nginx. > > What is supported is filtering/modification of headers passed to > upstream servers with proxy_set_header (fastcgi_param, ...). 
> > E.g., this is how the proxy module provides a way to add the > X-Forwarded-For header. It implements the $proxy_add_x_forwarded_for > variable, which is expected to be used in a config like this: > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > >> If not, I'd greatly benefit from a documentation of the list and >> list-part types. Is that available somewhere? Seems hard to >> figure out all the bits and pieces that one has to go through to >> cleanly remove an element from a list. > > Try looking into src/core/ngx_list.[ch] for a documentation in C. > It doesn't really support elements removal though. Yes, I could provide a list_remove implementation - the problem is, I think that the request->headers_in convenience fields point to elements of the .headers list, yes? Given that list elts is an array, re-organizing that array would invalidate the pointers of headers_in.
> > Right ow, I think that renaming the header in question and > setting the headers_in field to null is probably the only > option. > > What do you think? Quoting myself: : Removing request headers from a request isn't something : supported by nginx. If you are going to remove request headers - first of all, you should understand that what you are doing is a hack. And asking me what do I think is a bit pointless - I think it's a hack. :) -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 9 11:37:43 2013 From: nginx-forum at nginx.us (dlynam) Date: Tue, 09 Jul 2013 07:37:43 -0400 Subject: Permissions not working for upload module Message-ID: <18ef6b09a073315dd0af23e6c1e0dda5.NginxMailingListEnglish@forum.nginx.org> I've recently created an RPM for Cent-Os 4 for NGINX 1.2.6 which includes the `nginx_upload_module version 2.2.0` When the files get uploaded they don't have correct permissions. They have the default user `rw permission` However included the `upload_store_access` with the correct permissions See my config. location /upload { dav_methods PUT; root /opt/glassfish/domains/domain1/logs/uploadStore/; upload_store /opt/glassfish/domains/domain1/logs/uploadStore 1; upload_store_access all:rw; upload_set_form_field $upload_field_name.name $arg_filename; upload_set_form_field $upload_field_name.content "$upload_content_type"; upload_set_form_field $upload_field_name.path "$upload_tmp_path"; allow all; } The files get output to `/opt/glassfish/domains/domain1/logs/uploadStore/upload/` Is this correct? I would expect them to just go to `/opt/glassfish/domains/domain1/logs/uploadStore/` I need to get the output files to be readable by every user. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240692,240692#msg-240692 From mdounin at mdounin.ru Tue Jul 9 12:03:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Jul 2013 16:03:45 +0400 Subject: $request_header_length and $request_body_length? In-Reply-To: <27b7977db8166ecb346ecd651f84709d.NginxMailingListEnglish@forum.nginx.org> References: <27b7977db8166ecb346ecd651f84709d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130709120345.GJ38853@mdounin.ru> Hello! On Mon, Jul 08, 2013 at 10:20:08PM -0400, est wrote: > Hello, > > I am trying to diagnose a weird 408 error problem on nginx. > > My theory is that the client might be using some kind of crack making the > request body too short for Content-Length header, so nginx waits more data > and ultimately fails at 60 seconds timeout. > > I tried to add few more log options, like $content_length and > $request_length, however, the $request_length includes all the header and > body length, thus can not get actuall body length. > > Does nginx provide variables like $request_header_length and > $request_body_length? No, there is no such variables. For debugging purposes there is debug log provided, see http://nginx.org/en/docs/debugging_log.html. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Jul 9 12:08:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Jul 2013 16:08:18 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <1373338344.3599.140661253413574.544426D0@webmail.messagingengine.com> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <1373338344.3599.140661253413574.544426D0@webmail.messagingengine.com> Message-ID: <20130709120817.GK38853@mdounin.ru> Hello! 
On Tue, Jul 09, 2013 at 12:52:24PM +1000, Robert Mueller wrote: > > > I have the same problem. I really need this feature. how is this going? > > > > >> Maxim Dounin: > > > Valentin is already worked on this, and I believe he'll be able to > > > provide a little bit more generic patch. > > > > does this mean in the future we can use epoll to detect the client > > connection's abort? > > Yes, I haven't heard in a while what the status of this is. I'm > currently using our existing patch, but would love to drop it and > upgrade when it's included in nginx core... As far as I can tell the state is roughly the same (though patch in question improved a bit). Valentin? -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 9 13:11:56 2013 From: nginx-forum at nginx.us (Peleke) Date: Tue, 09 Jul 2013 09:11:56 -0400 Subject: Disable open_file_cache for a specific location In-Reply-To: <20130708220510.GD11600@craic.sysops.org> References: <20130708220510.GD11600@craic.sysops.org> Message-ID: Yes, you are right. Such a small change! Thanks a lot and sorry for taking such a long time. Next time I know how to post a proper question. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240504,240687#msg-240687 From rkearsley at blueyonder.co.uk Tue Jul 9 16:50:23 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Tue, 09 Jul 2013 17:50:23 +0100 Subject: spdy per location In-Reply-To: <51DAEB6C.8040205@noppix.com> References: <51DAB968.4070904@blueyonder.co.uk> <51DAEB6C.8040205@noppix.com> Message-ID: <51DC3F4F.6010008@blueyonder.co.uk> Thanks all I think I will just open another port (looks like 6121 is registered for spdy?) because I'm not using hostnames (only IPs) and I don't like redirects so: server { listen 80; listen 443 ssl; listen 6121 ssl spdy; # it will still fall-back to https if the client doesn't support spdy location / { blah; } } Cheers On 08/07/13 17:40, Sajan Parikh wrote: > I guess if you cover all your bases when it comes to making sure your > redirect where your users want to go, this might be one use of 'www'. > DOMAIN.COM can have SPDY and WWW.DOMAIN.COM can have it off. > > Then you just redirect each location to the other one, or serve it. > > Sajan Parikh > /Owner, Noppix LLC/ > > e: sajan at noppix.com > o: (563) 726-0371 > c: (563) 447-0822 > > Noppix LLC Logo > On 07/08/2013 09:45 AM, Ant?nio P. P. Almeida wrote: >> spdy is a socket directive option. You cannot set it outside of that >> context AFAICT. >> >> What you can do is play with redirects between two hosts, one with >> spdy and one without. >> >> Since usually certs have at least one DNS name besides the CN you can >> do it with the same cert. Probably >> I haven't tested and don't know if Nginx complains about a duplicated >> cert in different hosts. >> >> It's not nice or clean. It's an ugly hack. >> >> ----appa >> >> >> >> On Mon, Jul 8, 2013 at 3:06 PM, Richard Kearsley >> > wrote: >> >> Hi >> I'm trying to set up spdy so that I can choose weather or not to >> use it based on the server location that's accessed >> As I understand, the underlying protocol (http/https/spdy) is >> established first before any request can be sent (e.g. 
before we >> know which location it will match) >> >> I know this example is totally impossible, but would like to know >> if there is a real way of doing it: >> >> server >> { >> listen 80; >> listen 443 ssl spdy; >> >> location / >> { >> spdy off; >> blah; >> } >> >> location /spdy >> { >> spdy on; >> blah; >> } >> } >> >> Many thanks >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 7312 bytes Desc: not available URL: From ben at indietorrent.org Tue Jul 9 20:48:27 2013 From: ben at indietorrent.org (Ben Johnson) Date: Tue, 09 Jul 2013 16:48:27 -0400 Subject: Possible to overwrite a fastcgi_param "later", once a location block has already been closed? Message-ID: <51DC771B.8020600@indietorrent.org> Hello, I am working with a server configuration that is partly outside of my control, and have a need to overwrite a fastcgi_param "after" the directives that are outside of my control have already been included. The basics of the configuration are: ------------------------------------------------------------------- # [...] location ~ \.php$ { try_files /2ed86bea62460140e9b23d047f7d68b1.htm @php; } location @php { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9013; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; } # At this point, the fastcgi_param values have already been defined. # This is the include file that I am able to modify. include my-include.conf ------------------------------------------------------------------- Is it possible for me to overwrite the values that are defined on the line "include /etc/nginx/fastcgi_params;" from within the included file that I can modify, "my-include.conf"? In particular, I would like to hard-code the SERVER_NAME value within "my-include.conf". Thanks for any help, -Ben From francis at daoine.org Tue Jul 9 21:47:55 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Jul 2013 22:47:55 +0100 Subject: Possible to overwrite a fastcgi_param "later", once a location block has already been closed? In-Reply-To: <51DC771B.8020600@indietorrent.org> References: <51DC771B.8020600@indietorrent.org> Message-ID: <20130709214755.GE11600@craic.sysops.org> On Tue, Jul 09, 2013 at 04:48:27PM -0400, Ben Johnson wrote: Hi there, > I am working with a server configuration that is partly outside of my > control, I suspect that that's not the intended use case for nginx. Whoever writes the config file has the opportunity to configure it not to start. > and have a need to overwrite a fastcgi_param "after" the > directives that are outside of my control have already been included. That sentence is possible; but not the one in your Subject: line. 
> location @php { > try_files $uri =404; > include /etc/nginx/fastcgi_params; > fastcgi_pass 127.0.0.1:9013; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_intercept_errors on; > } > > # At this point, the fastcgi_param values have already been defined. > > # This is the include file that I am able to modify. > > include my-include.conf That line must be within the "location @php" block, in order for it to do what you want. (Otherwise, you could try completely hijacking the config by using something like location ^~ / { location ~ php$ { # you control what goes here } # and what goes here } in my-include.conf, but that would be a good way to lose config privileges on the server.) > Is it possible for me to overwrite the values that are defined on the > line "include /etc/nginx/fastcgi_params;" from within the included file > that I can modify, "my-include.conf"? Current nginx sends all fastcgi_params that are defined, in order; so if you set the same one multiple times in the same context, all are sent. Your fastcgi server probably only pays attention to the first one received, or maybe to the last one received. Test to find out. When you know which it is, put your "include" line before or after the other "include" line, and see how it works. Future nginx may stop sending all repeated fastcgi_params. If you change fastcgi server, the one it pays attention to may change. So your testing should be repeated after every upgrade. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Jul 9 23:50:32 2013 From: nginx-forum at nginx.us (badtzhou) Date: Tue, 09 Jul 2013 19:50:32 -0400 Subject: nginx cache loader process In-Reply-To: <53D87DB8-8106-4505-BDA5-A82EFB69B4E7@sysoev.ru> References: <53D87DB8-8106-4505-BDA5-A82EFB69B4E7@sysoev.ru> Message-ID: <483697e03cc8e7c419125243aee103ae.NginxMailingListEnglish@forum.nginx.org> We are running nginx 1.2.9. What will happen if cache loader is running. The file on disk haven't been loaded into the cache zone yet and someone try to access the same file. Will it cause any issue? How will it affect cache loader process. If I don't want any file on the disk to be loaded, can I simply kill the cache loader process? Will that cause any problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240564,240702#msg-240702 From vill.srk at gmail.com Tue Jul 9 23:54:22 2013 From: vill.srk at gmail.com (Vil Surkin) Date: Wed, 10 Jul 2013 03:54:22 +0400 Subject: nginx cache loader process In-Reply-To: <483697e03cc8e7c419125243aee103ae.NginxMailingListEnglish@forum.nginx.org> References: <53D87DB8-8106-4505-BDA5-A82EFB69B4E7@sysoev.ru> <483697e03cc8e7c419125243aee103ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8A6A63C01DED4CBD8F72EB0AE750887E@gmail.com> Hello, Take a look at 'proxy_cache_lock' directive. -- Vil Surkin ?????, 10 ???? 2013 ?. ? 3:50, badtzhou ???????: > We are running nginx 1.2.9. What will happen if cache loader is running. The > file on disk haven't been loaded into the cache zone yet and someone try to > access the same file. Will it cause any issue? How will it affect cache > loader process. > > If I don't want any file on the disk to be loaded, can I simply kill the > cache loader process? Will that cause any problem? 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240564,240702#msg-240702 > > _______________________________________________ > nginx mailing list > nginx at nginx.org (mailto:nginx at nginx.org) > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at indietorrent.org Wed Jul 10 00:13:16 2013 From: ben at indietorrent.org (Ben Johnson) Date: Tue, 09 Jul 2013 20:13:16 -0400 Subject: Possible to overwrite a fastcgi_param "later", once a location block has already been closed? In-Reply-To: <20130709214755.GE11600@craic.sysops.org> References: <51DC771B.8020600@indietorrent.org> <20130709214755.GE11600@craic.sysops.org> Message-ID: <51DCA71C.6070908@indietorrent.org> On 7/9/2013 5:47 PM, Francis Daly wrote: > On Tue, Jul 09, 2013 at 04:48:27PM -0400, Ben Johnson wrote: > > Hi there, > >> I am working with a server configuration that is partly outside of my >> control, > > I suspect that that's not the intended use case for nginx. Whoever writes > the config file has the opportunity to configure it not to start. > To be clear, this lack of control is not due to a lack of privileges; I use ISPConfig, which has its own way of doing things (and it usually knows best, based on past experience). >> and have a need to overwrite a fastcgi_param "after" the >> directives that are outside of my control have already been included. > > That sentence is possible; but not the one in your Subject: line. > I see; there is a distinction to be made. Thanks for making it. :) >> location @php { >> try_files $uri =404; >> include /etc/nginx/fastcgi_params; >> fastcgi_pass 127.0.0.1:9013; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_intercept_errors on; >> } >> >> # At this point, the fastcgi_param values have already been defined. >> >> # This is the include file that I am able to modify. >> >> include my-include.conf > > That line must be within the "location @php" block, in order for it to > do what you want. > Okay; this means that I would need to modify the ISPConfig virtual host template for nginx. I would love to avoid that, if at all possible, for compatibility with future ISPConfig releases. > (Otherwise, you could try completely hijacking the config by using > something like > > location ^~ / { > location ~ php$ { > # you control what goes here > } > # and what goes here > } > > in my-include.conf, but that would be a good way to lose config privileges > on the server.) > No concerns in this regard; I administer the server. But it seems like taking that measure would defeat the purpose of using ISPConfig. >> Is it possible for me to overwrite the values that are defined on the >> line "include /etc/nginx/fastcgi_params;" from within the included file >> that I can modify, "my-include.conf"? > > Current nginx sends all fastcgi_params that are defined, in order; so > if you set the same one multiple times in the same context, all are sent. > I see. Presumably, if I set the same fastcgi_param multiple times in different contexts, any content in which inheritance applies will overwrite any previously-defined value with the new value. Is this presumption correct? It seems so, based on the documentation regarding fastcgi_params. > Your fastcgi server probably only pays attention to the first one > received, or maybe to the last one received. Test to find out. 
> > When you know which it is, put your "include" line before or after the > other "include" line, and see how it works. > It seems that my FastCGI service is concerned with the last value only. If I place the my preferred value after ISPConfig's include line, it is honored. > Future nginx may stop sending all repeated fastcgi_params. If you change > fastcgi server, the one it pays attention to may change. So your testing > should be repeated after every upgrade. > > f > Perhaps it is prudent to proceed under this proviso. Maybe I can find a way to make this work, using ISPConfig's configuration template "merging" functionality, to make this work. I'll post back with my findings. Thanks for all your help, Francis. Very thorough, as always. -Ben From nginx-forum at nginx.us Wed Jul 10 07:16:07 2013 From: nginx-forum at nginx.us (wolfy) Date: Wed, 10 Jul 2013 03:16:07 -0400 Subject: Problem with VPN IP address and Nginx Message-ID: <59b39f8238b48d8e99675818d25dd19b.NginxMailingListEnglish@forum.nginx.org> Hi all ! When i use OpenVPN, my remote ip address detected by Nginx (not used on reverse proxy) is different than Apache (standalone, just for test), or http://whatismyipaddress.com, the ip detected by Nginx is my real ip address, not the IP address of my VPN, so i cannot use allow/deny function correctly. Could you please help me ? My nginx.conf : user www-data www-data; worker_processes 2; events { worker_connections 1000; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; gzip_http_version 1.0; gzip_comp_level 2; gzip_proxied any; gzip_min_length 1100; gzip_buffers 16 8k; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/gif image/jpeg image/png; gzip_disable "MSIE [1-6].(?!.*SV1)"; gzip_vary on; server_tokens off; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } My vhost's : server { server_name XXX.tld; root /var/www/selfoss; listen 443; ssl on; ssl_certificate /etc/nginx/mycert.crt; ssl_certificate_key /etc/nginx/mykey.key; index index.php; access_log /var/log/nginx/selfoss-access.log; error_log /var/log/nginx/selfoss-error.log; location / { allow XX.XX.XX.XX; deny all; try_files $uri /public/$uri /index.php$is_args$args; } location ~* \ (gif|jpg|png) { expires 30d; } location ~ ^/favicons/.*$ { try_files $uri /data/$uri; } location ~* ^/(data\/logs|data\/sqlite|config\.ini|\.ht) { deny all; } location ~ \.php$ { client_body_timeout 360; send_timeout 360; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_intercept_errors on; } } Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240709,240709#msg-240709 From hello at apfelbox.net Wed Jul 10 07:40:49 2013 From: hello at apfelbox.net (Jannik Zschiesche) Date: Wed, 10 Jul 2013 09:40:49 +0200 Subject: Wrong server used in SSL request Message-ID: Hi everyone, I have a rather strange issue. I have a server with 3 configured urls: example.com (+ ssl) shop.example.com (+ ssl) example2.com (- ssl) If I now open https://example2.com the server of https://shop.example.com is used. My config looks like this: https://gist.github.com/apfelbox/c13a226633a7df92e3fe Does anybody have an idea? Thanks and Regards Jannik Zschiesche -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hello at apfelbox.net Wed Jul 10 07:43:01 2013 From: hello at apfelbox.net (Jannik Zschiesche) Date: Wed, 10 Jul 2013 09:43:01 +0200 Subject: Wrong server used in SSL request In-Reply-To: References: Message-ID: Hi, sorry, wrong link. Here is the correct one: https://gist.github.com/apfelbox/94c74ab9c515ee906e6b Regards Jannik -- Mit freundlichen Gr??en Jannik Zschiesche Am Mittwoch, 10. Juli 2013 um 09:40 schrieb Jannik Zschiesche: > Hi everyone, > > I have a rather strange issue. > > I have a server with 3 configured urls: > > example.com (http://example.com) (+ ssl) > shop.example.com (http://shop.example.com) (+ ssl) > example2.com (http://example2.com) (- ssl) > > If I now open https://example2.com the server of https://shop.example.com is used. > > > My config looks like this: > https://gist.github.com/apfelbox/c13a226633a7df92e3fe > > > Does anybody have an idea? > > > > Thanks and Regards > Jannik Zschiesche > > _______________________________________________ > nginx mailing list > nginx at nginx.org (mailto:nginx at nginx.org) > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Wed Jul 10 07:54:11 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 10 Jul 2013 08:54:11 +0100 Subject: Wrong server used in SSL request In-Reply-To: References: Message-ID: On 10 Jul 2013 08:41, "Jannik Zschiesche" wrote: > > Hi everyone, > > I have a rather strange issue. > > I have a server with 3 configured urls: > > example.com (+ ssl) > shop.example.com (+ ssl) > example2.com (- ssl) > > If I now open https://example2.com the server of https://shop.example.comis used. > > > My config looks like this: > https://gist.github.com/apfelbox/c13a226633a7df92e3fe > > > Does anybody have an idea? This is due to you having only one IP listening for ssl traffic. It's a fundamental limitation of ssl when not used with SNI. To fix it, you'll need to either use more IPs and listen explicitly on different ones for different virtual hosts, or use SNI, or use a wildcard (or UCC/SaN) certificate. The first fix is by far the most common for people in your situation. HTH, Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Wed Jul 10 08:16:50 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 10 Jul 2013 12:16:50 +0400 Subject: Wrong server used in SSL request In-Reply-To: References: Message-ID: <7F95FE79-C2EC-45DD-B646-C3548110F91A@sysoev.ru> On Jul 10, 2013, at 11:40 , Jannik Zschiesche wrote: > Hi everyone, > > I have a rather strange issue. > > I have a server with 3 configured urls: > > example.com (+ ssl) > shop.example.com (+ ssl) > example2.com (- ssl) > > If I now open https://example2.com the server of https://shop.example.com is used. http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers -- Igor Sysoev http://nginx.com/services.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From hello at apfelbox.net Wed Jul 10 08:42:44 2013 From: hello at apfelbox.net (Jannik Zschiesche) Date: Wed, 10 Jul 2013 10:42:44 +0200 Subject: Wrong server used in SSL request In-Reply-To: References: Message-ID: Am Mittwoch, 10. Juli 2013 um 09:54 schrieb Jonathan Matthews: > This is due to you having only one IP listening for ssl traffic. It's a fundamental limitation of ssl when not used with SNI. 
> To fix it, you'll need to either use more IPs and listen explicitly on different ones for different virtual hosts, or use SNI, or use a wildcard (or UCC/SaN) certificate. The first fix is by far the most common for people in your situation. > HTH, > Jonathan Hi, thank you both. Actually, I have SNI enabled. https://example.com and https://shop.example.com both work correctly (so SNI works). The issue is with the nonexistent SSL server for example2.com. It seems that if an SSL server for a domain is not configured, another server is used (instead of error-ing out). Is this correct? -- Cheers Jannik Zschiesche -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Jul 10 11:19:09 2013 From: r at roze.lv (Reinis Rozitis) Date: Wed, 10 Jul 2013 14:19:09 +0300 Subject: Wrong server used in SSL request In-Reply-To: References: Message-ID: <3C636D89862E40D5B5FBF31AC1B7B33E@MasterPC> > It seems that if an SSL server for a domain is not configured, another > server is used (instead of error-ing out). Is this correct? Yes, the default/first server. The "error-ing out" (with the option to proceed anyway) usually happens on the client side/browser, which checks whether the host name matches the server SSL certificate. rr From r at roze.lv Wed Jul 10 11:22:27 2013 From: r at roze.lv (Reinis Rozitis) Date: Wed, 10 Jul 2013 14:22:27 +0300 Subject: Problem with VPN IP address and Nginx In-Reply-To: <59b39f8238b48d8e99675818d25dd19b.NginxMailingListEnglish@forum.nginx.org> References: <59b39f8238b48d8e99675818d25dd19b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <380934DFC5B0448FAA41D6CB6B517673@MasterPC> > the IP detected by Nginx is my real IP address, not the IP address of my > VPN, so I cannot use the allow/deny functions correctly. > Could you please help me? Your nginx host's IP is probably not routed via the VPN tunnel, so you access it directly and it sees your real IP - check the routes on your client box. There is nothing you can change in nginx.conf for that. rr From mdounin at mdounin.ru Wed Jul 10 11:53:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Jul 2013 15:53:31 +0400 Subject: nginx cache loader process In-Reply-To: <483697e03cc8e7c419125243aee103ae.NginxMailingListEnglish@forum.nginx.org> References: <53D87DB8-8106-4505-BDA5-A82EFB69B4E7@sysoev.ru> <483697e03cc8e7c419125243aee103ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130710115330.GA66479@mdounin.ru> Hello! On Tue, Jul 09, 2013 at 07:50:32PM -0400, badtzhou wrote: > We are running nginx 1.2.9. What will happen if the cache loader is running, the > file on disk hasn't been loaded into the cache zone yet, and someone tries to > access the same file? Will it cause any issue? How will it affect the cache > loader process. If the cache isn't yet loaded, nginx will try to check if a cache file is there by looking at the disk directly from a worker process. That is, it will work without problems and will use cached responses, but will be slightly less effective. > If I don't want any file on the disk to be loaded, can I simply kill the > cache loader process? Will that cause any problem? It's not a good idea. In the worst case, if you kill it with kill -9 at the wrong time, the shared memory zone will be corrupted, resulting in a completely non-working nginx. If the cache loader causes too high a load on your server, you may want to tune its settings to make it less aggressive. See http://nginx.org/r/proxy_cache_path for details. 
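For example, a less aggressive loader can be configured roughly like this (the path, zone name and values below are illustrative only and not from the original reply; the loader_* parameters are described at the link above):

    proxy_cache_path /path/to/cache keys_zone=my_cache:10m
                     loader_files=50 loader_sleep=200ms loader_threshold=100ms;

Fewer files per iteration (loader_files), longer pauses between iterations (loader_sleep) and a shorter per-iteration time limit (loader_threshold) all reduce the load the cache loader puts on the disk while it runs. 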
-- Maxim Dounin http://nginx.org/en/donation.html From lists at ruby-forum.com Wed Jul 10 13:30:43 2013 From: lists at ruby-forum.com (Yunior Miguel A.) Date: Wed, 10 Jul 2013 15:30:43 +0200 Subject: update NGINX to v 1.4.1 Message-ID: <486a7f4897316cddb903f9bf2d686c53@ruby-forum.com> I have a server with some Ruby applications on nginx 1.1.19 and I want to update to version 1.4.1, but I can't update using apt-get because the server has no internet access. I can download the package on another PC. How can I do this? -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Jul 10 14:14:40 2013 From: nginx-forum at nginx.us (Sylvia) Date: Wed, 10 Jul 2013 10:14:40 -0400 Subject: update NGINX to v 1.4.1 In-Reply-To: <486a7f4897316cddb903f9bf2d686c53@ruby-forum.com> References: <486a7f4897316cddb903f9bf2d686c53@ruby-forum.com> Message-ID: <6730ea56674c89da33d3cda5cfa9c7b4.NginxMailingListEnglish@forum.nginx.org> Download it, put the file onto the target machine, and install/upgrade with dpkg -i Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240720,240725#msg-240725 From nginx-forum at nginx.us Wed Jul 10 14:25:59 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 10 Jul 2013 10:25:59 -0400 Subject: update NGINX to v 1.4.1 In-Reply-To: <486a7f4897316cddb903f9bf2d686c53@ruby-forum.com> References: <486a7f4897316cddb903f9bf2d686c53@ruby-forum.com> Message-ID: <5ef671d54aba5c2140a7f9d1f6c87c62.NginxMailingListEnglish@forum.nginx.org> 1. aptitude download nginx, or browse to the repo with your browser and download manually, or create a deb package via checkinstall from the nginx sources 2. transfer it to the target machine 3. dpkg -i Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240720,240726#msg-240726 From nginx-forum at nginx.us Wed Jul 10 14:40:49 2013 From: nginx-forum at nginx.us (FSC) Date: Wed, 10 Jul 2013 10:40:49 -0400 Subject: SSL + Large file uploads (3GB+) Message-ID: <81e96a46e61f63ba0dd35bd660da617b.NginxMailingListEnglish@forum.nginx.org> Hello, I'm having problems uploading large files (3GB) to a server with SSL enabled. I'm using nginx 1.4.2 + passenger. Files around 1GB large work fine. My /tmp mount is large enough to handle files of that size. I guess I'm missing something. I read about an SSLRenegBufferSize parameter for the Apache server - is there a similar directive for nginx that I'm missing? I appreciate any hints. My nginx configuration: nginx version: nginx/1.4.1 built by gcc 4.4.5 (Debian 4.4.5-8) TLS SNI support enabled configure arguments: --prefix=/opt/nginx --with-http_ssl_module --with-http_gzip_static_module --with-cc-opt=-Wno-error --add-module=/usr/local/rvm/gems/ruby-1.9.3-p392/gems/passenger-3.0.19/ext/nginx Here is the curl output that I get when trying to post a 3GB+ file: curl -v -X POST -d @BIG_TEST_ARCHIVE_3.zip https://shared.com * About to connect() to shared.com port 443 (#0) * Trying 199.58.85.103... 
* connected * Connected to shared.com (199.58.85.103) port 443 (#0) * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server key exchange (12): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using DHE-RSA-AES256-SHA * Server certificate: * subject: OU=Domain Control Validated; CN=shared.com * start date: 2013-05-06 12:15:19 GMT * expire date: 2016-05-06 12:15:19 GMT * subjectAltName: shared.com matched * issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certificates.godaddy.com/repository; CN=Go Daddy Secure Certification Authority; serialNumber=07969287 * SSL certificate verify ok. > POST / HTTP/1.1 > User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8x zlib/1.2.5 > Host: shared.com > Accept: */* > Content-Length: 1590530720 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > < HTTP/1.1 100 Continue * SSL read: error:00000000:lib(0):func(0):reason(0), errno 60 * Closing connection #0 curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 60 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240727,240727#msg-240727 From mdounin at mdounin.ru Wed Jul 10 15:25:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Jul 2013 19:25:56 +0400 Subject: SSL + Large file uploads (3GB+) In-Reply-To: <81e96a46e61f63ba0dd35bd660da617b.NginxMailingListEnglish@forum.nginx.org> References: <81e96a46e61f63ba0dd35bd660da617b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130710152555.GG66479@mdounin.ru> Hello! On Wed, Jul 10, 2013 at 10:40:49AM -0400, FSC wrote: > Hello, > > I'm having problems uploading large files (3GB) to a server with SSL > enabled. I'm using nginx 1.4.2 + passenger. Files around 1GB large work > fine. > > My /tmp mount is large enough to handle files of that size. I guess I'm > missing something. I read about a SSLRenegBufferSize parameter for the > apache server - is there a similar directive for nginx that I'm missing? I > appreciate any hints. > > My nginx configuration: > nginx version: nginx/1.4.1 > built by gcc 4.4.5 (Debian 4.4.5-8) > TLS SNI support enabled > configure arguments: --prefix=/opt/nginx --with-http_ssl_module > --with-http_gzip_static_module --with-cc-opt=-Wno-error > --add-module=/usr/local/rvm/gems/ruby-1.9.3-p392/gems/passenger-3.0.19/ext/nginx > > > Here is the curl output that I get when trying to post a 3GB+ file: > > curl -v -X POST -d @BIG_TEST_ARCHIVE_3.zip https://shared.com [...] > > POST / HTTP/1.1 > > User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 > OpenSSL/0.9.8x zlib/1.2.5 > > Host: shared.com > > Accept: */* > > Content-Length: 1590530720 > > Content-Type: application/x-www-form-urlencoded > > Expect: 100-continue > > > < HTTP/1.1 100 Continue > > * SSL read: error:00000000:lib(0):func(0):reason(0), errno 60 > * Closing connection #0 > curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 60 Provided curl output suggests the problem is in curl, note "Content-Length: 1590530720". 
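(1590530720 bytes is roughly 1.5 GB, well short of a 3 GB+ file, so the request body was already wrong before it ever reached nginx.) As a hedged aside, not from the original reply: anyone testing uploads of this size also needs the nginx side to allow large request bodies, otherwise the symptom is a 413 error rather than a broken SSL read. A minimal illustration, with assumed values:

    # inside the relevant server{} block; values are illustrative only
    client_max_body_size 4g;     # default is 1m; 0 disables the body size check
    client_body_timeout 300s;    # more time allowed between successive reads of the body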
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Jul 10 17:59:34 2013 From: nginx-forum at nginx.us (wolfy) Date: Wed, 10 Jul 2013 13:59:34 -0400 Subject: Problem with VPN IP address and Nginx In-Reply-To: <380934DFC5B0448FAA41D6CB6B517673@MasterPC> References: <380934DFC5B0448FAA41D6CB6B517673@MasterPC> Message-ID: <8ec218747e742a3fc81f0f32894e952e.NginxMailingListEnglish@forum.nginx.org> I try to understand why with nginx my IP address is not that the IP address of my VPN, then with Apache, or any website to display my IP it works correctly. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240709,240737#msg-240737 From nginx-forum at nginx.us Wed Jul 10 21:15:05 2013 From: nginx-forum at nginx.us (pravinmishra88) Date: Wed, 10 Jul 2013 17:15:05 -0400 Subject: Deploying sinatra app on rails app sub uri using( unicorn and nginx) Message-ID: <0012c4f7f1dced0bf1e2fc996ad6f068.NginxMailingListEnglish@forum.nginx.org> I have rails app running on unicorn+nginx. below is the nginx.conf and unicorn.rb configuration. nginx.conf upstream unicorn { server unix:/tmp/unicorn.todo.sock fail_timeout=0; } server{ listen 80 default deferred; #server_name localhost; root /var/www/demo/public; try_files $uri/index.html $uri @unicorn; location @unicorn { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://unicorn; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; } unicorn.rb working_directory "/var/www/demo" pid "/var/www/demo/tmp/pids/unicorn.pid" stderr_path "/var/www/demo/unicorn.log" stdout_path "/var/www/demo/unicorn.log" listen "/tmp/unicorn.todo.sock" worker_processes 2 timeout 30 It's working fine for me. Now i wanted to deploy another small sinatra app rails app sub uri(localhost:3000/sinatraapp). DETAILS: As we know rails app running on localhost:3000, Now i am trying to configure sinatra app on localhost:3000/sinatraapp. Please suggest me, How will i configure above requirement. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240739,240739#msg-240739 From francis at daoine.org Wed Jul 10 21:21:04 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jul 2013 22:21:04 +0100 Subject: Possible to overwrite a fastcgi_param "later", once a location block has already been closed? In-Reply-To: <51DCA71C.6070908@indietorrent.org> References: <51DC771B.8020600@indietorrent.org> <20130709214755.GE11600@craic.sysops.org> <51DCA71C.6070908@indietorrent.org> Message-ID: <20130710212104.GF11600@craic.sysops.org> On Tue, Jul 09, 2013 at 08:13:16PM -0400, Ben Johnson wrote: > On 7/9/2013 5:47 PM, Francis Daly wrote: Hi there, > > That line must be within the "location @php" block, in order for it to > > do what you want. > > Okay; this means that I would need to modify the ISPConfig virtual host > template for nginx. I would love to avoid that, if at all possible, for > compatibility with future ISPConfig releases. You could just modify /etc/nginx/fastcgi_params, but that may not suit depending on other considerations. > > (Otherwise, you could try completely hijacking the config by using > No concerns in this regard; I administer the server. But it seems like > taking that measure would defeat the purpose of using ISPConfig. Correct. The usual nginx rules of one request is handled in one location, and inheritance is by replacement or not at all, mean this is so. > I see. 
Presumably, if I set the same fastcgi_param multiple times in > different contexts, any content in which inheritance applies will > overwrite any previously-defined value with the new value. Is this > presumption correct? If you set any fastcgi_param in one context, no fastcgi_param is inherited into that context. > > Future nginx may stop sending all repeated fastcgi_params. If you change > > fastcgi server, the one it pays attention to may change. So your testing > > should be repeated after every upgrade. > Perhaps it is prudent to proceed under this proviso. On the nginx side, if it does ever stop sending multiple values, presumably it will be clear which single value it will send. At that point, it won't matter what your fastcgi server does with multiple values. So it's a relatively straightforward test after each upgrade. > Maybe I can find a > way to make this work, using ISPConfig's configuration template > "merging" functionality, to make this work. The good news is that nginx doesn't care how the config file is created :-) You can add your "include" directive, or you can use an external macro function to get the right lines into the right location. Possibly this tool has a facility like that. > Thanks for all your help, Francis. Very thorough, as always. Cheers; good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jul 10 21:25:46 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jul 2013 22:25:46 +0100 Subject: Disable open_file_cache for a specific location In-Reply-To: References: <20130708220510.GD11600@craic.sysops.org> Message-ID: <20130710212546.GG11600@craic.sysops.org> On Tue, Jul 09, 2013 at 09:11:56AM -0400, Peleke wrote: Hi there, > Yes, you are right. Such a small change! Yes. nginx and wordpress do tend to work well together, so "typical" fixes are usually straightforward. > Thanks a lot and sorry for taking such a long time. Next time I know how to > post a proper question. You're welcome; no worries. The more specific a problem report is, the more chance any mailing list has of finding the answer. All the best, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jul 10 22:50:39 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jul 2013 23:50:39 +0100 Subject: migrating Topincs to nginx In-Reply-To: References: Message-ID: <20130710225039.GH11600@craic.sysops.org> On Fri, Jul 05, 2013 at 03:13:51PM +0200, Piotr Kopszak wrote: Hi there, I have not tested any of this, so take it as "may be interesting to consider" rather than "this is the recipe to use". > "My main objective in the Apache and Topincs integration was that it > is possible to occupy any URL space below a domain without the need > for a general 'root', e.g. everything under /topincs. Plus it should > be possible to run different stores under different Topincs versions, > just in case. I confess I don't know what those words mean, in terms of anything that nginx should do. I *think* it says "there can be no configured prefix"; but it might mean "the prefix can be set by the administrator to whatever they want". nginx works in terms of locations, which are "parts of the url hierarchy". It is generally good if these are segregated by url prefix. > So Apache does two things for Topincs: > > 1. The rewrite rules basically cut off the store path prefix (e.g. > /trial/movies) and pass the result on to > TOPINCS_HOME/docroot/.topincs. > 2. Based on the path prefix it maps the URL to a store (database). 
> This is done by setting an Apache environment variable in the > configuration. This variable is read in .topincs only. > > So once you manage to provide .topincs with the above, you are set." ( I suspect that these words make perfect sense to someone who has studied the topincs jargon. That's not me. But I guess that if "an Apache environment variable" means specifically that, then it may not work outside of apache, without some code changes. For nginx the questions are, approximately, "what url are you requesting", and "what do you want nginx to do with that request". If you can answer those questions, you'll probably have a better chance of getting a working configuration. > /etc/apache2/sites-enabled/000-default contains: apache is very different from nginx. It allows configuration based on the url (location) and also the filesystem (directory, file). It combines configuration from different blocks to determine what happens with one request. It handles php internally. nginx allows configuration based on the url (location). One request is handled in one location. Unless the config you want is active in that one location, is does not apply. php is entirely external, typically done by talking to a fastcgi server. So... > This is all fairly general, and pretty much doesn't apply for nginx (unless the defaults are insufficient). > If you can find the matching url, you can put this in a location{}. > DirectoryIndex index.php > AddType 'text/html; charset=UTF-8' .html > DefaultType application/x-httpd-php Does that mean "urls ending in .html are served from the filesystem; everything else is handled by php"? Or is it "urls ending in a specific list of things are served from the filesystem; everything else is handled by php"? Or something else? That might make a difference for the nginx config. > php_value include_path > "/home/apollo/topincs/php:/home/apollo/topincs/vendor/php" > php_value default_charset "UTF-8" Those all look like php configuration settings. Put them in your php config file, not your nginx one. > This looks like another location. You might want to nest this inside the previous one; or you might want to repeat what is in the previous one, here. What configuration does topincs expect will apply for matching urls? > > SetEnv TOPINCS_STORE mercury What uses that SetEnv result? If it is the php application, maybe it should be a fastcgi_param. If it is something else, then the details might matter. > RewriteRule ^/mercury/([3-9]\.[0-9]\.[0-9].*/(.core-topics|css|images|js|vendor|fonts).*)$ > /mercury/$1 [PT,E=TOPINCS_STORE:mercury] > RewriteRule ^/mercury((\.|/).*)$ /mercury/.topincs?request=$1 > [PT,L,QSA,E=TOPINCS_STORE:mercury] > Alias /mercury "/home/apollo/topincs/docroot" And what do those lines do? "/mercury" is your prefix. The second one seems to say that /mercury.anything and /mercury/anything are handled by the php script at /home/apollo/topincs/docroot/.topincs with a query string of request=.anything or request=/anything. (Or maybe it also appends the original query string.) I'm not sure about the first one. /mercury/version-number/some-prefixes are... served from the file system? Or sent through to the second one where they match /mercury/anything? I'm mostly guessing here, because I don't speak (much) apache. > --------------------------------------------------------------------- > user www-data; > worker_processes 4; > pid /var/run/nginx.pid; > events { > worker_connections 768; > } > http { For initial testing, keep it simple. 
Only add directives where you know why they are there. And run in debug mode. > include /etc/nginx/mime.types; That line is useful, if you are serving something from the filesystem. > error_log /var/log/nginx/error.log debug; And that one, because of the debug mode. > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; I suggest you include just the one named file that you are testing with, to avoid any confusion. Your test system is only intended to run a single service, so keep it that way. > include fastcgi_params; That might be useful; but it is probably more useful within the server{} or location{} that you care about. > } > server { > listen 80; > root /home/apollo/topincs/docroot; > index index.php; Here it starts getting messy. Rather than you trying to translate from apache to nginx, could you try to translate from apache to a word-description of how some specific urls should be handled? After that, it may be obvious what the nginx config should be. So, when you browse your apache-based topincs server, choose a few urls that you access. You'll see them in your browser location bar, or in your server access log file. When you "curl -i" those urls, what do you see? An indication that it was served from the filesystem, or from php, or something else? When you know that, you'll be able to define the location{} blocks that you will need in nginx. And after *that*, you'll have a better chance of putting in the rest of the configuration. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jul 11 00:21:31 2013 From: nginx-forum at nginx.us (kenperkins) Date: Wed, 10 Jul 2013 20:21:31 -0400 Subject: Can't get SNI to work with v1.4.1 Message-ID: I'm sure I'm doing something wrong, but so far as I can tell, I've setup my vhosts according to the nginx docs, but I'm still not getting the SNI setup to work: https://gist.github.com/kenperkins/cdecd152d0384bd40cb7 > nginx -V nginx version: nginx/1.4.1 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.4.1/debian/modules/ngx_http_substitutions_filter_module Ubuntu 12.04 LTS Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240746,240746#msg-240746 From nginx-forum at nginx.us Thu Jul 11 07:14:39 2013 From: nginx-forum at nginx.us (sdip) Date: Thu, 11 Jul 2013 03:14:39 -0400 Subject: Help Needed to configure NGINX reverse proxy with URL rewrite. 
Message-ID: <2cb2395ab7c7d24f01c9598344d5358d.NginxMailingListEnglish@forum.nginx.org> Hi, I am new to NGINX. I am looking for a reverse proxy solution which can rewrite URLs as well. It seems NGINX might be able to do this, thus asking for your help. Our requirement is to publish an external URL for one of our internal servers and hide the internal URL from those external users. For example, external users will access trackme.company.com/tracker --> NGINX proxy --> servera.company.com/tracker/home.seam. I tried with IIS ARR but could not get it to work. Just wondering if NGINX can do the same seamlessly. Sandy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240748,240748#msg-240748 From nginx-forum at nginx.us Thu Jul 11 07:16:07 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 11 Jul 2013 03:16:07 -0400 Subject: Help Needed to configure NGINX reverse proxy with URL rewrite. In-Reply-To: <2cb2395ab7c7d24f01c9598344d5358d.NginxMailingListEnglish@forum.nginx.org> References: <2cb2395ab7c7d24f01c9598344d5358d.NginxMailingListEnglish@forum.nginx.org> Message-ID: yes, you can! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240748,240749#msg-240749 
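For the archive, a minimal untested sketch of one way to do it (the host names and the /tracker/home.seam entry point are taken from the question above; everything else is an assumption):

    location /tracker {
        # map the published /tracker URL onto the application's real entry point
        rewrite ^/tracker/?$ /tracker/home.seam break;
        # proxy everything under /tracker to the internal server;
        # the internal host name is never shown to the client in the URL
        proxy_pass http://servera.company.com;
        proxy_set_header Host servera.company.com;
    }

Redirects and absolute links emitted by the backend may still leak the internal name; proxy_redirect and, if necessary, response body filtering are the usual follow-ups. 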
When the rope asked, "how a lot of us from the highway running around immediately do you consider have a nice $100 expense with their credit card, bank and even handbag.Ins I just responded,Hermes Kelly Handbags "probably not too many.Ins He was quoted saying, "my point really." The real key required his particular finances outside this jean pocket and additionally ripped of computer your $100 bill and presented with it again if you ask me in addition to said,Discount Prada Handbag "put this valuable on your bottom line, remains there along with know that you've got extra cash in the bank when compared with lots of people execute, and therefore whenever a smaller emergency shows up, you realize you'll be all right.Lanvin Pumps Centimeter So I gratefully location the cost in my purse and additionally departed. The benches and bullpens emptied in the top of the ninth after Carpenter (9-9) struck out Nyjer Morgan. The two had words and Morgan headed toward the mound before being restrained by teammate Prince Fielder. No punches were thrown and Morgan was ejected. A lot of manufacturers introduce lightweight versions of current manufacturers in the endeavor to assist runners to transgress to minimalistic footwear. nike free run cheap sale. They incorporate heel assist as nicely as ample mid foot assist with light-weight model sneakers and runners can steadily right negative behavior prior to heading full-on minimalistic. Some of the models pointed out right here are great for runners browsing for a minimalistic shoe and some will offer arch and medial help, whereas other people have all the qualities of a racing flat. Within the Fashion Designer Shoe industry, brand is just as crucial as it truly is in a element from the Fashion Industry. Be this element a Fashion Designer Brand or maybe lesser recognized manufacturers. Within the Fashion Designer Shoe industry, brand is just as critical as it's at any part of the Fashion Industry. Be this component a Fashion Designer Brand or perhaps lesser recognized brands. Ms de la Huerta has given paparazzi a lot more than they have bargained for in the previous - most notably in January when she was snapped tripping and unceremoniously exposing a breast right after becoming banned from a Golden Globes soon after get together for being far too drunk. The notorious celebration woman is has been noticed out with Jack Nicholson and was photographed with a mystery older guy in New York in July. nike basketball shoes outlet. Yes !,Bottega Veneta Tote you will be straight into comfort and ease, longevity in addition to practicality nevertheless you basically require exclusive things which howl designer.Bvlgari Engagement Rings Immediately, many people use Nike shoes or boots. Do you know that Wellington was pointed in the 1850's by just a few People today where they started off generating boots or shoes along with silicone concentrated models like waters pots. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240750,240750#msg-240750 From nginx-forum at nginx.us Thu Jul 11 09:26:00 2013 From: nginx-forum at nginx.us (gnginx) Date: Thu, 11 Jul 2013 05:26:00 -0400 Subject: Nike Air Huarache Free 2013 Shoes Blue Cool Grey Shoes Message-ID: Runners are cautioned to make the changeover a gradual 1 as they will have significantly a lot less support from too much heel assistance in the conventional as nicely as corrective operating footwear. 
The American Marketing Association defines a brand being a "name, time period, design, symbol, or some other characteristic that identifies a person seller's very good or company as distinct from individuals of other sellers. [url=http://www.nikeshoesworld.co.uk]nike shoes uk[/url]. The appropriate term for brand is trademark. A brand may well identify one product, a household of goods, or all goods of that seller. If utilized to the business being a entire, the preferred term is industry identify." The American Marketing Association defines a brand as a "brand, name, design and style, image, or every other function that identifies one seller's excellent or services as distinct from all those of other sellers. The legitimate name for brand is trademark. A brand may well recognize a person item, a family of objects, or all items of that seller. If employed with the business as being a entire, the preferred name is commerce identify." Final month she pleaded responsible to a single count of second-degree harassment right after a bar brawl with fellow actress Samantha Swetra and was ordered to sober up with a 10-twelve week alcohol therapy programme. Studies have continually found that hearing tunes improves feelings, as well as the most recent conclusions declare that new music truly provides for a mental improve. [url=http://www.nikeshoesworld.co.uk]nike shoes uk store[/url]. Hearing tunes for the duration of work out enhances most people`s functionality in a number of techniques. Songs offers an stimulating flow which syncs using the movements for the duration of work out, as well as which assists push more. In addition, it throws coming from event. Ipod and iphone has assisted numerous activities buffs to enjoy his or her regimen additional completely, however ipod and iphone can offer a number of authentic guidance. When the rope asked, "how a lot of us from the highway running around immediately do you consider have a nice $100 expense with their credit card, bank and even handbag.Ins I just responded,Hermes Kelly Handbags "probably not too many.Ins He was quoted saying, "my point really." The real key required his particular finances outside this jean pocket and additionally ripped of computer your $100 bill and presented with it again if you ask me in addition to said,Discount Prada Handbag "put this valuable on your bottom line, remains there along with know that you've got extra cash in the bank when compared with lots of people execute, and therefore whenever a smaller emergency shows up, you realize you'll be all right.Lanvin Pumps Centimeter So I gratefully location the cost in my purse and additionally departed. The benches and bullpens emptied in the top of the ninth after Carpenter (9-9) struck out Nyjer Morgan. The two had words and Morgan headed toward the mound before being restrained by teammate Prince Fielder. No punches were thrown and Morgan was ejected. A lot of manufacturers introduce lightweight versions of current manufacturers in the endeavor to assist runners to transgress to minimalistic footwear. [url=http://www.nikeshoesworld.co.uk/nike-free-run-men-black-red-p-391.html]nike free run cheap sale[/url]. They incorporate heel assist as nicely as ample mid foot assist with light-weight model sneakers and runners can steadily right negative behavior prior to heading full-on minimalistic. 
From nginx-forum at nginx.us Thu Jul 11 09:45:15 2013 From: nginx-forum at nginx.us (GregYoung) Date: Thu, 11 Jul 2013 05:45:15 -0400 Subject: LDAP + Header Rewrite Message-ID: <99fa5fb3ef9366746aafb82c0f08ae66.NginxMailingListEnglish@forum.nginx.org> We have been looking around for a while on this one without luck so I figured I would see if anyone here might have an idea. Nginx seems to be able to do everything if you can just figure out how :) We are trying to use nginx as a reverse proxy / trusted intermediary for authentication. We would like to have nginx authenticate via LDAP for us and then add a header to the request representing the authenticated user and the groups they belong to eg: "TrustedIntermediaryUserInformation: greg; admins, all, users" We have LDAP authentication working and we have nginx working as a pass through. We are just unsure of how to get the information from the LDAP module to add the header. Thanks in advance, Greg Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240752,240752#msg-240752 From someukdeveloper at gmail.com Thu Jul 11 11:25:35 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Thu, 11 Jul 2013 12:25:35 +0100 Subject: HSTS and X-Frame-Options Message-ID: <51DE962F.7050806@googlemail.com> Hi, I've just enabled HSTS and X-Frame Options in my nginx configuration (1.2.9) and was wondering if I have done it correctly. Currently my site has 4 server blocks. 
One to redirect domain.com to https://www.domain.com One to redirect www.domain.com to https://www.domain.com One to redirect https://domain.com to https://www.domain.com And finally the main one for https://www.domain.com I've added the following two lines to the final server block: |add_header Strict-Transport-Security max-age=63072000;| |add_header X-Frame-Options DENY; Do I need to add them to any of the other server blocks or is my current configuration correct? If there are any other improvements to be made I'd be more than happy to hear about them. Thanks. | From nginx-forum at nginx.us Thu Jul 11 11:27:14 2013 From: nginx-forum at nginx.us (parulsood85) Date: Thu, 11 Jul 2013 07:27:14 -0400 Subject: https redirect going to infinite loop Message-ID: <4d4657d64a37a45c8e81a3232f2aae8a.NginxMailingListEnglish@forum.nginx.org> Hi, I am new to nginx. I am trying to redirect all request to https. This is the redirect i am using rewrite ^/(.*) https://example.com permanent; somehow when I hit http://example.com on browser it goes to infinite loop. Note: ssl is enabled on the load balancer Please help! Regards, Parul Sood Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240755,240755#msg-240755 From nginx-forum at nginx.us Thu Jul 11 14:28:04 2013 From: nginx-forum at nginx.us (hawkins) Date: Thu, 11 Jul 2013 10:28:04 -0400 Subject: Connection reset by peer and other problems In-Reply-To: <12a2cf2a344e2ea02f1d92e7c78133f0.NginxMailingListEnglish@forum.nginx.org> References: <12a2cf2a344e2ea02f1d92e7c78133f0.NginxMailingListEnglish@forum.nginx.org> Message-ID: This was solved. The problem was related with New Relic module. 2 weeks to find the problem. :( Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240477,240758#msg-240758 From someukdeveloper at gmail.com Thu Jul 11 14:43:01 2013 From: someukdeveloper at gmail.com (Some Developer) Date: Thu, 11 Jul 2013 15:43:01 +0100 Subject: HSTS and X-Frame-Options In-Reply-To: <51DE962F.7050806@googlemail.com> References: <51DE962F.7050806@googlemail.com> Message-ID: <51DEC475.1090603@googlemail.com> On 11/07/13 12:25, Some Developer wrote: > Hi, > > I've just enabled HSTS and X-Frame Options in my nginx configuration > (1.2.9) and was wondering if I have done it correctly. > > Currently my site has 4 server blocks. > > One to redirect domain.com to https://www.domain.com > > One to redirect www.domain.com to https://www.domain.com > > One to redirect https://domain.com to https://www.domain.com > > And finally the main one for https://www.domain.com > > I've added the following two lines to the final server block: > > |add_header Strict-Transport-Security max-age=63072000;| > > |add_header X-Frame-Options DENY; > > Do I need to add them to any of the other server blocks or is my current > configuration > correct? If there are any other improvements to be made I'd be more than > happy to hear about them. > > Thanks. Hmm seems like my copy and paste job screwed with the text. These are the actual lines: add_header X-Frame-Options DENY; add_header Strict-Transport-Security max-age=63072000; From lists at ruby-forum.com Thu Jul 11 16:08:53 2013 From: lists at ruby-forum.com (Yunior Miguel A.) 
Date: Thu, 11 Jul 2013 18:08:53 +0200 Subject: configuration problem whit ngingx and unicorn Message-ID: Hello again: I am working to configure a nginx-unicorn server but when I am finish to configuring that server and try to enter to de web site, the welcome to nginx default page open, and not my web site, this is my configuration files: #NGINX redmine: upstream redmine { # fail_timeout=0 means we always retry an upstream even if it failed # to return a good HTTP response (in case the Unicorn master nukes a single worker for timing out). # server unix:/tmp/myapplication.sock fail_timeout=0; server unix:/var/www/redmine/tmp/sockets/unicorn.sock fail_timeout=0; } server { listen 80; # default; client_max_body_size 4G; server_name redmine.unicorn.com.ve; root /var/www/redmine/public; keepalive_timeout 5; location / { access_log off; include proxy_params; proxy_redirect off; if (-f $request_filename) { access_log off; expires max; break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } access_log off; include proxy_params; proxy_redirect off; if (-f $request_filename) { access_log off; expires max; break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { proxy_pass http://redmine; break; } } # Rails error pages error_page 500 502 503 504 /500.html; location = /500.html { root /home/service/apps/redmine/public; } } #UNICORNS redmine.rb worker_processes 1 working_directory "/var/www/redmine/public" # This loads the application in the master process before forking # worker processes # Read more about it here: # http://unicorn.bogomips.org/Unicorn/Configurator.html preload_app true timeout 60 # This is where we specify the socket. # We will point the upstream Nginx module to this socket later on listen "/var/www/redmine/tmp/sockets/unicorn.sock", :backlog => 64 listen 8080, :tcp_nopush => true pid "/var/www/redmine/tmp/pids/unicorn.pid" # Set the path of the log files inside the log folder of the testapp stderr_path "/var/www/redmine/log/unicorn.stderr.log" stdout_path "/var/www/redmine/log/unicorn.stdout.log" before_fork do |server, worker| # This option works in together with preload_app true setting # What is does is prevent the master process from holding # the database connection defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect! end and the log file I, [2013-07-11T11:14:56.151485 #5077] INFO -- : Refreshing Gem list /var/lib/gems/1.9.1/gems/ruby-debug-ide19-0.4.12/lib/ruby-debug/command.rb:32: warning: already initialized constant DEF_OPTIONS /var/lib/gems/1.9.1/gems/activesupport-3.2.12/lib/active_support/dependencies.rb:251:in `block in require': iconv will be deprecated in the future, use String#encode instead. 
I, [2013-07-11T11:15:09.579343 #5077] INFO -- : unlinking existing socket=/var/www/redmine/tmp/sockets/unicorn.sock I, [2013-07-11T11:15:09.579724 #5077] INFO -- : listening on addr=/var/www/redmine/tmp/sockets/unicorn.sock fd=10 I, [2013-07-11T11:15:09.580282 #5077] INFO -- : listening on addr=0.0.0.0:8080 fd=11 I, [2013-07-11T11:15:09.585867 #5077] INFO -- : master process ready I, [2013-07-11T11:15:09.606967 #5086] INFO -- : worker=0 ready some times the log file say: I, [2013-07-11T12:01:26.607025 #5518] INFO -- : Refreshing Gem list /var/lib/gems/1.9.1/gems/ruby-debug-ide19-0.4.12/lib/ruby-debug/command.rb:32: warning: already initialized constant DEF_OPTIONS /var/lib/gems/1.9.1/gems/activesupport-3.2.12/lib/active_support/dependencies.rb:251:in `block in require': iconv will be deprecated in the future, use String#encode instead. I, [2013-07-11T12:01:40.023799 #5518] INFO -- : listening on addr=/var/www/redmine/tmp/sockets/unicorn.sock fd=10 E, [2013-07-11T12:01:40.024537 #5518] ERROR -- : adding listener failed addr=0.0.0.0:8080 (in use) E, [2013-07-11T12:01:40.024630 #5518] ERROR -- : retrying in 0.5 seconds (4 tries left) E, [2013-07-11T12:01:40.525005 #5518] ERROR -- : adding listener failed addr=0.0.0.0:8080 (in use) E, [2013-07-11T12:01:40.525160 #5518] ERROR -- : retrying in 0.5 seconds (3 tries left) E, [2013-07-11T12:01:41.025482 #5518] ERROR -- : adding listener failed addr=0.0.0.0:8080 (in use) E, [2013-07-11T12:01:41.025606 #5518] ERROR -- : retrying in 0.5 seconds (2 tries left) E, [2013-07-11T12:01:41.526000 #5518] ERROR -- : adding listener failed addr=0.0.0.0:8080 (in use) E, [2013-07-11T12:01:41.526330 #5518] ERROR -- : retrying in 0.5 seconds (1 tries left) E, [2013-07-11T12:01:42.033350 #5518] ERROR -- : adding listener failed addr=0.0.0.0:8080 (in use) E, [2013-07-11T12:01:42.033491 #5518] ERROR -- : retrying in 0.5 seconds (0 tries left) E, [2013-07-11T12:01:42.533870 #5518] ERROR -- : adding listener failed addr=0.0.0.0:8080 (in use) /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/socket_helper.rb:147:in `initialize': Address already in use - bind(2) (Errno::EADDRINUSE) from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/socket_helper.rb:147:in `new' from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/socket_helper.rb:147:in `bind_listen' from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:229:in `listen' from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:773:in `block in bind_new_listeners!' from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:773:in `each' from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:773:in `bind_new_listeners!' from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:141:in `start' from /var/lib/gems/1.9.1/gems/unicorn-4.6.3/bin/unicorn_rails:209:in `' from /usr/local/bin/unicorn_rails:19:in `load' from /usr/local/bin/unicorn_rails:19:in `
' Please help me. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Jul 11 16:43:06 2013 From: nginx-forum at nginx.us (Reddirt) Date: Thu, 11 Jul 2013 12:43:06 -0400 Subject: upstream timed out Message-ID: <4285e8ce12e40616e1bcfa1d3763af54.NginxMailingListEnglish@forum.nginx.org> I am a newbie. I'm running Ubuntu, Nginx and 5 Thin servers - that serve up a Ruby on Rails app. The system hangs about once a day. The Nginx log has this error: 2013/07/11 10:06:46 [error] 21344#0: *201 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.10.11, server: , and 2013/07/11 10:05:11 [warn] 21345#0: *225 an upstream response is buffered to a temporary file /opt/bitnami/nginx/tmp/proxy/2/00/0000000002 while reading upstream, client: 192.168.10.11, server: , request: "GET /assets/application-b7b1695e978934a03ade49d20bf63139.js HTTP/1.1", upstream: Is this telling me there is a problem with Thin or Ruby on Rails? I'm not seeing error in logs for those systems. Thanks, Reddirt Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240762,240762#msg-240762 From nginx-forum at nginx.us Thu Jul 11 19:02:20 2013 From: nginx-forum at nginx.us (dumorim) Date: Thu, 11 Jul 2013 15:02:20 -0400 Subject: nginx debian 6 64bits Message-ID: default.conf server { if ($host !~* ^www\.) { rewrite ^(.*)$ http://www.$host$1 permanent; } listen 80; server_name exemple.org; index index.html index.htm index.php; root /home/home; location / { root /home/malucos; index index.html index.htm index.php; } location /status { stub_status on; access_log off; } location /home/min { rewrite ^/(.*\.(css|js))$ /home/min/index.php?f=$1&debug=0 break; } client_max_body_size 120M; error_page 404 /404.html; #location /doc/ { #alias /usr/share/doc/; #autoindex on; #allow 127.0.0.1; #allow ::1; #deny all; #} location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass 127.0.0.1:9999; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } location /phpMyAdmin { rewrite ^/* /phpmyadmin last; } # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; location = /404.html { root /home/malucos; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://www.malucos-share.org; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.1:9999; fastcgi_index index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/malucos$fastcgi_script_name; fastcgi_param SERVER_NAME $http_host; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } erro 502 bad gateway Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240765,240765#msg-240765 From nginx-forum at nginx.us Thu Jul 11 19:55:03 2013 From: nginx-forum at nginx.us (abstein2) Date: Thu, 11 Jul 2013 15:55:03 -0400 Subject: Proxy returns 504 then blocks next connections Message-ID: <6142134014a5aa1bb4bda64a3a54dd82.NginxMailingListEnglish@forum.nginx.org> I'm having an issue where I proxy a long running script and receive a 504 error when it exceeds my proxy_read_timeout setting. 
All of that's behaving normally -- what isn't behaving normally is that the next several requests I make to the domain via the same proxy code also return 504s after timing out, despite the fact that the request should complete properly. The first script that runs takes approximately 10 minutes to run and, once it completes on the origin, the server again takes connections. The oddest part is that all of this is specific to the web browser calling the pages on the server. If I access the long running script in Chrome, I can no longer access another page through the proxy until the script finishes on the origin. Meanwhile, I can access pages without issue on Firefox or IE. If I run the script through Firefox, it becomes locked out but Chrome and IE work fine. Here are the relevant proxy lines: proxy_cache CACHEFOLDER; proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504; proxy_cache_valid 60m; proxy_redirect off; proxy_connect_timeout 120; proxy_send_timeout 120; proxy_read_timeout 120; proxy_buffers 8 16k; proxy_buffer_size 16k; proxy_busy_buffers_size 64k; proxy_cache_key $host$request_uri; proxy_set_header X-Forwarded-For $IP; proxy_set_header Host $VAR_HOST; proxy_pass $REQUEST_PROTO://$PROXY_TO; Has anyone experienced anything like this before? Or is there any setting within NGINX that could be the culprit? The server being proxied is running IIS, but when I run the long running script directly against the proxied server, I'm not getting the same behavior. The long running script runs and I'm able to open a new browser tab and continue browsing as well. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240766,240766#msg-240766 From lists at ruby-forum.com Thu Jul 11 20:49:42 2013 From: lists at ruby-forum.com (Yunior Miguel A.) Date: Thu, 11 Jul 2013 22:49:42 +0200 Subject: configuration problem whit ngingx and unicorn In-Reply-To: References: Message-ID: <3e40f27c02c2c51e2bd4221e865e456d@ruby-forum.com> After many hours breaking my head, i am realize that my mistake was not create the link symbolic in nginx ln -s /etc/nginx/sites-available/redmine /etc/nginx/sites-enabled/redmine :) :() -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Thu Jul 11 22:08:56 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Jul 2013 23:08:56 +0100 Subject: LDAP + Header Rewrite In-Reply-To: <99fa5fb3ef9366746aafb82c0f08ae66.NginxMailingListEnglish@forum.nginx.org> References: <99fa5fb3ef9366746aafb82c0f08ae66.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130711220856.GI11600@craic.sysops.org> On Thu, Jul 11, 2013 at 05:45:15AM -0400, GregYoung wrote: Hi there, > We have LDAP authentication working and we have nginx working as a pass > through. We are just unsure of how to get the information from the LDAP > module to add the header. Which ldap module is that? And what does its documentation say? I'd imagine that the information you want would be available in variables, possibly including $remote_user, if it is available at all. 
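As an untested illustration of that idea (not from the original reply): once some auth module has authenticated the request and populated $remote_user, the value can be forwarded upstream like any other variable:

    location / {
        # whatever LDAP/auth module is in use goes here and must set $remote_user;
        # group membership is only available if that module exposes it somehow
        proxy_set_header TrustedIntermediaryUserInformation $remote_user;
        proxy_pass http://backend;   # "backend" is a placeholder upstream name
    }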
f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jul 11 22:14:39 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Jul 2013 23:14:39 +0100 Subject: https redirect going to infinite loop In-Reply-To: <4d4657d64a37a45c8e81a3232f2aae8a.NginxMailingListEnglish@forum.nginx.org> References: <4d4657d64a37a45c8e81a3232f2aae8a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130711221439.GA15782@craic.sysops.org> On Thu, Jul 11, 2013 at 07:27:14AM -0400, parulsood85 wrote: Hi there, > I am new to nginx. I am trying to redirect all request to https. This is the > redirect i am using > > rewrite ^/(.*) https://example.com permanent; What server{} block is this in? What "listen" or similar directives apply in that block? > somehow when I hit http://example.com on browser it goes to infinite loop. What is the output when you do "curl -i http://example.com"? And if it is a redirect, what is the output when you do a "curl -i" on the redirected Location:? > Note: ssl is enabled on the load balancer Where is the load balancer in relation to nginx and the browser? What does the load balancer do? > Please help! If you can provide the above details, it may be clearer where the problem is and what the resolution is. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jul 11 22:21:31 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Jul 2013 23:21:31 +0100 Subject: upstream timed out In-Reply-To: <4285e8ce12e40616e1bcfa1d3763af54.NginxMailingListEnglish@forum.nginx.org> References: <4285e8ce12e40616e1bcfa1d3763af54.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130711222131.GB15782@craic.sysops.org> On Thu, Jul 11, 2013 at 12:43:06PM -0400, Reddirt wrote: Hi there, > The Nginx log has this error: > 2013/07/11 10:06:46 [error] 21344#0: *201 upstream timed out (110: > Connection timed out) while reading response header from upstream, client: > 192.168.10.11, server: , That says that as far as nginx is concerned, its upstream (presumably a Thin server) took too long before returning useful content. That suggests a problem on that Thin server, or with the "timeout" values that nginx and Thin have being different. > 2013/07/11 10:05:11 [warn] 21345#0: *225 an upstream response is buffered to > a temporary file /opt/bitnami/nginx/tmp/proxy/2/00/0000000002 while reading > upstream, client: 192.168.10.11, server: , request: "GET > /assets/application-b7b1695e978934a03ade49d20bf63139.js HTTP/1.1", > upstream: That just says that the response was too big to fit in nginx's memory buffers, and so was written to disk. Not a problem, unless you don't expect the response to be that big. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Jul 12 00:17:39 2013 From: nginx-forum at nginx.us (R1CH) Date: Thu, 11 Jul 2013 20:17:39 -0400 Subject: Proxy returns 504 then blocks next connections In-Reply-To: <6142134014a5aa1bb4bda64a3a54dd82.NginxMailingListEnglish@forum.nginx.org> References: <6142134014a5aa1bb4bda64a3a54dd82.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0665dd31e7caa96a15ee36ac6f2e43cd.NginxMailingListEnglish@forum.nginx.org> Does your script use sessions? You didn't specify with language it uses, but several popular languages such as PHP will lock a session file until the request is complete, preventing multiple requests while the first request continues to run. 
Mechanisms similar to session_write_close() in PHP can work around this, but it sounds like this script should run as a background process rather than as a web request. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240766,240771#msg-240771 From nginx-forum at nginx.us Fri Jul 12 06:50:56 2013 From: nginx-forum at nginx.us (parulsood85) Date: Fri, 12 Jul 2013 02:50:56 -0400 Subject: https redirect going to infinite loop In-Reply-To: <20130711221439.GA15782@craic.sysops.org> References: <20130711221439.GA15782@craic.sysops.org> Message-ID: Hello Francis, Thanks for the quick response. Here is the snippet of the config being used ############################################################################# http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /app/nginx/logs/access.log main; proxy_buffering off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; upstream my-backend { server 127.0.0.1:9000; } server { listen 80; server_name example.com; location / { rewrite ^(.*) https://example.com permanent; proxy_pass http://my-backend; } } ############################################################################# the output of curl -i http://example.com curl: (7) couldn't connect to host The loadbalancer is in the different DMZ it will sent the request on port 80 & 443 to nginx server on port 80. The loadbalancer urls are http://example.com & https://example.com both are working. Please let me know if any other information is required. 
From francis at daoine.org Fri Jul 12 08:10:53 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 12 Jul 2013 09:10:53 +0100 Subject: https redirect going to infinite loop In-Reply-To: References: <20130711221439.GA15782@craic.sysops.org> Message-ID: <20130712081053.GC15782@craic.sysops.org> On Fri, Jul 12, 2013 at 02:50:56AM -0400, parulsood85 wrote: Hi there, > server { > listen 80; > server_name example.com; So: nginx is not listening for https requests? > location / { > rewrite ^(.*) https://example.com permanent; > proxy_pass http://my-backend; Aside: It is unlikely that both of these lines do something useful. > the output of curl -i http://example.com > > curl: (7) couldn't connect to host And the http server isn't listening at all? Or maybe your routing or other proxying is broken -- this command should be run from the same machine as the browser that sees the failure. The aim is to see the exact response which leads to the failure. But it may not matter, see below. > The load balancer is in a different DMZ; it will send the requests on ports 80 > & 443 to the nginx server on port 80. So: the loadbalancer listens for http and https, and sends both requests to nginx as http? Which means nginx can't tell whether the initial request was http or https? Do the http-to-https redirect on the load balancer, which knows whether the initial request was http or https. Or configure the load balancer to give a clue to nginx whether the initial request was http or https, and configure your nginx to respond to that clue. f -- Francis Daly francis at daoine.org
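One way to implement the second option, as a rough sketch only: assume the load balancer can be told to add an X-Forwarded-Proto header carrying the original scheme (the header name is an assumption about the load balancer, "return" is used here instead of the rewrite from the original config, and "my-backend" is the upstream from the earlier post):

server {
    listen 80;
    server_name example.com;

    location / {
        # redirect only when the load balancer says the original request was plain http
        if ($http_x_forwarded_proto != "https") {
            return 301 https://example.com$request_uri;
        }
        proxy_pass http://my-backend;
    }
}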
From nginx-forum at nginx.us Fri Jul 12 09:08:45 2013 From: nginx-forum at nginx.us (parulsood85) Date: Fri, 12 Jul 2013 05:08:45 -0400 Subject: https redirect going to infinite loop In-Reply-To: <20130712081053.GC15782@craic.sysops.org> References: <20130712081053.GC15782@craic.sysops.org> Message-ID: <257977dca931ea5bcfda649789bee9fa.NginxMailingListEnglish@forum.nginx.org> Hello Francis, Here is the curl output executed from the browser machine. c:\curl>curl.exe -i http://example.com HTTP/1.1 301 Moved Permanently Content-Type: text/html Date: Fri, 12 Jul 2013 08:53:12 GMT Location: https://example.com Server: nginx/1.2.8 Content-Length: 184 Connection: keep-alive

301 Moved Permanently
nginx/1.2.8
However I noticed that when I put an https redirect like the one below, it works: rewrite ^/test$ https://example.com permanent; >So: the loadbalancer listens for http and https, and sends both requests to nginx as http? Yes >Do the http-to-https redirect on the load balancer, which knows whether the initial request was http or https. >Or configure the load balancer to give a clue to nginx whether the initial request was http or https, and configure your nginx to respond to that clue. I'll work on this, I think this option should work fine. Thanks for the help. Regards, Parul Sood Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240755,240775#msg-240775
From nginx-forum at nginx.us Fri Jul 12 09:09:37 2013 From: nginx-forum at nginx.us (wolfy) Date: Fri, 12 Jul 2013 05:09:37 -0400 Subject: Problem with VPN IP address and Nginx In-Reply-To: <59b39f8238b48d8e99675818d25dd19b.NginxMailingListEnglish@forum.nginx.org> References: <59b39f8238b48d8e99675818d25dd19b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Nobody can help me? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240709,240776#msg-240776
From jdmls at yahoo.com Fri Jul 12 09:26:30 2013 From: jdmls at yahoo.com (John Doe) Date: Fri, 12 Jul 2013 02:26:30 -0700 (PDT) Subject: Problem with VPN IP address and Nginx In-Reply-To: References: <59b39f8238b48d8e99675818d25dd19b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1373621190.74036.YahooMailNeo@web121605.mail.ne1.yahoo.com> From: wolfy > Nobody can help me? Maybe check the request headers...? JD
From amarnath.p at globaledgesoft.com Fri Jul 12 09:45:38 2013 From: amarnath.p at globaledgesoft.com (P Amarnath) Date: Fri, 12 Jul 2013 15:15:38 +0530 Subject: Reg: SCGI application deployment using NGINX Message-ID: <51DFD042.8040900@globaledgesoft.com> Hi, I would like to know the procedure to deploy an SCGI application on an NGINX server. Thanks in advance -- With Best Regards, P Amarnath, IPNG - Cloud Storage, Global Edge Software Ltd.
From mdounin at mdounin.ru Fri Jul 12 09:50:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Jul 2013 13:50:12 +0400 Subject: nginx debian 6 64bits In-Reply-To: References: Message-ID: <20130712095012.GY66479@mdounin.ru> Hello! On Thu, Jul 11, 2013 at 03:02:20PM -0400, dumorim wrote: [...] > erro > 502 bad gateway I guess the question is "Why is the error returned?". Answers to such questions usually can be found in the error log; try looking into it. -- Maxim Dounin http://nginx.org/en/donation.html
From nginx-forum at nginx.us Fri Jul 12 10:53:13 2013 From: nginx-forum at nginx.us (FSC) Date: Fri, 12 Jul 2013 06:53:13 -0400 Subject: SSL + Large file uploads (3GB+) In-Reply-To: <20130710152555.GG66479@mdounin.ru> References: <20130710152555.GG66479@mdounin.ru> Message-ID: <04a0b8d80d9760ac84b53c391cb66494.NginxMailingListEnglish@forum.nginx.org> At the end of the day, the cause was a timeout due to long processing in the background. Thanks for having a look anyways!
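For anyone hitting something similar, the timeouts and limits that usually matter for large, slow uploads look roughly like this (a sketch only; the values are arbitrary, and which *_read_timeout applies depends on whether the backend is proxied or FastCGI):

client_max_body_size 4096m;    # the default is only 1m, far too small for 3GB+ uploads
client_body_timeout 300s;      # time allowed between two successive reads of the request body
proxy_read_timeout 600s;       # time allowed for a slow backend to produce a response
# or, for a FastCGI backend:
# fastcgi_read_timeout 600s;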
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240727,240783#msg-240783 From nginx-forum at nginx.us Fri Jul 12 16:43:51 2013 From: nginx-forum at nginx.us (dumorim) Date: Fri, 12 Jul 2013 12:43:51 -0400 Subject: nginx debian 6 64bits In-Reply-To: References: Message-ID: 2013/07/12 18:34:55 [error] 26317#0: *1457 connect() failed (110: Connection timed out) while connecting to upstream, client: 258.32.219.11, server: exemple.com, request: "GET /announce.php?passkey=03583fb17481254a98419d4c34&info_hash=%E9%8A%17%27%19R%26%BD6%9C%1D%BF%E7%F8%2F%FF%DB%BC%BB%60&peer_id=-lt0D30-%3E%5E%0F%9D%5E%BE%9A%27%3C%A7%C8%99&key=129f5eb3&compact=1&port=51103&uploaded=26128285017&downloaded=0&left=0 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" 2013/07/12 18:34:55 [error] 26317#0: *1462 connect() failed (110: Connection timed out) while connecting to upstream, client: 963.153.242.79, server: exemple.com, request: "GET /announce.php?passkey=2e627b05425adb01eb89c3e&info_hash=%22%f19%2c%f4%0e%3dz%40%b7%af%7f%f1%f2%c3%c8Cz%d4%d6&peer_id=-UT2010-0KiTt%d8%b4%d8%b0%9c%dc%e5&port=35258&uploaded=1163264&downloaded=0&left=0&corrupt=0&key=2D6E3C01&event=stopped&numwant=0&compact=1&no_peer_id=1 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" 2013/07/12 18:34:55 [error] 26317#0: *1464 connect() failed (110: Connection timed out) while connecting to upstream, client: 279.125.186.26, server: exemple.com, request: "GET /announce.php?passkey=2e627b05425adb01eb89c3ea619&info_hash=%c8%b0q%5c%7f%f5%3b%b7%3e-%06%16YpC%99%c6%87%9f%eb&peer_id=-UT3300-%b9s%17%87%96%d1%7c%925%04%8a%86&port=58704&uploaded=0&downloaded=0&left=0&corrupt=0&key=F5DAD5E3&event=started&numwant=200&compact=1&no_peer_id=1 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" 2013/07/12 18:34:55 [error] 26317#0: *1466 connect() failed (110: Connection timed out) while connecting to upstream, client: 300.148.26.229, server: exemple.com, request: "GET /announce.php?passkey=77113b79abf29bd67f7a4e340079&info_hash=C%f2%9d%e3%014%13Q%97%a0%be%9e%db%b6%edm%ce%7bc%e2&peer_id=-UT3300-%b9s%27%d1%a0%b2%08%05%af%abF%9b&port=29435&uploaded=0&downloaded=0&left=0&corrupt=0&key=C1C9C49D&event=started&numwant=200&compact=1&no_peer_id=1 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" 2013/07/12 18:34:55 [error] 26317#0: *1468 connect() failed (110: Connection timed out) while connecting to upstream, client: 401.37.166.153, server: exemple.com, request: "GET /announce.php?passkey=ff44e26525040d1b555363d2202&info_hash=0%8f%19i%25%03%ac%e51%5c%7c%06%24X%9ac%3f%03x%bb&peer_id=M7-8-0--%ecs%01y%bc%3b%98Ho%ba%3d%ee&port=32572&uploaded=409600&downloaded=0&left=0&corrupt=0&key=3931458D&event=stopped&numwant=0&compact=1&no_peer_id=1 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" 2013/07/12 18:35:09 [error] 27250#0: *588 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 201.34.216.131, server: exemple.com, request: "GET /scrape.php?passkey=7cc1b125f8b4a560b679dd3d7ea18e50&info_hash=%CBxB%B7%E8%3C%FE%0D%EEk%ACD%AF%BC%97%28%23*%5E%21 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" 2013/07/12 18:35:12 [error] 27251#0: *693 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 201.42.123.157, server: exemple.com, request: "GET 
/scrape.php?passkey=ff44e26525040d1b555363d2202b577b&info_hash=%ec%f0h%05%d0K%dd%fb%98%5c%13%17%bf%ac%88%e6%89%bd%f79 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" 2013/07/12 18:35:16 [error] 27246#0: *790 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 189.26.104.209, server: exemple.com, request: "GET /scrape.php?passkey=65fd2f3c8b70b7d38a0fb7e53ca5539a&info_hash=e%93%0A%FA%C6%87D%94%40%F4%1C%B8%97%AA%B2%BDf%D4%D8%E2 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240765,240788#msg-240788 From hostdl at gmail.com Fri Jul 12 16:45:32 2013 From: hostdl at gmail.com (Host DL) Date: Fri, 12 Jul 2013 21:15:32 +0430 Subject: nginx debian 6 64bits In-Reply-To: References: Message-ID: Check your php-fpm It doesn't seem to be responsible ============================================ On Fri, Jul 12, 2013 at 9:13 PM, dumorim wrote: > 2013/07/12 18:34:55 [error] 26317#0: *1457 connect() failed (110: > Connection > timed out) while connecting to upstream, client: 258.32.219.11, server: > exemple.com, request: "GET > > /announce.php?passkey=03583fb17481254a98419d4c34&info_hash=%E9%8A%17%27%19R%26%BD6%9C%1D%BF%E7%F8%2F%FF%DB%BC%BB%60&peer_id=-lt0D30-%3E%5E%0F%9D%5E%BE%9A%27%3C%A7%C8%99&key=129f5eb3&compact=1&port=51103&uploaded=26128285017&downloaded=0&left=0 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > 2013/07/12 18:34:55 [error] 26317#0: *1462 connect() failed (110: > Connection > timed out) while connecting to upstream, client: 963.153.242.79, server: > exemple.com, request: "GET > > /announce.php?passkey=2e627b05425adb01eb89c3e&info_hash=%22%f19%2c%f4%0e%3dz%40%b7%af%7f%f1%f2%c3%c8Cz%d4%d6&peer_id=-UT2010-0KiTt%d8%b4%d8%b0%9c%dc%e5&port=35258&uploaded=1163264&downloaded=0&left=0&corrupt=0&key=2D6E3C01&event=stopped&numwant=0&compact=1&no_peer_id=1 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > 2013/07/12 18:34:55 [error] 26317#0: *1464 connect() failed (110: > Connection > timed out) while connecting to upstream, client: 279.125.186.26, server: > exemple.com, request: "GET > > /announce.php?passkey=2e627b05425adb01eb89c3ea619&info_hash=%c8%b0q%5c%7f%f5%3b%b7%3e-%06%16YpC%99%c6%87%9f%eb&peer_id=-UT3300-%b9s%17%87%96%d1%7c%925%04%8a%86&port=58704&uploaded=0&downloaded=0&left=0&corrupt=0&key=F5DAD5E3&event=started&numwant=200&compact=1&no_peer_id=1 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > 2013/07/12 18:34:55 [error] 26317#0: *1466 connect() failed (110: > Connection > timed out) while connecting to upstream, client: 300.148.26.229, server: > exemple.com, request: "GET > > /announce.php?passkey=77113b79abf29bd67f7a4e340079&info_hash=C%f2%9d%e3%014%13Q%97%a0%be%9e%db%b6%edm%ce%7bc%e2&peer_id=-UT3300-%b9s%27%d1%a0%b2%08%05%af%abF%9b&port=29435&uploaded=0&downloaded=0&left=0&corrupt=0&key=C1C9C49D&event=started&numwant=200&compact=1&no_peer_id=1 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > 2013/07/12 18:34:55 [error] 26317#0: *1468 connect() failed (110: > Connection > timed out) while connecting to upstream, client: 401.37.166.153, server: > exemple.com, request: "GET > > 
/announce.php?passkey=ff44e26525040d1b555363d2202&info_hash=0%8f%19i%25%03%ac%e51%5c%7c%06%24X%9ac%3f%03x%bb&peer_id=M7-8-0--%ecs%01y%bc%3b%98Ho%ba%3d%ee&port=32572&uploaded=409600&downloaded=0&left=0&corrupt=0&key=3931458D&event=stopped&numwant=0&compact=1&no_peer_id=1 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > > 2013/07/12 18:35:09 [error] 27250#0: *588 FastCGI sent in stderr: "Primary > script unknown" while reading response header from upstream, client: > 201.34.216.131, server: exemple.com, request: "GET > > /scrape.php?passkey=7cc1b125f8b4a560b679dd3d7ea18e50&info_hash=%CBxB%B7%E8%3C%FE%0D%EEk%ACD%AF%BC%97%28%23*%5E%21 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > 2013/07/12 18:35:12 [error] 27251#0: *693 FastCGI sent in stderr: "Primary > script unknown" while reading response header from upstream, client: > 201.42.123.157, server: exemple.com, request: "GET > > /scrape.php?passkey=ff44e26525040d1b555363d2202b577b&info_hash=%ec%f0h%05%d0K%dd%fb%98%5c%13%17%bf%ac%88%e6%89%bd%f79 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > 2013/07/12 18:35:16 [error] 27246#0: *790 FastCGI sent in stderr: "Primary > script unknown" while reading response header from upstream, client: > 189.26.104.209, server: exemple.com, request: "GET > > /scrape.php?passkey=65fd2f3c8b70b7d38a0fb7e53ca5539a&info_hash=e%93%0A%FA%C6%87D%94%40%F4%1C%B8%97%AA%B2%BDf%D4%D8%E2 > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.exemple.com" > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,240765,240788#msg-240788 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jul 12 16:51:07 2013 From: nginx-forum at nginx.us (dumorim) Date: Fri, 12 Jul 2013 12:51:07 -0400 Subject: nginx debian 6 64bits In-Reply-To: References: Message-ID: <1c931b22e7b434db7c54499abbdde0b0.NginxMailingListEnglish@forum.nginx.org> /etc/php5/fpm/pool.d www.conf ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /usr) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. listen = localhost:9000 ; Set listen(2) backlog. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = 128 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. 
; Default Values: user and group are set as the running user ; mode is set to 0666 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives. With this process management, there will be ; always at least 1 children. ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; ondemand - no children are created at startup. Children will be forked when ; new requests will connect. The following parameter are used: ; pm.max_children - the maximum number of children that ; can be alive at the same time. ; pm.process_idle_timeout - The number of seconds after which ; an idle process will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. The below defaults are based on a server without much resources. Don't ; forget to tweak pm.* to fit your needs. ; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand' ; Note: This value is mandatory. pm.max_children = 900 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 70 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 40 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 80 ; The number of seconds after which an idle process will be killed. ; Note: Used only when pm is set to 'ondemand' ; Default Value: 10s ;pm.process_idle_timeout = 10s; ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 pm.max_requests = 10000 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. 
It shows the following informations: ; pool - the name of the pool; ; process manager - static, dynamic or ondemand; ; start time - the date and time FPM has started; ; start since - number of seconds since FPM has started; ; accepted conn - the number of request accepted by the pool; ; listen queue - the number of request in the queue of pending ; connections (see backlog in listen(2)); ; max listen queue - the maximum number of requests in the queue ; of pending connections since FPM has started; ; listen queue len - the size of the socket queue of pending connections; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes; ; max active processes - the maximum number of active processes since FPM ; has started; ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic' and 'ondemand'); ; Value are updated in real time. ; Example output: ; pool: www ; process manager: static ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 62636 ; accepted conn: 190460 ; listen queue: 0 ; max listen queue: 1 ; listen queue len: 42 ; idle processes: 4 ; active processes: 11 ; total processes: 15 ; max active processes: 12 ; max children reached: 0 ; ; By default the status page output is formatted as text/plain. Passing either ; 'html', 'xml' or 'json' in the query string will return the corresponding ; output syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; http://www.foo.bar/status?xml ; ; By default the status page only outputs short status. Passing 'full' in the ; query string will also return status for each pool process. ; Example: ; http://www.foo.bar/status?full ; http://www.foo.bar/status?json&full ; http://www.foo.bar/status?html&full ; http://www.foo.bar/status?xml&full ; The Full status returns for each process: ; pid - the PID of the process; ; state - the state of the process (Idle, Running, ...); ; start time - the date and time the process has started; ; start since - the number of seconds since the process has started; ; requests - the number of requests the process has served; ; request duration - the duration in ??s of the requests; ; request method - the request method (GET, POST, ...); ; request URI - the request URI with the query string; ; content length - the content length of the request (only with POST); ; user - the user (PHP_AUTH_USER) (or '-' if not set); ; script - the main script called (or '-' if not set); ; last request cpu - the %cpu the last request consumed ; it's always 0 if the process is not in Idle state ; because CPU calculation is done when the request ; processing has terminated; ; last request memory - the max amount of memory the last request consumed ; it's always 0 if the process is not in Idle state ; because memory calculation is done when the request ; processing has terminated; ; If the process is in Idle state, then informations are related to the ; last request the process has served. Otherwise informations are related to ; the current request being served. 
; Example output: ; ************************ ; pid: 31330 ; state: Running ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 63087 ; requests: 12808 ; request duration: 1250261 ; request method: GET ; request URI: /test_mem.php?N=10000 ; content length: 0 ; user: - ; script: /home/fat/web/docs/php/test_mem.php ; last request cpu: 0.00 ; last request memory: 0 ; ; Note: There is a real-time FPM status monitoring sample web page available ; It's available in: ${prefix}/share/fpm/status.html ; ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ;ping.response = pong ; The access log file ; Default: not set ;access.log = log/$pool.access.log ; The access log format. ; The following syntax is allowed ; %%: the '%' character ; %C: %CPU used by the request ; it can accept the following format: ; - %{user}C for user CPU only ; - %{system}C for system CPU only ; - %{total}C for user + system CPU (default) ; %d: time taken to serve the request ; it can accept the following format: ; - %{seconds}d (default) ; - %{miliseconds}d ; - %{mili}d ; - %{microseconds}d ; - %{micro}d ; %e: an environment variable (same as $_ENV or $_SERVER) ; it must be associated with embraces to specify the name of the env ; variable. Some exemples: ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e ; %f: script filename ; %l: content-length of the request (for POST request only) ; %m: request method ; %M: peak of memory allocated by PHP ; it can accept the following format: ; - %{bytes}M (default) ; - %{kilobytes}M ; - %{kilo}M ; - %{megabytes}M ; - %{mega}M ; %n: pool name ; %o: ouput header ; it must be associated with embraces to specify the name of the header: ; - %{Content-Type}o ; - %{X-Powered-By}o ; - %{Transfert-Encoding}o ; - .... ; %p: PID of the child that serviced the request ; %P: PID of the parent of the child that serviced the request ; %q: the query string ; %Q: the '?' 
character if query string exists ; %r: the request URI (without the query string, see %q and %Q) ; %R: remote IP address ; %s: status (response code) ; %t: server time the request was received ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; %T: time the log has been written (the request has finished) ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; %u: remote user ; ; Default: "%R - %u %t \"%m %r\" %s" ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%" ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot chdir = / ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Limits the extensions of the main script FPM will allow to parse. This can ; prevent configuration mistakes on the web server side. You should only limit ; FPM to .php extensions to prevent malicious users to use other extensions to ; exectute php code. ; Note: set an empty value to allow all extensions. ; Default Value: .php ;security.limit_extensions = .php .php3 .php4 .php5 ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. 
; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /usr) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www at my.domain.com ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240765,240790#msg-240790 From hostdl at gmail.com Fri Jul 12 16:52:33 2013 From: hostdl at gmail.com (Host DL) Date: Fri, 12 Jul 2013 21:22:33 +0430 Subject: nginx debian 6 64bits In-Reply-To: <1c931b22e7b434db7c54499abbdde0b0.NginxMailingListEnglish@forum.nginx.org> References: <1c931b22e7b434db7c54499abbdde0b0.NginxMailingListEnglish@forum.nginx.org> Message-ID: execute bellow commands to check that it is runnung properly or not: ps axu | grep php or netstat -napt | grep LIST | grep :9000 ============================================ On Fri, Jul 12, 2013 at 9:21 PM, dumorim wrote: > /etc/php5/fpm/pool.d www.conf > > ; Start a new pool named 'www'. > ; the variable $pool can we used in any directive and will be replaced by > the > ; pool name ('www' here) > [www] > > ; Per pool prefix > ; It only applies on the following directives: > ; - 'slowlog' > ; - 'listen' (unixsocket) > ; - 'chroot' > ; - 'chdir' > ; - 'php_values' > ; - 'php_admin_values' > ; When not set, the global prefix (or /usr) applies instead. > ; Note: This directive can also be relative to the global prefix. > ; Default Value: none > ;prefix = /path/to/pools/$pool > > ; Unix user/group of processes > ; Note: The user is mandatory. If the group is not set, the default user's > group > ; will be used. > user = www-data > group = www-data > > ; The address on which to accept FastCGI requests. > ; Valid syntaxes are: > ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific > address > on > ; a specific port; > ; 'port' - to listen on a TCP socket to all addresses on > a > ; specific port; > ; '/path/to/unix/socket' - to listen on a unix socket. > ; Note: This value is mandatory. > listen = localhost:9000 > > ; Set listen(2) backlog. > ; Default Value: 128 (-1 on FreeBSD and OpenBSD) > ;listen.backlog = 128 > > ; Set permissions for unix socket, if one is used. In Linux, read/write > ; permissions must be set in order to allow connections from a web server. > Many > ; BSD-derived systems allow connections regardless of permissions. > ; Default Values: user and group are set as the running user > ; mode is set to 0666 > ;listen.owner = www-data > ;listen.group = www-data > ;listen.mode = 0666 > > ; List of ipv4 addresses of FastCGI clients which are allowed to connect. > ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the > original > ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each > address > ; must be separated by a comma. If this value is left blank, connections > will be > ; accepted from any ip address. 
> ; Default Value: any > ;listen.allowed_clients = 127.0.0.1 > > ; Choose how the process manager will control the number of child > processes. > ; Possible Values: > ; static - a fixed number (pm.max_children) of child processes; > ; dynamic - the number of child processes are set dynamically based on > the > ; following directives. With this process management, there > will > be > ; always at least 1 children. > ; pm.max_children - the maximum number of children that > can > ; be alive at the same time. > ; pm.start_servers - the number of children created on > startup. > ; pm.min_spare_servers - the minimum number of children in > 'idle' > ; state (waiting to process). If the > number > ; of 'idle' processes is less than this > ; number then some children will be > created. > ; pm.max_spare_servers - the maximum number of children in > 'idle' > ; state (waiting to process). If the > number > ; of 'idle' processes is greater than > this > ; number then some children will be > killed. > ; ondemand - no children are created at startup. Children will be forked > when > ; new requests will connect. The following parameter are used: > ; pm.max_children - the maximum number of children > that > ; can be alive at the same time. > ; pm.process_idle_timeout - The number of seconds after which > ; an idle process will be killed. > ; Note: This value is mandatory. > pm = dynamic > > ; The number of child processes to be created when pm is set to 'static' > and > the > ; maximum number of child processes when pm is set to 'dynamic' or > 'ondemand'. > ; This value sets the limit on the number of simultaneous requests that > will > be > ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. > ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original > PHP > ; CGI. The below defaults are based on a server without much resources. > Don't > ; forget to tweak pm.* to fit your needs. > ; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand' > ; Note: This value is mandatory. > pm.max_children = 900 > > ; The number of child processes created on startup. > ; Note: Used only when pm is set to 'dynamic' > ; Default Value: min_spare_servers + (max_spare_servers - > min_spare_servers) > / 2 > pm.start_servers = 70 > > ; The desired minimum number of idle server processes. > ; Note: Used only when pm is set to 'dynamic' > ; Note: Mandatory when pm is set to 'dynamic' > pm.min_spare_servers = 40 > > ; The desired maximum number of idle server processes. > ; Note: Used only when pm is set to 'dynamic' > ; Note: Mandatory when pm is set to 'dynamic' > pm.max_spare_servers = 80 > > ; The number of seconds after which an idle process will be killed. > ; Note: Used only when pm is set to 'ondemand' > ; Default Value: 10s > ;pm.process_idle_timeout = 10s; > > ; The number of requests each child process should execute before > respawning. > ; This can be useful to work around memory leaks in 3rd party libraries. > For > ; endless request processing specify '0'. Equivalent to > PHP_FCGI_MAX_REQUESTS. > ; Default Value: 0 > pm.max_requests = 10000 > > ; The URI to view the FPM status page. If this value is not set, no URI > will > be > ; recognized as a status page. 
It shows the following informations: > ; pool - the name of the pool; > ; process manager - static, dynamic or ondemand; > ; start time - the date and time FPM has started; > ; start since - number of seconds since FPM has started; > ; accepted conn - the number of request accepted by the pool; > ; listen queue - the number of request in the queue of pending > ; connections (see backlog in listen(2)); > ; max listen queue - the maximum number of requests in the queue > ; of pending connections since FPM has started; > ; listen queue len - the size of the socket queue of pending > connections; > ; idle processes - the number of idle processes; > ; active processes - the number of active processes; > ; total processes - the number of idle + active processes; > ; max active processes - the maximum number of active processes since FPM > ; has started; > ; max children reached - number of times, the process limit has been > reached, > ; when pm tries to start more children (works only > for > ; pm 'dynamic' and 'ondemand'); > ; Value are updated in real time. > ; Example output: > ; pool: www > ; process manager: static > ; start time: 01/Jul/2011:17:53:49 +0200 > ; start since: 62636 > ; accepted conn: 190460 > ; listen queue: 0 > ; max listen queue: 1 > ; listen queue len: 42 > ; idle processes: 4 > ; active processes: 11 > ; total processes: 15 > ; max active processes: 12 > ; max children reached: 0 > ; > ; By default the status page output is formatted as text/plain. Passing > either > ; 'html', 'xml' or 'json' in the query string will return the corresponding > ; output syntax. Example: > ; http://www.foo.bar/status > ; http://www.foo.bar/status?json > ; http://www.foo.bar/status?html > ; http://www.foo.bar/status?xml > ; > ; By default the status page only outputs short status. Passing 'full' in > the > ; query string will also return status for each pool process. > ; Example: > ; http://www.foo.bar/status?full > ; http://www.foo.bar/status?json&full > ; http://www.foo.bar/status?html&full > ; http://www.foo.bar/status?xml&full > ; The Full status returns for each process: > ; pid - the PID of the process; > ; state - the state of the process (Idle, Running, ...); > ; start time - the date and time the process has started; > ; start since - the number of seconds since the process has > started; > ; requests - the number of requests the process has served; > ; request duration - the duration in ??s of the requests; > ; request method - the request method (GET, POST, ...); > ; request URI - the request URI with the query string; > ; content length - the content length of the request (only with > POST); > ; user - the user (PHP_AUTH_USER) (or '-' if not set); > ; script - the main script called (or '-' if not set); > ; last request cpu - the %cpu the last request consumed > ; it's always 0 if the process is not in Idle > state > ; because CPU calculation is done when the request > ; processing has terminated; > ; last request memory - the max amount of memory the last request > consumed > ; it's always 0 if the process is not in Idle > state > ; because memory calculation is done when the > request > ; processing has terminated; > ; If the process is in Idle state, then informations are related to the > ; last request the process has served. Otherwise informations are related > to > ; the current request being served. 
> ; Example output: > ; ************************ > ; pid: 31330 > ; state: Running > ; start time: 01/Jul/2011:17:53:49 +0200 > ; start since: 63087 > ; requests: 12808 > ; request duration: 1250261 > ; request method: GET > ; request URI: /test_mem.php?N=10000 > ; content length: 0 > ; user: - > ; script: /home/fat/web/docs/php/test_mem.php > ; last request cpu: 0.00 > ; last request memory: 0 > ; > ; Note: There is a real-time FPM status monitoring sample web page > available > ; It's available in: ${prefix}/share/fpm/status.html > ; > ; Note: The value must start with a leading slash (/). The value can be > ; anything, but it may not be a good idea to use the .php extension > or > it > ; may conflict with a real PHP file. > ; Default Value: not set > ;pm.status_path = /status > > ; The ping URI to call the monitoring page of FPM. If this value is not > set, > no > ; URI will be recognized as a ping page. This could be used to test from > outside > ; that FPM is alive and responding, or to > ; - create a graph of FPM availability (rrd or such); > ; - remove a server from a group if it is not responding (load balancing); > ; - trigger alerts for the operating team (24/7). > ; Note: The value must start with a leading slash (/). The value can be > ; anything, but it may not be a good idea to use the .php extension > or > it > ; may conflict with a real PHP file. > ; Default Value: not set > ;ping.path = /ping > > ; This directive may be used to customize the response of a ping request. > The > ; response is formatted as text/plain with a 200 response code. > ; Default Value: pong > ;ping.response = pong > > ; The access log file > ; Default: not set > ;access.log = log/$pool.access.log > > ; The access log format. > ; The following syntax is allowed > ; %%: the '%' character > ; %C: %CPU used by the request > ; it can accept the following format: > ; - %{user}C for user CPU only > ; - %{system}C for system CPU only > ; - %{total}C for user + system CPU (default) > ; %d: time taken to serve the request > ; it can accept the following format: > ; - %{seconds}d (default) > ; - %{miliseconds}d > ; - %{mili}d > ; - %{microseconds}d > ; - %{micro}d > ; %e: an environment variable (same as $_ENV or $_SERVER) > ; it must be associated with embraces to specify the name of the env > ; variable. Some exemples: > ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e > ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e > ; %f: script filename > ; %l: content-length of the request (for POST request only) > ; %m: request method > ; %M: peak of memory allocated by PHP > ; it can accept the following format: > ; - %{bytes}M (default) > ; - %{kilobytes}M > ; - %{kilo}M > ; - %{megabytes}M > ; - %{mega}M > ; %n: pool name > ; %o: ouput header > ; it must be associated with embraces to specify the name of the > header: > ; - %{Content-Type}o > ; - %{X-Powered-By}o > ; - %{Transfert-Encoding}o > ; - .... > ; %p: PID of the child that serviced the request > ; %P: PID of the parent of the child that serviced the request > ; %q: the query string > ; %Q: the '?' 
character if query string exists > ; %r: the request URI (without the query string, see %q and %Q) > ; %R: remote IP address > ; %s: status (response code) > ; %t: server time the request was received > ; it can accept a strftime(3) format: > ; %d/%b/%Y:%H:%M:%S %z (default) > ; %T: time the log has been written (the request has finished) > ; it can accept a strftime(3) format: > ; %d/%b/%Y:%H:%M:%S %z (default) > ; %u: remote user > ; > ; Default: "%R - %u %t \"%m %r\" %s" > ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%" > > ; The log file for slow requests > ; Default Value: not set > ; Note: slowlog is mandatory if request_slowlog_timeout is set > ;slowlog = log/$pool.log.slow > > ; The timeout for serving a single request after which a PHP backtrace will > be > ; dumped to the 'slowlog' file. A value of '0s' means 'off'. > ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) > ; Default Value: 0 > ;request_slowlog_timeout = 0 > > ; The timeout for serving a single request after which the worker process > will > ; be killed. This option should be used when the 'max_execution_time' ini > option > ; does not stop script execution for some reason. A value of '0' means > 'off'. > ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) > ; Default Value: 0 > ;request_terminate_timeout = 0 > > ; Set open file descriptor rlimit. > ; Default Value: system defined value > ;rlimit_files = 1024 > > ; Set max core size rlimit. > ; Possible Values: 'unlimited' or an integer greater or equal to 0 > ; Default Value: system defined value > ;rlimit_core = 0 > > ; Chroot to this directory at the start. This value must be defined as an > ; absolute path. When this value is not set, chroot is not used. > ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one > ; of its subdirectories. If the pool prefix is not set, the global prefix > ; will be used instead. > ; Note: chrooting is a great security feature and should be used whenever > ; possible. However, all PHP paths will be relative to the chroot > ; (error_log, sessions.save_path, ...). > ; Default Value: not set > ;chroot = > > ; Chdir to this directory at the start. > ; Note: relative path can be used. > ; Default Value: current directory or / when chroot > chdir = / > > ; Redirect worker stdout and stderr into main error log. If not set, stdout > and > ; stderr will be redirected to /dev/null according to FastCGI specs. > ; Note: on highloaded environement, this can cause some delay in the page > ; process time (several ms). > ; Default Value: no > ;catch_workers_output = yes > > ; Limits the extensions of the main script FPM will allow to parse. This > can > ; prevent configuration mistakes on the web server side. You should only > limit > ; FPM to .php extensions to prevent malicious users to use other extensions > to > ; exectute php code. > ; Note: set an empty value to allow all extensions. > ; Default Value: .php > ;security.limit_extensions = .php .php3 .php4 .php5 > > ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken > from > ; the current environment. > ; Default Value: clean env > ;env[HOSTNAME] = $HOSTNAME > ;env[PATH] = /usr/local/bin:/usr/bin:/bin > ;env[TMP] = /tmp > ;env[TMPDIR] = /tmp > ;env[TEMP] = /tmp > > ; Additional php.ini defines, specific to this pool of workers. These > settings > ; overwrite the values previously defined in the php.ini. 
The directives > are > the > ; same as the PHP SAPI: > ; php_value/php_flag - you can set classic ini defines which > can > ; be overwritten from PHP call > 'ini_set'. > > ; php_admin_value/php_admin_flag - these directives won't be overwritten > by > ; PHP call 'ini_set' > ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. > > ; Defining 'extension' will load the corresponding shared extension from > ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not > ; overwrite previously defined php.ini values, but will append the new > value > ; instead. > > ; Note: path INI options can be relative and will be expanded with the > prefix > ; (pool, global or /usr) > > ; Default Value: nothing is defined by default except the values in php.ini > and > ; specified at startup with the -d argument > ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f > www at my.domain.com > ;php_flag[display_errors] = off > ;php_admin_value[error_log] = /var/log/fpm-php.www.log > ;php_admin_flag[log_errors] = on > ;php_admin_value[memory_limit] = 32M > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,240765,240790#msg-240790 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jul 12 17:08:44 2013 From: nginx-forum at nginx.us (dumorim) Date: Fri, 12 Jul 2013 13:08:44 -0400 Subject: nginx debian 6 64bits In-Reply-To: References: Message-ID: <8d2a1d2cbb3a62ff489c83d23fffaa93.NginxMailingListEnglish@forum.nginx.org> netstat -napt 6915/nginx: worker tcp 0 0 109.236.87.68:80 201.90.8.68:41544 ESTABLISHED 6911/nginx: worker tcp 0 0 109.236.87.68:80 189.4.107.246:2048 ESTABLISHED 6910/nginx: worker tcp 239 0 109.236.87.68:80 189.24.118.25:2356 ESTABLISHED - tcp 0 0 109.236.87.68:80 187.53.134.152:57211 ESTABLISHED 6910/nginx: worker tcp 0 0 109.236.87.68:80 189.103.65.154:49826 ESTABLISHED 6914/nginx: worker tcp 435 0 109.236.87.68:80 189.58.115.30:60761 ESTABLISHED - tcp 0 0 109.236.87.68:80 177.130.18.85:58372 ESTABLISHED 6915/nginx: worker tcp 400 0 109.236.87.68:80 202.188.79.209:59398 ESTABLISHED - tcp 204 0 109.236.87.68:80 189.24.118.25:2369 ESTABLISHED - tcp 0 0 109.236.87.68:80 177.132.91.147:63143 ESTABLISHED 6916/nginx: worker tcp 1 0 109.236.87.68:80 177.177.43.143:61641 CLOSE_WAIT 6914/nginx: worker tcp 0 0 127.0.0.1:43246 127.0.0.1:9000 ESTABLISHED 6915/nginx: worker tcp 0 0 127.0.0.1:43221 127.0.0.1:9000 ESTABLISHED 6912/nginx: worker tcp 0 0 127.0.0.1:9000 127.0.0.1:43195 TIME_WAIT - tcp 402 0 109.236.87.68:80 177.81.144.122:56741 ESTABLISHED - tcp 0 0 127.0.0.1:9000 127.0.0.1:43189 TIME_WAIT - tcp 0 0 109.236.87.68:80 177.106.194.157:62449 TIME_WAIT - tcp 254 0 109.236.87.68:80 201.37.189.181:58179 CLOSE_WAIT 6917/nginx: worker tcp 0 0 109.236.87.68:80 187.87.102.123:35576 ESTABLISHED 6910/nginx: worker tcp 413 0 109.236.87.68:80 187.59.43.60:58013 ESTABLISHED - tcp 8 0 127.0.0.1:9000 127.0.0.1:43251 ESTABLISHED 7562/php-fpm: pool tcp 0 0 109.236.87.68:80 189.102.215.123:62319 ESTABLISHED 6916/nginx: worker tcp 8 0 127.0.0.1:9000 127.0.0.1:43252 ESTABLISHED 8739/php-fpm: pool tcp 1 0 109.236.87.68:80 179.214.156.43:44270 CLOSE_WAIT 6918/nginx: worker tcp 427 0 109.236.87.68:80 187.121.113.6:1722 ESTABLISHED - tcp 0 0 127.0.0.1:43251 127.0.0.1:9000 ESTABLISHED 6915/nginx: worker tcp 0 0 109.236.87.68:80 177.158.208.195:39317 ESTABLISHED 
6910/nginx: worker tcp 0 0 127.0.0.1:9000 127.0.0.1:43200 TIME_WAIT - tcp 386 0 109.236.87.68:80 189.107.79.135:39418 ESTABLISHED - tcp 0 0 127.0.0.1:43232 127.0.0.1:9000 ESTABLISHED 6910/nginx: worker tcp 0 0 109.236.87.68:80 187.74.31.46:21609 TIME_WAIT - tcp 0 0 127.0.0.1:43252 127.0.0.1:9000 ESTABLISHED 6915/nginx: worker tcp 478 0 109.236.87.68:80 187.61.209.90:49385 ESTABLISHED 6916/nginx: worker tcp 387 0 109.236.87.68:80 177.182.185.53:50529 ESTABLISHED - tcp 519 0 109.236.87.68:80 177.18.218.173:56157 ESTABLISHED 6915/nginx: worker tcp 318 0 109.236.87.68:80 189.6.233.162:56531 ESTABLISHED - tcp 1 0 127.0.0.1:43228 127.0.0.1:9000 CLOSE_WAIT 6911/nginx: worker tcp 0 0 109.236.87.68:80 177.23.185.253:12140 ESTABLISHED 6916/nginx: worker tcp 380 0 109.236.87.68:80 200.153.242.79:51682 CLOSE_WAIT - tcp 0 0 109.236.87.68:80 177.34.113.41:55728 TIME_WAIT - tcp 0 0 127.0.0.1:9000 127.0.0.1:43217 FIN_WAIT2 8885/php-fpm: pool tcp 0 0 127.0.0.1:43227 127.0.0.1:9000 ESTABLISHED 6911/nginx: worker tcp 1 0 109.236.87.68:80 187.17.181.143:18560 CLOSE_WAIT 6916/nginx: worker tcp 415 0 109.236.87.68:80 187.39.134.210:50340 ESTABLISHED - tcp 0 0 109.236.87.68:80 85.245.72.240:61665 ESTABLISHED 6911/nginx: worker tcp 0 0 127.0.0.1:43247 127.0.0.1:9000 ESTABLISHED 6915/nginx: worker tcp 1 0 109.236.87.68:80 187.7.114.240:19713 CLOSE_WAIT 6916/nginx: worker tcp 0 0 109.236.87.68:80 186.227.58.106:55657 ESTABLISHED 6916/nginx: worker tcp 402 0 109.236.87.68:80 177.8.7.33:63105 ESTABLISHED - tcp 0 0 109.236.87.68:80 201.90.8.68:41629 ESTABLISHED 6911/nginx: worker tcp 395 0 109.236.87.68:80 177.42.75.77:55690 ESTABLISHED - tcp 1 0 109.236.87.68:80 177.177.43.143:59719 CLOSE_WAIT 6914/nginx: worker tcp 0 0 109.236.87.68:80 187.61.209.90:49389 ESTABLISHED 6911/nginx: worker tcp 0 0 109.236.87.68:80 177.3.241.225:59685 ESTABLISHED 6916/nginx: worker tcp 8 0 127.0.0.1:9000 127.0.0.1:43229 ESTABLISHED 8253/php-fpm: pool tcp 8 0 127.0.0.1:9000 127.0.0.1:43224 ESTABLISHED 7843/php-fpm: pool tcp 8 0 127.0.0.1:9000 127.0.0.1:43230 ESTABLISHED 8981/php-fpm: pool tcp 424 0 109.236.87.68:80 187.114.31.253:52008 ESTABLISHED - tcp 0 0 127.0.0.1:9000 127.0.0.1:43196 TIME_WAIT - tcp 0 0 109.236.87.68:80 177.201.100.148:64246 ESTABLISHED 6910/nginx: worker tcp 0 0 109.236.87.68:80 201.89.72.18:32611 ESTABLISHED 6911/nginx: worker tcp 0 0 109.236.87.68:80 177.206.179.12:64630 TIME_WAIT - tcp 420 0 109.236.87.68:80 189.24.118.25:2348 ESTABLISHED - tcp 394 0 109.236.87.68:80 177.221.10.167:54259 ESTABLISHED - tcp 0 0 127.0.0.1:9000 127.0.0.1:43215 FIN_WAIT2 8676/php-fpm: pool tcp 0 0 127.0.0.1:43237 127.0.0.1:9000 ESTABLISHED 6910/nginx: worker tcp 0 0 109.236.87.68:80 177.101.240.10:22559 TIME_WAIT - tcp 0 0 109.236.87.68:47571 177.98.31.107:32796 TIME_WAIT - tcp 0 0 109.236.87.68:80 177.81.144.122:56657 ESTABLISHED 6910/nginx: worker tcp 0 0 109.236.87.68:80 201.29.129.188:55533 ESTABLISHED 6915/nginx: worker tcp 453 0 109.236.87.68:80 177.38.63.85:50103 ESTABLISHED 6911/nginx: worker tcp 0 0 127.0.0.1:9000 127.0.0.1:43187 TIME_WAIT - tcp 1 0 109.236.87.68:80 126.29.105.192:64907 CLOSE_WAIT 6914/nginx: worker tcp 625 0 127.0.0.1:43217 127.0.0.1:9000 CLOSE_WAIT 6916/nginx: worker tcp 0 0 127.0.0.1:9000 127.0.0.1:43213 TIME_WAIT - tcp 0 0 109.236.87.68:80 177.134.219.207:60546 TIME_WAIT - tcp 0 0 109.236.87.68:80 189.59.161.139:54528 TIME_WAIT - tcp 384 0 109.236.87.68:80 201.91.219.251:59375 CLOSE_WAIT 6912/nginx: worker tcp 498 0 109.236.87.68:80 200.140.18.250:59907 ESTABLISHED 6914/nginx: worker tcp 0 0 
109.236.87.68:80 177.41.171.7:53231 ESTABLISHED 6910/nginx: worker tcp 0 0 109.236.87.68:80 177.19.46.249:60652 TIME_WAIT - tcp 1 0 109.236.87.68:80 187.59.11.132:2144 CLOSE_WAIT 6915/nginx: worker tcp 0 0 127.0.0.1:43240 127.0.0.1:9000 ESTABLISHED 6910/nginx: worker tcp 0 0 109.236.87.68:80 177.103.237.208:54757 ESTABLISHED 6915/nginx: worker tcp 0 0 109.236.87.68:80 150.70.97.121:49325 ESTABLISHED 6918/nginx: worker tcp 398 0 109.236.87.68:80 189.27.219.116:53589 ESTABLISHED - tcp 0 0 109.236.87.68:80 201.8.196.20:12291 ESTABLISHED 6911/nginx: worker tcp 0 0 109.236.87.68:80 177.205.145.111:54923 ESTABLISHED 6910/nginx: worker tcp 0 0 127.0.0.1:43224 127.0.0.1:9000 ESTABLISHED 6911/nginx: worker tcp 392 0 109.236.87.68:80 188.251.53.207:58948 ESTABLISHED - tcp 0 0 109.236.87.68:80 187.61.209.90:49390 ESTABLISHED 6916/nginx: worker tcp 1 0 109.236.87.68:80 177.177.43.143:61683 CLOSE_WAIT 6914/nginx: worker tcp 0 0 109.236.87.68:80 187.52.226.163:49535 TIME_WAIT - tcp 8 0 127.0.0.1:9000 127.0.0.1:43221 ESTABLISHED 8112/php-fpm: pool tcp 0 0 127.0.0.1:43249 127.0.0.1:9000 ESTABLISHED 6915/nginx: worker tcp 401 0 109.236.87.68:80 189.54.26.209:32325 ESTABLISHED - tcp 217 0 127.0.0.1:43219 127.0.0.1:9000 CLOSE_WAIT 6916/nginx: worker tcp 417 0 109.236.87.68:80 2.80.187.137:26628 ESTABLISHED - tcp 0 0 109.236.87.68:80 187.56.247.147:45508 ESTABLISHED 6918/nginx: worker tcp 0 0 127.0.0.1:9000 127.0.0.1:43181 TIME_WAIT - tcp 1 0 109.236.87.68:80 177.177.43.143:64379 CLOSE_WAIT 6914/nginx: worker tcp 442 0 109.236.87.68:80 187.102.249.119:61105 ESTABLISHED - tcp 499 0 109.236.87.68:80 200.140.18.250:59908 CLOSE_WAIT 6911/nginx: worker tcp 396 0 109.236.87.68:80 186.214.192.191:54323 ESTABLISHED - tcp 707 0 109.236.87.68:80 187.56.247.147:45528 ESTABLISHED - tcp 0 0 109.236.87.68:80 177.133.121.2:60460 ESTABLISHED 6915/nginx: worker tcp 453 0 109.236.87.68:80 177.159.91.128:49209 ESTABLISHED - tcp6 0 0 :::21 :::* LISTEN 3538/proftpd: (acce tcp6 0 0 :::22 :::* LISTEN 2873/sshd tcp6 0 0 ::1:25 :::* LISTEN Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240765,240792#msg-240792 From nginx-forum at nginx.us Fri Jul 12 17:22:23 2013 From: nginx-forum at nginx.us (dumorim) Date: Fri, 12 Jul 2013 13:22:23 -0400 Subject: nginx debian 6 64bits In-Reply-To: <8d2a1d2cbb3a62ff489c83d23fffaa93.NginxMailingListEnglish@forum.nginx.org> References: <8d2a1d2cbb3a62ff489c83d23fffaa93.NginxMailingListEnglish@forum.nginx.org> Message-ID: <08998ab7b8ac800cbeee65416b5310d9.NginxMailingListEnglish@forum.nginx.org> ps axu www-data 14168 0.0 0.0 232088 10192 ? S 19:16 0:00 php-fpm: pool w www-data 14169 0.0 0.0 234432 10748 ? S 19:16 0:00 php-fpm: pool w www-data 14171 0.0 0.0 231552 7856 ? S 19:16 0:00 php-fpm: pool w www-data 14172 0.0 0.0 231528 7704 ? S 19:16 0:00 php-fpm: pool w www-data 14173 0.0 0.0 231512 7792 ? S 19:16 0:00 php-fpm: pool w www-data 14174 0.0 0.0 232344 9688 ? S 19:16 0:00 php-fpm: pool w www-data 14175 0.0 0.0 232092 9736 ? S 19:16 0:00 php-fpm: pool w www-data 14177 0.0 0.0 231512 7656 ? S 19:16 0:00 php-fpm: pool w www-data 14178 0.0 0.0 231512 7756 ? S 19:16 0:00 php-fpm: pool w www-data 14179 0.0 0.0 233616 8912 ? S 19:16 0:00 php-fpm: pool w www-data 14181 0.0 0.0 231544 7752 ? S 19:16 0:00 php-fpm: pool w www-data 14220 0.0 0.0 233632 8916 ? S 19:16 0:00 php-fpm: pool w www-data 14221 0.0 0.0 233616 8852 ? S 19:16 0:00 php-fpm: pool w www-data 14222 0.0 0.0 231512 7688 ? S 19:16 0:00 php-fpm: pool w www-data 14223 0.0 0.0 232088 9708 ? 
S 19:16 0:00 php-fpm: pool w www-data 14224 0.0 0.0 231576 7856 ? S 19:16 0:00 php-fpm: pool w www-data 14225 0.0 0.0 232340 10304 ? S 19:16 0:00 php-fpm: pool w www-data 14226 0.0 0.0 231528 7680 ? S 19:16 0:00 php-fpm: pool w www-data 14227 0.0 0.0 231528 7648 ? S 19:16 0:00 php-fpm: pool w www-data 14228 0.0 0.0 231560 7716 ? S 19:16 0:00 php-fpm: pool w www-data 14229 0.0 0.0 231624 7872 ? S 19:16 0:00 php-fpm: pool w www-data 14230 0.0 0.0 232336 9508 ? S 19:16 0:00 php-fpm: pool w www-data 14231 0.0 0.0 231528 7668 ? S 19:16 0:00 php-fpm: pool w www-data 14232 0.0 0.0 232088 9968 ? S 19:16 0:00 php-fpm: pool w www-data 14233 0.0 0.0 231576 7744 ? S 19:16 0:00 php-fpm: pool w www-data 14234 0.0 0.0 233328 8596 ? S 19:16 0:00 php-fpm: pool w www-data 14235 0.0 0.0 231560 7708 ? S 19:16 0:00 php-fpm: pool w www-data 14236 0.0 0.0 232088 9820 ? S 19:16 0:00 php-fpm: pool w www-data 14237 0.0 0.0 232088 9948 ? S 19:16 0:00 php-fpm: pool w www-data 14238 0.0 0.0 231528 7812 ? S 19:16 0:00 php-fpm: pool w www-data 14239 0.0 0.0 233420 12120 ? S 19:16 0:00 php-fpm: pool w www-data 14240 0.0 0.0 231512 7748 ? S 19:16 0:00 php-fpm: pool w www-data 14241 0.0 0.0 232684 10240 ? S 19:16 0:00 php-fpm: pool w www-data 14242 0.0 0.0 231624 7880 ? S 19:16 0:00 php-fpm: pool w www-data 14243 0.0 0.0 232344 9636 ? S 19:16 0:00 php-fpm: pool w www-data 14244 0.0 0.0 231528 7672 ? S 19:16 0:00 php-fpm: pool w www-data 14245 0.0 0.0 231528 7684 ? S 19:16 0:00 php-fpm: pool w www-data 14246 0.0 0.0 232340 10260 ? S 19:16 0:00 php-fpm: pool w www-data 14248 0.0 0.0 231528 7800 ? S 19:16 0:00 php-fpm: pool w www-data 14249 0.0 0.0 231528 7716 ? S 19:16 0:00 php-fpm: pool w www-data 14250 0.0 0.0 232684 10304 ? S 19:16 0:00 php-fpm: pool w www-data 14251 0.0 0.0 231576 7848 ? S 19:16 0:00 php-fpm: pool w www-data 14252 0.0 0.0 234172 10760 ? S 19:16 0:00 php-fpm: pool w www-data 14338 0.0 0.0 233968 11968 ? S 19:16 0:00 php-fpm: pool w www-data 14339 0.0 0.0 232344 9592 ? S 19:16 0:00 php-fpm: pool w www-data 14340 0.0 0.0 231560 7824 ? S 19:16 0:00 php-fpm: pool w www-data 14341 0.0 0.0 231528 7700 ? S 19:16 0:00 php-fpm: pool w www-data 14342 0.0 0.0 232088 10112 ? S 19:16 0:00 php-fpm: pool w www-data 14343 0.0 0.0 231576 9556 ? S 19:16 0:00 php-fpm: pool w www-data 14345 0.0 0.0 232092 9652 ? S 19:16 0:00 php-fpm: pool w www-data 14346 0.0 0.0 232780 11184 ? S 19:16 0:00 php-fpm: pool w www-data 14347 0.0 0.0 231528 7756 ? S 19:16 0:00 php-fpm: pool w www-data 14348 0.0 0.0 231528 7780 ? S 19:16 0:00 php-fpm: pool w www-data 14351 0.0 0.0 231528 7824 ? S 19:16 0:00 php-fpm: pool w www-data 14353 0.0 0.0 232684 10340 ? S 19:16 0:00 php-fpm: pool w www-data 14354 0.0 0.0 233376 11368 ? S 19:16 0:00 php-fpm: pool w www-data 14355 0.0 0.0 232088 10800 ? S 19:16 0:00 php-fpm: pool w www-data 14357 0.0 0.0 231576 7856 ? S 19:16 0:00 php-fpm: pool w www-data 14358 0.0 0.0 232692 11156 ? S 19:16 0:00 php-fpm: pool w www-data 14360 0.0 0.0 232684 10348 ? S 19:16 0:00 php-fpm: pool w www-data 14361 0.0 0.0 231512 7708 ? S 19:16 0:00 php-fpm: pool w www-data 14362 0.0 0.0 231528 7772 ? S 19:16 0:00 php-fpm: pool w www-data 14364 0.0 0.0 231576 7856 ? S 19:16 0:00 php-fpm: pool w www-data 14365 0.0 0.0 232344 10172 ? S 19:16 0:00 php-fpm: pool w www-data 14366 0.0 0.0 232684 10308 ? S 19:16 0:00 php-fpm: pool w www-data 14368 0.0 0.0 231528 7660 ? S 19:16 0:00 php-fpm: pool w www-data 14369 0.0 0.0 231528 7652 ? S 19:16 0:00 php-fpm: pool w www-data 14371 0.0 0.0 232092 9652 ? 
S 19:16 0:00 php-fpm: pool w www-data 14372 0.0 0.0 232684 10356 ? S 19:16 0:00 php-fpm: pool w www-data 14374 0.0 0.0 232744 10632 ? S 19:16 0:00 php-fpm: pool w www-data 14375 0.0 0.0 232684 10956 ? S 19:16 0:00 php-fpm: pool w www-data 14376 0.0 0.0 231592 7756 ? S 19:16 0:00 php-fpm: pool w www-data 14377 0.0 0.0 231512 7756 ? S 19:16 0:00 php-fpm: pool w www-data 14380 0.0 0.0 231576 7752 ? S 19:16 0:00 php-fpm: pool w www-data 14381 0.0 0.0 231544 7768 ? S 19:16 0:00 php-fpm: pool w www-data 14434 0.0 0.0 232340 10256 ? S 19:16 0:00 php-fpm: pool w www-data 14435 0.0 0.0 231528 7688 ? S 19:16 0:00 php-fpm: pool w www-data 14436 0.0 0.0 232084 9692 ? S 19:16 0:00 php-fpm: pool w www-data 14437 0.0 0.0 231576 7856 ? S 19:16 0:00 php-fpm: pool w www-data 14438 0.0 0.0 231512 7760 ? S 19:16 0:00 php-fpm: pool w www-data 14439 0.0 0.0 231536 7688 ? S 19:16 0:00 php-fpm: pool w www-data 14440 0.0 0.0 233672 8884 ? S 19:16 0:00 php-fpm: pool w www-data 14442 0.0 0.0 232328 9776 ? S 19:16 0:00 php-fpm: pool w www-data 14444 0.0 0.0 232684 10248 ? S 19:16 0:00 php-fpm: pool w www-data 14446 0.0 0.0 231528 7668 ? S 19:16 0:00 php-fpm: pool w www-data 14448 0.0 0.0 232088 9768 ? S 19:16 0:00 php-fpm: pool w www-data 14449 0.0 0.0 231576 7856 ? S 19:16 0:00 php-fpm: pool w www-data 14450 0.0 0.0 231592 7872 ? S 19:16 0:00 php-fpm: pool w www-data 14451 0.0 0.0 234432 11532 ? S 19:16 0:00 php-fpm: pool w www-data 14452 0.0 0.0 232084 10576 ? S 19:16 0:00 php-fpm: pool w www-data 14453 0.0 0.0 231528 7652 ? S 19:16 0:00 php-fpm: pool w www-data 14454 0.0 0.0 232328 9800 ? S 19:16 0:00 php-fpm: pool w www-data 14455 0.0 0.0 233632 8788 ? S 19:16 0:00 php-fpm: pool w www-data 14458 0.0 0.0 231552 7700 ? S 19:16 0:00 php-fpm: pool w www-data 14460 0.0 0.0 235008 13040 ? S 19:16 0:00 php-fpm: pool w www-data 14461 0.0 0.0 231656 9524 ? S 19:16 0:00 php-fpm: pool w www-data 14462 0.0 0.0 232344 9968 ? S 19:16 0:00 php-fpm: pool w www-data 14464 0.0 0.0 233760 10332 ? S 19:16 0:00 php-fpm: pool w www-data 14465 0.0 0.0 231512 7772 ? S 19:16 0:00 php-fpm: pool w www-data 14466 0.0 0.0 231528 7736 ? S 19:16 0:00 php-fpm: pool w www-data 14467 0.0 0.0 232328 9676 ? S 19:16 0:00 php-fpm: pool w www-data 14469 0.0 0.0 231528 7676 ? S 19:16 0:00 php-fpm: pool w www-data 14470 0.0 0.0 234444 11312 ? S 19:16 0:00 php-fpm: pool w www-data 14473 0.0 0.0 233632 8872 ? S 19:16 0:00 php-fpm: pool w www-data 14475 0.0 0.0 231576 7828 ? S 19:16 0:00 php-fpm: pool w www-data 14476 0.0 0.0 232824 11472 ? S 19:16 0:00 php-fpm: pool w www-data 14477 0.0 0.0 232684 10536 ? S 19:16 0:00 php-fpm: pool w www-data 14530 0.0 0.0 231544 7756 ? S 19:16 0:00 php-fpm: pool w www-data 14531 0.0 0.0 231512 7896 ? S 19:16 0:00 php-fpm: pool w www-data 14532 0.0 0.0 235912 14596 ? S 19:16 0:00 php-fpm: pool w www-data 14533 0.0 0.0 231552 7828 ? S 19:16 0:00 php-fpm: pool w www-data 14534 0.0 0.0 231544 7692 ? S 19:16 0:00 php-fpm: pool w www-data 14535 0.0 0.0 231540 9100 ? S 19:16 0:00 php-fpm: pool w www-data 14536 0.0 0.0 232308 9812 ? S 19:16 0:00 php-fpm: pool w www-data 14538 0.0 0.0 232340 9704 ? S 19:16 0:00 php-fpm: pool w www-data 14540 0.0 0.0 231512 7752 ? S 19:16 0:00 php-fpm: pool w www-data 14543 0.0 0.0 231528 7836 ? S 19:16 0:00 php-fpm: pool w www-data 14544 0.0 0.0 234772 11320 ? S 19:16 0:00 php-fpm: pool w www-data 14545 0.0 0.0 231512 7748 ? S 19:16 0:00 php-fpm: pool w www-data 14546 0.0 0.0 231544 7716 ? S 19:16 0:00 php-fpm: pool w www-data 14547 0.0 0.0 231688 10368 ? 
S 19:16 0:00 php-fpm: pool w www-data 14549 0.0 0.0 231320 10320 ? S 19:16 0:00 php-fpm: pool w www-data 14550 0.0 0.0 232340 10756 ? S 19:16 0:00 php-fpm: pool w www-data 14551 0.0 0.0 231528 7720 ? S 19:16 0:00 php-fpm: pool w www-data 14552 0.0 0.0 231528 7708 ? S 19:16 0:00 php-fpm: pool w www-data 14553 0.0 0.0 232092 10320 ? S 19:16 0:00 php-fpm: pool w www-data 14555 0.0 0.0 232684 10308 ? S 19:16 0:00 php-fpm: pool w www-data 14558 0.0 0.0 231592 9424 ? S 19:16 0:00 php-fpm: pool w www-data 14559 0.0 0.0 231528 7680 ? S 19:16 0:00 php-fpm: pool w www-data 14560 0.0 0.0 231552 7784 ? S 19:16 0:00 php-fpm: pool w www-data 14561 0.0 0.0 231512 7732 ? S 19:16 0:00 php-fpm: pool w www-data 14563 0.0 0.0 232308 10392 ? S 19:16 0:00 php-fpm: pool w www-data 14564 0.0 0.0 231528 7756 ? S 19:16 0:00 php-fpm: pool w www-data 14565 0.0 0.0 232336 9708 ? S 19:16 0:00 php-fpm: pool w www-data 14566 0.0 0.0 231512 7748 ? S 19:16 0:00 php-fpm: pool w www-data 14567 0.0 0.0 232748 10688 ? S 19:16 0:00 php-fpm: pool w www-data 14570 0.0 0.0 233632 8808 ? S 19:16 0:00 php-fpm: pool w www-data 14571 0.0 0.0 232084 9692 ? S 19:16 0:00 php-fpm: pool w www-data 14572 0.0 0.0 231656 9428 ? S 19:16 0:00 php-fpm: pool w www-data 14586 0.0 0.0 233616 8864 ? S 19:16 0:00 php-fpm: pool w www-data 14587 0.0 0.0 231584 9512 ? S 19:16 0:00 php-fpm: pool w www-data 14588 0.0 0.0 232340 10320 ? S 19:16 0:00 php-fpm: pool w www-data 14589 0.0 0.0 231512 7736 ? S 19:16 0:00 php-fpm: pool w www-data 14590 0.0 0.0 231512 7744 ? S 19:16 0:00 php-fpm: pool w www-data 14591 0.0 0.0 231568 9256 ? S 19:16 0:00 php-fpm: pool w www-data 14592 0.0 0.0 234992 11152 ? S 19:16 0:00 php-fpm: pool w www-data 14593 0.0 0.0 232684 10256 ? S 19:16 0:00 php-fpm: pool w www-data 14594 0.0 0.0 233668 10164 ? S 19:16 0:00 php-fpm: pool w www-data 14595 0.0 0.0 232072 10384 ? S 19:16 0:00 php-fpm: pool w www-data 14596 0.0 0.0 232340 10268 ? S 19:16 0:00 php-fpm: pool w www-data 14597 0.0 0.0 232688 11184 ? S 19:16 0:00 php-fpm: pool w www-data 14598 0.0 0.0 231556 9328 ? S 19:16 0:00 php-fpm: pool w www-data 14599 0.0 0.0 231512 7708 ? S 19:16 0:00 php-fpm: pool w www-data 14600 0.0 0.0 231512 7800 ? S 19:16 0:00 php-fpm: pool w www-data 14601 0.0 0.0 231528 7784 ? S 19:16 0:00 php-fpm: pool w www-data 14602 0.0 0.0 231572 8940 ? S 19:16 0:00 php-fpm: pool w www-data 14603 0.0 0.0 231528 7704 ? S 19:16 0:00 php-fpm: pool w www-data 14604 0.0 0.0 231512 7916 ? S 19:16 0:00 php-fpm: pool w www-data 14605 0.0 0.0 232328 9940 ? S 19:16 0:00 php-fpm: pool w www-data 14612 0.0 0.0 234048 12496 ? S 19:16 0:00 php-fpm: pool w www-data 14613 0.0 0.0 232768 10468 ? S 19:16 0:00 php-fpm: pool w www-data 14614 0.0 0.0 231528 7672 ? S 19:16 0:00 php-fpm: pool w www-data 14623 0.0 0.0 231576 9476 ? S 19:16 0:00 php-fpm: pool w www-data 14624 0.0 0.0 232084 9876 ? S 19:16 0:00 php-fpm: pool w www-data 14625 0.0 0.0 233272 12344 ? S 19:16 0:00 php-fpm: pool w www-data 14626 0.0 0.0 232344 10364 ? S 19:16 0:00 php-fpm: pool w www-data 14627 0.0 0.0 232684 10628 ? S 19:16 0:00 php-fpm: pool w www-data 14628 0.0 0.0 231576 9560 ? S 19:16 0:00 php-fpm: pool w www-data 14641 0.0 0.0 232940 10492 ? S 19:16 0:00 php-fpm: pool w www-data 14642 0.0 0.0 231544 7688 ? S 19:16 0:00 php-fpm: pool w www-data 14643 0.0 0.0 234176 11180 ? S 19:16 0:00 php-fpm: pool w www-data 14644 0.0 0.0 234444 11268 ? S 19:16 0:00 php-fpm: pool w www-data 14657 0.0 0.0 231544 7692 ? S 19:16 0:00 php-fpm: pool w www-data 14658 0.0 0.0 233632 8840 ? 
S 19:16 0:00 php-fpm: pool w www-data 14659 0.0 0.0 232336 9676 ? S 19:16 0:00 php-fpm: pool w www-data 14660 0.0 0.0 232328 9680 ? S 19:16 0:00 php-fpm: pool w www-data 14661 0.0 0.0 231560 7768 ? S 19:16 0:00 php-fpm: pool w www-data 14662 0.0 0.0 231552 7700 ? S 19:16 0:00 php-fpm: pool w www-data 14675 0.0 0.0 231552 7804 ? S 19:17 0:00 php-fpm: pool w www-data 14676 0.0 0.0 231528 7752 ? S 19:17 0:00 php-fpm: pool w www-data 14677 0.0 0.0 231544 7796 ? S 19:17 0:00 php-fpm: pool w www-data 14678 0.0 0.0 231528 7768 ? S 19:17 0:00 php-fpm: pool w www-data 14679 0.0 0.0 232684 10316 ? S 19:17 0:00 php-fpm: pool w www-data 14680 0.0 0.0 231576 8948 ? S 19:17 0:00 php-fpm: pool w www-data 14702 0.0 0.0 232088 9696 ? S 19:17 0:00 php-fpm: pool w www-data 14703 0.0 0.0 234444 10760 ? S 19:17 0:00 php-fpm: pool w www-data 14704 0.0 0.0 231512 7800 ? S 19:17 0:00 php-fpm: pool w www-data 14705 0.0 0.0 231528 7676 ? S 19:17 0:00 php-fpm: pool w www-data 14706 0.0 0.0 231596 9088 ? S 19:17 0:00 php-fpm: pool w www-data 14707 0.0 0.0 232348 9752 ? S 19:17 0:00 php-fpm: pool w www-data 14708 0.0 0.0 231528 7792 ? S 19:17 0:00 php-fpm: pool w www-data 14709 0.0 0.0 232084 9752 ? S 19:17 0:00 php-fpm: pool w www-data 14710 0.0 0.0 232900 10676 ? S 19:17 0:00 php-fpm: pool w www-data 14723 0.0 0.0 231536 7688 ? S 19:17 0:00 php-fpm: pool w www-data 14724 0.0 0.0 232684 10280 ? S 19:17 0:00 php-fpm: pool w www-data 14725 0.0 0.0 231528 7752 ? S 19:17 0:00 php-fpm: pool w www-data 14726 0.0 0.0 231528 7744 ? S 19:17 0:00 php-fpm: pool w www-data 14727 0.0 0.0 232340 10336 ? S 19:17 0:00 php-fpm: pool w www-data 14728 0.0 0.0 232304 9840 ? S 19:17 0:00 php-fpm: pool w www-data 14741 0.0 0.0 231528 7652 ? S 19:17 0:00 php-fpm: pool w www-data 14742 0.0 0.0 232684 10260 ? S 19:17 0:00 php-fpm: pool w www-data 14743 0.0 0.0 231512 7716 ? S 19:17 0:00 php-fpm: pool w www-data 14744 0.0 0.0 231496 7436 ? S 19:17 0:00 php-fpm: pool w www-data 14745 0.0 0.0 231544 9072 ? S 19:17 0:00 php-fpm: pool w www-data 14746 0.0 0.0 232304 9764 ? S 19:17 0:00 php-fpm: pool w www-data 14751 0.0 0.0 233616 8868 ? S 19:17 0:00 php-fpm: pool w www-data 14752 0.0 0.0 231544 7788 ? S 19:17 0:00 php-fpm: pool w www-data 14761 0.0 0.0 231512 7800 ? S 19:17 0:00 php-fpm: pool w www-data 14762 0.0 0.0 232092 9656 ? S 19:17 0:00 php-fpm: pool w www-data 14763 0.0 0.0 231528 7728 ? S 19:17 0:00 php-fpm: pool w www-data 14764 0.0 0.0 234180 10652 ? S 19:17 0:00 php-fpm: pool w www-data 14785 0.0 0.0 234436 10788 ? S 19:17 0:00 php-fpm: pool w www-data 14786 0.0 0.0 231544 7792 ? S 19:17 0:00 php-fpm: pool w www-data 14787 0.0 0.0 231512 7752 ? S 19:17 0:00 php-fpm: pool w www-data 14788 0.0 0.0 234216 12168 ? S 19:17 0:00 php-fpm: pool w www-data 14789 0.0 0.0 231528 7760 ? S 19:17 0:00 php-fpm: pool w root 16427 0.0 0.0 8768 888 pts/0 T 19:22 0:00 grep php root 16681 0.0 0.0 15472 1208 pts/0 R+ 19:23 0:00 ps axu root 24548 0.0 0.0 209564 13956 ? Ss 04:52 0:01 /usr/sbin/apach root 26218 0.0 0.0 70820 3484 ? Ss 18:33 0:00 sshd: root at nott root 26256 0.0 0.0 12628 1004 ? Ss 18:34 0:00 /usr/lib/openss root 29152 0.0 0.0 71024 3712 ? Ss 18:36 0:00 sshd: root at nott root 29154 0.0 0.0 12716 1100 ? 
Ss 18:36 0:00 /usr/lib/openss Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240765,240793#msg-240793 From nginx-forum at nginx.us Fri Jul 12 19:10:24 2013 From: nginx-forum at nginx.us (dumorim) Date: Fri, 12 Jul 2013 15:10:24 -0400 Subject: nginx debian 6 64bits In-Reply-To: <08998ab7b8ac800cbeee65416b5310d9.NginxMailingListEnglish@forum.nginx.org> References: <8d2a1d2cbb3a62ff489c83d23fffaa93.NginxMailingListEnglish@forum.nginx.org> <08998ab7b8ac800cbeee65416b5310d9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3e4a70b44fa2aeb3859bc9f005e08cf9.NginxMailingListEnglish@forum.nginx.org> TCP localhost:9000 (LISTEN) php5-fpm 7820 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7821 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7822 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7823 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7824 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7825 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7826 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7849 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7850 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7851 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7852 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7853 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7854 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7855 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7856 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7857 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7858 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7859 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7860 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7867 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7868 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7869 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7890 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7891 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7892 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7893 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7894 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7895 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7896 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7897 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7898 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7899 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7914 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7915 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7916 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7917 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7918 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7919 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7920 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7939 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7940 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7941 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7942 www-data 0u IPv4 6600 0t0 TCP localhost:9000 
(LISTEN) php5-fpm 7943 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7944 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7945 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7946 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7971 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7972 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7973 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7974 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7975 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7976 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7977 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7978 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7979 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7980 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7981 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7982 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7993 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7994 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7995 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7996 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 7997 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8026 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8027 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8028 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8029 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8030 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8031 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8032 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8033 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8034 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8035 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8036 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8037 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8038 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8039 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8058 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8059 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8060 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8061 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8062 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8063 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8064 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8065 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8066 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8081 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8082 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8083 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8084 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8085 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8086 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8087 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8102 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 
8103 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8104 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8105 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8106 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8107 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8108 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8123 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8124 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8125 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8126 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8127 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8129 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8130 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8150 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8151 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) php5-fpm 8152 www-data 0u IPv4 6600 0t0 TCP localhost:9000 (LISTEN) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240765,240795#msg-240795 From nginx-forum at nginx.us Fri Jul 12 19:14:53 2013 From: nginx-forum at nginx.us (dumorim) Date: Fri, 12 Jul 2013 15:14:53 -0400 Subject: nginx debian 6 64bits In-Reply-To: <3e4a70b44fa2aeb3859bc9f005e08cf9.NginxMailingListEnglish@forum.nginx.org> References: <8d2a1d2cbb3a62ff489c83d23fffaa93.NginxMailingListEnglish@forum.nginx.org> <08998ab7b8ac800cbeee65416b5310d9.NginxMailingListEnglish@forum.nginx.org> <3e4a70b44fa2aeb3859bc9f005e08cf9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <06dd4ad2c7a666a46ffa505aef76c0c8.NginxMailingListEnglish@forum.nginx.org> tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2250/php-fpm.conf) excuse this post so it does not have editing Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240765,240796#msg-240796 From gfoster at realgravity.com Fri Jul 12 19:47:02 2013 From: gfoster at realgravity.com (Gary Foster) Date: Fri, 12 Jul 2013 12:47:02 -0700 Subject: duplicating or forking incoming requests Message-ID: <1BE1D5E4-51BA-4255-A103-CB57D4E96021@realgravity.com> I'm trying to figure out how to accomplish something with nginx and it seems to have me baffled. We use nginx as an endpoint to log incoming events. The default response is a 200, empty body and an entry in the access log in a very specific format. For legacy reasons, this will not change any time soon. Now, in addition to just simply logging this request, we want to forward that request to an internal server to also do something with it (and again, for legacy reasons I can't just make that server do the logging instead). I can get this working fine, except for failure modes? I specifically do not want the original nginx logging affected at all if it can't proxy the request upstream, and in point of fact specifically want to return a 200 in that case also. 
So basically the desired behavior looks like this:

nginx running, proxy running:
  - log a successful incoming GET request to nginx access log
  - forward request to proxy
  - return 200

nginx running, proxy not running (or returns an error):
  - log a successful incoming GET request to nginx access log
  - forward request to proxy
  - proxy returns an error
  - nginx ignores the error and returns a 200

I haven't been able to get to that point, but instead have only been able to get various combinations of "nginx doesn't log anything", "nginx logs everything only when the entire chain is up" and "nginx logs incoming requests fine when the chain is up but logs to the error log when the proxy is down".

Here's the configuration snippets I'm currently using:

location = /events {
    access_log events_access.log events_format;
    expires 1s;
    try_files @proxy @proxy;
    # try_files @proxy /empty.html;
    # try_files @proxy =200;
}

location @proxy {
    proxy_pass http://127.0.0.1:8887;
    proxy_intercept_errors on;
    access_log events_access.log events_format;
    error_page 502 =200 /empty.html;
    proxy_set_header X-Real-IP $remote_addr;
}

Basically, what I want is that it logs every incoming request normally. If it can forward the request to the upstream proxy, it does so after logging it, and if it can't, it simply logs it and returns a 200.

Is this possible and if so how?

Thanks in advance!

-- Gary F.

From mdounin at mdounin.ru Fri Jul 12 20:04:54 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 13 Jul 2013 00:04:54 +0400
Subject: duplicating or forking incoming requests
In-Reply-To: <1BE1D5E4-51BA-4255-A103-CB57D4E96021@realgravity.com>
References: <1BE1D5E4-51BA-4255-A103-CB57D4E96021@realgravity.com>
Message-ID: <20130712200453.GI66479@mdounin.ru>

Hello!

On Fri, Jul 12, 2013 at 12:47:02PM -0700, Gary Foster wrote:

[...]

> Here's the configuration snippets I'm currently using:
>
> location = /events {
>     access_log events_access.log events_format;
>     expires 1s;
>     try_files @proxy @proxy;
>     # try_files @proxy /empty.html;
>     # try_files @proxy =200;

You are trying to use try_files incorrectly. Please read docs at
http://nginx.org/r/try_files.

> }
>
> location @proxy {
>     proxy_pass http://127.0.0.1:8887;
>     proxy_intercept_errors on;
>     access_log events_access.log events_format;
>     error_page 502 =200 /empty.html;
>     proxy_set_header X-Real-IP $remote_addr;
> }
>
> Basically, what I want is that it logs every incoming request
> normally. If it can forward the request to the upstream proxy,
> it does so after logging it, and if it can't, it simply logs it
> and returns a 200.
>
> Is this possible and if so how?

I would recommend something like this:

location = /events {
    access_log events_access.log events_format;
    error_page 502 504 = /empty;
    proxy_pass ...;
}

location = /empty {
    access_log events_access.log events_format;
    return 200 "";
}

Note the above configuration snippet doesn't try to intercept errors returned by upstream servers, but only handles cases when nginx can't reach them and/or an invalid response is returned. If upstream servers are expected to return various errors, proxy_intercept_errors should be used, as well as additional codes in error_page.

--
Maxim Dounin
http://nginx.org/en/donation.html

From ianevans at digitalhit.com Fri Jul 12 20:22:35 2013
From: ianevans at digitalhit.com (Ian M.
Evans) Date: Fri, 12 Jul 2013 16:22:35 -0400 Subject: different fastcgi_cache for bots and humans Message-ID: <98c7132e41711efa228dc7feeb4e9ed4.squirrel@www.digitalhit.com> As mentioned before, I'm tweaking pixabay's version of handling the new Google Image search traffic killer by making their trap URLs more cacheable. Img tags in the html will have ?i appended to the source and those "?i" are removed for bots. I thought I could use nginx's httpsubmodule to strip the ?i for bots, but I'd still like it cacheable, with one version for humans and one for bots. Doing some digging I came across an idea like this: map $http_user_agent $botornot { default 'human'; ~(Googlebot|Bing|other|bit|names) 'bot'; } I'm assuming I could then use $botornot in the fastcgi_cache_key? If so, where would I place it in my current line, which is: fastcgi_cache_key "$scheme$request_method$host$uri?$args"; Thanks. From gfoster at realgravity.com Fri Jul 12 21:01:13 2013 From: gfoster at realgravity.com (Gary Foster) Date: Fri, 12 Jul 2013 14:01:13 -0700 Subject: duplicating or forking incoming requests In-Reply-To: <20130712200453.GI66479@mdounin.ru> References: <1BE1D5E4-51BA-4255-A103-CB57D4E96021@realgravity.com> <20130712200453.GI66479@mdounin.ru> Message-ID: Thanks, that did the trick exactly! Now that I have something that works, I'm off to the docs to figure out where my fundamental misunderstandings were and correct them. Very much appreciated! -- Gary F. On Jul 12, 2013, at 1:04 PM, Maxim Dounin wrote: > Hello! > > On Fri, Jul 12, 2013 at 12:47:02PM -0700, Gary Foster wrote: > > [...] > >> Here's the configuration snippets I'm currently using: >> >> location = /events { >> access_log events_access.log events_format; >> expires 1s; >> try_files @proxy @proxy; >> # try_files @proxy /empty.html; >> # try_files @proxy =200; > > You are trying to use try_files incorrectly. Please read docs at > http://nginx.org/r/try_files. > >> } >> >> location @proxy { >> proxy_pass http://127.0.0.1:8887; >> proxy_intercept_errors on; >> access_log events_access.log events_format; >> error_page 502 =200 /empty.html; >> proxy_set_header X-Real-IP $remote_addr; >> } >> >> Basically, what I want is that if it logs every incoming request >> normally. If it can forward the request to the upstream proxy, >> it does so after logging it, and if it can't, it simply logs it >> and returns a 200. >> >> Is this possible and if so how? > > I would recommend something like this: > > location = /events { > access_log events_access.log events_format; > error_page 502 504 = /empty; > proxy_pass ...; > } > > location = /empty { > access_log events_access.log events_format; > return 200 ""; > } > > Note the above configuration snippet doesn't try to intercept > errors returned by upstream servers, but only handles cases when > nginx can't reach them and/or an invalid response is returned. If > upstream servers are expected to return various errors, > proxy_intercept_errors should be used, as well as additional codes > in error_page. 
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Sat Jul 13 02:08:56 2013
From: nginx-forum at nginx.us (bignginxfan)
Date: Fri, 12 Jul 2013 22:08:56 -0400
Subject: geoip filtering not working
Message-ID: <65602c4bf5c586b2d6b4827b2e3ea10d.NginxMailingListEnglish@forum.nginx.org>

Hello,

I'm trying to figure out why Nginx's geoip module doesn't seem to filter out certain IPs from a banned country. I manually tested the GeoIP.dat using 'geoiplookup' against a few IPs that successfully connected but were in a banned country. GeoIP.dat was fine, it wasn't the problem. Maybe it's a config problem? Wondering if you guys can help. I have the following lines in the config:

geoip_country /usr/share/GeoIP/GeoIP.dat;

server {
    ...
    if ($geoip_country_code = CN) {
        return 444;
    }
    ...
}

Please help!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240802,240802#msg-240802

From kate at elide.org Sat Jul 13 10:19:51 2013
From: kate at elide.org (Kate F)
Date: Sat, 13 Jul 2013 12:19:51 +0200
Subject: EXSLT func:function not registered for XSLT filter module
Message-ID:

Hi,

I'm trying to use EXSLT's func:function with nginx's xslt filter module. The effect I think I'm seeing is that my functions are seemingly ignored. I made a test XSLT stylesheet:

iona% cat xsl/fish.xsl
iona% xsltproc xsl/fish.xsl vhost/blog.libfsm.org/index.xhtml5
123
iona%

I would expect my there to give the same result as under xsltproc. But running the same under nginx gives:

xmlXPathCompOpEval: function fish not found
XPath error : Unregistered function
xmlXPathCompiledEval: evaluation failed
runtime error: file /home/kate/svn/www/xsl/fish.xsl line 19 element value-of
XPath evaluation returned no result.

Looking at ngx_http_xslt_filter_module.c I see exsltRegisterAll() is called, which is what should register libexslt's handler for func:function and friends:

#if (NGX_HAVE_EXSLT)
    exsltRegisterAll();
#endif

I know NGX_HAVE_EXSLT is defined because other EXSLT functions (such as things in the date: and str: namespaces) work fine. I'm using nginx 1.4.1, which is linked to the same libexslt as my xsltproc.

Any suggestions, please?

--
Kate

From sandeepvreddy at outlook.com Sat Jul 13 12:18:22 2013
From: sandeepvreddy at outlook.com (Sandeep L)
Date: Sat, 13 Jul 2013 17:48:22 +0530
Subject: ReWrite rule help
Message-ID:

Hi,

I am trying to write the following rewrite rule.

request: www.example.com/abc/xyz?a=1&b=2

It should get the response from the following URL:
www.example.com:8080/xyz?a=1&b=2

Can someone help me to do this?

Thanks,
Sandeep

From draxter65 at gmail.com Sat Jul 13 13:40:14 2013
From: draxter65 at gmail.com (Michael)
Date: Sat, 13 Jul 2013 14:40:14 +0100
Subject: Custom 401 page not displaying or not prompting for credentials
Message-ID:

Hello,

I was trying to set a custom error 401 page on my Nginx server version 1.5.1 using the following methods:

error_page 401 /401.html
By itself it still displays the default site.

error_page 401 http://example.com/401.html
Goes straight to the custom error page without a chance to authenticate.

error_page 401 /401.html;
location = /401.html {
    root G:/Files;
    allow all;
}
Also goes straight to the error page.
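For reference alongside the methods above: when the whole server is protected by auth_basic (plus allow/deny with satisfy any), the custom 401 page is normally subject to the same access checks, so nginx cannot serve it once authentication fails. A minimal sketch of the usual pattern follows — the G:/Files root is taken from the configuration below, while the allow all and auth_basic off lines are assumptions, and none of this has been tested against this exact setup:

error_page 401 /401.html;

location = /401.html {
    # let the error page itself bypass the access restrictions
    allow all;
    auth_basic off;
    root G:/Files;
}

Because error_page points at a local URI here, the response keeps its 401 status and the browser can still prompt for credentials; pointing error_page at an absolute http:// URL instead makes nginx answer with a redirect, which is why the second method above jumps straight to the custom page.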
My entire config file: worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; server { satisfy any; allow 192.168.0.0/24; deny all; auth_basic "Please login"; auth_basic_user_file C:\password.txt; listen 80; server_name localhost; root G:/Files; location / { index index.html index.php /_h5ai/server/php/index.php; error_page 401 /401.html; location = /401.html { root G:/Files; } } location ~ \.php$ { include fastcgi.conf; fastcgi_pass 127.0.0.1:9000; } } } If anyone has any idea what I'm doing wrong, let me know. Kind regards, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Sat Jul 13 16:07:49 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 13 Jul 2013 21:37:49 +0530 Subject: ReWrite rule help In-Reply-To: References: Message-ID: You probably want http://wiki.nginx.org/HttpProxyModule On Sat, Jul 13, 2013 at 5:48 PM, Sandeep L wrote: > Hi I am trying to write following rewrite url > > request: www.example.com/abc/xyz?a=1&b=2 > > It should get response from following url > www.example.com:8080/xyz?a=1&b=2 > > can some one help me to do this. > > Thanks, > Sandeep > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* GNUSYS -------------- next part -------------- An HTML attachment was scrubbed... URL: From oceanofweb at gmail.com Sat Jul 13 21:59:18 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 03:29:18 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address Message-ID: Hi I just installed Nginx on my new VPS on centOS. But while accessing the phpinfo.php, I am getting belwo exception " connect() to 127.0.0.1:80 failed (99: Cannot assign requested address) while connecting to upstream, client: 127.0.0.1," Can anyone pls see to it -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajan at noppix.com Sat Jul 13 22:12:28 2013 From: sajan at noppix.com (Sajan Parikh) Date: Sat, 13 Jul 2013 17:12:28 -0500 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: Message-ID: <51E1D0CC.3030605@noppix.com> You most likely already have something running on port 80. Perhaps another instance of nginx or Apache or something. Run netstat -plunt and show us the output. Sajan Parikh /Owner, Noppix LLC/ e: sajan at noppix.com o: (563) 726-0371 c: (563) 447-0822 Noppix LLC Logo On 07/13/2013 04:59 PM, Atul Bansal wrote: > Hi > > I just installed Nginx on my new VPS on centOS. 
> But while accessing the phpinfo.php, I am getting belwo exception > > " connect() to 127.0.0.1:80 failed (99: Cannot > assign requested address) while connecting to upstream, client: > 127.0.0.1," > > Can anyone pls see to it > > -- > > Thanks' > Atul Bansal > TechOfWeb.com - Android Rooting > OceanOfWeb.com - Funny News > WordpressThemeIt.com - Best Wordpress > Themes > http://twitter.com/techofweb > http://facebook.com/oceanofweb > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NewNoppixEmailLogo.png Type: image/png Size: 7312 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4473 bytes Desc: S/MIME Cryptographic Signature URL: From oceanofweb at gmail.com Sat Jul 13 22:27:59 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 03:57:59 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <51E1D0CC.3030605@noppix.com> References: <51E1D0CC.3030605@noppix.com> Message-ID: Thanks for stopping-by. mailed you in seperate email. Pls see On Sun, Jul 14, 2013 at 3:42 AM, Sajan Parikh wrote: > You most likely already have something running on port 80. Perhaps > another instance of nginx or Apache or something. > > Run > > netstat -plunt and show us the output. > > Sajan Parikh > *Owner, Noppix LLC* > > e: sajan at noppix.com > o: (563) 726-0371 > c: (563) 447-0822 > > [image: Noppix LLC Logo] > On 07/13/2013 04:59 PM, Atul Bansal wrote: > > Hi > > I just installed Nginx on my new VPS on centOS. > But while accessing the phpinfo.php, I am getting belwo exception > > " connect() to 127.0.0.1:80 failed (99: Cannot assign requested address) > while connecting to upstream, client: 127.0.0.1," > > Can anyone pls see to it > > -- > > Thanks' > Atul Bansal > TechOfWeb.com - Android Rooting > OceanOfWeb.com - Funny News > WordpressThemeIt.com - Best Wordpress > Themes > http://twitter.com/techofweb > http://facebook.com/oceanofweb > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajan at noppix.com Sat Jul 13 22:39:19 2013 From: sajan at noppix.com (Sajan Parikh) Date: Sat, 13 Jul 2013 17:39:19 -0500 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> Message-ID: <51E1D717.6050306@noppix.com> No issue, as you can see from that output there is already an nginx process listening on port 80, which means nothing else (even another nginx) can use port 80. You need to kill that instance of nginx, which will clear port 80. Then you can start a new instance on port 80. 
Sajan Parikh /Owner, Noppix LLC/ e: sajan at noppix.com o: (563) 726-0371 c: (563) 447-0822 Noppix LLC Logo On 07/13/2013 05:27 PM, Atul Bansal wrote: > Thanks for stopping-by. > mailed you in seperate email. Pls see > > > On Sun, Jul 14, 2013 at 3:42 AM, Sajan Parikh > wrote: > > You most likely already have something running on port 80. > Perhaps another instance of nginx or Apache or something. > > Run > > netstat -plunt and show us the output. > > Sajan Parikh > /Owner, Noppix LLC/ > > e: sajan at noppix.com > o: (563) 726-0371 > c: (563) 447-0822 > > Noppix LLC Logo > On 07/13/2013 04:59 PM, Atul Bansal wrote: >> Hi >> >> I just installed Nginx on my new VPS on centOS. >> But while accessing the phpinfo.php, I am getting belwo exception >> >> " connect() to 127.0.0.1:80 failed (99: >> Cannot assign requested address) while connecting to upstream, >> client: 127.0.0.1," >> >> Can anyone pls see to it >> >> -- >> >> Thanks' >> Atul Bansal >> TechOfWeb.com - Android Rooting >> OceanOfWeb.com - Funny News >> WordpressThemeIt.com - Best >> Wordpress Themes >> http://twitter.com/techofweb >> http://facebook.com/oceanofweb >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > > Thanks' > Atul Bansal > TechOfWeb.com - Android Rooting > OceanOfWeb.com - Funny News > WordpressThemeIt.com - Best Wordpress > Themes > http://twitter.com/techofweb > http://facebook.com/oceanofweb > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NewNoppixEmailLogo.png Type: image/png Size: 7312 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4473 bytes Desc: S/MIME Cryptographic Signature URL: From oceanofweb at gmail.com Sat Jul 13 22:45:31 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 04:15:31 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <51E1D717.6050306@noppix.com> References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> Message-ID: how pls.... Actually I just bought VPS server and just now installed all required softwares.... On Sun, Jul 14, 2013 at 4:09 AM, Sajan Parikh wrote: > No issue, as you can see from that output there is already an nginx > process listening on port 80, which means nothing else (even another nginx) > can use port 80. > > You need to kill that instance of nginx, which will clear port 80. Then > you can start a new instance on port 80. > > > Sajan Parikh > *Owner, Noppix LLC* > > e: sajan at noppix.com > o: (563) 726-0371 > c: (563) 447-0822 > > [image: Noppix LLC Logo] > On 07/13/2013 05:27 PM, Atul Bansal wrote: > > Thanks for stopping-by. > mailed you in seperate email. Pls see > > > On Sun, Jul 14, 2013 at 3:42 AM, Sajan Parikh wrote: > >> You most likely already have something running on port 80. Perhaps >> another instance of nginx or Apache or something. >> >> Run >> >> netstat -plunt and show us the output. 
>> >> Sajan Parikh >> *Owner, Noppix LLC* >> >> e: sajan at noppix.com >> o: (563) 726-0371 >> c: (563) 447-0822 >> >> [image: Noppix LLC Logo] >> On 07/13/2013 04:59 PM, Atul Bansal wrote: >> >> Hi >> >> I just installed Nginx on my new VPS on centOS. >> But while accessing the phpinfo.php, I am getting belwo exception >> >> " connect() to 127.0.0.1:80 failed (99: Cannot assign requested >> address) while connecting to upstream, client: 127.0.0.1," >> >> Can anyone pls see to it >> >> -- >> >> Thanks' >> Atul Bansal >> TechOfWeb.com - Android Rooting >> OceanOfWeb.com - Funny News >> WordpressThemeIt.com - Best Wordpress >> Themes >> http://twitter.com/techofweb >> http://facebook.com/oceanofweb >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > > Thanks' > Atul Bansal > TechOfWeb.com - Android Rooting > OceanOfWeb.com - Funny News > WordpressThemeIt.com - Best Wordpress > Themes > http://twitter.com/techofweb > http://facebook.com/oceanofweb > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NewNoppixEmailLogo.png Type: image/png Size: 7312 bytes Desc: not available URL: From oceanofweb at gmail.com Sat Jul 13 22:48:28 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 04:18:28 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <51E1D717.6050306@noppix.com> References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> Message-ID: I dont think another nginx is already running as when I stopped my nginx instance, i cannot see niginx running using the mentioned command... The log error that i am getting is when I try to run any php file in my browser. However, for static html files, nginx is servig them fine Pls help On Sun, Jul 14, 2013 at 4:09 AM, Sajan Parikh wrote: > No issue, as you can see from that output there is already an nginx > process listening on port 80, which means nothing else (even another nginx) > can use port 80. > > You need to kill that instance of nginx, which will clear port 80. Then > you can start a new instance on port 80. > > > Sajan Parikh > *Owner, Noppix LLC* > > e: sajan at noppix.com > o: (563) 726-0371 > c: (563) 447-0822 > > [image: Noppix LLC Logo] > On 07/13/2013 05:27 PM, Atul Bansal wrote: > > Thanks for stopping-by. > mailed you in seperate email. Pls see > > > On Sun, Jul 14, 2013 at 3:42 AM, Sajan Parikh wrote: > >> You most likely already have something running on port 80. Perhaps >> another instance of nginx or Apache or something. >> >> Run >> >> netstat -plunt and show us the output. 
>> >> Sajan Parikh >> *Owner, Noppix LLC* >> >> e: sajan at noppix.com >> o: (563) 726-0371 >> c: (563) 447-0822 >> >> [image: Noppix LLC Logo] >> On 07/13/2013 04:59 PM, Atul Bansal wrote: >> >> Hi >> >> I just installed Nginx on my new VPS on centOS. >> But while accessing the phpinfo.php, I am getting belwo exception >> >> " connect() to 127.0.0.1:80 failed (99: Cannot assign requested >> address) while connecting to upstream, client: 127.0.0.1," >> >> Can anyone pls see to it >> >> -- >> >> Thanks' >> Atul Bansal >> TechOfWeb.com - Android Rooting >> OceanOfWeb.com - Funny News >> WordpressThemeIt.com - Best Wordpress >> Themes >> http://twitter.com/techofweb >> http://facebook.com/oceanofweb >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > > Thanks' > Atul Bansal > TechOfWeb.com - Android Rooting > OceanOfWeb.com - Funny News > WordpressThemeIt.com - Best Wordpress > Themes > http://twitter.com/techofweb > http://facebook.com/oceanofweb > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NewNoppixEmailLogo.png Type: image/png Size: 7312 bytes Desc: not available URL: From scott_ribe at elevated-dev.com Sat Jul 13 22:52:38 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Sat, 13 Jul 2013 16:52:38 -0600 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> Message-ID: <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> On Jul 13, 2013, at 4:48 PM, Atul Bansal wrote: > The log error that i am getting is when I try to run any php file in my browser. Are you trying to pass requests to PHP over port 80? -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From lists at ruby-forum.com Sat Jul 13 22:53:27 2013 From: lists at ruby-forum.com (Atul B.) Date: Sun, 14 Jul 2013 00:53:27 +0200 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: Message-ID: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> I dont think another nginx is already running as when I stopped my nginx instance, i cannot see niginx running using the mentioned command... The log error that i am getting is when I try to run any php file in my browser. However, for static html files, nginx is servig them fine -- Posted via http://www.ruby-forum.com/. 
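A note for this thread: "connect() to 127.0.0.1:80 failed (99: Cannot assign requested address) while connecting to upstream" means nginx itself is the client that could not open yet another connection to 127.0.0.1:80 (note that the client shown in that log line is also 127.0.0.1). The default.conf posted later in the thread appears to have the sample block that proxies location ~ \.php$ to http://127.0.0.1 uncommented, so every PHP request is sent back to nginx on port 80 and loops until no local port can be assigned. The more common arrangement is to hand .php requests straight to php-fpm over FastCGI; a minimal sketch, assuming php-fpm listens on 127.0.0.1:9000 (as the FastCGI block in that same default.conf expects) and reusing its /usr/share/nginx/html root, untested against this exact setup:

location ~ \.php$ {
    root /usr/share/nginx/html;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    # assumed here: point SCRIPT_FILENAME at the real file under the root,
    # instead of the stock /scripts$fastcgi_script_name example value
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Only one location ~ \.php$ block should stay active: regex locations are matched in order of appearance, so if the proxy_pass http://127.0.0.1 block comes first it keeps handling the PHP requests and the loop remains.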
From oceanofweb at gmail.com Sat Jul 13 22:54:35 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 04:24:35 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> Message-ID: ya On Sun, Jul 14, 2013 at 4:22 AM, Scott Ribe wrote: > On Jul 13, 2013, at 4:48 PM, Atul Bansal wrote: > > > The log error that i am getting is when I try to run any php file in my > browser. > > Are you trying to pass requests to PHP over port 80? > > -- > Scott Ribe > scott_ribe at elevated-dev.com > http://www.elevated-dev.com/ > (303) 722-0567 voice > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott_ribe at elevated-dev.com Sat Jul 13 22:57:38 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Sat, 13 Jul 2013 16:57:38 -0600 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> Message-ID: <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> From nginx to something upstream? (That's what I meant.) On Jul 13, 2013, at 4:54 PM, Atul Bansal wrote: > ya > > > On Sun, Jul 14, 2013 at 4:22 AM, Scott Ribe wrote: > On Jul 13, 2013, at 4:48 PM, Atul Bansal wrote: > > > The log error that i am getting is when I try to run any php file in my browser. > > Are you trying to pass requests to PHP over port 80? > > -- > Scott Ribe > scott_ribe at elevated-dev.com > http://www.elevated-dev.com/ > (303) 722-0567 voice > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > Thanks' > Atul Bansal > TechOfWeb.com - Android Rooting > OceanOfWeb.com - Funny News > WordpressThemeIt.com - Best Wordpress Themes > http://twitter.com/techofweb > http://facebook.com/oceanofweb > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From oceanofweb at gmail.com Sat Jul 13 23:01:50 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 04:31:50 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> Message-ID: Actually, I just bought a new unmanaged VPS. I installed php, nginx etc... I increased worker_connections, worker_rlimit_nofile, worker_processes as it was giving me some "low worker_connections" issue. After I increased these limits, that issue resolved but now this issue coming Googled and implemented some changes based on that research but no fruitful result. 
Stull getting the error as mentioned in subject On Sun, Jul 14, 2013 at 4:27 AM, Scott Ribe wrote: > From nginx to something upstream? (That's what I meant.) > > On Jul 13, 2013, at 4:54 PM, Atul Bansal wrote: > > > ya > > > > > > On Sun, Jul 14, 2013 at 4:22 AM, Scott Ribe > wrote: > > On Jul 13, 2013, at 4:48 PM, Atul Bansal wrote: > > > > > The log error that i am getting is when I try to run any php file in > my browser. > > > > Are you trying to pass requests to PHP over port 80? > > > > -- > > Scott Ribe > > scott_ribe at elevated-dev.com > > http://www.elevated-dev.com/ > > (303) 722-0567 voice > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > -- > > > > Thanks' > > Atul Bansal > > TechOfWeb.com - Android Rooting > > OceanOfWeb.com - Funny News > > WordpressThemeIt.com - Best Wordpress Themes > > http://twitter.com/techofweb > > http://facebook.com/oceanofweb > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Scott Ribe > scott_ribe at elevated-dev.com > http://www.elevated-dev.com/ > (303) 722-0567 voice > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott_ribe at elevated-dev.com Sat Jul 13 23:09:00 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Sat, 13 Jul 2013 17:09:00 -0600 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> Message-ID: Are you getting requests in nginx over port 80, then trying to pass them to some PHP process over port 80? On Jul 13, 2013, at 5:01 PM, Atul Bansal wrote: > Actually, I just bought a new unmanaged VPS. I installed php, nginx etc... > > I increased worker_connections, worker_rlimit_nofile, worker_processes as it was giving me some "low worker_connections" issue. > > After I increased these limits, that issue resolved but now this issue coming > > Googled and implemented some changes based on that research but no fruitful result. > > Stull getting the error as mentioned in subject > > > On Sun, Jul 14, 2013 at 4:27 AM, Scott Ribe wrote: > From nginx to something upstream? (That's what I meant.) > > On Jul 13, 2013, at 4:54 PM, Atul Bansal wrote: > > > ya > > > > > > On Sun, Jul 14, 2013 at 4:22 AM, Scott Ribe wrote: > > On Jul 13, 2013, at 4:48 PM, Atul Bansal wrote: > > > > > The log error that i am getting is when I try to run any php file in my browser. > > > > Are you trying to pass requests to PHP over port 80? 
> > > > -- > > Scott Ribe > > scott_ribe at elevated-dev.com > > http://www.elevated-dev.com/ > > (303) 722-0567 voice > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > -- > > > > Thanks' > > Atul Bansal > > TechOfWeb.com - Android Rooting > > OceanOfWeb.com - Funny News > > WordpressThemeIt.com - Best Wordpress Themes > > http://twitter.com/techofweb > > http://facebook.com/oceanofweb > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Scott Ribe > scott_ribe at elevated-dev.com > http://www.elevated-dev.com/ > (303) 722-0567 voice > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > Thanks' > Atul Bansal > TechOfWeb.com - Android Rooting > OceanOfWeb.com - Funny News > WordpressThemeIt.com - Best Wordpress Themes > http://twitter.com/techofweb > http://facebook.com/oceanofweb > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From oceanofweb at gmail.com Sat Jul 13 23:12:58 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 04:42:58 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> Message-ID: I dont know what i am doing :( As I said, i did what i mentioned above and now I just need to test what I am doing.... I referred google for setup.. After installing php mysql nginx... I just echoed some text and it gives me this error in log files On Sun, Jul 14, 2013 at 4:39 AM, Scott Ribe wrote: > Are you getting requests in nginx over port 80, then trying to pass them > to some PHP process over port 80? > > On Jul 13, 2013, at 5:01 PM, Atul Bansal wrote: > > > Actually, I just bought a new unmanaged VPS. I installed php, nginx > etc... > > > > I increased worker_connections, worker_rlimit_nofile, worker_processes > as it was giving me some "low worker_connections" issue. > > > > After I increased these limits, that issue resolved but now this issue > coming > > > > Googled and implemented some changes based on that research but no > fruitful result. > > > > Stull getting the error as mentioned in subject > > > > > > On Sun, Jul 14, 2013 at 4:27 AM, Scott Ribe > wrote: > > From nginx to something upstream? (That's what I meant.) > > > > On Jul 13, 2013, at 4:54 PM, Atul Bansal wrote: > > > > > ya > > > > > > > > > On Sun, Jul 14, 2013 at 4:22 AM, Scott Ribe < > scott_ribe at elevated-dev.com> wrote: > > > On Jul 13, 2013, at 4:48 PM, Atul Bansal wrote: > > > > > > > The log error that i am getting is when I try to run any php file in > my browser. > > > > > > Are you trying to pass requests to PHP over port 80? 
> > > > > > -- > > > Scott Ribe > > > scott_ribe at elevated-dev.com > > > http://www.elevated-dev.com/ > > > (303) 722-0567 voice > > > > > > > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > -- > > > > > > Thanks' > > > Atul Bansal > > > TechOfWeb.com - Android Rooting > > > OceanOfWeb.com - Funny News > > > WordpressThemeIt.com - Best Wordpress Themes > > > http://twitter.com/techofweb > > > http://facebook.com/oceanofweb > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > Scott Ribe > > scott_ribe at elevated-dev.com > > http://www.elevated-dev.com/ > > (303) 722-0567 voice > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > -- > > > > Thanks' > > Atul Bansal > > TechOfWeb.com - Android Rooting > > OceanOfWeb.com - Funny News > > WordpressThemeIt.com - Best Wordpress Themes > > http://twitter.com/techofweb > > http://facebook.com/oceanofweb > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Scott Ribe > scott_ribe at elevated-dev.com > http://www.elevated-dev.com/ > (303) 722-0567 voice > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott_ribe at elevated-dev.com Sun Jul 14 00:26:45 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Sat, 13 Jul 2013 18:26:45 -0600 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> Message-ID: <33FB50E2-E051-4819-B56D-B5869B712241@elevated-dev.com> On Jul 13, 2013, at 5:12 PM, Atul Bansal wrote: > I dont know what i am doing :( Everybody's got to start from 0 some time... > As I said, i did what i mentioned above and now I just need to test what I am doing.... I referred google for setup.. I think you need to include your config file in your next message. > After installing php mysql nginx... I just echoed some text and it gives me this error in log files You need to be more clear about what you're trying to do and how, step-by-step. (There are lots of ways to do things, maybe more than you know and that's why you're assuming you know what we mean by "run php" or "echo text"...) 
-- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From oceanofweb at gmail.com Sun Jul 14 03:21:52 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Sun, 14 Jul 2013 08:51:52 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <33FB50E2-E051-4819-B56D-B5869B712241@elevated-dev.com> References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> <33FB50E2-E051-4819-B56D-B5869B712241@elevated-dev.com> Message-ID: *httpd.conf* # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, in addition to the default. See also the # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # #Listen 12.34.56.78:80 Listen [::]:80 default_server ipv6only=on; ============================================================================ *default.conf* # # The default server # server { listen 80 default_server; server_name domain.com www.domain.com; #charset koi8-r; #access_log logs/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm index.php; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { proxy_pass http://127.0.0.1; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } =============================================================================== *nginx.conf* # For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes 4; # set open fd limit to 30000 worker_rlimit_nofile 30000; error_log /var/log/nginx/error.log; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /var/run/nginx.pid; events { worker_connections 63000; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # Load config files from the /etc/nginx/conf.d directory # The default server is in conf.d/default.conf include /etc/nginx/conf.d/*.conf; } ============================================= On Sun, Jul 14, 2013 at 5:56 AM, Scott Ribe wrote: > On Jul 13, 2013, at 5:12 PM, Atul Bansal wrote: > > > I dont know what i am doing :( > > Everybody's got to start from 0 some time... > > > As I said, i did what i mentioned above and now I just need to test what > I am doing.... I referred google for setup.. > > I think you need to include your config file in your next message. > > > After installing php mysql nginx... 
I just echoed some text and it gives > me this error in log files > > You need to be more clear about what you're trying to do and how, > step-by-step. (There are lots of ways to do things, maybe more than you > know and that's why you're assuming you know what we mean by "run php" or > "echo text"...) > > > > -- > Scott Ribe > scott_ribe at elevated-dev.com > http://www.elevated-dev.com/ > (303) 722-0567 voice > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks' Atul Bansal TechOfWeb.com - Android Rooting OceanOfWeb.com - Funny News WordpressThemeIt.com - Best Wordpress Themes http://twitter.com/techofweb http://facebook.com/oceanofweb -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jul 14 03:43:01 2013 From: nginx-forum at nginx.us (pwrlove) Date: Sat, 13 Jul 2013 23:43:01 -0400 Subject: Setting up nginx as Visual Studio 2010 project In-Reply-To: <1979d5146e9ab6c19013841c788b3bb1.NginxMailingListEnglish@forum.nginx.org> References: <1979d5146e9ab6c19013841c788b3bb1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <30c947514885130584670a9e176c8c3e.NginxMailingListEnglish@forum.nginx.org> Hi, no problem, but need to build it with mingw at least once. try it as follows: 1. you have to get 4 auto config files from http://nginx.org/en/docs/windows.html procedures. - ngx_auto_config.h (detailed configuration option info' at header topmost) - ngx_auto_headers.h - ngx_modules.c - ngx_pch.c 2. create vs 2010 new project and attach sources (core and modules etc...) - refer to objs/Makefile (mingw result). 3. build enjoy. - this is only visual studio 2010 project settings problems (you can't miss it). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227198,240822#msg-240822 From emailgrant at gmail.com Sun Jul 14 11:44:53 2013 From: emailgrant at gmail.com (Grant) Date: Sun, 14 Jul 2013 04:44:53 -0700 Subject: Strange log file behavior Message-ID: I noticed that most of my rotated nginx log files are empty (0 bytes). My only access_log directive is in nginx.conf: access_log /var/log/nginx/localhost.access_log combined; Also nginx is currently logging to /var/log/nginx/localhost.access_log.1 instead of localhost.access_log. Does anyone know why these things are happening? - Grant From nginx-forum at nginx.us Sun Jul 14 12:30:38 2013 From: nginx-forum at nginx.us (jordanmoreira57) Date: Sun, 14 Jul 2013 08:30:38 -0400 Subject: X-Accel-Redirect without internal Message-ID: <975e4d9fd47d0ba1e2b57420d58b6a54.NginxMailingListEnglish@forum.nginx.org> Hello, My problem is very simple, I'd like to change from php readfile to x-accel-redirect. I know exactly how to do it, but I have a problem: My files are organized like that: /home/ID(id of directory on mysql)/ .mp3 .xml and .zip/.rar I would like to serve the .zip or .rar files by php, but .xml and .mp3 needs to be allowed from accessing directly. When I put /home/ID as internal, isn't possible access the .xml or .mp3 files. What I need to do to have the xml and mp3 files being accessed directly and zip/rar by x-accel-redirect ON THE SAME directory? Any help is appreciated, thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240827,240827#msg-240827 From nginx-forum at nginx.us Sun Jul 14 14:23:32 2013 From: nginx-forum at nginx.us (shawnxzhou) Date: Sun, 14 Jul 2013 10:23:32 -0400 Subject: How to log POST body data? 
Message-ID: <823948af0fa9f42497fcfa21d05d95fb.NginxMailingListEnglish@forum.nginx.org> I'm trying to use $request_body but get '-' in my log file for this field here is my configure file, is there sth wrong or the $request_body has other deps to work? http { log_format client '$remote_addr - $remote_user $request_time $upstream_response_time ' '[$time_local] "$request" $status $body_bytes_sent $request_body "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; ...... server { ...... location = /c.gif { empty_gif; access_log logs/uaa_access.log client; } ...... } } I'm using linux command 'curl -d name=xxxx myip/my_location' to fire a POST request, and just get '-' for $request_body field. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240828,240828#msg-240828 From mdounin at mdounin.ru Sun Jul 14 18:21:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 14 Jul 2013 22:21:29 +0400 Subject: Strange log file behavior In-Reply-To: References: Message-ID: <20130714182129.GJ66479@mdounin.ru> Hello! On Sun, Jul 14, 2013 at 04:44:53AM -0700, Grant wrote: > I noticed that most of my rotated nginx log files are empty (0 bytes). > > My only access_log directive is in nginx.conf: > > access_log /var/log/nginx/localhost.access_log combined; > > Also nginx is currently logging to > /var/log/nginx/localhost.access_log.1 instead of localhost.access_log. > > Does anyone know why these things are happening? This usually happens if someone don't ask nginx to reopen log files after a rotation. See here for details: http://nginx.org/en/docs/control.html#logs -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sun Jul 14 18:41:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 14 Jul 2013 22:41:21 +0400 Subject: X-Accel-Redirect without internal In-Reply-To: <975e4d9fd47d0ba1e2b57420d58b6a54.NginxMailingListEnglish@forum.nginx.org> References: <975e4d9fd47d0ba1e2b57420d58b6a54.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130714184121.GK66479@mdounin.ru> Hello! On Sun, Jul 14, 2013 at 08:30:38AM -0400, jordanmoreira57 wrote: > Hello, > > My problem is very simple, I'd like to change from php readfile to > x-accel-redirect. I know exactly how to do it, but I have a problem: > > My files are organized like that: > /home/ID(id of directory on mysql)/ .mp3 .xml and .zip/.rar > > I would like to serve the .zip or .rar files by php, but .xml and .mp3 needs > to be allowed from accessing directly. When I put /home/ID as internal, > isn't possible access the .xml or .mp3 files. > > What I need to do to have the xml and mp3 files being accessed directly and > zip/rar by x-accel-redirect ON THE SAME directory? Try something like this: location /directory/ { location ~ \.(xml|mp3)$ { # accessible } location ~ \.(zip|rar)$ { internal; } } -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sun Jul 14 18:54:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 14 Jul 2013 22:54:36 +0400 Subject: Custom 401 page not displaying or not prompting for credentials In-Reply-To: References: Message-ID: <20130714185435.GL66479@mdounin.ru> Hello! 
On Sat, Jul 13, 2013 at 02:40:14PM +0100, Michael wrote:

> Hello, I was trying to set a custom error 401 page on my Nginx server
> version 1.5.1 using the following methods:
>
> error_page 401 /401.html
> By itself it still displays the default site
>
> error_page 401 http://example.com/401.html
> Goes straight to the custom error page without a chance to authenticate
>
> error_page 401 /401.html;
> location = /401.html
> {
>     root G:/Files;
>     allow all;
> Also goes straight to the error page.
>
> My entire config file:
>
> worker_processes 1;
>
> events
> {
>     worker_connections 1024;
> }
>
> http
> {
>     include mime.types;
>     default_type application/octet-stream;
>     sendfile on;
>
>     server
>     {
>         satisfy any;
>         allow 192.168.0.0/24;
>         deny all;
>         auth_basic "Please login";
>         auth_basic_user_file C:\password.txt;
>         listen 80;
>         server_name localhost;
>         root G:/Files;
>
>         location /
>         {
>             index index.html index.php /_h5ai/server/php/index.php;
>             error_page 401 /401.html;
>             location = /401.html
>             {
>                 root G:/Files;
>             }
>         }
>         location ~ \.php$
>         {
>             include fastcgi.conf;
>             fastcgi_pass 127.0.0.1:9000;
>         }
>     }
> }
>
> If anyone has any idea what I'm doing wrong, let me know.

Access to /401.html requires authentication, which prevents the
error page configured from being returned. Try adding

    auth_basic off;

into location /401.html.

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From nginx-forum at nginx.us  Sun Jul 14 20:07:56 2013
From: nginx-forum at nginx.us (jordanmoreira57)
Date: Sun, 14 Jul 2013 16:07:56 -0400
Subject: X-Accel-Redirect without internal
In-Reply-To: <20130714184121.GK66479@mdounin.ru>
References: <20130714184121.GK66479@mdounin.ru>
Message-ID: <3da63ebe6c7cdab396c341e7e4a72dae.NginxMailingListEnglish@forum.nginx.org>

I tried something like this:

default.conf:

location / {
    root /home/;
    index index.html index.htm index.php;

    location ~ \.(zip|rar)$ {
        internal;
    }

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

teste.php: 

Apparently the file was accessed but the download didn't start; this is what was shown in the browser:

"PK ... WESLEY SAFADAO & BANDA GAROTA SAFADA 2013 - FORRICO 2013 by WWW.ANDRECD.COM/01 - WWW.ANDRECD.COM - Wesley Safadao & Banda Garota Safada no Forrico 2013.mp3 [raw ZIP data rendered as text]"

References: <20130714184121.GK66479@mdounin.ru>
 <3da63ebe6c7cdab396c341e7e4a72dae.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Everything appears to be as you asked for. Did you perhaps forget your
"Content-Disposition" header?
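If the PHP side only issues the X-Accel-Redirect header, the Content-Disposition header mentioned here can also be added on the nginx side. A minimal sketch for the internal location (the plain "attachment" value is an assumption; a filename parameter can be added if needed):

    location ~ \.(zip|rar)$ {
        internal;
        # ask the browser to download the response instead of rendering it inline
        add_header Content-Disposition "attachment";
    }

Alternatively, the PHP script can send its own Content-Disposition header before the X-Accel-Redirect header.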
Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From draxter65 at gmail.com Sun Jul 14 22:33:15 2013 From: draxter65 at gmail.com (Michael) Date: Sun, 14 Jul 2013 23:33:15 +0100 Subject: Custom 401 page not displaying or not prompting for credentials In-Reply-To: <20130714185435.GL66479@mdounin.ru> References: <20130714185435.GL66479@mdounin.ru> Message-ID: Added auth_basic off; as so: location = /401.html { auth_basic off; root G:/Files; } returns error 403 and 'access forbidden by rule' error. So then I added allow all; like this: location = /401.html { allow all; auth_basic off; root G:/Files; } This takes the user straight to the custom error 401 page without authentication. The full config is: worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; server { satisfy any; allow 192.168.0.0/24; deny all; auth_basic "Please login"; auth_basic_user_file C:\password.txt; listen 80; server_name localhost; root G:/Files; location / { index index.html index.php /_h5ai/server/php/index.php; error_page 401 /401.html; location = /401.html { allow all; auth_basic off; root G:/Files; } } location ~ \.php$ { include fastcgi.conf; fastcgi_pass 127.0.0.1:9000; } } } On 14 July 2013 19:54, Maxim Dounin wrote: > Hello! > > On Sat, Jul 13, 2013 at 02:40:14PM +0100, Michael wrote: > > > Hello, I was trying to set a custom error 401 page on my Nginx server > > version 1.5.1 using the following methods: > > > > error_page 401 /401.html > > By itself it still displays the default site > > > > error_page 401 http://example.com/401.html > > Goes straight to the custom error page without a chance to authenticate > > > > error_page 401 /401.html; > > location = /401.html > > { > > root G:/Files; > > allow all; > > Also goes straight to the error page. > > > > > > > > My entire config file: > > > > worker_processes 1; > > > > events > > { > > worker_connections 1024; > > } > > > > http > > { > > include mime.types; > > default_type application/octet-stream; > > sendfile on; > > > > server > > { > > satisfy any; > > allow 192.168.0.0/24; > > deny all; > > auth_basic "Please login"; > > auth_basic_user_file C:\password.txt; > > listen 80; > > server_name localhost; > > root G:/Files; > > > > location / > > { > > index index.html index.php /_h5ai/server/php/index.php; > > error_page 401 /401.html; > > location = /401.html > > { > > root G:/Files; > > } > > } > > location ~ \.php$ > > { > > include fastcgi.conf; > > fastcgi_pass 127.0.0.1:9000; > > } > > } > > } > > > > > > If anyone has any idea what I'm doing wrong, let me know. > > Access to /401.html requires authentication, which prevents the > error page configured from being returned. Try adding > > auth_basic off; > > into location /401.html. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From ruilue.zengrl at alibaba-inc.com  Mon Jul 15 03:18:34 2013
From: ruilue.zengrl at alibaba-inc.com (曾瑞略)
Date: Mon, 15 Jul 2013 11:18:34 +0800
Subject: nginx proxy_pass
In-Reply-To: CADJYcZGkjL+gAJ-PmdkmYrh_kTPgdWZFOb7__qLpg46BSAus_g@mail.gmail.com
References: <20130714185435.GL66479@mdounin.ru>, CADJYcZGkjL+gAJ-PmdkmYrh_kTPgdWZFOb7__qLpg46BSAus_g@mail.gmail.com
Message-ID: <804d2eea-1091-4669-8a5f-99a11fb7f53f@alibaba-inc.com>

Hello, nginx1 uses proxy_pass to reach nginx2. When the HTTP method is GET everything is OK, but when the method is POST, nginx2 has no access log entry and nginx1's response status is 302. Help please, thank you!!

nginx-config details:

nginx1-config:

server {
    listen 80;
    server_name ocs.aliyun.test;
    access_log "pipe:/usr/sbin/cronolog /system/logs/ocs.%Y-%m-%d.log" aliyun_com;
    location /widget/ {
        proxy_pass http://openwidget.aliyun.test/;
        proxy_redirect http://openwidget.aliyun.test/ /widget/;
    }
}

nginx2-config:

server {
    listen 80;
    server_name openwidget.aliyun.test;
    access_log "pipe:/usr/sbin/cronolog /system/logs/openwidget.%Y-%m-%d.log" aliyun_com;
    location / {
        proxy_pass http://127.0.0.1:38782/openWidget/;
        proxy_redirect http://openwidget.aliyun.test/openWidget/ /;
    }
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From john at disqus.com  Mon Jul 15 03:50:46 2013
From: john at disqus.com (John Watson)
Date: Sun, 14 Jul 2013 20:50:46 -0700
Subject: How to log POST body data?
In-Reply-To: <823948af0fa9f42497fcfa21d05d95fb.NginxMailingListEnglish@forum.nginx.org>
References: <823948af0fa9f42497fcfa21d05d95fb.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi,

Try adding this directive to your location:

    echo_read_request_body;

It needs this 3rd party module though:
http://wiki.nginx.org/HttpEchoModule#echo_read_request_body

On Sun, Jul 14, 2013 at 7:23 AM, shawnxzhou wrote:

> I'm trying to use $request_body but get '-' in my log file for this field
> here is my configure file, is there sth wrong or the $request_body has other
> deps to work?
>
> http {
>     log_format client '$remote_addr - $remote_user $request_time $upstream_response_time '
>                       '[$time_local] "$request" $status $body_bytes_sent $request_body "$http_referer" '
>                       '"$http_user_agent" "$http_x_forwarded_for"';
>     ......
>     server {
>         ......
>         location = /c.gif {
>             empty_gif;
>             access_log logs/uaa_access.log client;
>         }
>         ......
>     }
> }
>
> I'm using linux command 'curl -d name=xxxx myip/my_location' to fire a POST
> request, and just get '-' for $request_body field.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,240828,240828#msg-240828
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Mon Jul 15 04:46:41 2013
From: nginx-forum at nginx.us (abstein2)
Date: Mon, 15 Jul 2013 00:46:41 -0400
Subject: Proxy returns 504 then blocks next connections
In-Reply-To: <0665dd31e7caa96a15ee36ac6f2e43cd.NginxMailingListEnglish@forum.nginx.org>
References: <6142134014a5aa1bb4bda64a3a54dd82.NginxMailingListEnglish@forum.nginx.org> <0665dd31e7caa96a15ee36ac6f2e43cd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8c7ccce3c9e1feb70003e9961cb28869.NginxMailingListEnglish@forum.nginx.org>

The script is an ASPX script and, to my knowledge, it doesn't use sessions.
I don't control the script, but I can't duplicate the behavior when running against the proxied server. It only occurs when going through the proxy. I don't believe sessions are the issue. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240766,240842#msg-240842 From nginx-forum at nginx.us Mon Jul 15 05:33:50 2013 From: nginx-forum at nginx.us (shawnxzhou) Date: Mon, 15 Jul 2013 01:33:50 -0400 Subject: How to log POST body data? In-Reply-To: References: Message-ID: yes, it works! thanks a lot Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240828,240843#msg-240843 From nginx-forum at nginx.us Mon Jul 15 06:00:44 2013 From: nginx-forum at nginx.us (shawnxzhou) Date: Mon, 15 Jul 2013 02:00:44 -0400 Subject: How to log POST body data? In-Reply-To: References: Message-ID: <727e70bf1a2e3d9cf559039e8887c04e.NginxMailingListEnglish@forum.nginx.org> Followups, I used echo_read_request_body; and the empty_gif directive seems not work again, they shoud be exclusive in a location? John Watson Wrote: ------------------------------------------------------- > Hi, > > Trying adding this directive to your location: > echo_read_request_body; > > It needs this 3rd party module though: > http://wiki.nginx.org/HttpEchoModule#echo_read_request_body > > > On Sun, Jul 14, 2013 at 7:23 AM, shawnxzhou > wrote: > > > I'm trying to use $request_body but get '-' in my log file for this > field > > here is my configure file, is there sth wrong or the $request_body > has > > other > > deps to work? > > > > http { > > log_format client '$remote_addr - $remote_user $request_time > > $upstream_response_time ' > > '[$time_local] "$request" $status $body_bytes_sent > > $request_body "$http_referer" ' > > '"$http_user_agent" "$http_x_forwarded_for"'; > > ...... > > server { > > ...... > > location = /c.gif { > > empty_gif; > > access_log logs/uaa_access.log client; > > } > > ...... > > } > > } > > > > I'm using linux command 'curl -d name=xxxx myip/my_location' to fire > a POST > > request, and just get '-' for $request_body field. > > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,240828,240828#msg-240828 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240828,240844#msg-240844 From nginx-forum at nginx.us Mon Jul 15 09:34:46 2013 From: nginx-forum at nginx.us (alexandernst) Date: Mon, 15 Jul 2013 05:34:46 -0400 Subject: HTTP Upload Progress bug Message-ID: <82fbdd367f23f98a4243d23527556c48.NginxMailingListEnglish@forum.nginx.org> I'm using NGINX 1.4.1 with the HTTP Upload Progress module but I'm getting some strange result when GETing the current progress. The uploaded size is always 0, while the total size is the size of the file I'm uploading. I know that the file is being uploaded because of wireshark, and also because once the file is completely uploaded, the HTTP Upload Progress module returns both the uploaded size and the total size correctly and then returns "done". (And also, the file is uploaded correctly and I'm able to see it on the server). What could be the problem? bw, I cloned the module from git this morning so this is the really latest version. 
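The report doesn't include the relevant config, so for context this is roughly what the third-party upload-progress module expects (directive names come from that module's documentation; the zone name, paths and upstream are placeholders, not taken from this post):

    http {
        upload_progress proxied 1m;

        server {
            location /upload {
                proxy_pass http://backend;
                # per the module's docs, track_uploads should be the last directive in the location
                track_uploads proxied 30s;
            }

            location ^~ /progress {
                report_uploads proxied;
            }
        }
    }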
Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240845,240845#msg-240845 From emailgrant at gmail.com Mon Jul 15 09:49:47 2013 From: emailgrant at gmail.com (Grant) Date: Mon, 15 Jul 2013 02:49:47 -0700 Subject: Strange log file behavior In-Reply-To: <20130714182129.GJ66479@mdounin.ru> References: <20130714182129.GJ66479@mdounin.ru> Message-ID: >> I noticed that most of my rotated nginx log files are empty (0 bytes). >> >> My only access_log directive is in nginx.conf: >> >> access_log /var/log/nginx/localhost.access_log combined; >> >> Also nginx is currently logging to >> /var/log/nginx/localhost.access_log.1 instead of localhost.access_log. >> >> Does anyone know why these things are happening? > > This usually happens if someone don't ask nginx to reopen log > files after a rotation. See here for details: > > http://nginx.org/en/docs/control.html#logs I use logrotate: /var/log/nginx/*_log { missingok sharedscripts postrotate test -r /run/nginx.pid && kill -USR1 `cat /run/nginx.pid` endscript } Does it look OK? - Grant From jdmls at yahoo.com Mon Jul 15 10:31:24 2013 From: jdmls at yahoo.com (John Doe) Date: Mon, 15 Jul 2013 03:31:24 -0700 (PDT) Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> <33FB50E2-E051-4819-B56D-B5869B712241@elevated-dev.com> Message-ID: <1373884284.98314.YahooMailNeo@web121606.mail.ne1.yahoo.com> From: Atul Bansal >httpd.conf >Listen [::]:80 default_server ipv6only=on; > >default.conf >? ? listen ? ? ? 80 default_server; >? ? ? ? proxy_pass ? http://127.0.0.1; >? ? ? ? fastcgi_pass ? 127.0.0.1:9000; So, apache listens on ipv6 :80 and nginx tries to listen on ipv4/v6 :80 and pass to ipv4 :80 ? I think something is wrong... Do you need apache at all? Can you describe your processing chain? => nginx => php-cgi ? JD From mdounin at mdounin.ru Mon Jul 15 10:36:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Jul 2013 14:36:40 +0400 Subject: Strange log file behavior In-Reply-To: References: <20130714182129.GJ66479@mdounin.ru> Message-ID: <20130715103640.GR66479@mdounin.ru> Hello! On Mon, Jul 15, 2013 at 02:49:47AM -0700, Grant wrote: > >> I noticed that most of my rotated nginx log files are empty (0 bytes). > >> > >> My only access_log directive is in nginx.conf: > >> > >> access_log /var/log/nginx/localhost.access_log combined; > >> > >> Also nginx is currently logging to > >> /var/log/nginx/localhost.access_log.1 instead of localhost.access_log. > >> > >> Does anyone know why these things are happening? > > > > This usually happens if someone don't ask nginx to reopen log > > files after a rotation. See here for details: > > > > http://nginx.org/en/docs/control.html#logs > > I use logrotate: > > /var/log/nginx/*_log { > missingok > sharedscripts > postrotate > test -r /run/nginx.pid && kill -USR1 `cat /run/nginx.pid` > endscript > } > > Does it look OK? Make sure paths used in postrotate are correct. 
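A quick way to check that by hand, assuming the pid file really is /run/nginx.pid as in the snippet above:

    ls -l /run/nginx.pid /var/run/nginx.pid   # see which path actually exists
    kill -USR1 `cat /run/nginx.pid`           # ask nginx to reopen its log files
    ls -l /var/log/nginx/                     # localhost.access_log should start growing again

If the pid file lives somewhere else (wherever the 'pid' directive in nginx.conf points), the 'test -r' guard in the postrotate script silently skips the kill and nginx keeps writing to the rotated .1 file.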
-- Maxim Dounin http://nginx.org/en/donation.html From emailgrant at gmail.com Mon Jul 15 10:43:26 2013 From: emailgrant at gmail.com (Grant) Date: Mon, 15 Jul 2013 03:43:26 -0700 Subject: Strange log file behavior In-Reply-To: <20130715103640.GR66479@mdounin.ru> References: <20130714182129.GJ66479@mdounin.ru> <20130715103640.GR66479@mdounin.ru> Message-ID: >> >> I noticed that most of my rotated nginx log files are empty (0 bytes). >> >> >> >> My only access_log directive is in nginx.conf: >> >> >> >> access_log /var/log/nginx/localhost.access_log combined; >> >> >> >> Also nginx is currently logging to >> >> /var/log/nginx/localhost.access_log.1 instead of localhost.access_log. >> >> >> >> Does anyone know why these things are happening? >> > >> > This usually happens if someone don't ask nginx to reopen log >> > files after a rotation. See here for details: >> > >> > http://nginx.org/en/docs/control.html#logs >> >> I use logrotate: >> >> /var/log/nginx/*_log { >> missingok >> sharedscripts >> postrotate >> test -r /run/nginx.pid && kill -USR1 `cat /run/nginx.pid` >> endscript >> } >> >> Does it look OK? > > Make sure paths used in postrotate are correct. The paths are correct. I made some tweaks and I'll report back tomorrow on how it goes. Any other ideas? - Grant From oceanofweb at gmail.com Mon Jul 15 10:45:13 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Mon, 15 Jul 2013 16:15:13 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <1373884284.98314.YahooMailNeo@web121606.mail.ne1.yahoo.com> References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> <33FB50E2-E051-4819-B56D-B5869B712241@elevated-dev.com> <1373884284.98314.YahooMailNeo@web121606.mail.ne1.yahoo.com> Message-ID: I want to run wordpress sites on this server. The processing sud be fast and efficient. Websites sud be able to open up speedly. So, pls suggest me anything here. Thanks On Jul 15, 2013 4:01 PM, "John Doe" wrote: > From: Atul Bansal > > >httpd.conf > >Listen [::]:80 default_server ipv6only=on; > > > >default.conf > > listen 80 default_server; > > proxy_pass http://127.0.0.1; > > fastcgi_pass 127.0.0.1:9000; > > > So, apache listens on ipv6 :80 and > > nginx tries to listen on ipv4/v6 :80 and pass to ipv4 :80 ? > I think something is wrong... > Do you need apache at all? > Can you describe your processing chain? > => nginx => php-cgi ? > > > JD > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jul 15 11:05:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Jul 2013 15:05:11 +0400 Subject: Proxy returns 504 then blocks next connections In-Reply-To: <8c7ccce3c9e1feb70003e9961cb28869.NginxMailingListEnglish@forum.nginx.org> References: <6142134014a5aa1bb4bda64a3a54dd82.NginxMailingListEnglish@forum.nginx.org> <0665dd31e7caa96a15ee36ac6f2e43cd.NginxMailingListEnglish@forum.nginx.org> <8c7ccce3c9e1feb70003e9961cb28869.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130715110511.GV66479@mdounin.ru> Hello! On Mon, Jul 15, 2013 at 12:46:41AM -0400, abstein2 wrote: > The script is an ASPX script and, to my knowledge, it doesn't use sessions. 
> I don't control the script, but I can't duplicate the behavior when running > against the proxied server. It only occurs when going through the proxy. I > don't believe sessions are the issue. For me it looks like the problem is a concurent connections limit in a browser. Not sure why it's triggered though, but probably there are other connections to the same host open. If you are using Chrome, try looking into chrome://net-internals/ page, it might be helpful. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Jul 15 11:11:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Jul 2013 15:11:56 +0400 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> Message-ID: <20130715111156.GW66479@mdounin.ru> Hello! On Sun, Jul 14, 2013 at 12:53:27AM +0200, Atul B. wrote: > I dont think another nginx is already running as when I stopped my nginx > instance, i cannot see niginx running using the mentioned command... > The log error that i am getting is when I try to run any php file in my > browser. > However, for static html files, nginx is servig them fine The message suggests you've either run out of local sockets/ports, or connections are administratively prohibited. You may try unix sockets to see if it helps. -- Maxim Dounin http://nginx.org/en/donation.html From ru at nginx.com Mon Jul 15 11:44:35 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 15 Jul 2013 15:44:35 +0400 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> Message-ID: <20130715114435.GG15133@lo0.su> On Sun, Jul 14, 2013 at 12:53:27AM +0200, Atul B. wrote: > I dont think another nginx is already running as when I stopped my nginx > instance, i cannot see niginx running using the mentioned command... > The log error that i am getting is when I try to run any php file in my > browser. > However, for static html files, nginx is servig them fine The error you see is when a local port range gets exhausted for the (src=127.0.0.1, dst=127.0.0.1:80) triple. This is because there's no Apache listening on 127.0.0.1:80, so nginx proxies request to itself in an endless loop. From oceanofweb at gmail.com Mon Jul 15 11:51:47 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Mon, 15 Jul 2013 17:21:47 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <20130715114435.GG15133@lo0.su> References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> <20130715114435.GG15133@lo0.su> Message-ID: Ok... So please suggest solution here.. I need to setup sites on nginx so tht processing of sites sud b fast Much Thanks Thanks' Atul Bansal www.techofweb.com www.wordpressthemeit.com www.oceanofweb.com On Jul 15, 2013 5:15 PM, "Ruslan Ermilov" wrote: > On Sun, Jul 14, 2013 at 12:53:27AM +0200, Atul B. wrote: > > I dont think another nginx is already running as when I stopped my nginx > > instance, i cannot see niginx running using the mentioned command... > > The log error that i am getting is when I try to run any php file in my > > browser. > > However, for static html files, nginx is servig them fine > > The error you see is when a local port range gets exhausted > for the (src=127.0.0.1, dst=127.0.0.1:80) triple. 
> > This is because there's no Apache listening on 127.0.0.1:80, > so nginx proxies request to itself in an endless loop. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor.sverkos at googlemail.com Mon Jul 15 12:52:49 2013 From: igor.sverkos at googlemail.com (Igor Sverkos) Date: Mon, 15 Jul 2013 14:52:49 +0200 Subject: Strange log file behavior In-Reply-To: References: <20130714182129.GJ66479@mdounin.ru> <20130715103640.GR66479@mdounin.ru> Message-ID: Hi, > Any other ideas? Not sure if relevant, but in Gentoo's bug tracker are some open bugs regarding current logrotate versions: https://bugs.gentoo.org/show_bug.cgi?id=476202 https://bugs.gentoo.org/show_bug.cgi?id=474572 https://bugs.gentoo.org/show_bug.cgi?id=476720 Seems to be upstream bugs (not Gentoo specific). So maybe you are affected, too? Which logrotate version do you use? -- Regards, Igor From oceanofweb at gmail.com Mon Jul 15 13:11:59 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Mon, 15 Jul 2013 18:41:59 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> <20130715114435.GG15133@lo0.su> Message-ID: Hi Any helps here Thanks' Atul Bansal www.techofweb.com www.wordpressthemeit.com www.oceanofweb.com On Jul 15, 2013 5:21 PM, "Atul Bansal" wrote: > Ok... So please suggest solution here.. > I need to setup sites on nginx so tht processing of sites sud b fast > > Much Thanks > > Thanks' > Atul Bansal > www.techofweb.com > www.wordpressthemeit.com > www.oceanofweb.com > On Jul 15, 2013 5:15 PM, "Ruslan Ermilov" wrote: > >> On Sun, Jul 14, 2013 at 12:53:27AM +0200, Atul B. wrote: >> > I dont think another nginx is already running as when I stopped my nginx >> > instance, i cannot see niginx running using the mentioned command... >> > The log error that i am getting is when I try to run any php file in my >> > browser. >> > However, for static html files, nginx is servig them fine >> >> The error you see is when a local port range gets exhausted >> for the (src=127.0.0.1, dst=127.0.0.1:80) triple. >> >> This is because there's no Apache listening on 127.0.0.1:80, >> so nginx proxies request to itself in an endless loop. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian at navarro.at Mon Jul 15 13:17:17 2013 From: adrian at navarro.at (=?utf-8?B?QWRyacOhbiBOYXZhcnJv?=) Date: Mon, 15 Jul 2013 13:17:17 +0000 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> <20130715114435.GG15133@lo0.su> Message-ID: <1334981346-1373894242-cardhu_decombobulator_blackberry.rim.net-1711041105-@b3.c1.bise7.blackberry> Remove the proxy_pass and its section altogether. Your config has duplicate routing: first a proxy_pass (apache? But to itself) then a fastcgi route. If you want pure nginx, remove the location..part with the proxy pass and reboot nginx. 
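Concretely, with the default.conf posted earlier in this thread that leaves a single .php location along these lines (the root value is copied from the 'location /' block of that file, and the SCRIPT_FILENAME line is an assumption about where the scripts actually live):

    # pass PHP scripts to the FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        root           /usr/share/nginx/html;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

The '/scripts$fastcgi_script_name' value in the posted file is left over from the stock example config and almost certainly does not match the real document root.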
Sent from my BlackBerry -----Original Message----- From: Atul Bansal Sender: nginx-bounces at nginx.orgDate: Mon, 15 Jul 2013 18:41:59 To: Reply-To: nginx at nginx.org Subject: Re: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Jul 15 13:20:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Jul 2013 17:20:52 +0400 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> <20130715114435.GG15133@lo0.su> Message-ID: <20130715132052.GD66479@mdounin.ru> Hello! On Mon, Jul 15, 2013 at 06:41:59PM +0530, Atul Bansal wrote: > Any helps here You should actually start Apache on your server - to do so on Linux you have to configure it to listen on a port different from one nginx is listening on (or to configure nginx to listen on an ip address instead of *). -- Maxim Dounin http://nginx.org/en/donation.html From jim at ohlste.in Mon Jul 15 13:25:29 2013 From: jim at ohlste.in (Jim Ohlstein) Date: Mon, 15 Jul 2013 09:25:29 -0400 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> <20130715114435.GG15133@lo0.su> Message-ID: <51E3F849.7090104@ohlste.in> On 7/15/13 7:51 AM, Atul Bansal wrote: > Ok... So please suggest solution here.. > I need to setup sites on nginx so tht processing of sites sud b fast > I mean no disrespect, but the solution is to get a competent sysadmin. This can all be fixed in about 5-10 minutes. Your configuration is *completely* wrong, you're trying to run two web servers which are competing for ::80, you have a fastcgi_pass and a proxy_pass statement in the same location, and the list goes on. People are giving you advice that you clearly do not understand. First thing you need to do is get Apache stopped (or at least not listening on port 80), and prevent it from restarting. If you think Apache is not running, run this command and show the output: # ps aux | grep httpd Next, use Google to learn how to stop Apache and prevent it from restarting on each reboot. After that, again go back to Google and learn how to write a proper nginx.conf (generically and for WordPress). Then decide how you're going to handle PHP requests and set up the proper daemon to run at boot. Same thing with nginx. Now I've given you a roadmap. This is not an "nginx for dummies" mailing list. You need to do some of the work yourself. -- Jim Ohlstein From oceanofweb at gmail.com Mon Jul 15 13:32:49 2013 From: oceanofweb at gmail.com (Atul Bansal) Date: Mon, 15 Jul 2013 19:02:49 +0530 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <20130715132052.GD66479@mdounin.ru> References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> <20130715114435.GG15133@lo0.su> <20130715132052.GD66479@mdounin.ru> Message-ID: Thanks.. So, which option should I go here: 1. Install only Nginx and remove Apache completely 2. Http request will be from Apache to Nginx 3. Http request will be from Nginx to Apache. 4. Any other best way Thanks' Atul Bansal www.techofweb.com www.wordpressthemeit.com www.oceanofweb.com On Jul 15, 2013 6:51 PM, "Maxim Dounin" wrote: > Hello! 
> > On Mon, Jul 15, 2013 at 06:41:59PM +0530, Atul Bansal wrote: > > > Any helps here > > You should actually start Apache on your server - to do so on > Linux you have to configure it to listen on a port different from > one nginx is listening on (or to configure nginx to listen on an ip > address instead of *). > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian at navarro.at Mon Jul 15 13:44:48 2013 From: adrian at navarro.at (=?utf-8?B?QWRyacOhbiBOYXZhcnJv?=) Date: Mon, 15 Jul 2013 13:44:48 +0000 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <0eaa0ac002b659db3492775b681cd7ec@ruby-forum.com> <20130715114435.GG15133@lo0.su> <20130715132052.GD66479@mdounin.ru> Message-ID: <1551373538-1373895891-cardhu_decombobulator_blackberry.rim.net-1163863674-@b3.c1.bise7.blackberry> Given how lost you are I'd say go with Apache altogether, and forget nginx, as you'll have to deal with wp-specific config later on. And you seem pretty much lost. But if you want to stick to nginx, go with a correct configuration: forget apache and remove the proxy_pass. And if you want to use nginx+apache , then learn using the wiki and learn how it should work and why things are done that way. You don't seem to understand even half of it :( Sent from my BlackBerry -----Original Message----- From: Atul Bansal Sender: nginx-bounces at nginx.orgDate: Mon, 15 Jul 2013 19:02:49 To: Reply-To: nginx at nginx.org Subject: Re: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jdmls at yahoo.com Mon Jul 15 13:53:37 2013 From: jdmls at yahoo.com (John Doe) Date: Mon, 15 Jul 2013 06:53:37 -0700 (PDT) Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: References: <51E1D0CC.3030605@noppix.com> <51E1D717.6050306@noppix.com> <1C9A70B8-AC2A-4D24-9BD8-A2F789D2B60B@elevated-dev.com> <11EA7132-76B7-4A26-B745-59BC91EEE34A@elevated-dev.com> <33FB50E2-E051-4819-B56D-B5869B712241@elevated-dev.com> <1373884284.98314.YahooMailNeo@web121606.mail.ne1.yahoo.com> Message-ID: <1373896417.6668.YahooMailNeo@web121604.mail.ne1.yahoo.com> From: Atul Bansal >On Jul 15, 2013 4:01 PM, "John Doe" wrote: >>Do you need apache at all? >>Can you describe your processing chain? >>=> nginx => php-cgi ? >I want to run wordpress sites on this server. The processing sud be fast and efficient. Websites sud be able to open up speedly. So, pls suggest me anything here. So... your setup is... "sud be fast and efficient. Websites sud be able to open up speedly"... You are not describing your setup. Who receives the requests? nginx? apache? To which one are the requests forwarded? nginx? apache+php? php-cgi? Why apache? Just repeating that it "sud be fast" is not helpful at all... If you do not describe your current setup, people won't be able (or willing) to help you. 
JD From amarnath.p at globaledgesoft.com Mon Jul 15 14:46:42 2013 From: amarnath.p at globaledgesoft.com (P Amarnath) Date: Mon, 15 Jul 2013 20:16:42 +0530 Subject: Reg: SCGI application running error in NGINX server Message-ID: <51E40B52.2050803@globaledgesoft.com> Hi, I got this error in log file while running SCGI client in address 172.16.8.180 and nginx running on 172.16.8.143. /Error/: *upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.16.8.180, server: 172.16.8.143, request: "POST / HTTP/1.1", upstream: "scgi://172.16.8.180:9000", host: "172.16.8.143"* Please do the needful. -- With Best Regards, P Amarnath, IPNG - Cloud Storage, Global Edge Software Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Mon Jul 15 15:06:25 2013 From: emailgrant at gmail.com (Grant) Date: Mon, 15 Jul 2013 08:06:25 -0700 Subject: Strange log file behavior In-Reply-To: References: <20130714182129.GJ66479@mdounin.ru> <20130715103640.GR66479@mdounin.ru> Message-ID: >> Any other ideas? > > Not sure if relevant, but in Gentoo's bug tracker are some open bugs > regarding current logrotate versions: > > https://bugs.gentoo.org/show_bug.cgi?id=476202 > https://bugs.gentoo.org/show_bug.cgi?id=474572 > https://bugs.gentoo.org/show_bug.cgi?id=476720 > > Seems to be upstream bugs (not Gentoo specific). So maybe you are > affected, too? Which logrotate version do you use? I'm on Gentoo also and I think you nailed it. I will watch those bugs. Thank you! - Grant From nginx-forum at nginx.us Mon Jul 15 15:54:24 2013 From: nginx-forum at nginx.us (mosiac) Date: Mon, 15 Jul 2013 11:54:24 -0400 Subject: SSL Reverse Proxy issues Message-ID: <500546938e6e7757422ba1cf52fa22cf.NginxMailingListEnglish@forum.nginx.org> So I have a nginx.conf file that has multiple server blocks in it and they all are working except this one, and this one is half working so I assume I'm just missing one thing. Basically what happens is you can goto the server name that is set and the proxy pass works for that first site, but that site is also a login page that after authentication forwards the user to another page and what I'd like to make sure happens is that after authentication the ssl and server name still work as opposed to what's happening now which is it breaks down completely. 
server { chunkin on; error_page 411 = @my_411_error; location @my_411_error { chunkin_resume; } listen 8897 ssl; server_name myhttpaddress.com; ### SSL log files ### access_log /var/log/nginx/ssl-access.log; error_log /var/log/nginx/ssl-error.log; ### SSL cert files ### ssl_certificate /etc/nginx/ssl/mycert.crt; ssl_certificate_key /etc/nginx/ssl/mycert.key; ### Add SSL specific settings here ### keepalive_timeout 60; ### Limiting Ciphers ################ # Uncomment as per your setup # ssl_ciphers HIGH:!ADH # ssl_perfer_server_ciphers on; # ssl_protocols SSLv3; ##################################### # We want full access to SSL via backend ### location /brim/ { more_clear_input_headers 'Transfer-Encoding'; proxy_pass http://myserver.com:8897/brim/; ### force timeouts if one of backend is died ## proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; ### Set headers #### proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; ### Most PHP, Python, Rails, Java App can use this header ### proxy_set_header X-Forwarded_Proto https; ### By default we don't want to redirect it #### proxy_redirect off; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240878,240878#msg-240878 From mdounin at mdounin.ru Mon Jul 15 17:32:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Jul 2013 21:32:14 +0400 Subject: EXSLT func:function not registered for XSLT filter module In-Reply-To: References: Message-ID: <20130715173214.GF66479@mdounin.ru> Hello! On Sat, Jul 13, 2013 at 12:19:51PM +0200, Kate F wrote: > Hi, > > I'm trying to use EXSLT's with nginx's xslt filter > module. The effect I think I'm seeing is that my functions are > seemingly ignored. [...] > Looking at ngx_http_xslt_filter_module.c I see exsltRegisterAll() is > called, which is what should register libexslt's handler for > func:function and friends: > > #if (NGX_HAVE_EXSLT) > exsltRegisterAll(); > #endif > > I know NGX_HAVE_EXSLT is defined because other EXSLT functions (such > as things in the date: and str: namespaces) work fine. It looks like exsltRegisterAll() is called too late for EXSLT Functions extension. Please try the following patch: # HG changeset patch # User Maxim Dounin # Date 1373909466 -14400 # Node ID bc1cf51a5b0a5e8512a8170dc7991f9e966c5533 # Parent 8e7db77e5d88b20d113e77b574e676737d67bf0e Xslt: exsltRegisterAll() moved to preconfiguration. The exsltRegisterAll() needs to be called before XSLT stylesheets are compiled, else stylesheet compilation hooks will not work. This change fixes EXSLT Functions extension. 
diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c +++ b/src/http/modules/ngx_http_xslt_filter_module.c @@ -104,6 +104,7 @@ static void *ngx_http_xslt_filter_create static void *ngx_http_xslt_filter_create_conf(ngx_conf_t *cf); static char *ngx_http_xslt_filter_merge_conf(ngx_conf_t *cf, void *parent, void *child); +static ngx_int_t ngx_http_xslt_filter_preconfiguration(ngx_conf_t *cf); static ngx_int_t ngx_http_xslt_filter_init(ngx_conf_t *cf); static void ngx_http_xslt_filter_exit(ngx_cycle_t *cycle); @@ -163,7 +164,7 @@ static ngx_command_t static ngx_http_module_t ngx_http_xslt_filter_module_ctx = { - NULL, /* preconfiguration */ + ngx_http_xslt_filter_preconfiguration, /* preconfiguration */ ngx_http_xslt_filter_init, /* postconfiguration */ ngx_http_xslt_filter_create_main_conf, /* create main configuration */ @@ -1111,7 +1112,7 @@ ngx_http_xslt_filter_merge_conf(ngx_conf static ngx_int_t -ngx_http_xslt_filter_init(ngx_conf_t *cf) +ngx_http_xslt_filter_preconfiguration(ngx_conf_t *cf) { xmlInitParser(); @@ -1119,6 +1120,13 @@ ngx_http_xslt_filter_init(ngx_conf_t exsltRegisterAll(); #endif + return NGX_OK; +} + + +static ngx_int_t +ngx_http_xslt_filter_init(ngx_conf_t *cf) +{ ngx_http_next_header_filter = ngx_http_top_header_filter; ngx_http_top_header_filter = ngx_http_xslt_header_filter; -- Maxim Dounin http://nginx.org/en/donation.html From shahzaib.cb at gmail.com Tue Jul 16 04:59:30 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 16 Jul 2013 09:59:30 +0500 Subject: Nginx worker processors D state and high I/O utilization !! Message-ID: Hello, We're using nginx-1.2.8 to serve large static files for video streaming. However all nginx worker_processes are in "D" state and HDD I/O utilization is 99%. [root at DNTX010 ~]# ps aux |grep nginx root 3046 0.0 0.0 20272 688 ? Ss 20:39 0:00 nginx: master process nginx nginx 3047 3.2 0.9 94480 74808 ? D 20:39 0:03 nginx: worker process nginx 3048 1.4 0.3 52104 31388 ? D 20:39 0:01 nginx: worker process nginx 3049 0.2 0.1 33156 12156 ? S 20:39 0:00 nginx: worker process nginx 3050 0.1 0.1 29968 8844 ? D 20:39 0:00 nginx: worker process nginx 3051 0.2 0.1 30332 10076 ? D 20:39 0:00 nginx: worker process nginx 3052 2.7 0.8 91788 69504 ? D 20:39 0:02 nginx: worker process nginx 3053 0.3 0.0 25632 5384 ? D 20:39 0:00 nginx: worker process nginx 3054 0.2 0.1 36032 15852 ? D 20:39 0:00 nginx: worker process nginx 3055 0.4 0.2 37592 17396 ? D 20:39 0:00 nginx: worker process nginx 3056 0.2 0.1 32580 11028 ? S 20:39 0:00 nginx: worker process nginx 3057 0.3 0.2 39288 19116 ? D 20:39 0:00 nginx: worker process nginx 3058 0.3 0.2 41764 19744 ? D 20:39 0:00 nginx: worker process nginx 3059 0.3 0.1 31124 10480 ? D 20:39 0:00 nginx: worker process nginx 3060 1.0 0.3 52736 31776 ? D 20:39 0:01 nginx: worker process nginx 3061 1.1 0.3 51920 29956 ? D 20:39 0:01 nginx: worker process nginx 3062 1.6 0.4 58808 35548 ? 
D 20:39 0:01 nginx: worker process [root at DNTX010 ~]# iostat -x -d 3 Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local) 07/16/2013 _x86_64_ (8 CPU) Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 30.28 177.37 260.32 2.96 38169.26 1442.70 150.46 2.29 8.70 3.52 92.78 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 4.33 0.00 544.00 0.00 34376.00 0.00 63.19 43.83 75.25 1.84 100.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 9.00 6.33 547.67 0.67 34637.33 56.00 63.27 48.01 86.20 1.82 100.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.67 568.00 2.33 36024.00 29.33 63.21 54.98 101.10 1.75 100.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 4.33 560.33 1.33 35712.00 45.33 63.66 37.20 65.06 1.78 100.00 Nginx.conf : http { include mime.types; default_type application/octet-stream; client_body_buffer_size 128K; sendfile_max_chunk 128k; client_max_body_size 800m; client_header_buffer_size 256k; large_client_header_buffers 4 256k; output_buffers 1 512k; server_tokens off; #Conceals nginx version #access_log logs/access.log main; access_log off; error_log warn; sendfile on; # aio on; # directio 512k; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; keepalive_timeout 0; reset_timedout_connection on; } We've also tried enabling aio directive but nothing changed. Help will be highly appreciated. Thanks Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Tue Jul 16 05:06:24 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 16 Jul 2013 17:06:24 +1200 Subject: Nginx worker processors D state and high I/O utilization !! In-Reply-To: References: Message-ID: <1373951184.16239.2.camel@steve-new> Your disk maxes out at 20MB/sec read? On Tue, 2013-07-16 at 09:59 +0500, shahzaib shahzaib wrote: > Hello, > > > We're using nginx-1.2.8 to serve large static files for video > streaming. However all nginx worker_processes are in "D" state and HDD > I/O utilization is 99%. > > [root at DNTX010 ~]# ps aux |grep nginx > root 3046 0.0 0.0 20272 688 ? Ss 20:39 0:00 > nginx: master process nginx > nginx 3047 3.2 0.9 94480 74808 ? D 20:39 0:03 > nginx: worker process > nginx 3048 1.4 0.3 52104 31388 ? D 20:39 0:01 > nginx: worker process > nginx 3049 0.2 0.1 33156 12156 ? S 20:39 0:00 > nginx: worker process > nginx 3050 0.1 0.1 29968 8844 ? D 20:39 0:00 > nginx: worker process > nginx 3051 0.2 0.1 30332 10076 ? D 20:39 0:00 > nginx: worker process > nginx 3052 2.7 0.8 91788 69504 ? D 20:39 0:02 > nginx: worker process > nginx 3053 0.3 0.0 25632 5384 ? D 20:39 0:00 > nginx: worker process > nginx 3054 0.2 0.1 36032 15852 ? D 20:39 0:00 > nginx: worker process > nginx 3055 0.4 0.2 37592 17396 ? D 20:39 0:00 > nginx: worker process > nginx 3056 0.2 0.1 32580 11028 ? S 20:39 0:00 > nginx: worker process > nginx 3057 0.3 0.2 39288 19116 ? D 20:39 0:00 > nginx: worker process > nginx 3058 0.3 0.2 41764 19744 ? D 20:39 0:00 > nginx: worker process > nginx 3059 0.3 0.1 31124 10480 ? D 20:39 0:00 > nginx: worker process > nginx 3060 1.0 0.3 52736 31776 ? D 20:39 0:01 > nginx: worker process > nginx 3061 1.1 0.3 51920 29956 ? D 20:39 0:01 > nginx: worker process > nginx 3062 1.6 0.4 58808 35548 ? 
D 20:39 0:01 > nginx: worker process > > > [root at DNTX010 ~]# iostat -x -d 3 > Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local) 07/16/2013 > _x86_64_ (8 CPU) > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > sda 30.28 177.37 260.32 2.96 38169.26 1442.70 > 150.46 2.29 8.70 3.52 92.78 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > sda 4.33 0.00 544.00 0.00 34376.00 0.00 > 63.19 43.83 75.25 1.84 100.00 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > sda 9.00 6.33 547.67 0.67 34637.33 56.00 > 63.27 48.01 86.20 1.82 100.00 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > sda 0.00 0.67 568.00 2.33 36024.00 29.33 > 63.21 54.98 101.10 1.75 100.00 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > sda 0.00 4.33 560.33 1.33 35712.00 45.33 > 63.66 37.20 65.06 1.78 100.00 > > > > Nginx.conf : > > http { > include mime.types; > default_type application/octet-stream; > client_body_buffer_size 128K; > sendfile_max_chunk 128k; > client_max_body_size 800m; > client_header_buffer_size 256k; > large_client_header_buffers 4 256k; > output_buffers 1 512k; > server_tokens off; #Conceals nginx version > #access_log logs/access.log main; > access_log off; > error_log warn; > sendfile on; > > # aio on; > > # directio 512k; > > ignore_invalid_headers on; > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > keepalive_timeout 0; > reset_timedout_connection on; > } > > > We've also tried enabling aio directive but nothing changed. Help will > be highly appreciated. > > > Thanks > Shahzaib > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MNZCS http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From shahzaib.cb at gmail.com Tue Jul 16 05:10:50 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 16 Jul 2013 10:10:50 +0500 Subject: Nginx worker processors D state and high I/O utilization !! In-Reply-To: <1373951184.16239.2.camel@steve-new> References: <1373951184.16239.2.camel@steve-new> Message-ID: We're using 4XSata HDD 7200 rpm and yes it is hardly crossing 20MB/sec. Could you please guide me a bit about the Max speed of SATA read? Thanks for prompt reply. On Tue, Jul 16, 2013 at 10:06 AM, Steve Holdoway wrote: > Your disk maxes out at 20MB/sec read? > > On Tue, 2013-07-16 at 09:59 +0500, shahzaib shahzaib wrote: > > Hello, > > > > > > We're using nginx-1.2.8 to serve large static files for video > > streaming. However all nginx worker_processes are in "D" state and HDD > > I/O utilization is 99%. > > > > [root at DNTX010 ~]# ps aux |grep nginx > > root 3046 0.0 0.0 20272 688 ? Ss 20:39 0:00 > > nginx: master process nginx > > nginx 3047 3.2 0.9 94480 74808 ? D 20:39 0:03 > > nginx: worker process > > nginx 3048 1.4 0.3 52104 31388 ? D 20:39 0:01 > > nginx: worker process > > nginx 3049 0.2 0.1 33156 12156 ? S 20:39 0:00 > > nginx: worker process > > nginx 3050 0.1 0.1 29968 8844 ? D 20:39 0:00 > > nginx: worker process > > nginx 3051 0.2 0.1 30332 10076 ? D 20:39 0:00 > > nginx: worker process > > nginx 3052 2.7 0.8 91788 69504 ? D 20:39 0:02 > > nginx: worker process > > nginx 3053 0.3 0.0 25632 5384 ? D 20:39 0:00 > > nginx: worker process > > nginx 3054 0.2 0.1 36032 15852 ? 
D 20:39 0:00 > > nginx: worker process > > nginx 3055 0.4 0.2 37592 17396 ? D 20:39 0:00 > > nginx: worker process > > nginx 3056 0.2 0.1 32580 11028 ? S 20:39 0:00 > > nginx: worker process > > nginx 3057 0.3 0.2 39288 19116 ? D 20:39 0:00 > > nginx: worker process > > nginx 3058 0.3 0.2 41764 19744 ? D 20:39 0:00 > > nginx: worker process > > nginx 3059 0.3 0.1 31124 10480 ? D 20:39 0:00 > > nginx: worker process > > nginx 3060 1.0 0.3 52736 31776 ? D 20:39 0:01 > > nginx: worker process > > nginx 3061 1.1 0.3 51920 29956 ? D 20:39 0:01 > > nginx: worker process > > nginx 3062 1.6 0.4 58808 35548 ? D 20:39 0:01 > > nginx: worker process > > > > > > [root at DNTX010 ~]# iostat -x -d 3 > > Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local) 07/16/2013 > > _x86_64_ (8 CPU) > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 30.28 177.37 260.32 2.96 38169.26 1442.70 > > 150.46 2.29 8.70 3.52 92.78 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 4.33 0.00 544.00 0.00 34376.00 0.00 > > 63.19 43.83 75.25 1.84 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 9.00 6.33 547.67 0.67 34637.33 56.00 > > 63.27 48.01 86.20 1.82 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 0.00 0.67 568.00 2.33 36024.00 29.33 > > 63.21 54.98 101.10 1.75 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 0.00 4.33 560.33 1.33 35712.00 45.33 > > 63.66 37.20 65.06 1.78 100.00 > > > > > > > > Nginx.conf : > > > > http { > > include mime.types; > > default_type application/octet-stream; > > client_body_buffer_size 128K; > > sendfile_max_chunk 128k; > > client_max_body_size 800m; > > client_header_buffer_size 256k; > > large_client_header_buffers 4 256k; > > output_buffers 1 512k; > > server_tokens off; #Conceals nginx version > > #access_log logs/access.log main; > > access_log off; > > error_log warn; > > sendfile on; > > > > # aio on; > > > > # directio 512k; > > > > ignore_invalid_headers on; > > client_header_timeout 3m; > > client_body_timeout 3m; > > send_timeout 3m; > > keepalive_timeout 0; > > reset_timedout_connection on; > > } > > > > > > We've also tried enabling aio directive but nothing changed. Help will > > be highly appreciated. > > > > > > Thanks > > Shahzaib > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Tue Jul 16 05:17:26 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 16 Jul 2013 17:17:26 +1200 Subject: Nginx worker processors D state and high I/O utilization !! In-Reply-To: References: <1373951184.16239.2.camel@steve-new> Message-ID: <1373951846.16239.3.camel@steve-new> How are you using them?? Raid 10? On the offchance, is http://www.greengecko.co.nz/content/western-digital-can-i-have-my-2-days-back relevant? 
Cheers, Steve On Tue, 2013-07-16 at 10:10 +0500, shahzaib shahzaib wrote: > We're using 4XSata HDD 7200 rpm and yes it is hardly crossing > 20MB/sec. Could you please guide me a bit about the Max speed of SATA > read? > > > Thanks for prompt reply. > > > > On Tue, Jul 16, 2013 at 10:06 AM, Steve Holdoway > wrote: > Your disk maxes out at 20MB/sec read? > > On Tue, 2013-07-16 at 09:59 +0500, shahzaib shahzaib wrote: > > Hello, > > > > > > We're using nginx-1.2.8 to serve large static files > for video > > streaming. However all nginx worker_processes are in "D" > state and HDD > > I/O utilization is 99%. > > > > [root at DNTX010 ~]# ps aux |grep nginx > > root 3046 0.0 0.0 20272 688 ? Ss 20:39 > 0:00 > > nginx: master process nginx > > nginx 3047 3.2 0.9 94480 74808 ? D 20:39 > 0:03 > > nginx: worker process > > nginx 3048 1.4 0.3 52104 31388 ? D 20:39 > 0:01 > > nginx: worker process > > nginx 3049 0.2 0.1 33156 12156 ? S 20:39 > 0:00 > > nginx: worker process > > nginx 3050 0.1 0.1 29968 8844 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3051 0.2 0.1 30332 10076 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3052 2.7 0.8 91788 69504 ? D 20:39 > 0:02 > > nginx: worker process > > nginx 3053 0.3 0.0 25632 5384 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3054 0.2 0.1 36032 15852 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3055 0.4 0.2 37592 17396 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3056 0.2 0.1 32580 11028 ? S 20:39 > 0:00 > > nginx: worker process > > nginx 3057 0.3 0.2 39288 19116 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3058 0.3 0.2 41764 19744 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3059 0.3 0.1 31124 10480 ? D 20:39 > 0:00 > > nginx: worker process > > nginx 3060 1.0 0.3 52736 31776 ? D 20:39 > 0:01 > > nginx: worker process > > nginx 3061 1.1 0.3 51920 29956 ? D 20:39 > 0:01 > > nginx: worker process > > nginx 3062 1.6 0.4 58808 35548 ? 
D 20:39 > 0:01 > > nginx: worker process > > > > > > [root at DNTX010 ~]# iostat -x -d 3 > > Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local) > 07/16/2013 > > _x86_64_ (8 CPU) > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 30.28 177.37 260.32 2.96 38169.26 > 1442.70 > > 150.46 2.29 8.70 3.52 92.78 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 4.33 0.00 544.00 0.00 34376.00 > 0.00 > > 63.19 43.83 75.25 1.84 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 9.00 6.33 547.67 0.67 34637.33 > 56.00 > > 63.27 48.01 86.20 1.82 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 0.00 0.67 568.00 2.33 36024.00 > 29.33 > > 63.21 54.98 101.10 1.75 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > wsec/s > > avgrq-sz avgqu-sz await svctm %util > > sda 0.00 4.33 560.33 1.33 35712.00 > 45.33 > > 63.66 37.20 65.06 1.78 100.00 > > > > > > > > Nginx.conf : > > > > http { > > include mime.types; > > default_type application/octet-stream; > > client_body_buffer_size 128K; > > sendfile_max_chunk 128k; > > client_max_body_size 800m; > > client_header_buffer_size 256k; > > large_client_header_buffers 4 256k; > > output_buffers 1 512k; > > server_tokens off; #Conceals nginx version > > #access_log logs/access.log main; > > access_log off; > > error_log warn; > > sendfile on; > > > > # aio on; > > > > # directio 512k; > > > > ignore_invalid_headers on; > > client_header_timeout 3m; > > client_body_timeout 3m; > > send_timeout 3m; > > keepalive_timeout 0; > > reset_timedout_connection on; > > } > > > > > > We've also tried enabling aio directive but nothing changed. > Help will > > be highly appreciated. > > > > > > Thanks > > Shahzaib > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MNZCS http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From shahzaib.cb at gmail.com Tue Jul 16 05:25:19 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 16 Jul 2013 10:25:19 +0500 Subject: Nginx worker processors D state and high I/O utilization !! In-Reply-To: <1373951846.16239.3.camel@steve-new> References: <1373951184.16239.2.camel@steve-new> <1373951846.16239.3.camel@steve-new> Message-ID: Yes we're using raid-10. On Tue, Jul 16, 2013 at 10:17 AM, Steve Holdoway wrote: > How are you using them?? Raid 10? > > On the offchance, is > > http://www.greengecko.co.nz/content/western-digital-can-i-have-my-2-days-back > relevant? > > Cheers, > > Steve > > On Tue, 2013-07-16 at 10:10 +0500, shahzaib shahzaib wrote: > > We're using 4XSata HDD 7200 rpm and yes it is hardly crossing > > 20MB/sec. Could you please guide me a bit about the Max speed of SATA > > read? > > > > > > Thanks for prompt reply. 
> > > > > > > > On Tue, Jul 16, 2013 at 10:06 AM, Steve Holdoway > > wrote: > > Your disk maxes out at 20MB/sec read? > > > > On Tue, 2013-07-16 at 09:59 +0500, shahzaib shahzaib wrote: > > > Hello, > > > > > > > > > We're using nginx-1.2.8 to serve large static files > > for video > > > streaming. However all nginx worker_processes are in "D" > > state and HDD > > > I/O utilization is 99%. > > > > > > [root at DNTX010 ~]# ps aux |grep nginx > > > root 3046 0.0 0.0 20272 688 ? Ss 20:39 > > 0:00 > > > nginx: master process nginx > > > nginx 3047 3.2 0.9 94480 74808 ? D 20:39 > > 0:03 > > > nginx: worker process > > > nginx 3048 1.4 0.3 52104 31388 ? D 20:39 > > 0:01 > > > nginx: worker process > > > nginx 3049 0.2 0.1 33156 12156 ? S 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3050 0.1 0.1 29968 8844 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3051 0.2 0.1 30332 10076 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3052 2.7 0.8 91788 69504 ? D 20:39 > > 0:02 > > > nginx: worker process > > > nginx 3053 0.3 0.0 25632 5384 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3054 0.2 0.1 36032 15852 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3055 0.4 0.2 37592 17396 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3056 0.2 0.1 32580 11028 ? S 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3057 0.3 0.2 39288 19116 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3058 0.3 0.2 41764 19744 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3059 0.3 0.1 31124 10480 ? D 20:39 > > 0:00 > > > nginx: worker process > > > nginx 3060 1.0 0.3 52736 31776 ? D 20:39 > > 0:01 > > > nginx: worker process > > > nginx 3061 1.1 0.3 51920 29956 ? D 20:39 > > 0:01 > > > nginx: worker process > > > nginx 3062 1.6 0.4 58808 35548 ? 
D 20:39 > > 0:01 > > > nginx: worker process > > > > > > > > > [root at DNTX010 ~]# iostat -x -d 3 > > > Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local) > > 07/16/2013 > > > _x86_64_ (8 CPU) > > > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > > wsec/s > > > avgrq-sz avgqu-sz await svctm %util > > > sda 30.28 177.37 260.32 2.96 38169.26 > > 1442.70 > > > 150.46 2.29 8.70 3.52 92.78 > > > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > > wsec/s > > > avgrq-sz avgqu-sz await svctm %util > > > sda 4.33 0.00 544.00 0.00 34376.00 > > 0.00 > > > 63.19 43.83 75.25 1.84 100.00 > > > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > > wsec/s > > > avgrq-sz avgqu-sz await svctm %util > > > sda 9.00 6.33 547.67 0.67 34637.33 > > 56.00 > > > 63.27 48.01 86.20 1.82 100.00 > > > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > > wsec/s > > > avgrq-sz avgqu-sz await svctm %util > > > sda 0.00 0.67 568.00 2.33 36024.00 > > 29.33 > > > 63.21 54.98 101.10 1.75 100.00 > > > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s > > wsec/s > > > avgrq-sz avgqu-sz await svctm %util > > > sda 0.00 4.33 560.33 1.33 35712.00 > > 45.33 > > > 63.66 37.20 65.06 1.78 100.00 > > > > > > > > > > > > Nginx.conf : > > > > > > http { > > > include mime.types; > > > default_type application/octet-stream; > > > client_body_buffer_size 128K; > > > sendfile_max_chunk 128k; > > > client_max_body_size 800m; > > > client_header_buffer_size 256k; > > > large_client_header_buffers 4 256k; > > > output_buffers 1 512k; > > > server_tokens off; #Conceals nginx version > > > #access_log logs/access.log main; > > > access_log off; > > > error_log warn; > > > sendfile on; > > > > > > # aio on; > > > > > > # directio 512k; > > > > > > ignore_invalid_headers on; > > > client_header_timeout 3m; > > > client_body_timeout 3m; > > > send_timeout 3m; > > > keepalive_timeout 0; > > > reset_timedout_connection on; > > > } > > > > > > > > > We've also tried enabling aio directive but nothing changed. > > Help will > > > be highly appreciated. > > > > > > > > > Thanks > > > Shahzaib > > > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > Steve Holdoway BSc(Hons) MNZCS > > http://www.greengecko.co.nz > > Linkedin: http://www.linkedin.com/in/steveholdoway > > Skype: sholdowa > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Tue Jul 16 07:34:30 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 16 Jul 2013 11:34:30 +0400 Subject: Nginx worker processors D state and high I/O utilization !! In-Reply-To: References: Message-ID: On Jul 16, 2013, at 8:59 AM, shahzaib shahzaib wrote: > Hello, > > We're using nginx-1.2.8 to serve large static files for video streaming. However all nginx worker_processes are in "D" state and HDD I/O utilization is 99%. > [?] 
> [root at DNTX010 ~]# iostat -x -d 3 > Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local) 07/16/2013 _x86_64_ (8 CPU) > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util > sda 30.28 177.37 260.32 2.96 38169.26 1442.70 150.46 2.29 8.70 3.52 92.78 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util > sda 4.33 0.00 544.00 0.00 34376.00 0.00 63.19 43.83 75.25 1.84 100.00 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util > sda 9.00 6.33 547.67 0.67 34637.33 56.00 63.27 48.01 86.20 1.82 100.00 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util > sda 0.00 0.67 568.00 2.33 36024.00 29.33 63.21 54.98 101.10 1.75 100.00 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util > sda 0.00 4.33 560.33 1.33 35712.00 45.33 63.66 37.20 65.06 1.78 100.00 > You are likely hitting the IOPS limit. 550r/s is quite enough to saturate 4xSATA 7200 in RAID10. There are reads of 64 of something per request in average. I'd first look at what does something designated as "rsec/s" exactly mean in linux terms here. -- Sergey Kandaurov pluknet at nginx.com From shahzaib.cb at gmail.com Tue Jul 16 07:47:17 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 16 Jul 2013 12:47:17 +0500 Subject: Nginx worker processors D state and high I/O utilization !! In-Reply-To: References: Message-ID: What should be my next step ? Should i buy SAS Drive with hard-raid-10 ? On Tue, Jul 16, 2013 at 12:34 PM, Sergey Kandaurov wrote: > On Jul 16, 2013, at 8:59 AM, shahzaib shahzaib > wrote: > > Hello, > > > > We're using nginx-1.2.8 to serve large static files for video > streaming. However all nginx worker_processes are in "D" state and HDD I/O > utilization is 99%. > > [?] > > [root at DNTX010 ~]# iostat -x -d 3 > > Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local) 07/16/2013 > _x86_64_ (8 CPU) > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > > sda 30.28 177.37 260.32 2.96 38169.26 1442.70 > 150.46 2.29 8.70 3.52 92.78 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > > sda 4.33 0.00 544.00 0.00 34376.00 0.00 > 63.19 43.83 75.25 1.84 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > > sda 9.00 6.33 547.67 0.67 34637.33 56.00 > 63.27 48.01 86.20 1.82 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > > sda 0.00 0.67 568.00 2.33 36024.00 29.33 > 63.21 54.98 101.10 1.75 100.00 > > > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s > avgrq-sz avgqu-sz await svctm %util > > sda 0.00 4.33 560.33 1.33 35712.00 45.33 > 63.66 37.20 65.06 1.78 100.00 > > > > > You are likely hitting the IOPS limit. 550r/s is quite enough to saturate > 4xSATA 7200 in RAID10. There are reads of 64 of something per request > in average. I'd first look at what does something designated as "rsec/s" > exactly mean in linux terms here. > > -- > Sergey Kandaurov > pluknet at nginx.com > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor.sverkos at googlemail.com Tue Jul 16 08:41:39 2013 From: igor.sverkos at googlemail.com (Igor Sverkos) Date: Tue, 16 Jul 2013 10:41:39 +0200 Subject: Nginx worker processors D state and high I/O utilization !! 
In-Reply-To: References: Message-ID: Hi, can you tell us the HDD model you are currently using? And are you using real dedicated servers or some kind of virtualization? Current SATA disks (Seagate ST1000DM-003 for example) are able to provide an avg speed of 190MB/s (keep in mind: that's their avg speed, so when the disk is full the speed will drop). When you reach the disk limit, you would add more spindles (e.g. add more disks). Do your own math. Calculate how many IOPS one user needs (if you cannot calculate IOPS, just calculate in MB/s to begin with). For example: Your videos have an avg bit rate of ~8000kbit/s (HD videos). To serve one stream, you would need at least 1 MB/s of disk speed. So using a disk with an avg speed of ~120 MB/s will allow you to serve ~100 concurrent requests (we keep a 20% buffer, to be sure). Keep in mind that in this example, the disk is dedicated to serving your videos only. If you run your web, mail and streaming server on the same disk... :> BTW: Serving ~100 concurrent HD streams would roughly fill a 1 Gbit uplink. Now, doubling the spindles (i.e. adding one more disk, RAID 0) would allow you to serve ~200 concurrent requests (you want to go with RAID 10, to add redundancy). Doing the calculation in MB/s is not as accurate as doing the calculation in IOPS, but it's better than nothing. Yes, you should buy real SAS server disks. Don't ever use green SATA disks in servers :) But when your current disks only provide 30MB/s, check whether that is really their limit or whether there is another problem (for example, the IOPS limit Sergey mentioned). That's why I asked you for the model in the beginning. -- Regards, Igor From me at myconan.net Tue Jul 16 09:06:06 2013 From: me at myconan.net (edogawaconan) Date: Tue, 16 Jul 2013 18:06:06 +0900 Subject: Nginx worker processors D state and high I/O utilization !! In-Reply-To: References: Message-ID: On Tue, Jul 16, 2013 at 5:41 PM, Igor Sverkos wrote: > > Yes, you should buy real SAS server disks. Don't ever use green SATA > disks in servers :) > Or SSDs for good IOPS :) -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From flatfender at gmail.com Tue Jul 16 14:08:17 2013 From: flatfender at gmail.com (Flatfender) Date: Tue, 16 Jul 2013 10:08:17 -0400 Subject: Reverse Proxy Stats Message-ID: All, I'm wondering what people are doing for Reverse Proxy stats. Most projects seem to be incomplete or have bit rot. I'm looking for the following. 1. Active sessions to upstream servers 2. Request distribution to upstream servers 3. Upstream health status. 4. Upstream server stats, response time, etc. I know I can log some of this to the log files, but it seems like a status page is needed. What are people doing when using this in production as a reverse proxy? Matt P. -------------- next part -------------- An HTML attachment was scrubbed...
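On the logging side, points 2 and 4 can be covered with stock nginx by adding the upstream variables to a custom log format. A minimal sketch only; the format name, the log path and the exact field selection are illustrative, and the "backend" name assumes an existing upstream block:

    http {
        # one line per request, including which upstream served it and how long it took
        log_format upstream_metrics '$remote_addr $host "$request" $status '
                                    'upstream=$upstream_addr '
                                    'upstream_status=$upstream_status '
                                    'request_time=$request_time '
                                    'upstream_response_time=$upstream_response_time';

        server {
            access_log /var/log/nginx/upstream_metrics.log upstream_metrics;
            location / {
                proxy_pass http://backend;   # assumes an "upstream backend { ... }" block elsewhere
            }
        }
    }

The stub_status module provides server-wide connection and request counters; a live per-upstream health and status page beyond that still needs external log processing or a third-party module.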
URL: From nginx-forum at nginx.us Tue Jul 16 14:34:17 2013 From: nginx-forum at nginx.us (dfumagalli) Date: Tue, 16 Jul 2013 10:34:17 -0400 Subject: logs show massive spam of "proxy.php" access Message-ID: Hello, since I have installed nxginx on my Ubuntu LTS, I see the logs absolutely full of text like this: 2013/07/16 11:07:43 [error] 24590#0: *445 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 91.237.249.99, server: www.[my host name].com, request: "POST http://50.56.191.147/~cashcorp/proxies/engine.php HTTP/1.0", host: "50.56.191.147", referrer: "http://50.56.191.147/~cashcorp/proxies/engine.php" 2013/07/16 11:09:23 [error] 24590#0: *449 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 89.69.13.95, server: www.[my host name].com, request: "POST http://discountbaby63.ru/proxyc/engine.php HTTP/1.0", host: "discountbaby63.ru", referrer: "http://discountbaby63.ru/proxyc/engine.php" 2013/07/16 11:10:29 [error] 24590#0: *453 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 142.4.98.163, server: www.fleurworld.com, request: "POST http://spamstats.net/proxyc/engine.php HTTP/1.0", host: "spamstats.net", referrer: "http://spamstats.net/proxyc/engine.php" Is this an issue due to mis-configuration? Are they hackers? Are they succeeding at their hacking? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240914,240914#msg-240914 From nginx-forum at nginx.us Tue Jul 16 15:26:29 2013 From: nginx-forum at nginx.us (danslimmon) Date: Tue, 16 Jul 2013 11:26:29 -0400 Subject: Logging a variable set by nginx's Lua module Message-ID: <6a2d2da1ae6a17bec839b5768a2edb3b.NginxMailingListEnglish@forum.nginx.org> I am trying to use the Lua module in nginx to set a variable ("foo") based on JSON in the body of a request. Then I want to log the value of that variable to the access log. Like so: https://gist.github.com/danslimmon/17f5bf4736566737cc65 However, nginx won't start with this configuration. It complains thusly: https://gist.github.com/danslimmon/f5f789d8af8bbb06b224 Here is my "nginx -V" output: https://gist.github.com/danslimmon/9ed99c63aa6c04bd1a41/raw/e73bbdfb1c0ead08dd4df80b936dc233542df8bc/gistfile1.txt Thoughts? Thanks in advance. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240917,240917#msg-240917 From nginx-forum at nginx.us Tue Jul 16 15:51:40 2013 From: nginx-forum at nginx.us (bruiselee) Date: Tue, 16 Jul 2013 11:51:40 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: <0a5f4b39e71a5ab67db7d365d2d64cf9.NginxMailingListEnglish@forum.nginx.org> References: <0a5f4b39e71a5ab67db7d365d2d64cf9.NginxMailingListEnglish@forum.nginx.org> Message-ID: We just encountered the same problem at the 0.002% level and solved it by removing the following line in the conf file: "gzip_http_version 1.0". Very weird problem; we still don't understand the true reason. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,240918#msg-240918 From marcin.deranek at booking.com Tue Jul 16 16:01:57 2013 From: marcin.deranek at booking.com (Marcin Deranek) Date: Tue, 16 Jul 2013 18:01:57 +0200 Subject: Logging a variable set by nginx's Lua module In-Reply-To: <6a2d2da1ae6a17bec839b5768a2edb3b.NginxMailingListEnglish@forum.nginx.org> References: <6a2d2da1ae6a17bec839b5768a2edb3b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130716180157.1ab30dc3@booking.com> On Tue, 16 Jul 2013 11:26:29 -0400 "danslimmon" wrote: > However, nginx won't start with this configuration. It complains > thusly: https://gist.github.com/danslimmon/f5f789d8af8bbb06b224 http://wiki.nginx.org/HttpLuaModule#ngx.var.VARIABLE "Note that only already defined nginx variables can be written to." Look at the example there for how to do that. Marcin From nginx-forum at nginx.us Tue Jul 16 16:22:19 2013 From: nginx-forum at nginx.us (danslimmon) Date: Tue, 16 Jul 2013 12:22:19 -0400 Subject: Logging a variable set by nginx's Lua module In-Reply-To: <20130716180157.1ab30dc3@booking.com> References: <20130716180157.1ab30dc3@booking.com> Message-ID: <46518957b9311ad6e85879825b031b39.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply. My first thought was to put a "set foo '-'" in the config, but when I use this pared-down example that emulates the documentation except to replace "content_by_lua" with "rewrite_by_lua": https://gist.github.com/danslimmon/1ba367780f0efdd2afc5 I get "-" in the logs instead of "bar". Is it the "content"/"rewrite" difference? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240917,240920#msg-240920 From marcin.deranek at booking.com Tue Jul 16 16:44:44 2013 From: marcin.deranek at booking.com (Marcin Deranek) Date: Tue, 16 Jul 2013 18:44:44 +0200 Subject: Logging a variable set by nginx's Lua module In-Reply-To: <46518957b9311ad6e85879825b031b39.NginxMailingListEnglish@forum.nginx.org> References: <20130716180157.1ab30dc3@booking.com> <46518957b9311ad6e85879825b031b39.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130716184444.629be41a@booking.com> On Tue, 16 Jul 2013 12:22:19 -0400 "danslimmon" wrote: > Thanks for the reply. > > My first thought was to put a "set foo '-'" in the config, but when I > use this pared-down example that emulates the documentation except to > replace "content_by_lua" with "rewrite_by_lua": > > https://gist.github.com/danslimmon/1ba367780f0efdd2afc5 > > I get "-" in the logs instead of "bar". Is it the "content"/"rewrite" > difference? It shouldn't be, and it works for me (nginx 1.2.8). I noticed that your example cannot work as posted: you missed a semicolon at the end of the line with set (I get a syntax error).
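To make the pattern concrete, a minimal sketch of what Marcin describes; the variable name $foo and the log format are made up, and the JSON body parsing from the original question is left out:

    http {
        log_format foolog '$remote_addr "$request" foo=$foo';

        server {
            location / {
                set $foo "-";              # the variable must exist before Lua can write to it
                rewrite_by_lua '
                    -- assign whatever value was derived from the request
                    ngx.var.foo = "bar"
                ';
                access_log /var/log/nginx/foo.log foolog;
                # ... usual content/proxy/fastcgi handling here ...
            }
        }
    }

With this, the access log should show foo=bar for requests into that location: the set directive creates the variable during the rewrite phase, rewrite_by_lua overwrites it, and logging happens only after the request finishes.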
Marcin From nginx-forum at nginx.us Tue Jul 16 16:55:16 2013 From: nginx-forum at nginx.us (tunist) Date: Tue, 16 Jul 2013 12:55:16 -0400 Subject: 502 errors while running elgg social network app.. used the common solutions already Message-ID: greetings! does anyone here have a reliable way of accurately calculating all the various parameters that are used in the .conf files for nginx hosted sites - that may cause 502 errors if set inaccurately? i am running an elgg social network (php code + mysql - http://www.elgg.org) and recently i have been seeing 502 errors each time i clear the site's cache and then also on the main site sometimes after that.. after a while the issue clears and i am unsure why - the site is not busy with traffic. i also have a version of the site installed on my home pc which does not have this issue - so i am thinking that possibly the server is not a high enough specification (1GB RAM + 1 CPU (2.4GHz). these are some of the relevant entries for the conf file for the site that are being used currently: nginx site conf: large_client_header_buffers 4 8k; client_header_buffer_size 2k; fastcgi_index index.php; client_max_body_size 200M; client_body_buffer_size 650K; proxy_read_timeout 300; location ~ \.php$ { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 26k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; expires max; } php-fpm conf: pm.max_children = 16 pm.start_servers = 9 pm.min_spare_servers = 3 pm.max_spare_servers = 16 pm.max_requests = 250 -- as you can see i have set some of these values much higher than is commonly recommended - partially because i'm not entirely sure how large they need to be and partially because the data transfer of some of the pages on my site is higher than most websites. any tips welcome.. thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240922,240922#msg-240922 From francis at daoine.org Tue Jul 16 17:50:01 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 16 Jul 2013 18:50:01 +0100 Subject: logs show massive spam of "proxy.php" access In-Reply-To: References: Message-ID: <20130716175001.GD15782@craic.sysops.org> On Tue, Jul 16, 2013 at 10:34:17AM -0400, dfumagalli wrote: Hi there, > since I have installed nxginx on my Ubuntu LTS, I see the logs absolutely > full of text like this: > > 2013/07/16 11:07:43 [error] 24590#0: *445 rewrite or internal redirection > cycle while internally redirecting to "/index.php", client: 91.237.249.99, > server: www.[my host name].com, request: "POST > http://50.56.191.147/~cashcorp/proxies/engine.php HTTP/1.0", host: > "50.56.191.147", referrer: > "http://50.56.191.147/~cashcorp/proxies/engine.php" It looks like the client is trying to use you as a proxy server, and you have a config problem which leads to this cycle mentioned in your error log. There's no obvious mention of the proxy.php in your Subject: line, though. 
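A common way to keep this kind of proxy-probing traffic away from the real vhosts is a catch-all default server that drops anything not addressed to one of your own hostnames. A rough sketch only; the real site's server_name is made up here:

    server {
        listen 80 default_server;
        server_name _;
        return 444;        # nginx closes the connection without sending a response
    }

    server {
        listen 80;
        server_name www.example.com;
        # ... real site configuration ...
    }

Requests like the POSTs to discountbaby63.ru above carry a Host header that matches none of the configured names, so they fall into the default server and never reach the vhost whose rewrite cycle is being logged (the cycle itself is a separate configuration problem worth fixing, as noted above).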
f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jul 16 17:56:23 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 16 Jul 2013 18:56:23 +0100 Subject: Reg: SCGI application running error in NGINX server In-Reply-To: <51E40B52.2050803@globaledgesoft.com> References: <51E40B52.2050803@globaledgesoft.com> Message-ID: <20130716175623.GE15782@craic.sysops.org> On Mon, Jul 15, 2013 at 08:16:42PM +0530, P Amarnath wrote: Hi there, > /Error/: *upstream timed out ... > upstream: "scgi://172.16.8.180:9000" What's unclear? nginx was waiting for the scgi server to respond, then nginx gave up waiting. What do your scgi server logs say about this request? How long do you expect this request to take to respond? How long did you configure nginx to wait for this request to respond? f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jul 16 18:01:33 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 16 Jul 2013 19:01:33 +0100 Subject: SSL Reverse Proxy issues In-Reply-To: <500546938e6e7757422ba1cf52fa22cf.NginxMailingListEnglish@forum.nginx.org> References: <500546938e6e7757422ba1cf52fa22cf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130716180133.GF15782@craic.sysops.org> On Mon, Jul 15, 2013 at 11:54:24AM -0400, mosiac wrote: Hi there, > Basically what happens is you can goto the > server name that is set and the proxy pass works for that first site, but > that site is also a login page that after authentication forwards the user > to another page and what I'd like to make sure happens is that after > authentication the ssl and server name still work as opposed to what's > happening now which is it breaks down completely. What request do you make (ideally, using "curl -i")? What response do you get? What response do you expect? That information may make it more obvious where to look for the resolution to the problem. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Jul 17 00:55:31 2013 From: nginx-forum at nginx.us (tunist) Date: Tue, 16 Jul 2013 20:55:31 -0400 Subject: 502 errors while running elgg social network app.. used the common solutions already In-Reply-To: References: Message-ID: Looks like I found the answer to this: I needed to increase fastcgi_connect_timeout to 120 seconds, as my server is not so quick.
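For reference, a rough sketch of where the relevant fastcgi timeouts sit in such a PHP location (values are illustrative, not recommendations). fastcgi_connect_timeout only covers establishing the connection to php-fpm and normally should not need to exceed 75 seconds; a backend that is slow to produce the page is usually covered by fastcgi_read_timeout instead:

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        fastcgi_connect_timeout 60;    # time allowed to establish the connection
        fastcgi_send_timeout   180;    # time allowed between two successive writes of the request
        fastcgi_read_timeout   300;    # time allowed between two successive reads of the response
    }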
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240922,240927#msg-240927 From tszming at gmail.com Wed Jul 17 02:34:48 2013 From: tszming at gmail.com (Tsz Ming WONG) Date: Wed, 17 Jul 2013 10:34:48 +0800 Subject: stub_status always give me 0 Reading when upgraded to 1.4.1 Message-ID: Hi, We have upgraded one of our nginx from 0.7.65 to 1.4.1 (keeping existing configuration) and the stub_status module now always give me "0 Reading" 0.7.65 Active connections: 4798 server accepts handled requests 1690444496 1690415734 1814927218 Reading: 573 Writing: 184 Waiting: 4041 1.4.1 Active connections: 4858 server accepts handled requests 33056255 33056255 37277559 Reading: 0 Writing: 196 Waiting: 4386 They are both installed using Ubuntu package on 10.04 (1.4.1-1ppa1~lucid , 0.7.65-1ubuntu2.3) nginx version: nginx/0.7.65 TLS SNI support enabled configure arguments: --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug --with-http_stub_status_module --with-http_flv_module --with-http_ssl_module --with-http_dav_module --with-http_gzip_static_module --with-http_realip_module --with-mail --with-mail_ssl_module --with-ipv6 --add-module=/build/buildd/nginx-0.7.65/modules/nginx-upstream-fair nginx version: nginx/1.4.1 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.4.1/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.4.1/debian/modules/ngx_http_substitutions_filter_module Any idea? Best Regards, tszming -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jul 17 10:29:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jul 2013 14:29:52 +0400 Subject: stub_status always give me 0 Reading when upgraded to 1.4.1 In-Reply-To: References: Message-ID: <20130717102952.GA49108@mdounin.ru> Hello! 
On Wed, Jul 17, 2013 at 10:34:48AM +0800, Tsz Ming WONG wrote: > Hi, > > We have upgraded one of our nginx from 0.7.65 to 1.4.1 (keeping existing > configuration) and the stub_status module now always give me "0 Reading" > > 0.7.65 > Active connections: 4798 > server accepts handled requests > 1690444496 1690415734 1814927218 > Reading: 573 Writing: 184 Waiting: 4041 > > 1.4.1 > Active connections: 4858 > server accepts handled requests > 33056255 33056255 37277559 > Reading: 0 Writing: 196 Waiting: 4386 This is an effect from the following change in 1.3.15: *) Change: opening and closing a connection without sending any data in it is no longer logged to access_log with error code 400. Connections in question no longer have an associated request created, and as a result are counted in "waiting" instead of "reading". -- Maxim Dounin http://nginx.org/en/donation.html From tszming at gmail.com Wed Jul 17 10:48:53 2013 From: tszming at gmail.com (Tsz Ming WONG) Date: Wed, 17 Jul 2013 18:48:53 +0800 Subject: stub_status always give me 0 Reading when upgraded to 1.4.1 In-Reply-To: <20130717102952.GA49108@mdounin.ru> References: <20130717102952.GA49108@mdounin.ru> Message-ID: Hi, On Wed, Jul 17, 2013 at 6:29 PM, Maxim Dounin wrote: > stub_status Thanks for the explanation. But given that this is a busy server, sound abnormal if it always return a 0? (according to the doc: reading - nginx reads request header) Best Regards, tszming -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Wed Jul 17 11:21:33 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 17 Jul 2013 15:21:33 +0400 Subject: stub_status always give me 0 Reading when upgraded to 1.4.1 In-Reply-To: References: <20130717102952.GA49108@mdounin.ru> Message-ID: <20130717112133.GA52274@lo0.su> On Wed, Jul 17, 2013 at 06:48:53PM +0800, Tsz Ming WONG wrote: > Hi, > On Wed, Jul 17, 2013 at 6:29 PM, Maxim Dounin <[1]mdounin at mdounin.ru> > wrote: > > stub_status > > Thanks for the explanation. > But given that this is a busy server, sound abnormal if it always return a > 0? (according to the doc: reading - nginx reads request header) It means that your clients and nginx are fast enough, enjoy. If you absolutely need it to become non-zero, you can emulate a slow client like this: ( echo 'GET / HTTP/1.0' ; sleep 42 ; echo ) | nc 127.0.0.1 80 substituting the address and port of your server. During these 42 seconds, the status will show at least one connection in the "reading" state. From mdounin at mdounin.ru Wed Jul 17 12:32:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jul 2013 16:32:06 +0400 Subject: stub_status always give me 0 Reading when upgraded to 1.4.1 In-Reply-To: References: <20130717102952.GA49108@mdounin.ru> Message-ID: <20130717123205.GB49108@mdounin.ru> Hello! On Wed, Jul 17, 2013 at 06:48:53PM +0800, Tsz Ming WONG wrote: > Thanks for the explanation. > > But given that this is a busy server, sound abnormal if it always return a > 0? (according to the doc: reading - nginx reads request header) Reading request headers happens almost immediately, especially if request headers fit into a single packet. That is, the number is expected to be near 0 even on a busy system unless there are some specific conditions like big cookies used. 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Jul 17 13:30:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jul 2013 17:30:08 +0400 Subject: nginx-1.4.2 Message-ID: <20130717133007.GC49108@mdounin.ru> Changes with nginx 1.4.2 17 Jul 2013 *) Bugfix: the $r->header_in() embedded perl method did not return value of the "Cookie" and "X-Forwarded-For" request header lines; the bug had appeared in 1.3.14. *) Bugfix: nginx could not be built with the ngx_mail_ssl_module, but without ngx_http_ssl_module; the bug had appeared in 1.3.14. *) Bugfix: in the "proxy_set_body" directive. Thanks to Lanshun Zhou. *) Bugfix: the "fail_timeout" parameter of the "server" directive in the "upstream" context might not work if "max_fails" parameter was used; the bug had appeared in 1.3.0. *) Bugfix: a segmentation fault might occur in a worker process if the "ssl_stapling" directive was used. Thanks to Piotr Sikora. *) Bugfix: nginx/Windows might stop accepting connections if several worker processes were used. -- Maxim Dounin http://nginx.org/en/donation.html From tszming at gmail.com Wed Jul 17 14:30:30 2013 From: tszming at gmail.com (Tsz Ming WONG) Date: Wed, 17 Jul 2013 22:30:30 +0800 Subject: stub_status always give me 0 Reading when upgraded to 1.4.1 In-Reply-To: <20130717112133.GA52274@lo0.su> References: <20130717102952.GA49108@mdounin.ru> <20130717112133.GA52274@lo0.su> Message-ID: Thank you (and Maxim) for the detail explanation! Best Regards, tszming On Wed, Jul 17, 2013 at 7:21 PM, Ruslan Ermilov wrote: > On Wed, Jul 17, 2013 at 06:48:53PM +0800, Tsz Ming WONG wrote: > > Hi, > > On Wed, Jul 17, 2013 at 6:29 PM, Maxim Dounin <[1]mdounin at mdounin.ru> > > wrote: > > > > stub_status > > > > Thanks for the explanation. > > But given that this is a busy server, sound abnormal if it always > return a > > 0? (according to the doc: reading - nginx reads request header) > > It means that your clients and nginx are fast enough, enjoy. If you > absolutely need it to become non-zero, you can emulate a slow client > like this: > > ( echo 'GET / HTTP/1.0' ; sleep 42 ; echo ) | nc 127.0.0.1 80 > > substituting the address and port of your server. During these 42 > seconds, the status will show at least one connection in the > "reading" state. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at glanzmann.de Wed Jul 17 15:39:52 2013 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Wed, 17 Jul 2013 17:39:52 +0200 Subject: nginx as loadbalancer for tomcat with session stickyness bases on jvmRoute Message-ID: <20130717153952.GA25925@glanzmann.de> Hello everyone, I'm currently using apache mod_jk to load balance over four backend tomcat servers. The sessions are not replicated so I need stickyness based on jvmRoute. The mod_jk configuration is: worker.list=router, jkstatus worker.router.type=lb worker.router.balance_workers=tomcat-01, tomcat-02, tomcat-03, tomcat-04 worker.tomcat-01.type=ajp13 worker.tomcat-01.host=tomcat-01 worker.tomcat-01.port=8009 worker.tomcat-01.lbfactor=1 worker.tomcat-02.type=ajp13 worker.tomcat-02.host=tomcat-02 worker.tomcat-02.port=8009 worker.tomcat-02.lbfactor=1 ... worker.jkstatus.type=status I would like to replace apache with nginx. What ways exist to do that and what are the pros and cons? 
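For the plain load-balancing part, stock nginx has no AJP support (a third-party AJP module exists, but HTTP is the usual route), so it would talk to Tomcat's HTTP connector via proxy_pass, and the closest built-in substitute for jvmRoute stickiness is ip_hash; cookie-based stickiness generally needs a third-party module. A rough sketch only, with made-up host names and ports:

    upstream tomcats {
        ip_hash;                      # stickiness by client IP, not by jvmRoute
        server tomcat-01:8080;
        server tomcat-02:8080;
        server tomcat-03:8080 down;   # marked down: no new requests are sent here
        server tomcat-04:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://tomcats;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

Note that ip_hash keeps a client on one backend only as long as that backend is up, and unlike mod_jk it does not inspect the jvmRoute suffix of JSESSIONID, so marking a server down moves its clients (and their sessions) elsewhere rather than draining them gracefully.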
I use Debian Wheezy and would like to stick if possible with the packaged nginx. What about tomcat maintenance? I would like the ability gracefully reroute traffic (taking a tomcat in maintance let still existing sessions to get through but not route any new connections to a particular tomcat). Cheers, Thomas From kate at elide.org Wed Jul 17 16:18:03 2013 From: kate at elide.org (Kate F) Date: Wed, 17 Jul 2013 18:18:03 +0200 Subject: EXSLT func:function not registered for XSLT filter module In-Reply-To: <20130715173214.GF66479@mdounin.ru> References: <20130715173214.GF66479@mdounin.ru> Message-ID: On 15 July 2013 19:32, Maxim Dounin wrote: > Hello! > > On Sat, Jul 13, 2013 at 12:19:51PM +0200, Kate F wrote: > >> Hi, >> >> I'm trying to use EXSLT's with nginx's xslt filter >> module. The effect I think I'm seeing is that my functions are >> seemingly ignored. > > [...] > >> Looking at ngx_http_xslt_filter_module.c I see exsltRegisterAll() is >> called, which is what should register libexslt's handler for >> func:function and friends: >> >> #if (NGX_HAVE_EXSLT) >> exsltRegisterAll(); >> #endif >> >> I know NGX_HAVE_EXSLT is defined because other EXSLT functions (such >> as things in the date: and str: namespaces) work fine. > > It looks like exsltRegisterAll() is called too late for EXSLT > Functions extension. > > Please try the following patch: > > # HG changeset patch > # User Maxim Dounin > # Date 1373909466 -14400 > # Node ID bc1cf51a5b0a5e8512a8170dc7991f9e966c5533 > # Parent 8e7db77e5d88b20d113e77b574e676737d67bf0e > Xslt: exsltRegisterAll() moved to preconfiguration. > > The exsltRegisterAll() needs to be called before XSLT stylesheets > are compiled, else stylesheet compilation hooks will not work. This > change fixes EXSLT Functions extension. Awesome! Good catch. Thanks for that. Your patch works fine. -- Kate From kworthington at gmail.com Wed Jul 17 16:26:53 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 17 Jul 2013 12:26:53 -0400 Subject: nginx-1.4.2 In-Reply-To: <20130717133007.GC49108@mdounin.ru> References: <20130717133007.GC49108@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.4.2 for Windows http://goo.gl/x1GbY (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Wed, Jul 17, 2013 at 9:30 AM, Maxim Dounin wrote: > Changes with nginx 1.4.2 17 Jul > 2013 > > *) Bugfix: the $r->header_in() embedded perl method did not return > value > of the "Cookie" and "X-Forwarded-For" request header lines; the bug > had appeared in 1.3.14. > > *) Bugfix: nginx could not be built with the ngx_mail_ssl_module, but > without ngx_http_ssl_module; the bug had appeared in 1.3.14. > > *) Bugfix: in the "proxy_set_body" directive. > Thanks to Lanshun Zhou. > > *) Bugfix: the "fail_timeout" parameter of the "server" directive in > the > "upstream" context might not work if "max_fails" parameter was used; > the bug had appeared in 1.3.0. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "ssl_stapling" directive was used. > Thanks to Piotr Sikora. 
> > *) Bugfix: nginx/Windows might stop accepting connections if several > worker processes were used. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jul 17 20:51:41 2013 From: nginx-forum at nginx.us (ThomasLohner) Date: Wed, 17 Jul 2013 16:51:41 -0400 Subject: Trouble with $uri in subrequest Message-ID: Hi there, i'm having trouble getting the request_uri via lua in a subrequest. ngx.var.uri will always return the uri of the parent request whereas something like ngx.req.get_uri_args will return the correct args for the subrequest. any ideas on how to get the subrequets uri? or am i missing something here? i'm using 1.4.1, thanks in advance Thomas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240968,240968#msg-240968 From agentzh at gmail.com Wed Jul 17 23:21:46 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 17 Jul 2013 16:21:46 -0700 Subject: Trouble with $uri in subrequest In-Reply-To: References: Message-ID: Hello! On Wed, Jul 17, 2013 at 1:51 PM, ThomasLohner wrote: > i'm having trouble getting the request_uri via lua in a subrequest. > > ngx.var.uri will always return the uri of the parent request whereas > something like ngx.req.get_uri_args will return the correct args for the > subrequest. > > any ideas on how to get the subrequets uri? or am i missing something here? > Could you please give a minimal example that can reproduce this issue? Apparently, the following example works as expected on my side with ngx_lua 0.8.4 + nginx 1.4.1 on Linux x86_64: location = /sub { internal; content_by_lua ' ngx.say("sr uri: ", ngx.var.uri) '; } location = /main { echo_subrequest GET /sub; } Accessing /main with curl yields $ curl localhost:1985/main sr uri: /sub That is, ngx.var.uri evaluates to the URI of the subrequest, /sub, not that of the parent request, /main. Best regards, -agentzh From nginx-forum at nginx.us Thu Jul 18 00:58:35 2013 From: nginx-forum at nginx.us (cavedon) Date: Wed, 17 Jul 2013 20:58:35 -0400 Subject: proxy_pass via HTTP proxy Message-ID: <5a49f0ce6075a68b276034bab9686b59.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to configure my nginx instance so that it "proxy_pass"es to another HTTPS server S. However, in order to reach S, I need to go though an HTTP server P. This means nginx would need to connect to P, issue a CONNECT request, and then tunnel the HTTPS request to S. Is this supported? How to enable it? I could not find mention in the documentation, and it is kind of hard to search for :) Thank you, Ludovico Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240973,240973#msg-240973 From nginx-forum at nginx.us Thu Jul 18 03:24:44 2013 From: nginx-forum at nginx.us (lvella) Date: Wed, 17 Jul 2013 23:24:44 -0400 Subject: Generating a new request when handling one in a module Message-ID: I want to modify third party longpoll module nginx_http_push_module ( http://pushmodule.slact.net/ ) to notify upstream every time a request hangs waiting for a new publication, sending original headers and all, so I know in my upstream application that there is a user connected on the channel, and even what user is it, if I can have access to the session cookies. The results of this notification request isn't really going anywhere and is not of interest. 
My idea was to generate a new request to an "internal"-marked location, copying the relevant headers from the original request, while handling the latter (thus inserting a new request into the nginx HTTP request processing stack and discarding its results). This new request would be independent from there on. At first it seemed a simple approach, but now that I have browsed through some example modules, I am not so sure anymore. I failed to find any module that does something like this, and I could not find the relevant API to spawn such a relatively independent HTTP request. Do you think this is a good approach? Can you point me to the API I should use, or a sample module that does something like this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240974,240974#msg-240974 From nginx-forum at nginx.us Thu Jul 18 07:01:32 2013 From: nginx-forum at nginx.us (nurettin) Date: Thu, 18 Jul 2013 03:01:32 -0400 Subject: nginx caching successive requests to reduce server load Message-ID: Hi, I couldn't find what I was looking for with the search option. For a page that displays the current time when refreshed: when the client presses and holds the refresh button, nginx should serve a cached version of the page instead of fetching the latest one. When the client stops pressing refresh, waits for X ms and then refreshes again, I should see the newest time. Is there an option for this in nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240976,240976#msg-240976 From nginx-forum at nginx.us Thu Jul 18 07:13:13 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 18 Jul 2013 03:13:13 -0400 Subject: proxy_pass via HTTP proxy In-Reply-To: <5a49f0ce6075a68b276034bab9686b59.NginxMailingListEnglish@forum.nginx.org> References: <5a49f0ce6075a68b276034bab9686b59.NginxMailingListEnglish@forum.nginx.org> Message-ID: Just a try, not sure if it will work: when starting your nginx, try to use a shell script that sets http_proxy / https_proxy: export http_proxy=http://server-ip:port/ ; I'm not sure whether nginx has an option to send its upstream requests through a forward proxy. Maybe you can use firewall rules to do simple port forwarding to your proxy P, but I'm not sure that will work either (it does work for intercepting http traffic and using squid as a transparent proxy); https may be an issue. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240973,240977#msg-240977 From nginx-forum at nginx.us Thu Jul 18 08:39:54 2013 From: nginx-forum at nginx.us (ThomasLohner) Date: Thu, 18 Jul 2013 04:39:54 -0400 Subject: Trouble with $uri in subrequest In-Reply-To: References: Message-ID: <76ec711b3bfe0c3bdea79faddba21467.NginxMailingListEnglish@forum.nginx.org> Oh my... I was fooled by $request_uri, which returns the parent URI. I understand that this is intended behavior. Sorry! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240968,240978#msg-240978 From limkimyong at gmail.com Thu Jul 18 09:06:26 2013 From: limkimyong at gmail.com (Kim Yong) Date: Thu, 18 Jul 2013 17:06:26 +0800 Subject: Setting Keepalive_timeout for specific location. Message-ID: Hi, I'd like to know if setting keepalive_timeout for a specific location is possible. Right now I have only managed to get it working at the server {} level but not at the location {} level. Thanks, Kim Yong -- There's no place like ~ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jfs.world at gmail.com Thu Jul 18 09:15:59 2013 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Thu, 18 Jul 2013 17:15:59 +0800 Subject: Setting Keepalive_timeout for specific location. In-Reply-To: References: Message-ID: On Thu, Jul 18, 2013 at 5:06 PM, Kim Yong wrote: > Hi I'd like to know if setting keepalive for a specific location is > possible. Right now I have only managed to get it working on server {} > directive but not location {} directive. > it seems possible according to the docs. http://wiki.nginx.org/HttpCoreModule#keepalive_timeout indicates that you can use the directive in a location as well. What version of nginx is this, and what behaviour are you seeing? -jf -- He who settles on the idea of the intelligent man as a static entity only shows himself to be a fool. "Every nonfree program has a lord, a master -- and if you use the program, he is your master." --Richard Stallman From limkimyong at gmail.com Thu Jul 18 09:28:38 2013 From: limkimyong at gmail.com (Kim Yong) Date: Thu, 18 Jul 2013 17:28:38 +0800 Subject: Setting Keepalive_timeout for specific location. In-Reply-To: References: Message-ID: So says the documentation. But I have yet to see any working config using location directive. I'm using a fairly old 1.0.15-2 off epel. No joy. wouldn't want to upgrade my webclusters... that could open other cans of worms. On Thu, Jul 18, 2013 at 5:15 PM, Jeffrey 'jf' Lim wrote: > On Thu, Jul 18, 2013 at 5:06 PM, Kim Yong wrote: > > Hi I'd like to know if setting keepalive for a specific location is > > possible. Right now I have only managed to get it working on server {} > > directive but not location {} directive. > > > > it seems possible according to the docs. > http://wiki.nginx.org/HttpCoreModule#keepalive_timeout indicates that > you can use the directive in a location as well. > > What version of nginx is this, and what behaviour are you seeing? > > -jf > > -- > He who settles on the idea of the intelligent man as a static entity > only shows himself to be a fool. > > "Every nonfree program has a lord, a master -- > and if you use the program, he is your master." > --Richard Stallman > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- There's no place like ~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfs.world at gmail.com Thu Jul 18 09:35:06 2013 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Thu, 18 Jul 2013 17:35:06 +0800 Subject: Setting Keepalive_timeout for specific location. In-Reply-To: References: Message-ID: On Thu, Jul 18, 2013 at 5:28 PM, Kim Yong wrote: > So says the documentation. yup. I would dig into the source... but I think the wiki should be accurate. For the most recent version of nginx, of course! (which is why I asked about your version) I wouldnt be surprised if the older versions dont have the directive at the location level. Only certain directives were available at that level iirc. > But I have yet to see any working config using > location directive. > I'm using a fairly old 1.0.15-2 off epel. No joy. wouldn't want to upgrade > my webclusters... that could open other cans of worms. > eh... no kidding? IF you really, really have to stick to that version, perhaps a chain of nginxes might work. 
-jf > > On Thu, Jul 18, 2013 at 5:15 PM, Jeffrey 'jf' Lim > wrote: >> >> On Thu, Jul 18, 2013 at 5:06 PM, Kim Yong wrote: >> > Hi I'd like to know if setting keepalive for a specific location is >> > possible. Right now I have only managed to get it working on server {} >> > directive but not location {} directive. >> > >> >> it seems possible according to the docs. >> http://wiki.nginx.org/HttpCoreModule#keepalive_timeout indicates that >> you can use the directive in a location as well. >> >> What version of nginx is this, and what behaviour are you seeing? >> >> -jf >> >> -- >> He who settles on the idea of the intelligent man as a static entity >> only shows himself to be a fool. >> >> "Every nonfree program has a lord, a master -- >> and if you use the program, he is your master." >> --Richard Stallman >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > There's no place like ~ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From limkimyong at gmail.com Thu Jul 18 09:40:05 2013 From: limkimyong at gmail.com (Kim Yong) Date: Thu, 18 Jul 2013 17:40:05 +0800 Subject: Setting Keepalive_timeout for specific location. In-Reply-To: References: Message-ID: Well it went through configtest. and that turned out okay. upgrading nginx would require me to run through the QA gauntlet. Just as equally unpleasant. On Thu, Jul 18, 2013 at 5:35 PM, Jeffrey 'jf' Lim wrote: > On Thu, Jul 18, 2013 at 5:28 PM, Kim Yong wrote: > > So says the documentation. > > yup. I would dig into the source... but I think the wiki should be > accurate. For the most recent version of nginx, of course! (which is > why I asked about your version) I wouldnt be surprised if the older > versions dont have the directive at the location level. Only certain > directives were available at that level iirc. > > > > But I have yet to see any working config using > > location directive. > > I'm using a fairly old 1.0.15-2 off epel. No joy. wouldn't want to > upgrade > > my webclusters... that could open other cans of worms. > > > > eh... no kidding? IF you really, really have to stick to that version, > perhaps a chain of nginxes might work. > > -jf > > > > > > On Thu, Jul 18, 2013 at 5:15 PM, Jeffrey 'jf' Lim > > wrote: > >> > >> On Thu, Jul 18, 2013 at 5:06 PM, Kim Yong wrote: > >> > Hi I'd like to know if setting keepalive for a specific location is > >> > possible. Right now I have only managed to get it working on server {} > >> > directive but not location {} directive. > >> > > >> > >> it seems possible according to the docs. > >> http://wiki.nginx.org/HttpCoreModule#keepalive_timeout indicates that > >> you can use the directive in a location as well. > >> > >> What version of nginx is this, and what behaviour are you seeing? > >> > >> -jf > >> > >> -- > >> He who settles on the idea of the intelligent man as a static entity > >> only shows himself to be a fool. > >> > >> "Every nonfree program has a lord, a master -- > >> and if you use the program, he is your master." 
> >> --Richard Stallman > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > -- > > There's no place like ~ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- There's no place like ~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ian.hobson at ntlworld.com Thu Jul 18 09:55:53 2013 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Thu, 18 Jul 2013 10:55:53 +0100 Subject: Help needed with configuration Message-ID: <51E7BBA9.60401@ntlworld.com> Hi All, I am trying to set up Nginx for a reseller scenario. The reseller will get his own root, and be able to create his versions of any file on the site, or, by leaving the file out, simply have the basic one used instead. My test configuration is below, and it so nearly works! The URL reseller.anake.hcs does not result in serving ...coachmaster3dev/htdocs/index.php as I want, but in a 403 Forbidden error. server { server_name reseller.anake.hcs; listen 80; fastcgi_read_timeout 300; index index.php index.html index.htm; root /home/ian/websites/reseller/htdocs; # don't serve templates location ~ \.tpl { return 404; } # handle php location ~ \.php$ { try_files $uri $uri/ @masterphp; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location @masterphp { root /home/ian/websites/coachmaster3dev/htdocs; try_files $uri $uri/ =404; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } # try reseller's static files location / { try_files $uri $uri/ @master; } # switch to master set location @master { root /home/ian/websites/coachmaster3dev/htdocs; try_files $uri $uri/ =404; } } I am using nginx 1.2.6. How can I get it to work? Thanks Ian From mdounin at mdounin.ru Thu Jul 18 10:14:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Jul 2013 14:14:16 +0400 Subject: Setting Keepalive_timeout for specific location. In-Reply-To: References: Message-ID: <20130718101416.GI49108@mdounin.ru> Hello! On Thu, Jul 18, 2013 at 05:06:26PM +0800, Kim Yong wrote: > Hi I'd like to know if setting keepalive for a specific location is > possible. Right now I have only managed to get it working on server {} > directive but not location {} directive. Yes, it's possible, and works fine on all versions I know. Test config: location / { keepalive_timeout 5; return 204; } location /short { keepalive_timeout 1; return 204; } Simple test which shows it actually works as expected: $ time telnet 127.0.0.1 8080 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. GET / HTTP/1.1 Host: x HTTP/1.1 204 No Content Server: nginx/1.5.3 Date: Thu, 18 Jul 2013 10:11:18 GMT Connection: keep-alive Connection closed by foreign host. 5.45 real 0.00 user 0.00 sys $ time telnet 127.0.0.1 8080 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. GET /short HTTP/1.1 Host: x HTTP/1.1 204 No Content Server: nginx/1.5.3 Date: Thu, 18 Jul 2013 10:12:10 GMT Connection: keep-alive Connection closed by foreign host. 
1.50 real 0.00 user 0.00 sys If it doesn't work for you - you probably did something wrong. -- Maxim Dounin http://nginx.org/en/donation.html From limkimyong at gmail.com Thu Jul 18 10:15:00 2013 From: limkimyong at gmail.com (Kim Yong) Date: Thu, 18 Jul 2013 18:15:00 +0800 Subject: Setting Keepalive_timeout for specific location. In-Reply-To: References: Message-ID: Okay seems like location works for 1.2.x. Now I have another battle to fight :| On Thu, Jul 18, 2013 at 5:40 PM, Kim Yong wrote: > Well it went through configtest. and that turned out okay. upgrading nginx > would require me to run through the QA gauntlet. Just as equally unpleasant. > > > On Thu, Jul 18, 2013 at 5:35 PM, Jeffrey 'jf' Lim wrote: > >> On Thu, Jul 18, 2013 at 5:28 PM, Kim Yong wrote: >> > So says the documentation. >> >> yup. I would dig into the source... but I think the wiki should be >> accurate. For the most recent version of nginx, of course! (which is >> why I asked about your version) I wouldnt be surprised if the older >> versions dont have the directive at the location level. Only certain >> directives were available at that level iirc. >> >> >> > But I have yet to see any working config using >> > location directive. >> > I'm using a fairly old 1.0.15-2 off epel. No joy. wouldn't want to >> upgrade >> > my webclusters... that could open other cans of worms. >> > >> >> eh... no kidding? IF you really, really have to stick to that version, >> perhaps a chain of nginxes might work. >> >> -jf >> >> >> > >> > On Thu, Jul 18, 2013 at 5:15 PM, Jeffrey 'jf' Lim >> > wrote: >> >> >> >> On Thu, Jul 18, 2013 at 5:06 PM, Kim Yong >> wrote: >> >> > Hi I'd like to know if setting keepalive for a specific location is >> >> > possible. Right now I have only managed to get it working on server >> {} >> >> > directive but not location {} directive. >> >> > >> >> >> >> it seems possible according to the docs. >> >> http://wiki.nginx.org/HttpCoreModule#keepalive_timeout indicates that >> >> you can use the directive in a location as well. >> >> >> >> What version of nginx is this, and what behaviour are you seeing? >> >> >> >> -jf >> >> >> >> -- >> >> He who settles on the idea of the intelligent man as a static entity >> >> only shows himself to be a fool. >> >> >> >> "Every nonfree program has a lord, a master -- >> >> and if you use the program, he is your master." >> >> --Richard Stallman >> >> >> >> _______________________________________________ >> >> nginx mailing list >> >> nginx at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> > >> > >> > >> > >> > -- >> > There's no place like ~ >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > There's no place like ~ > -- There's no place like ~ -------------- next part -------------- An HTML attachment was scrubbed... 
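On the "Help needed with configuration" post above: one likely cause of the 403 for / is that the "$uri/" argument of try_files succeeds (the reseller's htdocs directory does exist), so the @master fallback is never reached, and the index module then returns 403 because no index.php or index.html is present in that directory. One possible workaround, untested, is to send the bare / straight into the existing PHP fallback chain:

    # inside the reseller's server{} block
    location = / {
        rewrite ^ /index.php last;
    }

With this, /index.php is re-matched against the "location ~ \.php$" block, whose try_files/@masterphp logic already falls back to the master document root when the reseller has no index.php of its own.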
URL: From nginx-forum at nginx.us Thu Jul 18 10:33:40 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 18 Jul 2013 06:33:40 -0400 Subject: nginx caching successive requests to reduce server load In-Reply-To: References: Message-ID: <69111458fd8bd009eef32f5d514c4435.NginxMailingListEnglish@forum.nginx.org> you can adjust (proxy)-cache-time in seconds Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240976,240987#msg-240987 From nginx-forum at nginx.us Thu Jul 18 10:39:52 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 18 Jul 2013 06:39:52 -0400 Subject: Help needed with configuration In-Reply-To: <51E7BBA9.60401@ntlworld.com> References: <51E7BBA9.60401@ntlworld.com> Message-ID: iirc there is something with the order/length of location $var {} - content. is there an index.php in /home/ian/websites/reseller/htdocs? what is the dir_permissions from /home/ian/websites/reseller/htdocs? what are the file_permissions for /home/ian/websites/coachmaster3dev/htdocs/index.php? what is the dir_permissions home/ian/websites/coachmaster3dev/htdocs? 403 Forbidden is a trustable output. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240984,240988#msg-240988 From nginx-forum at nginx.us Thu Jul 18 10:47:36 2013 From: nginx-forum at nginx.us (vikas) Date: Thu, 18 Jul 2013 06:47:36 -0400 Subject: setting max active connection Message-ID: How to increase max active connection limits from default 520. System configuration CentOS release 5.8 (Final) with whm. nginx conf file is user nginx root; worker_processes 16; worker_rlimit_nofile 200000; worker_connections 10240; error_log /var/log/nginx/error.log; error_log /var/log/nginx/error.log notice; error_log /var/log/nginx/error.log info; pid /var/run/nginx.pid; events { worker_connections 10240; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; index index.php index.htm index.html; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log off; sendfile on; tcp_nopush on; tcp_nodelay on; server_tokens off; gzip on; gzip_static on; gzip_comp_level 5; gzip_min_length 10240; keepalive_timeout 30; keepalive_requests 100000; limit_conn_zone $binary_remote_addr zone=addr:10m; include /etc/nginx/conf.d/*.conf; server { limit_conn addr 20000; listen 7007; server_name _; root /usr/share/nginx/html; location / { root /usr/share/nginx/html; index index.html index.htm; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } location /favicon.ico { empty_gif; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location ~ \.php$ { root /usr/share/nginx/html; fastcgi_pass unix:/tmp/php.sock; fastcgi_index index.php; fastcgi_send_timeout 8m; fastcgi_read_timeout 8m; fastcgi_connect_timeout 8m; include /etc/nginx/fastcgi.conf; } location /status { stub_status on; access_log off; } } } /proc/sys/net/ipv4/ip_local_port_range 9000 65535 /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_time_wait 1 If any other information need pls tell me. I am not able to find where I have to set max active connection limit for nginx in the conf. 
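For the "setting max active connection" question above: worker_connections is only valid inside the events{} block (the copy at the top of the pasted config is presumably a paste artifact), and the rough ceiling is worker_processes * worker_connections, shared between client and upstream connections. A trimmed sketch of the relevant part:

    worker_processes      16;
    worker_rlimit_nofile  200000;

    events {
        worker_connections 10240;   # per worker; must live inside events{}
        use epoll;
    }

With 16 workers that allows connections on the order of 160K, so if the active-connection count tops out around 520 the limit is more likely in the PHP backend (for example the number of php-cgi children the FastCGI spawner starts) or in an OS limit than in nginx itself.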
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240989,240989#msg-240989 From mrvisser at gmail.com Thu Jul 18 11:10:27 2013 From: mrvisser at gmail.com (Branden Visser) Date: Thu, 18 Jul 2013 07:10:27 -0400 Subject: Question about failure and fail-over Message-ID: Hi all, I have a general question about server failure and failover within an upstream group to ensure I understand it correctly. Lets say I have the configuration: proxy_next_upstream timeout; proxy_connect_timeout 5; ... upstream { 127.0.0.1 max_fails=3 fail_timeout=10s 127.0.0.2 max_fails=3 fail_timeout=10s 127.0.0.3 max_fails=3 fail_timeout=10s } And then the server 127.0.0.1 starts "hanging" indefinitely on connection attempts. a) Once 3 connection attempts timeout after 5 seconds on 127.0.0.1, it will be marked down. However, during that 5 second timeout, it is possible that 30, or N connections / requests may be in process of timing out as well, so you may end up with 30 internal connection failures as a result of 127.0.0.1's issue. Although they all are retried on the next available upstream, 30 end-users noticed a 5 second hang in their request as a result of waiting for the timeout to occur. b) After 10 seconds, if the server is still hanging, a) basically repeats in the same manner. Is this correct? If I add "keepalive 64;" into the upstream block, does the above scenario change? If a server is marked down as a result of no new connections being able to connect, are all persistent connections destroyed as well? Any insight on this understanding would be appreciated. Cheers, Branden From nginx-forum at nginx.us Thu Jul 18 11:15:43 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 18 Jul 2013 07:15:43 -0400 Subject: setting max active connection In-Reply-To: References: Message-ID: <20c658eb545550de27a7655a4884b2b8.NginxMailingListEnglish@forum.nginx.org> you config is somewhat messed_up, but it think this is not an issue here. are you sure your fastcgi_process is able to deliver more than 520 parallel connections? http://wiki.nginx.org/EventsModule#worker_connections -> to be defined in event {} max clients = worker_processes * worker_connections In a reverse proxy situation, max clients becomes max clients = worker_processes * worker_connections/4 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240989,240991#msg-240991 From boudenomar1 at gmail.com Thu Jul 18 11:16:59 2013 From: boudenomar1 at gmail.com (Omar Bouden) Date: Thu, 18 Jul 2013 12:16:59 +0100 Subject: No subject Message-ID: Hi, I have been on nginx for over a month now , i have got some backgrounds for the Nginx configuration and integrating module , right now i want to use the perl embeded module to execute perl scripts from my Nginx configuration actuallu that went successfully , i want to know if i can exchange variables between the Nginx configuration and some perl function loctaed in the location context. Best , -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jul 18 11:46:01 2013 From: nginx-forum at nginx.us (feanorknd) Date: Thu, 18 Jul 2013 07:46:01 -0400 Subject: proxy_cache calculating size error under SSD drive but not SATA drive Message-ID: <3039e11effb5a960c01da6dedf3fe189.NginxMailingListEnglish@forum.nginx.org> Hello: I think I am on the right way, but not sure... 
The scenario: - Have 2 drives: -> SSD drive - XFS - default options (with noatime, discard) - almost empty -> SATA drive - XFS - default options - almost empty - Kernel variables, shared memory, ulimits, max-file, etc... all correctly configured - Nginx and virtualhost correctly configured. - proxy_cache_path levels=1:2 keys_zone=catalogo_fotos:2500m max_size=2500m inactive=120d; Ok.. here I am... it depends of the path... the problem is: - If the path is at SATA drive -> cache is growing until maximum 2500m are reached... no problem here. Normal.. - If the path is at SSD drive -> cache is not growing until limit... it stale... problem... why? If debug see: 2013/07/18 13:15:52 [debug] 17641#0: http file cache size: 640039 2013/07/18 13:15:52 [debug] 17641#0: http file cache forced expire 2013/07/18 13:15:52 [debug] 17641#0: http file cache forced expire: #0 1 933590ab 2013/07/18 13:15:52 [debug] 17641#0: http file cache expire: "/DATA/cache/catalogo_fotos/7/3a/c1de7a39f5393c24933590ab20ba43a7" while: root at megaserver1 /DATA/cache # du -hs catalogomodacom_fotos/ 1.8G catalogo_fotos/ root at megaserver1 /DATA/cache # find ./catalogo_fotos/ -type f | wc -l 17134 What happens? Every 10 seconds, http file cache size detects max size and starts to force expire and delete cached files... What is that "http file cache size: 640039" ???? That number, which way is calculated??? What it is? I have enough shared memory, already properly configured (1Gb for each app and 14Gb total).... and the question is if I change path and set it under SATA drive there is no problem. I am thinking at all at this "http file cache size: 640039"..... I think it is the key about what is happening... that number is not near the max size of 2500m defined for the cache deposit, and is not near the number of files actually at cache, around 17134... Could you help me please? Thanks so much in advance. Gino. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240993,240993#msg-240993 From nginx-forum at nginx.us Thu Jul 18 11:53:15 2013 From: nginx-forum at nginx.us (vikas) Date: Thu, 18 Jul 2013 07:53:15 -0400 Subject: setting max active connection In-Reply-To: <20c658eb545550de27a7655a4884b2b8.NginxMailingListEnglish@forum.nginx.org> References: <20c658eb545550de27a7655a4884b2b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is the php-fastcgi file, /usr/bin/spawn-fcgi -s /tmp/php.sock -M 0666 -C 9 -u nginx -g nginx -U nginx -G nginx -f /usr/bin/php-cgi -P /var/run/fastcgi-php.pid And in this file I didn't restrict parallel connections to 520. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240989,240994#msg-240994 From nginx-forum at nginx.us Thu Jul 18 12:32:35 2013 From: nginx-forum at nginx.us (feanorknd) Date: Thu, 18 Jul 2013 08:32:35 -0400 Subject: proxy_cache calculating size error under SSD drive but not SATA drive In-Reply-To: <3039e11effb5a960c01da6dedf3fe189.NginxMailingListEnglish@forum.nginx.org> References: <3039e11effb5a960c01da6dedf3fe189.NginxMailingListEnglish@forum.nginx.org> Message-ID: <36cd19d9d55c580f258cd9e2c31a13d0.NginxMailingListEnglish@forum.nginx.org> Hello: Each 10 seconds, the core force expires files until http cache size below 640000 (limits are defined at 2500Mb at config) grep "cache size" /STORAGE/log.txt 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640339 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640317 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640297 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640262 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640248 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640236 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640226 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640217 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640207 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640197 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640167 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640157 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640141 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640124 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640115 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640057 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 640040 2013/07/18 13:15:22 [debug] 17641#0: http file cache size: 639904 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640192 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640188 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640154 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640115 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640106 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640097 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640083 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640074 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640050 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640042 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640026 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 640011 2013/07/18 13:15:32 [debug] 17641#0: http file cache size: 639997 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640397 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640380 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640368 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640343 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640330 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640306 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640158 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640149 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640139 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640130 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640119 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 
640110 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640101 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640092 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640072 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640062 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640049 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 640025 2013/07/18 13:15:42 [debug] 17641#0: http file cache size: 639944 Hope it helps... For the proccesor, the max http file cache size is 640000 (but only for this proxy_cache... i have more!!!) Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240993,240995#msg-240995 From emailgrant at gmail.com Thu Jul 18 12:48:34 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 18 Jul 2013 05:48:34 -0700 Subject: Strange log file behavior In-Reply-To: <20130714182129.GJ66479@mdounin.ru> References: <20130714182129.GJ66479@mdounin.ru> Message-ID: >> I noticed that most of my rotated nginx log files are empty (0 bytes). >> >> My only access_log directive is in nginx.conf: >> >> access_log /var/log/nginx/localhost.access_log combined; >> >> Also nginx is currently logging to >> /var/log/nginx/localhost.access_log.1 instead of localhost.access_log. >> >> Does anyone know why these things are happening? > > This usually happens if someone don't ask nginx to reopen log > files after a rotation. See here for details: > > http://nginx.org/en/docs/control.html#logs I tried issuing kill -USR1 `cat /run/nginx.pid` manually but nginx-1.4.1 still logged to the old file. I got the following in error_log: signal 10 (SIGUSR1) received, reopening logs reopening logs It does start logging to the new file if I restart nginx afterward. I also noticed this in error_log from when logrotate executes: open() "/var/log/nginx/error_log" failed (13: Permission denied) open() "/var/log/nginx/localhost.access_log" failed (13: Permission denied) open() "/var/log/nginx/localhost.error_log" failed (13: Permission denied) Is something happening out of order? - Grant From igor.sverkos at googlemail.com Thu Jul 18 13:14:10 2013 From: igor.sverkos at googlemail.com (Igor Sverkos) Date: Thu, 18 Jul 2013 15:14:10 +0200 Subject: Strange log file behavior In-Reply-To: References: <20130714182129.GJ66479@mdounin.ru> Message-ID: Hi, you are right. There is a problem: https://bugs.gentoo.org/show_bug.cgi?id=473036 Upstream (nginx) accepted the report: http://trac.nginx.org/nginx/ticket/376 -- Regards, Igor From emailgrant at gmail.com Thu Jul 18 13:54:30 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 18 Jul 2013 06:54:30 -0700 Subject: Strange log file behavior In-Reply-To: References: <20130714182129.GJ66479@mdounin.ru> Message-ID: > you are right. There is a problem: > > https://bugs.gentoo.org/show_bug.cgi?id=473036 > > Upstream (nginx) accepted the report: > http://trac.nginx.org/nginx/ticket/376 Many thanks Igor! You've saved me a lot of trouble. - Grant From nginx-forum at nginx.us Thu Jul 18 13:56:04 2013 From: nginx-forum at nginx.us (parulsood85) Date: Thu, 18 Jul 2013 09:56:04 -0400 Subject: Not able add nginx upload module to nginx 1.2.8 Message-ID: Hi, I am trying add the nginx upload module 2.2.0 with nginx 1.2.8 and getting below error. 
after./configure --add-module= and make this one fails > make install make -f objs/Makefile install make[1]: Entering directory `/app/build/nginx-1.2.8' gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o \ /app/build/nginx-1.2.8/nginx_upload_module-2.2.0/ngx_http_upload_module.c /app/build/nginx-1.2.8/nginx_upload_module-2.2.0/ngx_http_upload_module.c:14:17: fatal error: md5.h: No such file or directory compilation terminated. make[1]: *** [objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o] Error 1 make[1]: Leaving directory `/app/build/nginx-1.2.8' make: *** [install] Error 2 > make install make -f objs/Makefile install make[1]: Entering directory `/app/build/nginx-1.2.8' gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o \ /app/build/nginx-1.2.8/nginx_upload_module-2.2.0/ngx_http_upload_module.c /app/build/nginx-1.2.8/nginx_upload_module-2.2.0/ngx_http_upload_module.c:26:17: fatal error: sha.h: No such file or directory compilation terminated. make[1]: *** [objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o] Error 1 make[1]: Leaving directory `/app/build/nginx-1.2.8' make: *** [install] Error 2 Should I try some lower version of this module? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240999,240999#msg-240999 From nginx-forum at nginx.us Thu Jul 18 13:57:51 2013 From: nginx-forum at nginx.us (parulsood85) Date: Thu, 18 Jul 2013 09:57:51 -0400 Subject: Not able add nginx upload module to nginx 1.2.8 In-Reply-To: References: Message-ID: Please reply... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240999,241000#msg-241000 From dewanggaba at gmail.com Thu Jul 18 14:21:56 2013 From: dewanggaba at gmail.com (antituhan) Date: Thu, 18 Jul 2013 07:21:56 -0700 (PDT) Subject: Webapps config doesn't work because of overridden by nginx Message-ID: <1374157316235-7586017.post@n2.nabble.com> Hi All, I've some problem when using reverse proxy, my configuration are like this : I think the configuration are correct, and the site are running well. But, if I want to access some subdomain (eg. techno.domain.tld), they automatically overridden by nginx config, and the correct pages won't show. The pages show only home pages of www.domain.tld Any suggestion? ----- [daemon at antituhan.com ~]# -- View this message in context: http://nginx.2469901.n2.nabble.com/Webapps-config-doesn-t-work-because-of-overridden-by-nginx-tp7586017.html Sent from the nginx mailing list archive at Nabble.com. From mdounin at mdounin.ru Thu Jul 18 14:28:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Jul 2013 18:28:08 +0400 Subject: Question about failure and fail-over In-Reply-To: References: Message-ID: <20130718142808.GJ49108@mdounin.ru> Hello! On Thu, Jul 18, 2013 at 07:10:27AM -0400, Branden Visser wrote: > Hi all, I have a general question about server failure and failover > within an upstream group to ensure I understand it correctly. > > Lets say I have the configuration: > > proxy_next_upstream timeout; > proxy_connect_timeout 5; > ... 
> upstream { > 127.0.0.1 max_fails=3 fail_timeout=10s > 127.0.0.2 max_fails=3 fail_timeout=10s > 127.0.0.3 max_fails=3 fail_timeout=10s > } > > And then the server 127.0.0.1 starts "hanging" indefinitely on > connection attempts. > > a) Once 3 connection attempts timeout after 5 seconds on 127.0.0.1, it > will be marked down. However, during that 5 second timeout, it is > possible that 30, or N connections / requests may be in process of > timing out as well, so you may end up with 30 internal connection > failures as a result of 127.0.0.1's issue. Although they all are > retried on the next available upstream, 30 end-users noticed a 5 > second hang in their request as a result of waiting for the timeout to > occur. Yep. Use least_conn balancer to mitigate such kind of backend problems, see http://nginx.org/r/least_conn. Additionally, it's usually good idea to make sure your backends return RST on listen queue overflow. On most Linux systems default seems to be just to drop SYN packets on listen queue overflow, which will result in an unbound number of connections waiting for a timeout. Changing /proc/sys/net/ipv4/tcp_abort_on_overflow might be good idea, see here for details: http://man7.org/linux/man-pages/man7/tcp.7.html > b) After 10 seconds, if the server is still hanging, a) basically > repeats in the same manner. No. As of 1.1.6+, only single request will be routed to the server after fail_timeout. The server will be considered up only if it will be able to respond to this request. > Is this correct? If I add "keepalive 64;" into the upstream block, > does the above scenario change? If a server is marked down as a result > of no new connections being able to connect, are all persistent > connections destroyed as well? Balancing doesn't know anything about cached connections. If a server is marked down, no attempts to use cached connections to the server will be made, and eventually all connections to the server will be replaced with connections to other servers, as per LRU algorthm. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Jul 18 14:43:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Jul 2013 18:43:48 +0400 Subject: proxy_cache calculating size error under SSD drive but not SATA drive In-Reply-To: <3039e11effb5a960c01da6dedf3fe189.NginxMailingListEnglish@forum.nginx.org> References: <3039e11effb5a960c01da6dedf3fe189.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130718144348.GK49108@mdounin.ru> Hello! On Thu, Jul 18, 2013 at 07:46:01AM -0400, feanorknd wrote: > Hello: > > I think I am on the right way, but not sure... > > The scenario: > > - Have 2 drives: > -> SSD drive - XFS - default options (with > noatime, discard) - almost empty > -> SATA drive - XFS - default options - almost > empty > > - Kernel variables, shared memory, ulimits, max-file, etc... all correctly > configured > > - Nginx and virtualhost correctly configured. > > - proxy_cache_path levels=1:2 keys_zone=catalogo_fotos:2500m > max_size=2500m inactive=120d; > > > Ok.. here I am... it depends of the path... the problem is: > > - If the path is at SATA drive -> cache is growing until maximum 2500m are > reached... no problem here. Normal.. > - If the path is at SSD drive -> cache is not growing until limit... it > stale... problem... why? Try looking into this ticket: http://trac.nginx.org/nginx/ticket/157 With XFS, a file size reported on just created files before a file is closed is incorrect, and this might confuse nginx. [...] 
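Returning to the "Question about failure and fail-over" thread answered above, a sketch of the least_conn suggestion, with the server keyword spelled out and hypothetical ports added (the question's pseudo-config omits both):

    upstream backend {
        least_conn;
        server 127.0.0.1:8080 max_fails=3 fail_timeout=10s;
        server 127.0.0.2:8080 max_fails=3 fail_timeout=10s;
        server 127.0.0.3:8080 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 80;

        location / {
            proxy_next_upstream    timeout;
            proxy_connect_timeout  5s;
            proxy_pass             http://backend;
        }
    }

least_conn only mitigates the pile-up: requests already routed to a hanging server still wait out proxy_connect_timeout before being retried on the next server, as described above.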
> What is that "http file cache size: 640039" ???? That number, which way is > calculated??? What it is? This is the cache size in blocks. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Jul 18 15:02:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Jul 2013 19:02:16 +0400 Subject: Strange log file behavior In-Reply-To: References: <20130714182129.GJ66479@mdounin.ru> Message-ID: <20130718150216.GL49108@mdounin.ru> Hello! On Thu, Jul 18, 2013 at 03:14:10PM +0200, Igor Sverkos wrote: > Hi, > > you are right. There is a problem: > > https://bugs.gentoo.org/show_bug.cgi?id=473036 > > Upstream (nginx) accepted the report: > http://trac.nginx.org/nginx/ticket/376 The "accepted" part is about future enhancement. The Gentoo part seems to be about wrong permissions on a log directory, which result in non-working USR1. -- Maxim Dounin http://nginx.org/en/donation.html From emailgrant at gmail.com Thu Jul 18 15:34:20 2013 From: emailgrant at gmail.com (Grant) Date: Thu, 18 Jul 2013 08:34:20 -0700 Subject: Strange log file behavior In-Reply-To: <20130718150216.GL49108@mdounin.ru> References: <20130714182129.GJ66479@mdounin.ru> <20130718150216.GL49108@mdounin.ru> Message-ID: >> you are right. There is a problem: >> >> https://bugs.gentoo.org/show_bug.cgi?id=473036 >> >> Upstream (nginx) accepted the report: >> http://trac.nginx.org/nginx/ticket/376 > > The "accepted" part is about future enhancement. The Gentoo part > seems to be about wrong permissions on a log directory, which > result in non-working USR1. It appears you are right. Thank you for clearing that up Maxim. - Grant From bdfy at mail.ru Thu Jul 18 15:50:16 2013 From: bdfy at mail.ru (=?UTF-8?B?SXZhbg==?=) Date: Thu, 18 Jul 2013 19:50:16 +0400 Subject: Not able add nginx upload module to nginx 1.2.8 In-Reply-To: References: Message-ID: <1374162616.417488771@f331.i.mail.ru> http://portage.perestoroniny.ru/www-servers/nginx/files/ use nginx-1.3.9_upload_module.patch ???????, 18 ???? 2013, 9:57 -04:00 ?? "parulsood85" : >Please reply... > >Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240999,241000#msg-241000 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- ???? ?. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdfy at mail.ru Thu Jul 18 15:53:57 2013 From: bdfy at mail.ru (=?UTF-8?B?SXZhbg==?=) Date: Thu, 18 Jul 2013 19:53:57 +0400 Subject: Not able add nginx upload module to nginx 1.2.8 In-Reply-To: <1374162616.417488771@f331.i.mail.ru> References: <1374162616.417488771@f331.i.mail.ru> Message-ID: <1374162837.292646494@f331.i.mail.ru> : fatal error: md5.h: No such file or directory fatal error: sha.h: No such file or directory try to install openssl-devel? package ???????, 18 ???? 2013, 19:50 +04:00 ?? Ivan : >http://portage.perestoroniny.ru/www-servers/nginx/files/ > >use nginx-1.3.9_upload_module.patch > > >???????, 18 ???? 2013, 9:57 -04:00 ?? "parulsood85" : >>Please reply... >> >>Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240999,241000#msg-241000 >> >>_______________________________________________ >>nginx mailing list >>nginx at nginx.org >>http://mailman.nginx.org/mailman/listinfo/nginx > > >-- >???? ?. >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- ???? ?. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ian.hobson at ntlworld.com Thu Jul 18 18:38:49 2013 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Thu, 18 Jul 2013 19:38:49 +0100 Subject: Help needed with configuration In-Reply-To: References: <51E7BBA9.60401@ntlworld.com> Message-ID: <51E83639.5060703@ntlworld.com> Hi Mex, Thanks for your interest. On 18/07/2013 11:39, mex wrote: > iirc there is something with the order/length of location $var {} - > content. I have tried moving the location / to the first position, but that changed nothing. > > is there an index.php in /home/ian/websites/reseller/htdocs? No, but when I create one, it is served. I don't want to create one unless it is necessary for reseller's configuration. > what is the dir_permissions from /home/ian/websites/reseller/htdocs? 775 if I specify the file, it is served as required. > what are the file_permissions for > /home/ian/websites/coachmaster3dev/htdocs/index.php? 744 > what is the dir_permissions home/ian/websites/coachmaster3dev/htdocs? 777 > > 403 Forbidden is a trustable output. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240984,240988#msg-240988 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > ----- > No virus found in this message. > Checked by AVG - www.avg.com > Version: 2013.0.3349 / Virus Database: 3204/6500 - Release Date: 07/17/13 > > -- Ian Hobson 31 Sheerwater, Northampton NN3 5HU, Tel: 01604 513875 Preparing eBooks for Kindle and ePub formats to give the best reader experience. From nginx-forum at nginx.us Thu Jul 18 18:49:51 2013 From: nginx-forum at nginx.us (oops_im_a_sysadmin) Date: Thu, 18 Jul 2013 14:49:51 -0400 Subject: If statements, string manipulation and file detection Message-ID: <98d55664fc12626a6b9bdbbe1a3217d1.NginxMailingListEnglish@forum.nginx.org> Hi, I'd like to use some if statements in a config. I know if in nginx is evil, but I think it's what I want. Here is the pseudo-nginx-conf-code for what I need: http://pastebin.com/BxrtZ695 ? can someone please help me get the syntax right? In english, the idea is: if the requested filename ends with either "F.jpg" or "M.jpg", and another file with almost the same name but the _other_ ending instead does not exist, break/fail. Thanks for any help! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241010,241010#msg-241010 From nginx-forum at nginx.us Thu Jul 18 19:22:39 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 18 Jul 2013 15:22:39 -0400 Subject: setting max active connection In-Reply-To: References: <20c658eb545550de27a7655a4884b2b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <32e522ff49c5d5d7d869bc298096d018.NginxMailingListEnglish@forum.nginx.org> your ngfinx-config seems ok (except that part that should be deleted from global-section and appear only in event {...} can you test your fastcgi_process with ab (apache benchmark - tool) oder httperf until you reach max_clients w/out reverse_proxying through nginx? 
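For the "If statements, string manipulation and file detection" post above (the pastebin link is not reproduced here), one way to express the rule is a pair of regex locations with a named capture. This is an untested sketch; it assumes both images resolve against $document_root, and it uses only "if ... return", which is one of the safe uses of if inside a location:

    location ~ ^(?P<stem>.*)F\.jpg$ {
        # fail unless the matching M variant also exists
        if (!-f $document_root${stem}M.jpg) {
            return 404;
        }
        try_files $uri =404;
    }

    location ~ ^(?P<stem>.*)M\.jpg$ {
        # fail unless the matching F variant also exists
        if (!-f $document_root${stem}F.jpg) {
            return 404;
        }
        try_files $uri =404;
    }

For a request like /photos/123F.jpg this checks that /photos/123M.jpg exists under the same root before serving the file.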
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240989,241011#msg-241011 From francis at daoine.org Thu Jul 18 22:20:05 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 18 Jul 2013 23:20:05 +0100 Subject: Webapps config doesn't work because of overridden by nginx In-Reply-To: <1374157316235-7586017.post@n2.nabble.com> References: <1374157316235-7586017.post@n2.nabble.com> Message-ID: <20130718222005.GG15782@craic.sysops.org> On Thu, Jul 18, 2013 at 07:21:56AM -0700, antituhan wrote: Hi there, > I've some problem when using reverse proxy, my configuration are like this : > Any suggestion? nabble ate half your message. See http://forum.nginx.org/read.php?2,241003 to see what everyone thinks your mail says. You'll probably be better off if you find out how to stop nabble breaking things, and then re-sending the full message. f -- Francis Daly francis at daoine.org From agentzh at gmail.com Thu Jul 18 23:21:15 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 18 Jul 2013 16:21:15 -0700 Subject: [ANN] ngx_openresty devel version 1.4.1.1 released Message-ID: Hello guys! I am glad to announce that the new development version of ngx_openresty, 1.4.1.1, is now released: http://openresty.org/#Download This is the first release based on the Nginx 1.4.x stable series. Special thanks go to all our contributors and users for helping make this release happen! Below is the complete change log for this release, as compared to the last (stable) release, 1.2.8.6: * upgraded the Nginx core to 1.4.1. * see for changes. * bugfix: ./configure: use of spaces in the "--with-cc" option values resulted in errors. * bugfix: applied the unix_socket_accept_over_read patch to fix a buffer over-read issue in the Nginx core when Nginx is configured to listen on a unix domain socket. * bugfix: applied the gcc-maybe-uninitialized-warning patch to the Nginx core to fix a gcc warning with gcc 4.7.3/4.7.2. * upgraded LuaNginxModule to 0.8.5. * change: made ngx.say/ngx.print/ngx.eof/ngx.flush/ngx.send_headers return "nil" and a string describing the error in case of most of the common errors (instead of throwing out an exception), and return 1 for success. * feature: added new directive lua_regex_match_limit for setting PCRE's "match_limit" protection for regex execution. * feature: now we store the nginx request object as a named Lua global variable "__ngx_req" to help FFI-based Lua code directly access it. * bugfix: the ngx.ctx tables would leak memory when ngx.ctx, ngx.exec()/ngx.req.set_uri(uri, true), and log_by_lua were used together in a single location. thanks Guanlan Dai for writing the gdb utils to catch this. * bugfix: setting ngx.var.VARIABLE could lead to buffer over-read in "luaL_error" when an error happened. * bugfix: tcpsock:send("") resulted in the error log alert message "send() returned zero". * bugfix: ngx.flush(true) might not return 1 on success. * bugfix: when compiling with "-DDDEBUG=1", there was a compilation error. thanks tigeryang for the report. * optimize: avoided use of the nginx request objects in ngx.escape_uri, ngx.unescape_uri, ngx.quote_sql_str, ngx.decode_base64, ngx.encode_base64, ngx.encode_args, and ngx.decode_args. * optimize: no longer store "cf->log" into the Lua registry table because we can always directly access the global "ngx_cycle->log" thing. * refactor: added inline functions "ngx_http_lua_get_req" and "ngx_http_lua_set_req" to eliminate code duplication when storing or fetching the nginx request object from the lua global variable table. 
* docs: typo fixes in the code sample for body_filter_by_lua. thanks cyberty for the patch. * docs: mentioned my Nginx Systemtap Toolkit which is very useful for online debugging on Linux. * upgraded HeadersMoreNginxModule to 0.21. * bugfix: segmentation fault might happen in Nginx 1.4.x when using the more_set_input_headers directive on the Cookie request headers because recent versions of Nginx no longer always initialize "r->headers_in.cookies". The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004001 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From nginx-forum at nginx.us Fri Jul 19 00:54:48 2013 From: nginx-forum at nginx.us (feanorknd) Date: Thu, 18 Jul 2013 20:54:48 -0400 Subject: proxy_cache calculating size error under SSD drive but not SATA drive In-Reply-To: <20130718144348.GK49108@mdounin.ru> References: <20130718144348.GK49108@mdounin.ru> Message-ID: <8bbe1cd1ffc3578ae710d5fa7f9a05f7.NginxMailingListEnglish@forum.nginx.org> Thanks thanks so much!!! :D I even saw that ticket before posting, but I figured out it was not the problem just because I do use XFS for my nginx_caches at 4 servers without this problem, and also I did test changing the path to point to a XFS partition on SATA drive, and the problem dissapeared. If you have a look at the ticket, that user also use a SSD drive and have the alloc problem.... it seems like if only at SSD drives, the size notification is incorrect because of the allocsize, but not on SATA drives, although the allocsize is default for both of them, at least in my case!!!! Have a look at my first post and see at SATA drive, with XFS defaults, the cache gets fullfilled exactly and if I configure 2500Gb, the "du -hs" when core starts force expiring objects, is just that 2500Gb..... the measures al exact and real. Not the same for SSD.. do you think people is something to review by developers somehow? Thanks. Maxim Dounin Wrote: ------------------------------------------------------- > > Try looking into this ticket: > > http://trac.nginx.org/nginx/ticket/157 > > With XFS, a file size reported on just created files before a file > is closed is incorrect, and this might confuse nginx. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240993,241022#msg-241022 From sosogh at 126.com Fri Jul 19 01:48:09 2013 From: sosogh at 126.com (sosogh at 126.com) Date: Fri, 19 Jul 2013 09:48:09 +0800 Subject: nginx-reverse-proxy-proxy-cache-inside-if-block-possible Message-ID: <201307190948074835821@126.com> Hi I am running a BBS server(nginx). And now I want to setup a cache server (using nginx)in front of the BBS server to cache the pic . 
The URL for the pic is something like this: http://www.mysite.com/forum.php?mod=attachment&aid=MjIwODgyfDhiNWNiNzE4fDEzNzQxMTc1MzB8OTgyNzR8OTgwNw%3D%3D&noupdate=yes It is not as usual as something like this --- http://www.mysite.com/img/xxxx.jpg Following this page: http://serverfault.com/questions/389571/nginx-reverse-proxy-proxy-cache-inside-if-block-possible The cache config on the cache server is :(nginx version: nginx/1.1.19) proxy_temp_path /nginx-cache/tmp 1 2; proxy_cache_path /nginx-cache/cache1 levels=1:2 keys_zone=cache1:100m inactive=1d max_size=15g; map $arg_mod $skip_cache { default 1; attachment 0; } log_format format1 '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" $upstream_cache_status'; location /forum.php { proxy_pass http://1.1.1.1; # 1.1.1.1 is the real ip of BBS server. proxy_cache cache1; proxy_cache_key $host$uri$is_args$args; proxy_cache_valid 720m; expires 3d; proxy_cache_bypass $skip_cache; proxy_no_cache $skip_cache; } When I access http://www.mysite.com/forum.php?mod=attachment&aid=MjIwODgyfDhiNWNiNzE4fDEzNzQxMTc1MzB8OTgyNzR8OTgwNw%3D%3D&noupdate=yes on my IE. I see the following log on cache server: client IP - - [18/Jul/2013:14:21:37 +0400] "GET /forum.php?mod=attachment&aid=MjIwODgyfDhiNWNiNzE4fDEzNzQxMTc1MzB8OTgyNzR8OTgwNw%3D%3D&noupdate=yes HTTP/1.1" 200 4194 "http://www.mysite.com/thread-9971-1-1.html" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0;; .)" MISS And at that moment,the pic did not shown on my IE. My questions are: What does "MISS" excatly mean? Does it mean that cache server does not get the pic from BBS server ? If so , why? or are there any other ways to setup nginx cache for my situation? And hint is appreciated. Thank you ! sosogh at 126.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jul 19 08:26:10 2013 From: nginx-forum at nginx.us (mex) Date: Fri, 19 Jul 2013 04:26:10 -0400 Subject: nginx-reverse-proxy-proxy-cache-inside-if-block-possible In-Reply-To: <201307190948074835821@126.com> References: <201307190948074835821@126.com> Message-ID: <01bbc7e517a5477783acaff212d50cd9.NginxMailingListEnglish@forum.nginx.org> MISS means the ressouce is not found in the cache btw, do you see any requests getting cahced / your caching-dir is filling or do you see 100% MISS? maybe: http://forum.nginx.org/read.php?11,163400,163695 do you use cache-control-headerrs? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241023,241024#msg-241024 From mdounin at mdounin.ru Fri Jul 19 10:05:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2013 14:05:29 +0400 Subject: proxy_cache calculating size error under SSD drive but not SATA drive In-Reply-To: <8bbe1cd1ffc3578ae710d5fa7f9a05f7.NginxMailingListEnglish@forum.nginx.org> References: <20130718144348.GK49108@mdounin.ru> <8bbe1cd1ffc3578ae710d5fa7f9a05f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130719100529.GM49108@mdounin.ru> Hello! On Thu, Jul 18, 2013 at 08:54:48PM -0400, feanorknd wrote: > Thanks thanks so much!!! :D > > I even saw that ticket before posting, but I figured out it was not the > problem just because I do use XFS for my nginx_caches at 4 servers without > this problem, and also I did test changing the path to point to a XFS > partition on SATA drive, and the problem dissapeared. > > If you have a look at the ticket, that user also use a SSD drive and have > the alloc problem.... 
it seems like if only at SSD drives, the size > notification is incorrect because of the allocsize, but not on SATA drives, > although the allocsize is default for both of them, at least in my case!!!! > > Have a look at my first post and see at SATA drive, with XFS defaults, the > cache gets fullfilled exactly and if I configure 2500Gb, the "du -hs" when > core starts force expiring objects, is just that 2500Gb..... the measures al > exact and real. > > Not the same for SSD.. do you think people is something to review by > developers somehow? Depending on media used XFS might apply different defaults, and/or observed behaviour might be different due to timing reasons. Quick search suggests XFS currently uses dynamic allocsize by default, see here: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=055388a3188f56676c21e92962fc366ac8b5cb72 Try forcing allocsize to something like 4k to see if it helps. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Jul 19 12:03:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2013 16:03:09 +0400 Subject: EXSLT func:function not registered for XSLT filter module In-Reply-To: References: <20130715173214.GF66479@mdounin.ru> Message-ID: <20130719120309.GN49108@mdounin.ru> Hello! On Wed, Jul 17, 2013 at 06:18:03PM +0200, Kate F wrote: > On 15 July 2013 19:32, Maxim Dounin wrote: > > Hello! > > > > On Sat, Jul 13, 2013 at 12:19:51PM +0200, Kate F wrote: > > > >> Hi, > >> > >> I'm trying to use EXSLT's with nginx's xslt filter > >> module. The effect I think I'm seeing is that my functions are > >> seemingly ignored. > > > > [...] > > > >> Looking at ngx_http_xslt_filter_module.c I see exsltRegisterAll() is > >> called, which is what should register libexslt's handler for > >> func:function and friends: > >> > >> #if (NGX_HAVE_EXSLT) > >> exsltRegisterAll(); > >> #endif > >> > >> I know NGX_HAVE_EXSLT is defined because other EXSLT functions (such > >> as things in the date: and str: namespaces) work fine. > > > > It looks like exsltRegisterAll() is called too late for EXSLT > > Functions extension. > > > > Please try the following patch: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1373909466 -14400 > > # Node ID bc1cf51a5b0a5e8512a8170dc7991f9e966c5533 > > # Parent 8e7db77e5d88b20d113e77b574e676737d67bf0e > > Xslt: exsltRegisterAll() moved to preconfiguration. > > > > The exsltRegisterAll() needs to be called before XSLT stylesheets > > are compiled, else stylesheet compilation hooks will not work. This > > change fixes EXSLT Functions extension. > > Awesome! Good catch. > > Thanks for that. Your patch works fine. Committed, thanks for testing. -- Maxim Dounin http://nginx.org/en/donation.html From jan.algermissen at nordsc.com Fri Jul 19 14:45:38 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Fri, 19 Jul 2013 16:45:38 +0200 Subject: Info about original request URI in access phase Message-ID: Hi, I am writing a handler that checks a request signature during the access phase. When there is URI rewriting, the URI the client used when signing does not match the URI the handler sees when checking the signature. Question: How can I access the original request URI during the access phase? Or would you rather suggest I hook the handler into the rewriting phase instead? Caveat here: when signature validation is successsful, the rwriting still needs to take place. Can I control that by putting both handlers into the rewriting phase, in the correct order? 
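At the configuration level, the unrewritten URI remains available as $request_uri even after a rewrite, so it can be handed to the backend alongside the rewritten one for signature checking. A sketch, with a hypothetical /api/ prefix and backend address:

    location /api/ {
        rewrite ^/api/(.*)$ /internal/$1 break;

        # $request_uri carries the original, unrewritten URI (with args)
        proxy_set_header X-Original-URI $request_uri;
        proxy_pass http://127.0.0.1:8080;
    }

For a handler written in C the equivalent information is what the module code reads from the request before rewriting, as discussed in this thread.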
Jan From david at styleflare.com Fri Jul 19 14:55:57 2013 From: david at styleflare.com (David | StyleFlare) Date: Fri, 19 Jul 2013 10:55:57 -0400 Subject: Hostname in Root directive. Message-ID: <51E9537D.7040701@styleflare.com> I know this may not be safe, but how can I set the hostname in the root directive location /static { ie; root /www/$hostname/static; } From francis at daoine.org Fri Jul 19 15:41:48 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 19 Jul 2013 16:41:48 +0100 Subject: Hostname in Root directive. In-Reply-To: <51E9537D.7040701@styleflare.com> References: <51E9537D.7040701@styleflare.com> Message-ID: <20130719154148.GH15782@craic.sysops.org> On Fri, Jul 19, 2013 at 10:55:57AM -0400, David | StyleFlare wrote: Hi there, > I know this may not be safe, but how can I set the hostname in the root > directive > ie; root /www/$hostname/static; By using a variable, just like you've done there. Two things you need to decide: what exact variable do you want to use? (There's a list of core-module pre-defined variables at http://nginx.org/en/docs/http/ngx_http_core_module.html#variables for example); and do you want to accept whatever that variable happens to hold, or do you want to use something like a map (http://nginx.org/r/map) to set it to a default value if it doesn't have a "safe" value, where you define "safe"? "hostname" might be $http_host, or $host, or $server_name (with increasing amounts of trust), or maybe something else entirely. f -- Francis Daly francis at daoine.org From jan.algermissen at nordsc.com Fri Jul 19 16:31:49 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Fri, 19 Jul 2013 18:31:49 +0200 Subject: Info about original request URI in access phase In-Reply-To: References: Message-ID: <132CB02C-5B9A-4392-B8AA-6B9FEDD44A1D@nordsc.com> On 19.07.2013, at 16:45, Jan Algermissen wrote: > Hi, > > I am writing a handler that checks a request signature during the access phase. > > When there is URI rewriting, the URI the client used when signing does not match the URI the handler sees when checking the signature. > > Question: How can I access the original request URI during the access phase? I found r->unparsed_uri which seems to do the trick. Jan > > Or would you rather suggest I hook the handler into the rewriting phase instead? Caveat here: when signature validation is successsful, the rwriting still needs to take place. > > Can I control that by putting both handlers into the rewriting phase, in the correct order? > > Jan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From Kevin.Johns at Level3.com Fri Jul 19 16:33:56 2013 From: Kevin.Johns at Level3.com (Johns, Kevin) Date: Fri, 19 Jul 2013 16:33:56 +0000 Subject: Nginx Cache Config with Multiple Disk Drives Message-ID: <59566FAA26861246A0E785066534B42A26F5F85E@USIDCWVEMBX07.corp.global.level3.com> Hi, I am looking for guidance on how best to configure Nginx Proxy Cache in a multi-disk drive environment. Our typical server setup is such that each drive is its own partition, for example, if we have a 10 drive server we may setup drives 4-10 for storage such as: /dev/sdd1 /nginx/cached /dev/sde1 /nginx/cachee /dev/sdf1 /nginx/cachef /dev/sdg1 /nginx/cacheg /dev/sdh1 /nginx/cacheh /dev/sdi1 /nginx/cachei /dev/sdj1 /nginx/cachej I see that in the Nginx Proxy config, you can have multiple proxy_cache_path directives, each of which can point to the various disk drives. 
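A sketch of the map approach described in the "Hostname in Root directive." exchange above, using example.com/example.org as placeholder hosts; unknown Host headers fall back to a known directory instead of being used as a path:

    # at http{} level
    map $host $site {
        default          example.com;
        example.com      example.com;
        www.example.com  example.com;
        example.org      example.org;
    }

    server {
        listen 80;
        server_name example.com www.example.com example.org;

        location /static {
            # /static/logo.png -> /www/example.com/static/logo.png
            root /www/$site;
        }
    }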
The proxy_cache directive is then used to determine which zone is used for a given configuration block (http, server, location). However, I am unable to determine how to spread the cache across the multiple drives as essentially a shared resource. Having to define which disk to use for each server or location block is undesirable as we don't want to leave some disks underutilized and others over utilized. Any guidance as to how best configure Nginx for this situation would be greatly appreciated. Regards, Kevin From nginx-forum at nginx.us Sat Jul 20 01:02:39 2013 From: nginx-forum at nginx.us (momyc) Date: Fri, 19 Jul 2013 21:02:39 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <20130311121220.GX15378@mdounin.ru> References: <20130311121220.GX15378@mdounin.ru> Message-ID: <0a3296d4d4f48b56af2e9807ef537333.NginxMailingListEnglish@forum.nginx.org> You clearly do not understand what the biggest FastCGI connection multiplexing advantage is. It makes it possible to use much less TCP connections (read "less ports"). Each TCP connection requires separate port and "local" TCP connection requires two ports. Add ports used by browser-to-Web-server connections and you'll see the whole picture. Even if Unix-sockets are used between Web-server and FastCGI-server there is an advantage in using connection multiplexing - less used file descriptors. FastCGI connection multiplexing could be great tool for beating C10K problem. And long-polling HTTP-requests would benefit from connection multiplexing even more. Of course, if you're running 1000 hits/day Web-site it is not someting you'd worry about. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241040#msg-241040 From reallfqq-nginx at yahoo.fr Sat Jul 20 01:25:53 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 19 Jul 2013 21:25:53 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <0a3296d4d4f48b56af2e9807ef537333.NginxMailingListEnglish@forum.nginx.org> References: <20130311121220.GX15378@mdounin.ru> <0a3296d4d4f48b56af2e9807ef537333.NginxMailingListEnglish@forum.nginx.org> Message-ID: It is yet to prove that C10K-related problems are based on sockets/ports exhaustion... The common struggling points on a machine involve multiples locations and your harddisks, RAM & processing capabilities will be quickly overwelmed before you lack sockets and/or ports... If you are tempted of using enormous capacities against the struggling points to finally achieve socket exhaustion, you are using the old 'mainframe' paradigm : few machines responsible for the whole work. Google proved the opposite one (several 'standard' machines working in a cluster) was more accessible/profitable/scalable/affordable. Could you provide some real-world-based insights on the absolute necessity of the FastCGI multiplexing capability? And please mind your words. Stating that someone 'clearly doesn't understand' might be understood as calling that person 'stupid'. That rhetorical level might lead the debate to a quick and sound end. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Jul 20 02:05:13 2013 From: nginx-forum at nginx.us (momyc) Date: Fri, 19 Jul 2013 22:05:13 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: Message-ID: <0985fd976f9e71d9448f26ea4e748025.NginxMailingListEnglish@forum.nginx.org> Consider Comet application (aka long-polling Ajax requests). 
There is no CPU load since most of the time the application just waits for some event to happen and nothing is being transmitted. Something like a chat or stock-monitoring Web application used by thousands of users simultaneously. Every request (one socket/one port) would generate one connection to the backend (another socket/port). So each request would take two sockets, and the theoretical limit is approximately 32K simultaneous requests. Even using the keep-alive feature on the backend side does not help here, since a connection can be used by another request only after the current one is fully served. With FastCGI connection multiplexing we can effectively serve twice as many requests/clients. Of course, there are applications that are limited by other resources rather than sockets/ports. Is it really so difficult to implement? P.S. I remember when some people were saying that the keep-alive feature for FastCGI backends would be pointless. P.P.S. English is not my first language. Please accept my sincere apologies for making an offensive statement. I did not mean to do so. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241042#msg-241042 From nginx-forum at nginx.us Sat Jul 20 02:50:17 2013 From: nginx-forum at nginx.us (momyc) Date: Fri, 19 Jul 2013 22:50:17 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <0985fd976f9e71d9448f26ea4e748025.NginxMailingListEnglish@forum.nginx.org> References: <0985fd976f9e71d9448f26ea4e748025.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5d942971c4d6ee1d50283aaf83a32e3e.NginxMailingListEnglish@forum.nginx.org> Another scenario. Consider an application that takes a few seconds to process a single request. In non-multiplexing mode we're still limited to roughly 32K simultaneous requests even though we could install enough backend servers to handle 64K such requests per second. Now, imagine we can use FastCGI connection multiplexing. It could be just a single connection per backend. And, again, we are able to serve roughly twice as many requests per second with the same hardware, with nothing but a tiny little feature called FastCGI connection multiplexing. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241043#msg-241043 From nginx-forum at nginx.us Sat Jul 20 02:59:49 2013 From: nginx-forum at nginx.us (momyc) Date: Fri, 19 Jul 2013 22:59:49 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <5d942971c4d6ee1d50283aaf83a32e3e.NginxMailingListEnglish@forum.nginx.org> References: <0985fd976f9e71d9448f26ea4e748025.NginxMailingListEnglish@forum.nginx.org> <5d942971c4d6ee1d50283aaf83a32e3e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6b93dcc96483c04388d96d7a64d3355e.NginxMailingListEnglish@forum.nginx.org> Many projects would kill for a 100% performance or scalability gain. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241044#msg-241044 From nginx-forum at nginx.us Sat Jul 20 03:07:31 2013 From: nginx-forum at nginx.us (momyc) Date: Fri, 19 Jul 2013 23:07:31 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing?
In-Reply-To: <6b93dcc96483c04388d96d7a64d3355e.NginxMailingListEnglish@forum.nginx.org> References: <0985fd976f9e71d9448f26ea4e748025.NginxMailingListEnglish@forum.nginx.org> <5d942971c4d6ee1d50283aaf83a32e3e.NginxMailingListEnglish@forum.nginx.org> <6b93dcc96483c04388d96d7a64d3355e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7a6770c76877f73c98343ff0b8416f50.NginxMailingListEnglish@forum.nginx.org> The funny thing is that the resistance to implementing that feature is so dense that it feels like it's about breaking compatibility. It is all about a more complete implementation of the protocol specification, without any penalties besides making some internal changes. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241045#msg-241045 From reallfqq-nginx at yahoo.fr Sat Jul 20 03:16:48 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 19 Jul 2013 23:16:48 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <7a6770c76877f73c98343ff0b8416f50.NginxMailingListEnglish@forum.nginx.org> References: <0985fd976f9e71d9448f26ea4e748025.NginxMailingListEnglish@forum.nginx.org> <5d942971c4d6ee1d50283aaf83a32e3e.NginxMailingListEnglish@forum.nginx.org> <6b93dcc96483c04388d96d7a64d3355e.NginxMailingListEnglish@forum.nginx.org> <7a6770c76877f73c98343ff0b8416f50.NginxMailingListEnglish@forum.nginx.org> Message-ID: Scenario 1: With long-polling requests, each client uses only one port since the same connection is continuously used, HTTP being stateless. The loss of the connection would mean potential loss of data. 32K simultaneous active connections to the same service on a single machine? I suspect the bottleneck is somewhere else... Scenario 2: So you would use several backends and a single frontend? Frontends, especially when only used as a proxy/cache, are the easiest components to replicate... Once again, I highly suspect that managing 32K connections on a single server is CPU-consuming... I am not among the developers at all... I am merely discussing the usefulness of such a request. I prefer developers to concentrate on usable stuff rather than on superfluous features: the product will be more efficient and usage-based and not an all-in-one monster. My 2 cents. I'll stop there. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Jul 20 03:55:03 2013 From: nginx-forum at nginx.us (momyc) Date: Fri, 19 Jul 2013 23:55:03 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: Message-ID: You clearly... err. > 32K simultaneous active connections to the same service on a single machine? I suspect the bottleneck is somewhere else... I don't know exactly what "service" means in the context of our conversation, but if it means server then I did not say that everything should be handled by a single FastCGI server. I said a single Nginx server can easily dispatch thousands of HTTP requests to a number of remote FastCGI backends. > I am not among the developers at all That's what I thought. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241047#msg-241047 From reallfqq-nginx at yahoo.fr Sat Jul 20 04:10:23 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 20 Jul 2013 00:10:23 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 11:55 PM, momyc wrote: > You clearly... err. > Hmmm? > > > 32K simultaneous active connections to the same service on a single > machine?
I suspect the bottleneck is somewhere else... > > I don't know exactly what "service" means in the context of our conversation, > but > if it means server then I did not say that everything should be handled > by > a single FastCGI server. I said a single Nginx server can easily dispatch > thousands of HTTP requests to a number of remote FastCGI backends. > ... and I haven't seen a clue indicating that multiplexing would be as useful in practice as it is claimed to be in theory. > > > I am not among the developers at all > > That's what I thought. > Well. You must be an expert on the matter. I'll probably be enlightened reading whatever follows... :o) Developer omniscience? I am done here. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Sat Jul 20 04:26:05 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 20 Jul 2013 08:26:05 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <0a3296d4d4f48b56af2e9807ef537333.NginxMailingListEnglish@forum.nginx.org> References: <20130311121220.GX15378@mdounin.ru> <0a3296d4d4f48b56af2e9807ef537333.NginxMailingListEnglish@forum.nginx.org> Message-ID: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> On Jul 20, 2013, at 5:02 , momyc wrote: > You clearly do not understand what the biggest FastCGI connection > multiplexing advantage is. It makes it possible to use far fewer TCP > connections (read "fewer ports"). Each TCP connection requires a separate port, > and a "local" TCP connection requires two ports. Add the ports used by > browser-to-Web-server connections and you'll see the whole picture. Even if > Unix sockets are used between the Web server and the FastCGI server there is an > advantage in using connection multiplexing - fewer file descriptors in use. > > FastCGI connection multiplexing could be a great tool for beating the C10K > problem. And long-polling HTTP requests would benefit from connection > multiplexing even more. The main issue with FastCGI connection multiplexing is lack of flow control. Suppose a client stalls but a FastCGI backend continues to send data to it. At some point nginx should say the backend to stop sending to the client but the only way to do it is just to close all multiplexed connections. -- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Sat Jul 20 04:36:15 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 00:36:15 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing?
In-Reply-To: <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7DB5AE0F-3189-426E-9BB7-C9ACFC4902A0@sysoev.ru> On Jul 20, 2013, at 8:36 , momyc wrote: >> The main issue with FastCGI connection multiplexing is lack of flow > control. > Suppose a client stalls but a FastCGI backend continues to send data to it. > At some point nginx should say the backend to stop sending to the client > but the only way to do it is just to close all multiplexed connections > > The FastCGI spec has some fuzzy points. This one is easy. What Nginx does in > case client stalls and proxied server still sends data? HTTP protocol has no > flow control either. It closes both connections to a client and a backend, since HTTP lacks both flow control and multiplexing. -- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Sat Jul 20 04:41:18 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 00:41:18 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> Message-ID: OK, it probably closes connection to backend server. Well, in case of multiplexed FastCGI Nginx should do two things: 1) send FCGI_ABORT_REQUEST to backend for given request 2) start dropping records for given request if it still receives records from backend for given request Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241051#msg-241051 From nginx-forum at nginx.us Sat Jul 20 04:43:27 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 00:43:27 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3e2fd939b095ab58890d407f923b8895.NginxMailingListEnglish@forum.nginx.org> Actually 2) is natural since there is supposed to be de-multiplexer on Nginx side and it should know where to dispatch the record received from backend Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241053#msg-241053 From nginx-forum at nginx.us Sat Jul 20 04:50:00 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 00:50:00 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> Message-ID: <090b1a5eeecf821c9542648d098efad6.NginxMailingListEnglish@forum.nginx.org> It's my next task to implement connection multiplexing feature in Nginx's FastCGI module. I haven't looked at recent sources yet and I am not familiar with Nginx architecture so if you could give me some pointers on where I could to start it would be great. Sure thing anything I produce would be available for merging with main Nginx sources. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241054#msg-241054 From nginx-forum at nginx.us Sat Jul 20 04:51:19 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 00:51:19 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? 
In-Reply-To: References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8bf95c6a6ce5ec1d4249ef8bb78fcd29.NginxMailingListEnglish@forum.nginx.org> And, possible 3) if there is no other requests for that connection, just close it like it never existed Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241055#msg-241055 From steve at greengecko.co.nz Sat Jul 20 04:59:48 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sat, 20 Jul 2013 16:59:48 +1200 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <090b1a5eeecf821c9542648d098efad6.NginxMailingListEnglish@forum.nginx.org> References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> <090b1a5eeecf821c9542648d098efad6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1374296388.16239.178.camel@steve-new> On Sat, 2013-07-20 at 00:50 -0400, momyc wrote: > It's my next task to implement connection multiplexing feature in Nginx's > FastCGI module. I haven't looked at recent sources yet and I am not familiar > with Nginx architecture so if you could give me some pointers on where I > could to start it would be great. Sure thing anything I produce would be > available for merging with main Nginx sources. > This career cynic - sorry sysadmin - looks forward to this fabled doubling in performance... -- Steve Holdoway BSc(Hons) MNZCS http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From igor at sysoev.ru Sat Jul 20 05:00:44 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 20 Jul 2013 09:00:44 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2DC9A4B7-9220-4137-974F-76F1682F0951@sysoev.ru> On Jul 20, 2013, at 8:41 , momyc wrote: > OK, it probably closes connection to backend server. Well, in case of > multiplexed FastCGI Nginx should do two things: > 1) send FCGI_ABORT_REQUEST to backend for given request > 2) start dropping records for given request if it still receives records > from backend for given request Suppose a slow client. Since nginx receives data quickly backend will send data quickly too because it does not know about slow client. At some point buffered data surpasses limit and nginx has to abort connection to backend. It does not happen if backend knows a real speed of the client. -- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Sat Jul 20 05:02:59 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 01:02:59 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <3e2fd939b095ab58890d407f923b8895.NginxMailingListEnglish@forum.nginx.org> References: <73ECA12A-BC41-478C-B441-28A302AF149C@sysoev.ru> <5cafca3b1c26f3ee0c50759a50d5646f.NginxMailingListEnglish@forum.nginx.org> <3e2fd939b095ab58890d407f923b8895.NginxMailingListEnglish@forum.nginx.org> Message-ID: Well, there is supposed to be one FCGI_REQUEST_COMPLETE set in reply to FCGI_ABORT_REQUEST but it can be ignored in this particular case. I can see Nginx drops connections before receiving final FCGI_REQUEST_COMPLETE at the end of normal request processing in some cases. And that's something about running out of file descriptors. 
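(For what it's worth, the file-descriptor ceiling alluded to here is normally raised with something like the following sketch; the numbers are only placeholders:)

    worker_processes      4;
    worker_rlimit_nofile  65536;

    events {
        worker_connections  16384;
    }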
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241058#msg-241058 From nginx-forum at nginx.us Sat Jul 20 05:05:10 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 01:05:10 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <2DC9A4B7-9220-4137-974F-76F1682F0951@sysoev.ru> References: <2DC9A4B7-9220-4137-974F-76F1682F0951@sysoev.ru> Message-ID: <302181aa21b6b95edbe173310614a2f0.NginxMailingListEnglish@forum.nginx.org> What does the proxy module do in that case? You said earlier HTTP lacks flow control too. So what is the difference? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241059#msg-241059 From igor at sysoev.ru Sat Jul 20 05:09:14 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 20 Jul 2013 09:09:14 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <302181aa21b6b95edbe173310614a2f0.NginxMailingListEnglish@forum.nginx.org> References: <2DC9A4B7-9220-4137-974F-76F1682F0951@sysoev.ru> <302181aa21b6b95edbe173310614a2f0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <38360113-3A0B-464F-8F79-46F9B2A8CB94@sysoev.ru> On Jul 20, 2013, at 9:05 , momyc wrote: > What does the proxy module do in that case? You said earlier HTTP lacks flow control > too. So what is the difference? The proxy module stops reading from the backend, but it does not close the backend connection. It reads from the backend again once some buffers have been sent to the slow client. -- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Sat Jul 20 05:11:22 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 01:11:22 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <302181aa21b6b95edbe173310614a2f0.NginxMailingListEnglish@forum.nginx.org> References: <2DC9A4B7-9220-4137-974F-76F1682F0951@sysoev.ru> <302181aa21b6b95edbe173310614a2f0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <820dbc5644d48f5cadd111f3e854bdfd.NginxMailingListEnglish@forum.nginx.org> If it's time to close the backend connection in a non-multiplexed configuration, just send FCGI_ABORT_REQUEST for that particular request, and start dropping records for that request received from the backend. Please shoot me any other questions about problems with implementing that feature. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241061#msg-241061 From nginx-forum at nginx.us Sat Jul 20 05:23:32 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 01:23:32 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <38360113-3A0B-464F-8F79-46F9B2A8CB94@sysoev.ru> References: <38360113-3A0B-464F-8F79-46F9B2A8CB94@sysoev.ru> Message-ID: <3ae9bd97e9e34d1414bab260440dd315.NginxMailingListEnglish@forum.nginx.org> What do you mean by "stop reading"? Oh, you just stop checking if anything is ready for reading. I see. Well, this is crude flow control, I'd say. The proxied server could unexpectedly drop the connection because it would think Nginx is dead. There is a nice feature, I don't remember exactly what it's called, where some content can be buffered on Nginx (in proxy mode), with a strict limit on how much can be buffered and on when it spills to a file. This is what could be used for that case. If buffer overflow happens: close client, abort backend, drop records for that request. Keep the connection and keep receiving and de-multiplexing records for good requests.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241062#msg-241062 From nginx-forum at nginx.us Sat Jul 20 05:25:27 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 01:25:27 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <3ae9bd97e9e34d1414bab260440dd315.NginxMailingListEnglish@forum.nginx.org> References: <38360113-3A0B-464F-8F79-46F9B2A8CB94@sysoev.ru> <3ae9bd97e9e34d1414bab260440dd315.NginxMailingListEnglish@forum.nginx.org> Message-ID: <698bedf410da05fd12fe57a49d99ce3e.NginxMailingListEnglish@forum.nginx.org> "abort backend" meant "abort request" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241063#msg-241063 From igor at sysoev.ru Sat Jul 20 05:42:31 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 20 Jul 2013 09:42:31 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <3ae9bd97e9e34d1414bab260440dd315.NginxMailingListEnglish@forum.nginx.org> References: <38360113-3A0B-464F-8F79-46F9B2A8CB94@sysoev.ru> <3ae9bd97e9e34d1414bab260440dd315.NginxMailingListEnglish@forum.nginx.org> Message-ID: <50ECEF37-FEF5-46A2-9B6A-6B599B45412E@sysoev.ru> On Jul 20, 2013, at 9:23 , momyc wrote: > What do you mean by "stop reading"? Oh, you just stop checking if anything > is ready for reading. I see. Well, this is crude flow control, I'd say. > The proxied server could unexpectedly drop the connection because it would think > Nginx is dead. TCP will tell the backend that nginx is alive. It can drop only after some timeout. > There is a nice feature, I don't remember exactly what it's called, where some > content can be buffered on Nginx (in proxy mode), with a strict limit > on how much can be buffered and on when it spills to a file. This is what could > be used for that case. If buffer overflow happens: close client, abort > backend, drop records for that request. Keep the connection and keep receiving > and de-multiplexing records for good requests. Yes, but it is useless to buffer a long polling connection in a file. -- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Sat Jul 20 07:52:46 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 03:52:46 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <50ECEF37-FEF5-46A2-9B6A-6B599B45412E@sysoev.ru> References: <50ECEF37-FEF5-46A2-9B6A-6B599B45412E@sysoev.ru> Message-ID: > it is useless to buffer a long polling connection in a file. For Nginx there is no difference between a long-polling request and any other request. It wouldn't even know. All it should care about is how much to buffer and for how long to keep those buffers before dropping them and aborting the request. I do not see any technical problem here. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241066#msg-241066 From nginx-forum at nginx.us Sat Jul 20 08:17:54 2013 From: nginx-forum at nginx.us (momyc) Date: Sat, 20 Jul 2013 04:17:54 -0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <50ECEF37-FEF5-46A2-9B6A-6B599B45412E@sysoev.ru> References: <50ECEF37-FEF5-46A2-9B6A-6B599B45412E@sysoev.ru> Message-ID: > Yes, but it is useless to buffer a long polling connection in a file. Buffering some data on the Web server is fine as long as the client receives whatever the server has sent, or the client gets a closed connection. If sending is not possible once the buffers are full, dropping the client connection and aborting the request is not a problem. Problems like that should be dealt with at a higher level of abstraction.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241068#msg-241068 From mdounin at mdounin.ru Sat Jul 20 08:58:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 20 Jul 2013 12:58:44 +0400 Subject: Nginx Cache Config with Multiple Disk Drives In-Reply-To: <59566FAA26861246A0E785066534B42A26F5F85E@USIDCWVEMBX07.corp.global.level3.com> References: <59566FAA26861246A0E785066534B42A26F5F85E@USIDCWVEMBX07.corp.global.level3.com> Message-ID: <20130720085844.GA81710@mdounin.ru> Hello! On Fri, Jul 19, 2013 at 04:33:56PM +0000, Johns, Kevin wrote: > Hi, > > I am looking for guidance on how best to configure Nginx Proxy > Cache in a multi-disk drive environment. Our typical server > setup is such that each drive is its own partition, for example, > if we have a 10 drive server we may setup drives 4-10 for > storage such as: > > /dev/sdd1 /nginx/cached > /dev/sde1 /nginx/cachee > /dev/sdf1 /nginx/cachef > /dev/sdg1 /nginx/cacheg > /dev/sdh1 /nginx/cacheh > /dev/sdi1 /nginx/cachei > /dev/sdj1 /nginx/cachej > > I see that in the Nginx Proxy config, you can have multiple > proxy_cache_path directives, each of which can point to the > various disk drives. The proxy_cache directive is then used to > determine which zone is used for a given configuration block > (http, server, location). However, I am unable to determine how > to spread the cache across the multiple drives as essentially a > shared resource. Having to define which disk to use for each > server or location block is undesirable as we don't want to > leave some disks underutilized and others over utilized. > > Any guidance as to how best configure Nginx for this situation > would be greatly appreciated. There are two basic options: - create one filesystem over multiple disks, using software RAID or something like - use symlinks to spread nginx's proxy cache path hierarchy over multiple filesystems, see levels= description at http://nginx.org/r/proxy_cache_path -- Maxim Dounin http://nginx.org/en/donation.html From igor at sysoev.ru Sat Jul 20 09:18:25 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 20 Jul 2013 13:18:25 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: <50ECEF37-FEF5-46A2-9B6A-6B599B45412E@sysoev.ru> Message-ID: <6535E158-7A98-4440-BF8F-BB4964E5606F@sysoev.ru> On Jul 20, 2013, at 11:52 , momyc wrote: >> it is useless to buffer a long polling connection in a file. > > For Nginx there is no difference between a long-polling request and any other request. > It wouldn't even know. All it should care about is how much to buffer and for how > long to keep those buffers before dropping them and aborting the request. I do not > see any technical problem here. There is no technical problem. There is an issue of the practical utility of such a backend. There are two types of backend: 1) The first one uses a large amount of memory to process a request. It should send a generated response as soon as possible and then move on to the next request. nginx can buffer thousands of such responses and send them to clients. Persistent connections between nginx and the backend, and nginx buffering, help in this case. Multiplexing just complicates the backend logic without any benefit. The bottleneck here is not the number of connections to a single listen port (64K) but the amount of memory. 2) The second type of backend uses a small amount of memory per request, can process thousands of clients simultaneously and does NOT need buffering at all. Multiplexing helps such backends, but only together with flow control.
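(For the first type of backend, the persistent connections and buffering mentioned above map onto configuration roughly like this sketch; the upstream name and sizes are hypothetical:)

    upstream fastcgi_backend {
        server 127.0.0.1:9000;
        keepalive 8;                 # keep idle connections to the backend open
    }

    server {
        location ~ \.php$ {
            fastcgi_pass fastcgi_backend;
            fastcgi_keep_conn on;    # reuse the kept-alive backend connections
            fastcgi_buffers 16 16k;  # buffer the generated response on the nginx side
            include fastcgi_params;
        }
    }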
-- Igor Sysoev http://nginx.com/services.html From shahzaib.cb at gmail.com Sat Jul 20 09:29:49 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 20 Jul 2013 14:29:49 +0500 Subject: Limit_rate for different resolutions !! Message-ID: Hello, Is there a way that i could use different limit_rate in nginx for different files ? I.e 1. limit_rate 500k for 720p video files. 2. limit_rate 180k for 320p video files. Best Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Jul 20 09:35:40 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 20 Jul 2013 10:35:40 +0100 Subject: Limit_rate for different resolutions !! In-Reply-To: References: Message-ID: <20130720093540.GI15782@craic.sysops.org> On Sat, Jul 20, 2013 at 02:29:49PM +0500, shahzaib shahzaib wrote: Hi there, > Is there a way that i could use different limit_rate in nginx for > different files ? I.e > > 1. limit_rate 500k for 720p video files. > 2. limit_rate 180k for 320p video files. Yes. http://nginx.org/r/limit_rate One possibility: have your 720p video files and your 320p video files handled in different location{} blocks. f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Sat Jul 20 09:47:14 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 20 Jul 2013 14:47:14 +0500 Subject: Limit_rate for different resolutions !! In-Reply-To: <20130720093540.GI15782@craic.sysops.org> References: <20130720093540.GI15782@craic.sysops.org> Message-ID: Hello Francis, what if both(720p,360p) are in same directory i.e /var/www/html/videos ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Jul 20 09:58:15 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 20 Jul 2013 10:58:15 +0100 Subject: Limit_rate for different resolutions !! In-Reply-To: References: <20130720093540.GI15782@craic.sysops.org> Message-ID: <20130720095815.GJ15782@craic.sysops.org> On Sat, Jul 20, 2013 at 02:47:14PM +0500, shahzaib shahzaib wrote: Hi there, > what if both(720p,360p) are in same directory i.e /var/www/html/videos ? Where's the problem? http://nginx.org/r/limit_rate for limit_rate. http://nginx.org/r/location, if you choose to handle the different types of files in different location{} blocks. f -- Francis Daly francis at daoine.org From Kevin.Johns at Level3.com Sat Jul 20 17:21:27 2013 From: Kevin.Johns at Level3.com (Johns, Kevin) Date: Sat, 20 Jul 2013 17:21:27 +0000 Subject: Nginx Cache Config with Multiple Disk Drives In-Reply-To: <20130720085844.GA81710@mdounin.ru> Message-ID: <59566FAA26861246A0E785066534B42A26F6218C@USIDCWVEMBX07.corp.global.level3.com> On 7/20/13 2:58 AM, "Maxim Dounin" wrote: >Hello! > >On Fri, Jul 19, 2013 at 04:33:56PM +0000, Johns, Kevin wrote: > >> Hi, >> >> I am looking for guidance on how best to configure Nginx Proxy >> Cache in a multi-disk drive environment. Our typical server >> setup is such that each drive is its own partition, for example, >> if we have a 10 drive server we may setup drives 4-10 for >> storage such as: >> >> /dev/sdd1 /nginx/cached >> /dev/sde1 /nginx/cachee >> /dev/sdf1 /nginx/cachef >> /dev/sdg1 /nginx/cacheg >> /dev/sdh1 /nginx/cacheh >> /dev/sdi1 /nginx/cachei >> /dev/sdj1 /nginx/cachej >> >> I see that in the Nginx Proxy config, you can have multiple >> proxy_cache_path directives, each of which can point to the >> various disk drives. 
The proxy_cache directive is then used to >> determine which zone is used for a given configuration block >> (http, server, location). However, I am unable to determine how >> to spread the cache across the multiple drives as essentially a >> shared resource. Having to define which disk to use for each >> server or location block is undesirable as we don't want to >> leave some disks underutilized and others over utilized. >> >> Any guidance as to how best configure Nginx for this situation >> would be greatly appreciated. > >There are two basic options: > >- create one filesystem over multiple disks, using software RAID > or something like Yes this is something we are looking at as well. > >- use symlinks to spread nginx's proxy cache path hierarchy over > multiple filesystems, see levels= description at > http://nginx.org/r/proxy_cache_path I read over the configuration and am not sure I understand what you suggest. Can you elaborate? > >-- >Maxim Dounin >http://nginx.org/en/donation.html > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Jul 20 21:01:39 2013 From: nginx-forum at nginx.us (Peleke) Date: Sat, 20 Jul 2013 17:01:39 -0400 Subject: Location recursive downloads php files Message-ID: <87a5f0044d708aa75cb08e7527d10692.NginxMailingListEnglish@forum.nginx.org> I try to secure a specific folder and all files and subfolders with this location block: location ^~ /folder1/admin { auth_basic "Login"; auth_basic_user_file /var/www/domain.tld/www/folder1/admin/.htpasswd; } With this code nginx offers always to download the php files. With this code everything works as expected except that files and subfolders are not secured: location /folder1/admin { auth_basic "Login"; auth_basic_user_file /var/www/domain.tld/www/folder1/admin/.htpasswd; } Why is that and how can I fix the problem from the first block? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241080,241080#msg-241080 From mdounin at mdounin.ru Sun Jul 21 00:21:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 21 Jul 2013 04:21:39 +0400 Subject: Nginx Cache Config with Multiple Disk Drives In-Reply-To: <59566FAA26861246A0E785066534B42A26F6218C@USIDCWVEMBX07.corp.global.level3.com> References: <20130720085844.GA81710@mdounin.ru> <59566FAA26861246A0E785066534B42A26F6218C@USIDCWVEMBX07.corp.global.level3.com> Message-ID: <20130721002139.GA90722@mdounin.ru> Hello! On Sat, Jul 20, 2013 at 05:21:27PM +0000, Johns, Kevin wrote: > > > On 7/20/13 2:58 AM, "Maxim Dounin" wrote: > > >Hello! > > > >On Fri, Jul 19, 2013 at 04:33:56PM +0000, Johns, Kevin wrote: > > > >> Hi, > >> > >> I am looking for guidance on how best to configure Nginx Proxy > >> Cache in a multi-disk drive environment. Our typical server > >> setup is such that each drive is its own partition, for example, > >> if we have a 10 drive server we may setup drives 4-10 for > >> storage such as: > >> > >> /dev/sdd1 /nginx/cached > >> /dev/sde1 /nginx/cachee > >> /dev/sdf1 /nginx/cachef > >> /dev/sdg1 /nginx/cacheg > >> /dev/sdh1 /nginx/cacheh > >> /dev/sdi1 /nginx/cachei > >> /dev/sdj1 /nginx/cachej > >> > >> I see that in the Nginx Proxy config, you can have multiple > >> proxy_cache_path directives, each of which can point to the > >> various disk drives. The proxy_cache directive is then used to > >> determine which zone is used for a given configuration block > >> (http, server, location). 
However, I am unable to determine how > >> to spread the cache across the multiple drives as essentially a > >> shared resource. Having to define which disk to use for each > >> server or location block is undesirable as we don't want to > >> leave some disks underutilized and others over utilized. > >> > >> Any guidance as to how best configure Nginx for this situation > >> would be greatly appreciated. > > > >There are two basic options: > > > >- create one filesystem over multiple disks, using software RAID > > or something like > > Yes this is something we are looking at as well. > > > >- use symlinks to spread nginx's proxy cache path hierarchy over > > multiple filesystems, see levels= description at > > http://nginx.org/r/proxy_cache_path > > I read over the configuration and am not sure I understand what you > suggest. Can you elaborate? Assuming levels=1 and two disks: proxy_cache_path /path/to/cache levels=1 ... $ cd /path/to/cache $ for i in 0 1 2 3 4 5 6 7; do mkdir /disk1/$i; ln -s /disk1/$i .; done; $ for i in 8 9 a b c d e f; do mkdir /disk2/$i; ln -s /disk2/$i .; done; -- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Sun Jul 21 08:04:28 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Jul 2013 09:04:28 +0100 Subject: Location recursive downloads php files In-Reply-To: <87a5f0044d708aa75cb08e7527d10692.NginxMailingListEnglish@forum.nginx.org> References: <87a5f0044d708aa75cb08e7527d10692.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130721080428.GK15782@craic.sysops.org> On Sat, Jul 20, 2013 at 05:01:39PM -0400, Peleke wrote: Hi there, > I try to secure a specific folder and all files and subfolders with this > location block: > > location ^~ /folder1/admin { > auth_basic "Login"; > auth_basic_user_file > /var/www/domain.tld/www/folder1/admin/.htpasswd; > } > > With this code nginx offers always to download the php files. > > With this code everything works as expected except that files and subfolders > are not secured: > > location /folder1/admin { > auth_basic "Login"; > auth_basic_user_file > /var/www/domain.tld/www/folder1/admin/.htpasswd; > } > > Why is that and how can I fix the problem from the first block? In nginx, one request is handled in one location. http://nginx.org/r/location describes the rules, so that you can know which one location will be used for a particular request. In the above location{} blocks, you give no indication of what nginx should do with the request, so it uses its default of "serve it from the filesystem". The difference between your two observations is that in the first case, the location{} block is used for the request that you made; and in the second case, the location{} block is not used. To fix your configuration, you must put all of the configuration that you want to apply to a request, in the one location{} that handles that request. f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Sun Jul 21 08:47:57 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 21 Jul 2013 13:47:57 +0500 Subject: Limit_rate for different resolutions !! 
In-Reply-To: <20130720095815.GJ15782@craic.sysops.org> References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> Message-ID: Hello, Following two are the file types whom i want to assign different rate_limits : http://domain.com/files/videos/2013/07/20/137430161313bb6-360.mp4 ---> limit_rate 180k; http://domain.com/files/videos/2013/07/20/137430161313bb6-720.mp4 --> limit_rate 500; Following is the virtualhost config : server { listen 80; server_name domain.com; client_max_body_size 800m; limit_rate 180k; # access_log /websites/theos.in/logs/access.log main; location / { root /var/www/html/videos; index index.html index.htm index.php; } location ~ \.(flv|jpg|jpeg)$ { flv; root /var/www/html/videos; expires 7d; valid_referers none blocked domain.com; if ($invalid_referer) { return 403; } } location ~ \.(mp4)$ { mp4; root /var/www/html/videos; expires 7d; } Can you please guide me a bit that how to configure the specific limit_rate in different location{} blocks? I am a bit confused on it :( Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Jul 21 09:06:48 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Jul 2013 10:06:48 +0100 Subject: Limit_rate for different resolutions !! In-Reply-To: References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> Message-ID: <20130721090648.GL15782@craic.sysops.org> On Sun, Jul 21, 2013 at 01:47:57PM +0500, shahzaib shahzaib wrote: Hi there, > Following two are the file types whom i want to assign different > rate_limits : > > http://domain.com/files/videos/2013/07/20/137430161313bb6-360.mp4 ---> > limit_rate 180k; > http://domain.com/files/videos/2013/07/20/137430161313bb6-720.mp4 --> > limit_rate 500; What rate limit do you want to apply to http://domain.com/file.flv? What rate limit actually applies to it? And did you measure it? ("curl" is usually a good tool to see what is really happening.) > location / { > location ~ \.(flv|jpg|jpeg)$ { > location ~ \.(mp4)$ { http://nginx.org/r/location What do your "location" lines up there mean? For each request you make, what one location is used? What happens if you add a location ~ -720\.mp4$ {} block at the end? And what happens if you instead add it at the start? f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Sun Jul 21 09:24:31 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 21 Jul 2013 14:24:31 +0500 Subject: Limit_rate for different resolutions !! In-Reply-To: <20130721090648.GL15782@craic.sysops.org> References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> <20130721090648.GL15782@craic.sysops.org> Message-ID: Hello, I added the 720p in the location{} and checked it by downloading the single file using wget and got the 500K speed :). location ~ -720\.(mp4)$ { mp4; expires 7d; limit_rate 500k; root /var/www/html/tunefiles; } That worked :). Thanks a lot @francis. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun Jul 21 09:29:26 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 21 Jul 2013 14:29:26 +0500 Subject: Limit_rate for different resolutions !! In-Reply-To: References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> <20130721090648.GL15782@craic.sysops.org> Message-ID: What rate limit actually applies to it? And did you measure it? 
("curl" is usually a good tool to see what is really happening.) > location / { > location ~ \.(flv|jpg|jpeg)$ { > location ~ \.(mp4)$ { Well the rate_limit was 180K before for all the files because i added it into the server{} block and these location blocks were actually means the rate_limit 180k will apply to any flv,mp4,jpeg file and after adding -720 before the location ~\.(mp4)$, the only 720p files will be served on limit_rate 500k and the rest would remain the same which is 180k in my case. I didn't used curl instead wget. On Sun, Jul 21, 2013 at 2:24 PM, shahzaib shahzaib wrote: > Hello, > > I added the 720p in the location{} and checked it by downloading the > single file using wget and got the 500K speed :). > > location ~ -720\.(mp4)$ { > mp4; > expires 7d; > limit_rate 500k; > root /var/www/html/tunefiles; > } > That worked :). Thanks a lot @francis. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jul 21 17:08:18 2013 From: nginx-forum at nginx.us (Peleke) Date: Sun, 21 Jul 2013 13:08:18 -0400 Subject: Location recursive downloads php files In-Reply-To: <20130721080428.GK15782@craic.sysops.org> References: <20130721080428.GK15782@craic.sysops.org> Message-ID: Okay, it works if I add this: location ^~ /folder1/admin { auth_basic "Login"; auth_basic_user_file /var/www/domain.tld/www/folder1/admin/.htpasswd; location ~ \.php$ { #limit_req zone=limit burst=5 nodelay; try_files $uri =404; #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_intercept_errors off; fastcgi_read_timeout 120; fastcgi_buffers 256 4k; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } ## # Pass PHP-Files To Socket ## location ~ \.php$ { #limit_req zone=limit burst=5 nodelay; try_files $uri =404; #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_intercept_errors off; fastcgi_read_timeout 120; fastcgi_buffers 256 4k; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } But that is redundancy and can be complicated if you have many of those entries and then want to change a .php setting (you have to do it multiple times). Isn't it possible to make it simpler? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241080,241094#msg-241094 From francis at daoine.org Sun Jul 21 20:13:12 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Jul 2013 21:13:12 +0100 Subject: Limit_rate for different resolutions !! In-Reply-To: References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> <20130721090648.GL15782@craic.sysops.org> Message-ID: <20130721201312.GM15782@craic.sysops.org> On Sun, Jul 21, 2013 at 02:29:26PM +0500, shahzaib shahzaib wrote: Hi there, > Well the rate_limit was 180K before for all the files because i added it > into the server{} block and these location blocks were actually means the > rate_limit 180k will apply to any flv,mp4,jpeg file and after adding -720 > before the location ~\.(mp4)$, the only 720p files will be served on > limit_rate 500k and the rest would remain the same which is 180k in my case. Good stuff -- each directive has a meaning, and if you wanted most requests to have the 180 rate limit, then you put it in the right place. 
Changing the "mp4" location to be "~ -720\.mp4$" means that those ones will have the higher rate; but if you don't have a second location{} for the other mp4s, they won't see the "mp4" directive, and so may not be served correctly. > I didn't used curl instead wget. That's fine -- wget does a similar task to curl; and it's always good to be able to demonstrate that the change you made fixed what was broken (and didn't break what was already right). Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Jul 21 20:16:34 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Jul 2013 21:16:34 +0100 Subject: Location recursive downloads php files In-Reply-To: References: <20130721080428.GK15782@craic.sysops.org> Message-ID: <20130721201634.GN15782@craic.sysops.org> On Sun, Jul 21, 2013 at 01:08:18PM -0400, Peleke wrote: Hi there, > Okay, it works if I add this: > > location ^~ /folder1/admin { > location ~ \.php$ { > } > } > location ~ \.php$ { > } > But that is redundancy and can be complicated if you have many of those > entries and then want to change a .php setting (you have to do it multiple > times). > Isn't it possible to make it simpler? Yes. Either use an external-to-nginx thing to create the complicated config file from less complicated parts; or put the repeated parts in a file and "include" it. http://nginx.org/r/include f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Mon Jul 22 04:27:59 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 22 Jul 2013 09:27:59 +0500 Subject: Limit_rate for different resolutions !! In-Reply-To: <20130721201312.GM15782@craic.sysops.org> References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> <20130721090648.GL15782@craic.sysops.org> <20130721201312.GM15782@craic.sysops.org> Message-ID: hello, >Changing the "mp4" location to be "~ -720\.mp4$" means that those ones >will have the higher rate; but if you don't have a second location{} >for the other mp4s, they won't see the "mp4" directive, and so may not >be served correctly. Yes, i've created different block locations for mp4 files and every mp4 file is working fine, you can check the config below and let me know if there's anything wrong you see ? location ~ -720\.(mp4)$ { mp4; expires 7d; limit_rate 500k; root /var/www/html/videos; valid_referers none blocked domain.com; if ($invalid_referer) { return 403; } } location ~ -480\.(mp4)$ { mp4; expires 7d; limit_rate 250k; root /var/www/html/videos; valid_referers none blocked domain.com; if ($invalid_referer) { return 403; } } location ~ \.(mp4)$ { mp4; expires 7d; root /var/www/html/videos; valid_referers none blocked domain.com; if ($invalid_referer) { return 403; } } On Mon, Jul 22, 2013 at 1:13 AM, Francis Daly wrote: > On Sun, Jul 21, 2013 at 02:29:26PM +0500, shahzaib shahzaib wrote: > > Hi there, > > > Well the rate_limit was 180K before for all the files because i added it > > into the server{} block and these location blocks were actually means the > > rate_limit 180k will apply to any flv,mp4,jpeg file and after adding -720 > > before the location ~\.(mp4)$, the only 720p files will be served on > > limit_rate 500k and the rest would remain the same which is 180k in my > case. > > Good stuff -- each directive has a meaning, and if you wanted most > requests to have the 180 rate limit, then you put it in the right place. 
> > Changing the "mp4" location to be "~ -720\.mp4$" means that those ones > will have the higher rate; but if you don't have a second location{} > for the other mp4s, they won't see the "mp4" directive, and so may not > be served correctly. > > > I didn't used curl instead wget. > > That's fine -- wget does a similar task to curl; and it's always good > to be able to demonstrate that the change you made fixed what was broken > (and didn't break what was already right). > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.hnat at bluegras.de Mon Jul 22 04:28:45 2013 From: m.hnat at bluegras.de (m.hnat at bluegras.de) Date: Mon, 22 Jul 2013 06:28:45 +0200 Subject: Out of office from 22.-26.7. Message-ID: <2b9cb0b298f9465ea28ea24a0638790e@d0f420d3dbdc45899396948f0704fccf> Hello, thank you for your message. I will be travelling a great deal from 22 to 26 July and will have only limited access to my e-mail. I will get back to you as soon as possible. Kind regards, Michael Hnat
From david at gwynne.id.au Mon Jul 22 11:42:38 2013 From: david at gwynne.id.au (David Gwynne) Date: Mon, 22 Jul 2013 21:42:38 +1000 Subject: SPDY + proxy cache static content failures In-Reply-To: References: <201304051456.00233.vbart@nginx.com> Message-ID: <32F6D688-5016-451D-A43A-72FC22C5DBBB@gwynne.id.au> I am also experiencing this problem running 1.4.2. cheers, dlg On 08/04/2013, at 10:16 PM, spdyg wrote: > Thanks Valentin. I have emailed you (off-list) some debug logs / > screenshots. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,238195#msg-238195 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nick at livejournalinc.com Mon Jul 22 15:39:08 2013 From: nick at livejournalinc.com (Nick Toseland) Date: Mon, 22 Jul 2013 16:39:08 +0100 Subject: Help with code Message-ID: <51ED521C.3030605@livejournalinc.com> Hi All, I have the following code: location = /favicon.ico { if ($host = "abc.com"){ return 301 "http://www.abc.com/favicon.ico"; } } If I make a request to abc.com/favicon.ico I get a 301 and then a 200 OK However the issue is that if I make a request to xyz.abc.com/favicon.ico I get a 404, as it passes the first match statement but fails the second and tries to get the favicon resource from the nginx root directory Is there a better way of doing it so it continues on to be evaluated by the other rules? or Is there a better way of writing this part of the code? Thanks in advance. Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.algermissen at nordsc.com Mon Jul 22 16:10:06 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Mon, 22 Jul 2013 18:10:06 +0200 Subject: Help with code In-Reply-To: <51ED521C.3030605@livejournalinc.com> References: <51ED521C.3030605@livejournalinc.com> Message-ID: <2EB8C9FB-822F-4124-8F59-16DBCF6ECE1A@nordsc.com> On 22.07.2013, at 17:39, Nick Toseland wrote: > Hi All, > > I have the following code: > > location = /favicon.ico { > if ($host = "abc.com"){ > return 301 "http://www.abc.com/favicon.ico"; > } > } > > If I make a request to abc.com/favicon.ico I get a 301 and then a 200 OK > > However the issue is that if I make a request to xyz.abc.com/favicon.ico I get a 404, as it passes the first match statement but fails the second and tries to get the favicon resource from the nginx root directory
URL: From nginx-forum at nginx.us Mon Jul 22 16:21:44 2013 From: nginx-forum at nginx.us (brainchill) Date: Mon, 22 Jul 2013 12:21:44 -0400 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <1551373538-1373895891-cardhu_decombobulator_blackberry.rim.net-1163863674-@b3.c1.bise7.blackberry> References: <1551373538-1373895891-cardhu_decombobulator_blackberry.rim.net-1163863674-@b3.c1.bise7.blackberry> Message-ID: <702f455a4e59b2aa01a606cc59fe2cc1.NginxMailingListEnglish@forum.nginx.org> Everyone is telling this guy that he's clueless but the bottom line is the very first guy that answered the question nailed it ... "The message suggests you've either run out of local sockets/ports, or connections are administratively prohibited. You may try unix sockets to see if it helps." Try setting kernel option net.ipv4.ip_local_port_range to 1024 65000 The default setting is 32768 61000 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240816,241121#msg-241121 From mdounin at mdounin.ru Mon Jul 22 16:51:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Jul 2013 20:51:40 +0400 Subject: connect() to 127.0.0.1:80 failed (99: Cannot assign requested address In-Reply-To: <702f455a4e59b2aa01a606cc59fe2cc1.NginxMailingListEnglish@forum.nginx.org> References: <1551373538-1373895891-cardhu_decombobulator_blackberry.rim.net-1163863674-@b3.c1.bise7.blackberry> <702f455a4e59b2aa01a606cc59fe2cc1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130722165139.GC90722@mdounin.ru> Hello! On Mon, Jul 22, 2013 at 12:21:44PM -0400, brainchill wrote: > Everyone is telling this guy that he's clueless but the bottom line is the > very first guy that answered the question nailed it ... > > "The message suggests you've either run out of local sockets/ports, > or connections are administratively prohibited. You may try unix > sockets to see if it helps." > > Try setting > kernel option net.ipv4.ip_local_port_range to 1024 65000 > > The default setting is 32768 61000 Not really. The real problem seems to be outlined by Ruslan - due to no Apache running nginx is essentially configured to proxy to itself, and adding more sockets won't help - they will be exhaused as well due to the proxy loop. -- Maxim Dounin http://nginx.org/en/donation.html From howachen at gmail.com Mon Jul 22 17:13:47 2013 From: howachen at gmail.com (howard chen) Date: Tue, 23 Jul 2013 01:13:47 +0800 Subject: Update nginx with Ubuntu PPA Message-ID: I am upgrading nginx to latest 1.4.1 using PPA. repository. 1. After install, do I need to restart it manually, or it is restarted automatically? 2. Is reload enough for the nginx upgrade? Or do I need to restart or stop/start? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Jul 22 19:28:44 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 22 Jul 2013 20:28:44 +0100 Subject: Limit_rate for different resolutions !! 
In-Reply-To: References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> <20130721090648.GL15782@craic.sysops.org> <20130721201312.GM15782@craic.sysops.org> Message-ID: <20130722192844.GQ15782@craic.sysops.org> On Mon, Jul 22, 2013 at 09:27:59AM +0500, shahzaib shahzaib wrote: Hi there, > location ~ -720\.(mp4)$ { > mp4; > expires 7d; > limit_rate 500k; > root /var/www/html/videos; > valid_referers none blocked domain.com; > if ($invalid_referer) { > return 403; > } > } > location ~ -480\.(mp4)$ { > mp4; > expires 7d; > limit_rate 250k; > root /var/www/html/videos; > valid_referers none blocked domain.com; > if ($invalid_referer) { > return 403; > } > } > location ~ \.(mp4)$ { > mp4; > expires 7d; > root /var/www/html/videos; > valid_referers none blocked domain.com; > if ($invalid_referer) { > return 403; > } > } It looks reasonable from here. The various repeated directives may fit better one level higher, but that depends on the rest of the configuration. The parentheses ("()") around mp4 in the locations look unnecessary. But if it does what you want, it's good. f -- Francis Daly francis at daoine.org From bingbang2 at yandex.com Mon Jul 22 21:44:50 2013 From: bingbang2 at yandex.com (Bing Bang) Date: Tue, 23 Jul 2013 02:14:50 +0430 Subject: Nginx 1.4.2 Centos Packages Message-ID: <1204161374529490@web7e.yandex.ru> Hi, I just updated my nginx from 1.4.1 to 1.4.2 from packages provided by nginx for centos and then i realized that many modules available with 1.4.1 are not compiled in premade packages of 1.4.2 for centos, so i've just downgraded my nginx to 1.4.1 again Please provide those modules with this package too for example geoip module. Thanks From francis at daoine.org Mon Jul 22 22:43:30 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 22 Jul 2013 23:43:30 +0100 Subject: proxy_pass via HTTP proxy In-Reply-To: <5a49f0ce6075a68b276034bab9686b59.NginxMailingListEnglish@forum.nginx.org> References: <5a49f0ce6075a68b276034bab9686b59.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130722224330.GS15782@craic.sysops.org> On Wed, Jul 17, 2013 at 08:58:35PM -0400, cavedon wrote: Hi there, > However, in order to reach S, I need to go though an HTTP server P. This > means nginx would need to connect to P, issue a CONNECT request, and then > tunnel the HTTPS request to S. > Is this supported? No. > How to enable it? Start coding :-) Right now, nginx proxy_pass speaks http to a http server, or http-over-ssl to a https server. It doesn't speak proxied-http to a http proxy server. (Including the CONNECT method of proxied-http.) So if you want that, you'll need to look outside of current-nginx. f -- Francis Daly francis at daoine.org From paulnpace at gmail.com Mon Jul 22 22:55:18 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Mon, 22 Jul 2013 15:55:18 -0700 Subject: Update nginx with Ubuntu PPA In-Reply-To: References: Message-ID: On Mon, Jul 22, 2013 at 10:13 AM, howard chen wrote: > I am upgrading nginx to latest 1.4.1 using PPA. repository. > > 1. After install, do I need to restart it manually, or it is restarted > automatically? > 2. Is reload enough for the nginx upgrade? Or do I need to restart or > stop/start? If you are using the apt-get upgrade or aptitude upgrade commands, the service will be restarted for you. You may want to run sudo nginx -t to check for errors. > > Thanks. 
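If the restart itself is the worry, nginx can also swap in a new binary without dropping connections, which is roughly what the init script's upgrade action does. A sketch, with the pid-file locations assumed from the stock Debian/Ubuntu layout:

# after the package has put the new binary in place:
sudo kill -USR2 "$(cat /var/run/nginx.pid)"          # start a new master and workers on the new binary
sudo kill -WINCH "$(cat /var/run/nginx.pid.oldbin)"  # let the old workers finish their requests
sudo kill -QUIT  "$(cat /var/run/nginx.pid.oldbin)"  # finally stop the old master

The old master renames its pid file to nginx.pid.oldbin when it receives USR2, which is where the second path comes from.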
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From robm at fastmail.fm Tue Jul 23 00:06:09 2013 From: robm at fastmail.fm (Robert Mueller) Date: Tue, 23 Jul 2013 10:06:09 +1000 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <20130709120817.GK38853@mdounin.ru> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <1373338344.3599.140661253413574.544426D0@webmail.messagingengine.com> <20130709120817.GK38853@mdounin.ru> Message-ID: <1374537969.6400.9223372036855884725.43713815@webmail.messagingengine.com> > > Yes, I haven't heard in a while what the status of this is. I'm > > currently using our existing patch, but would love to drop it and > > upgrade when it's included in nginx core... > > As far as I can tell the state is roughly the same (though patch > in question improved a bit). Valentin? Still no update? Any chance we can include the patch I developed in the short term, until Valentin's patch is ready? It definitely solves a real world issue for us, and by the sounds of it, it would for some other people as well... Rob From nginx-forum at nginx.us Tue Jul 23 00:20:04 2013 From: nginx-forum at nginx.us (rstarkov) Date: Mon, 22 Jul 2013 20:20:04 -0400 Subject: Backend responding with 100 Continue results in the actual response being lost Message-ID: I'm using nginx as a reverse proxy, configured to use HTTP 1.1 so as to support range requests. The server responds to some of the requests with a "100 Continue", even if there was no "Expect: 100-continue" in the request. The server then proceeds to read the rest of the request and, eventually, sends the "200 OK" reponse. In my testing, in this scenario Nginx will forward the 100 Continue to the requesting browser, but the 200 OK response never makes it. It seems to be silently dropped. The result is that the browser is stuck waiting for a response until the request times out. Is this expected behaviour? From my reading of the HTTP 1/1 spec, this server is not in violation of the spec. I'm using v1.4.2 on Windows. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241130,241130#msg-241130 From shahzaib.cb at gmail.com Tue Jul 23 05:26:45 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 23 Jul 2013 10:26:45 +0500 Subject: Limit_rate for different resolutions !! In-Reply-To: <20130722192844.GQ15782@craic.sysops.org> References: <20130720093540.GI15782@craic.sysops.org> <20130720095815.GJ15782@craic.sysops.org> <20130721090648.GL15782@craic.sysops.org> <20130721201312.GM15782@craic.sysops.org> <20130722192844.GQ15782@craic.sysops.org> Message-ID: Going with the same configuration as far as they are working for me. Thanks again Francis :) Best Regards. 
Shahzaib On Tue, Jul 23, 2013 at 12:28 AM, Francis Daly wrote: > On Mon, Jul 22, 2013 at 09:27:59AM +0500, shahzaib shahzaib wrote: > > Hi there, > > > location ~ -720\.(mp4)$ { > > mp4; > > expires 7d; > > limit_rate 500k; > > root /var/www/html/videos; > > valid_referers none blocked domain.com; > > if ($invalid_referer) { > > return 403; > > } > > } > > location ~ -480\.(mp4)$ { > > mp4; > > expires 7d; > > limit_rate 250k; > > root /var/www/html/videos; > > valid_referers none blocked domain.com; > > if ($invalid_referer) { > > return 403; > > } > > } > > location ~ \.(mp4)$ { > > mp4; > > expires 7d; > > root /var/www/html/videos; > > valid_referers none blocked domain.com; > > if ($invalid_referer) { > > return 403; > > } > > } > > It looks reasonable from here. > > The various repeated directives may fit better one level higher, but > that depends on the rest of the configuration. > > The parentheses ("()") around mp4 in the locations look unnecessary. > > But if it does what you want, it's good. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jul 23 08:24:38 2013 From: nginx-forum at nginx.us (JackB) Date: Tue, 23 Jul 2013 04:24:38 -0400 Subject: Update nginx with Ubuntu PPA In-Reply-To: References: Message-ID: <14f193d1417e1440cc9ed4563ce36fbd.NginxMailingListEnglish@forum.nginx.org> openletter Wrote: ------------------------------------------------------- > If you are using the apt-get upgrade or aptitude upgrade commands, the > service will be restarted for you. This might be a little off topic, but how can one upgrade nginx on ubuntu with the official ppa via apt without having a restart of nginx but an upgrade instead? (/etc/init.d/nginx upgrade) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241123,241132#msg-241132 From ar at xlrs.de Tue Jul 23 08:33:59 2013 From: ar at xlrs.de (Axel) Date: Tue, 23 Jul 2013 10:33:59 +0200 Subject: Update nginx with Ubuntu PPA In-Reply-To: <14f193d1417e1440cc9ed4563ce36fbd.NginxMailingListEnglish@forum.nginx.org> References: <14f193d1417e1440cc9ed4563ce36fbd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <11086615.uyqPLn4cN5@lxrosenski.pag> You can download and extract the package and replace all files. But you will have to load the new binary at one time. I might be wrong - but I suppose you can do that gracefully Regards, Axl Am Dienstag, 23. Juli 2013, 04:24:38 schrieb JackB: > This might be a little off topic, but how can one upgrade nginx on ubuntu > with the official ppa via apt without having a restart of nginx but an > upgrade instead? (/etc/init.d/nginx upgrade) > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,241123,241132#msg-241132 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From adrien.saladin at gmail.com Tue Jul 23 09:22:26 2013 From: adrien.saladin at gmail.com (Adrien Saladin) Date: Tue, 23 Jul 2013 11:22:26 +0200 Subject: port redirection issue while using ssh tunnel Message-ID: Hi list, I have a web app proxied by nginx. Everything works fine locally. However the web server is on our private network and I would like to access it though a ssh tunnel from the outside. Most operations works fine except when the web app returns a 302 redirection. 
In that case it seems that nginx removes the http port (detailed issue below). Here are the details: The ssh tunnel is made through our ssh gateway: ssh me at ourgateway.tld -L8080:privateWebServer:80 I then connect to a 'normal' page and everything looks good: $ curl http://localhost:8080/wiki/wiki -D - HTTP/1.1 200 OK Server: nginx/0.7.67 Date: Tue, 23 Jul 2013 08:52:54 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Content-Length: 98961 $ curl http://localhost:8080/wiki -D - However when I try a page that returns a 302 redirect, I have this: HTTP/1.1 302 Found Server: nginx/0.7.67 Date: Tue, 23 Jul 2013 08:54:46 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Content-Length: 93937 Location: http://localhost/wiki/wiki (the 8080 port was removed from the location). If I try to contact directly the web app throug the tunnel (on port 6544) I have this: $ curl http://localhost:6544/wiki HTTP/1.1 302 Found Content-Length: 178 Content-Type: text/html; charset=UTF-8 Date: Tue, 23 Jul 2013 08:57:16 GMT Location: http://localhost:6544/wiki/wiki Server: waitress So it looks like that the problem comes from my nginx configuration. location /wiki { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; port_in_redirect on; proxy_pass http://127.0.0.1:6544; } I actived the debug log and this extract looks interesting: 2013/07/23 11:08:10 [debug] 14798#0: *6845 http header: "Host: localhost:8080" [...] 2013/07/23 11:08:10 [debug] 14798#0: *6845 http proxy header: 2013/07/23 11:08:10 [debug] 14798#0: *6845 http script copy: "Host: " 2013/07/23 11:08:10 [debug] 14798#0: *6845 http script var: "localhost" 2013/07/23 11:08:10 [debug] 14798#0: *6845 http script copy: " [...] "GET /wiki HTTP/1.0 Host: localhost X-Real-IP: *our_ssh_gateway_ip* X-Forwarded-For: *our_ssh_gateway_ip* X-Forwarded-Proto: http Connection: close User-Agent: curl/7.26.0 Accept: */* " I would really appreciate any help regarding this issue. Regards, Adrien From sb at waeme.net Tue Jul 23 10:14:21 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Tue, 23 Jul 2013 14:14:21 +0400 Subject: Nginx 1.4.2 Centos Packages In-Reply-To: <1204161374529490@web7e.yandex.ru> References: <1204161374529490@web7e.yandex.ru> Message-ID: <752D59C0-9F96-4DA2-908E-B28EC8C19B27@waeme.net> On 23 Jul2013, at 01:44 , Bing Bang wrote: > Hi, > > I just updated my nginx from 1.4.1 to 1.4.2 from packages provided by nginx for centos and then i realized that many modules available with 1.4.1 are not compiled in premade packages of 1.4.2 for centos, so i've just downgraded my nginx to 1.4.1 again List of the compiled in modules has not changed in 1.4.2. You probably downloaded nginx 1.4.1 rpm from third party repositories, not nginx one. From adrien.saladin at gmail.com Tue Jul 23 10:49:23 2013 From: adrien.saladin at gmail.com (Adrien Saladin) Date: Tue, 23 Jul 2013 12:49:23 +0200 Subject: port redirection issue while using ssh tunnel In-Reply-To: References: Message-ID: Hi again, I found the problem: I had `proxy_set_header Host $host;` in the configuration file. If I replace this line by `proxy_set_header Host $http_host;` the port is now correctly set on http 302. Regards, > Hi list, > > I have a web app proxied by nginx. Everything works fine locally. > However the web server is on our private network and I would like to > access it though a ssh tunnel from the outside. 
> Most operations works fine except when the web app returns a 302 > redirection. In that case it seems that nginx removes the http port > (detailed issue below). From mdounin at mdounin.ru Tue Jul 23 11:20:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Jul 2013 15:20:45 +0400 Subject: Backend responding with 100 Continue results in the actual response being lost In-Reply-To: References: Message-ID: <20130723112044.GF90722@mdounin.ru> Hello! On Mon, Jul 22, 2013 at 08:20:04PM -0400, rstarkov wrote: > I'm using nginx as a reverse proxy, configured to use HTTP 1.1 so as to > support range requests. The server responds to some of the requests with a > "100 Continue", even if there was no "Expect: 100-continue" in the request. > The server then proceeds to read the rest of the request and, eventually, > sends the "200 OK" reponse. > > In my testing, in this scenario Nginx will forward the 100 Continue to the > requesting browser, but the 200 OK response never makes it. It seems to be > silently dropped. The result is that the browser is stuck waiting for a > response until the request times out. > > Is this expected behaviour? From my reading of the HTTP 1/1 spec, this > server is not in violation of the spec. I'm using v1.4.2 on Windows. As of now nginx doesn't know how to handle 1xx informational responses, and it always removes Expect from a requests sent to backends due to this. RFC 2616 say: An origin server SHOULD NOT send a 100 (Continue) response if the request message does not include an Expect request-header field with the "100-continue" expectation, and MUST NOT send a 100 (Continue) response if such a request comes from an HTTP/1.0 (or earlier) client. That is, it's 100 Continue isn't expected to be returned to nginx from a complaint HTTP/1.1 server even if a request is via HTTP/1.1. If your backend server returns 100 Continue for some reason, you may try switching proxy_http_version back to 1.0 as by default. Most of HTTP servers are capable of handling Range requests via HTTP/1.0 as well, you shouldn't need HTTP/1.1 for Range requests to work. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 23 11:45:19 2013 From: nginx-forum at nginx.us (inspiron) Date: Tue, 23 Jul 2013 07:45:19 -0400 Subject: Nginx 1.4.2 Centos Packages In-Reply-To: <752D59C0-9F96-4DA2-908E-B28EC8C19B27@waeme.net> References: <752D59C0-9F96-4DA2-908E-B28EC8C19B27@waeme.net> Message-ID: I haven't changed anything since last update from 1.4 to 1.4.1 and now again like before i just used "yum update nginx" which updated nginx from 1.4.1 to 1.4.2 just like what i've done before to upgrade from 1.4 to 1.4.1 ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241126,241141#msg-241141 From nginx-forum at nginx.us Tue Jul 23 11:53:00 2013 From: nginx-forum at nginx.us (rstarkov) Date: Tue, 23 Jul 2013 07:53:00 -0400 Subject: Backend responding with 100 Continue results in the actual response being lost In-Reply-To: <20130723112044.GF90722@mdounin.ru> References: <20130723112044.GF90722@mdounin.ru> Message-ID: <9c6b1a1697d0ccfb0c28bc93e2497396.NginxMailingListEnglish@forum.nginx.org> Hello Maxim, Thanks for your recommendations. I realise that what this server is doing is a bit unusual, however: > That is, it's 100 Continue isn't expected to be returned to nginx > from a complaint HTTP/1.1 server even if a request is via > HTTP/1.1. I just wanted to clarify that the server *is* conditionally compliant with HTTP/1.1, since this is a "SHOULD" requirement. 
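In configuration terms that fallback is a one-line change in the proxied location (or simply dropping the proxy_http_version 1.1 line). A minimal sketch, with the backend address assumed:

location / {
    proxy_pass         http://127.0.0.1:8080;  # assumed backend address
    proxy_http_version 1.0;                    # the default; sidesteps the unsolicited 100 Continue
}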
Would be nice if this were supported, since otherwise you get timeouts that are very hard to explain and debug (but are easily attributable to nginx's proxying). In any case. I'm glad I got to the bottom of this very puzzling issue, and that future googlers will finally find something when googling for "nginx 100 continue timeout". Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241130,241142#msg-241142 From sb at waeme.net Tue Jul 23 11:54:55 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Tue, 23 Jul 2013 15:54:55 +0400 Subject: Nginx 1.4.2 Centos Packages In-Reply-To: References: <752D59C0-9F96-4DA2-908E-B28EC8C19B27@waeme.net> Message-ID: <691571AF-9F6D-42FD-A37D-48EA42FE2EA7@waeme.net> On 23 Jul2013, at 15:45 , inspiron wrote: > I haven't changed anything since last update from 1.4 to 1.4.1 and now again > like before i just used "yum update nginx" which updated nginx from 1.4.1 to > 1.4.2 just like what i've done before to upgrade from 1.4 to 1.4.1 ! Please show output of: yum repolist -v and nginx -V From nginx-forum at nginx.us Tue Jul 23 12:55:29 2013 From: nginx-forum at nginx.us (slowhand84) Date: Tue, 23 Jul 2013 08:55:29 -0400 Subject: Access to query param with dashes in the name Message-ID: <93e247e9ebd6da77683c2cd540382a3f.NginxMailingListEnglish@forum.nginx.org> Hi, for the http header Nginx Nginx convert the dashes in the header name to underscores. I have a query string with a param with a dash in the name, but i can't see it with $arg_PARAMETER variable. There is a way to have the value of this parameter in a variable? For example my query string is: response-content-disposition=attachment I know i can parse manually the query string, but I think if there is an Nginx embeded function, this function could by faster than a manual parsing. Regards, Luca Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241144,241144#msg-241144 From nginx-forum at nginx.us Tue Jul 23 12:58:50 2013 From: nginx-forum at nginx.us (mortadelo_de) Date: Tue, 23 Jul 2013 08:58:50 -0400 Subject: access to email body and attachments within mail proxy? Message-ID: Hi, I'm wondering whether the MailProxy module allows access to the e-mail content (body and attachments) of a proxied email. I'd like to intercept (encrypt) emails within SMTP, POP3 and IMAP requests. I haven't been able to find any configuration samples that would allow access to the email content. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241145,241145#msg-241145 From mdounin at mdounin.ru Tue Jul 23 13:04:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Jul 2013 17:04:10 +0400 Subject: access to email body and attachments within mail proxy? In-Reply-To: References: Message-ID: <20130723130410.GK90722@mdounin.ru> Hello! On Tue, Jul 23, 2013 at 08:58:50AM -0400, mortadelo_de wrote: > Hi, I'm wondering whether the MailProxy module allows access to the e-mail > content (body and attachments) of a proxied email. I'd like to intercept > (encrypt) emails within SMTP, POP3 and IMAP requests. I haven't been able to > find any configuration samples that would allow access to the email content. It's not something nginx mail proxy allows to do. It only checks authentication and then establishes an opaque pipe between a client and an upstream server. 
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 23 14:10:30 2013 From: nginx-forum at nginx.us (inspiron) Date: Tue, 23 Jul 2013 10:10:30 -0400 Subject: Nginx 1.4.2 Centos Packages In-Reply-To: <691571AF-9F6D-42FD-A37D-48EA42FE2EA7@waeme.net> References: <691571AF-9F6D-42FD-A37D-48EA42FE2EA7@waeme.net> Message-ID: <47104e377e51ee12b4693c0aed6d5461.NginxMailingListEnglish@forum.nginx.org> Here are those outputs: http://pastebin.com/0Epq6KGj The above output is for nginx 1.4.1; if I update to 1.4.2 I lose nearly half of those lines in the nginx -V output, including geoip etc. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241126,241151#msg-241151 From nginx-forum at nginx.us Tue Jul 23 14:59:26 2013 From: nginx-forum at nginx.us (toriacht) Date: Tue, 23 Jul 2013 10:59:26 -0400 Subject: Load Balancing and High Availability Message-ID: <69d298ba9931de18e1d623767274de9a.NginxMailingListEnglish@forum.nginx.org> Hi, I am an nginx newbie. I have nginx configured as a reverse proxy/load balancer in front of a small cluster of Jboss servers. I have configured it as per the tutorials on the web, like this one ref: https://www.digitalocean.com/community/articles/how-to-set-up-nginx-load-balancing and all works fine with simple round-robin load balancing. My question is: if one of the backend Jboss servers goes down, how do I stop nginx from load balancing requests to the dead application server? Thanks, W Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241152,241152#msg-241152 From nginx-forum at nginx.us Tue Jul 23 15:06:46 2013 From: nginx-forum at nginx.us (toriacht) Date: Tue, 23 Jul 2013 11:06:46 -0400 Subject: Load Balancing and High Availability In-Reply-To: <69d298ba9931de18e1d623767274de9a.NginxMailingListEnglish@forum.nginx.org> References: <69d298ba9931de18e1d623767274de9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6a0049258d12c417ccdae404fd296f0a.NginxMailingListEnglish@forum.nginx.org> Hi, In answer to my own question I found this: +------+ Max Fails According to the default round robin settings, nginx will continue to send data to the virtual private servers, even if the servers are not responding. Max fails can automatically prevent this by rendering unresponsive servers inoperative for a set amount of time. There are two factors associated with max fails: max_fails and fail_timeout. max_fails refers to the maximum number of failed attempts to connect to a server that may occur before it is considered inactive. fail_timeout specifies the length of time that the server is considered inoperative. Once the time expires, new attempts to reach the server will start up again. The default timeout value is 10 seconds. +------+ Is this the best way? Are there any gotchas/best practices to be aware of?
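For reference, the syntax for those parameters looks roughly like the following; the addresses, ports and timings are placeholders, and proxy_next_upstream additionally retries the other peer when one answers with a gateway-type error:

upstream jboss_backend {
    # a node is marked unavailable after 1 failed attempt, for 30 seconds
    server 10.0.0.11:8080 max_fails=1 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=1 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass          http://jboss_backend;
        proxy_next_upstream error timeout http_502 http_503;
    }
}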
Thanks W Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241152,241153#msg-241153 From nginx-forum at nginx.us Tue Jul 23 15:16:13 2013 From: nginx-forum at nginx.us (nitesh) Date: Tue, 23 Jul 2013 11:16:13 -0400 Subject: Load Balancing and High Availability In-Reply-To: <69d298ba9931de18e1d623767274de9a.NginxMailingListEnglish@forum.nginx.org> References: <69d298ba9931de18e1d623767274de9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <28af536bc334b12c5b06dfbcacf0cee2.NginxMailingListEnglish@forum.nginx.org> Hi Team, I am newbiw too and i am setting up load balancer with nginx from " https://www.digitalocean.com/community/articles/how-to-set-up-nginx-load-balancing" but my reuest are not going to the servers which i have configured. below is the my nginx.conf setup upstream nitesh { server 192.168.1.2; server 192.168.1.3; server 192.168.1.4; } } and below is my virtual.conf setup server { listen *:80; server_name nginx.whmcs.co.in; access_log /var/log/nginx/nginx.access.log; error_log /var/log/nginx/nginx_error.log debug; log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time'; location / { proxy_pass http://nitesh; } } but i am not getting the setup page while running website. my another three server are running with apache. so update me on the same that i should have to change in that configuration. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241152,241154#msg-241154 From mdounin at mdounin.ru Tue Jul 23 16:10:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Jul 2013 20:10:54 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <1374537969.6400.9223372036855884725.43713815@webmail.messagingengine.com> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> <1373338344.3599.140661253413574.544426D0@webmail.messagingengine.com> <20130709120817.GK38853@mdounin.ru> <1374537969.6400.9223372036855884725.43713815@webmail.messagingengine.com> Message-ID: <20130723161053.GM90722@mdounin.ru> Hello! On Tue, Jul 23, 2013 at 10:06:09AM +1000, Robert Mueller wrote: > > > > Yes, I haven't heard in a while what the status of this is. I'm > > > currently using our existing patch, but would love to drop it and > > > upgrade when it's included in nginx core... > > > > As far as I can tell the state is roughly the same (though patch > > in question improved a bit). Valentin? > > Still no update? Valentin worked on this during his vacation, and submitted a patch series for an internal review. It didn't passed review though, and needs more work. > Any chance we can include the patch I developed in the short term, until > Valentin's patch is ready? It definitely solves a real world issue for > us, and by the sounds of it, it would for some other people as well... I don't think it's a good idea. Your patch definetely has interoperability problems and will break epoll for kernels before 2.6.17. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Jul 23 16:39:01 2013 From: nginx-forum at nginx.us (imran_kh) Date: Tue, 23 Jul 2013 12:39:01 -0400 Subject: Not listing proxy_pass port 8009 Message-ID: Hello, I am using Nginx web server and getting error ?502 bad gateway? while accessing some sites. I have observed that, proxy_pass port is not listing in the server. 
e.g.:- In above configuration example.com is running on port 80 and proxy_pass localhost is running on port 8009. Port 80 is listing perfectly but port 8009 is not listing because of this reason example.com is not working. Please suggest me on this. server { listen 80; server_name example.com; location / { proxy_pass http://localhost:8009; send_timeout 6000; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } Thanks, Imran Khan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241157#msg-241157 From reallfqq-nginx at yahoo.fr Tue Jul 23 16:55:39 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 23 Jul 2013 12:55:39 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: Message-ID: Your configuration means that Nginx is listening on port 80 and will forward any request form example.com to a backend located on localhost listening on port 8009. Since Nginx is a proxy, you need a backend to serve content to which requests sent to Nginx will be forwarded. You seem not to understand what you are doing... You machine is doing precisely what you ask it to do. Not more, not less. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkearsley at blueyonder.co.uk Tue Jul 23 17:03:01 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Tue, 23 Jul 2013 18:03:01 +0100 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: Message-ID: <51EEB745.9060501@blueyonder.co.uk> the port in proxy_pass is not for listening/accepting incoming connections - it is for connecting outwards to another server/service You must have something else (another httpd, probably not nginx) listening on 8009......? On 23/07/13 17:39, imran_kh wrote: > Hello, > > I am using Nginx web server and getting error ?502 bad gateway? while > accessing some sites. > I have observed that, proxy_pass port is not listing in the server. > > e.g.:- In above configuration example.com is running on port 80 and > proxy_pass localhost is running on port 8009. > Port 80 is listing perfectly but port 8009 is not listing because of this > reason example.com is not working. Please suggest me on this. > > server { > listen 80; > server_name example.com; > > location / { > proxy_pass http://localhost:8009; > send_timeout 6000; > proxy_read_timeout 120; > proxy_connect_timeout 120; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; > } > } > > Thanks, > Imran Khan. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241157#msg-241157 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Jul 23 18:45:21 2013 From: nginx-forum at nginx.us (imran_kh) Date: Tue, 23 Jul 2013 14:45:21 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: <51EEB745.9060501@blueyonder.co.uk> References: <51EEB745.9060501@blueyonder.co.uk> Message-ID: <0f23db92de1e4ed33e0b47fe468cce02.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks for the reply but when I am browsing example.com or using ip address getting error " 502 bad gateway". Please suggest me on this.. Thanks, Imran Khan. 
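A quick way to check Richard's point before changing anything in nginx, assuming the backend really is supposed to be on 127.0.0.1:8009:

# is anything actually listening where proxy_pass points?
sudo netstat -ltnp | grep ':8009'
# and does it answer by hand, bypassing nginx?
curl -v http://127.0.0.1:8009/

If the netstat line prints nothing and the curl is refused, the 502 is coming from the missing backend, not from nginx itself.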
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241160#msg-241160 From miguelmclara at gmail.com Tue Jul 23 18:57:17 2013 From: miguelmclara at gmail.com (Miguel C.) Date: Tue, 23 Jul 2013 19:57:17 +0100 Subject: Not listing proxy_pass port 8009 In-Reply-To: <0f23db92de1e4ed33e0b47fe468cce02.NginxMailingListEnglish@forum.nginx.org> References: <51EEB745.9060501@blueyonder.co.uk> <0f23db92de1e4ed33e0b47fe468cce02.NginxMailingListEnglish@forum.nginx.org> Message-ID: You can't browse to the URL because there is no web application running on port 8009. You tell nginx to listen on port 80...so far all good. But then you are telling nginx to proxy_pass the request to a diferent port... So the question is more: is this really what you want? If so... may I ask what's supposed to be listening on port 8009? I mean you said before that nothing is listening on port 8009... but you want nginx to proxy the requests to that port... this makes no sense. imran_kh wrote: >Hello, > >Thanks for the reply but when I am browsing example.com or using ip >address >getting error " 502 bad gateway". >Please suggest me on this.. > >Thanks, >Imran Khan. > >Posted at Nginx Forum: >http://forum.nginx.org/read.php?2,241157,241160#msg-241160 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From nginx-forum at nginx.us Tue Jul 23 19:20:11 2013 From: nginx-forum at nginx.us (imran_kh) Date: Tue, 23 Jul 2013 15:20:11 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: Message-ID: <6b0fa5a3087259faccdb268c839d0bbb.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks for the reply. I have never worked on Nginx server. So please help me to resolve the issue. File /etc/nginx/nginx.conf content are as follow. user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } File /etc/nginx/sites-enabled/default contains are as follows. 
server { listen 80; server_name abc.com; location / { proxy_pass http://localhost:8007; send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } server { listen 80; server_name pqr.com; client_max_body_size 200m; access_log /var/log/nginx/openerp-access.log; error_log /var/log/nginx/openerp-error.log; #ssl on; #ssl_certificate /etc/ssl/nginx/server.crt; #ssl_certificate_key /etc/ssl/nginx/server.key; #ssl_session_timeout 5m; #ssl_prefer_server_ciphers on; #ssl_protocols SSLv2 SSLv3 TLSv1; #ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; #send_timeout 10m; proxy_max_temp_file_size 0; client_header_timeout 10m; client_body_timeout 10m; send_timeout 10m; location /agromanager { send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_pass http://localhost:8003; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } location /web { send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_pass http://127.0.0.1:8005; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } location / { send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_pass http://127.0.0.1:8002; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } server { listen 80; server_name xyz.com; location / { proxy_pass http://localhost:8010; send_timeout 6000; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } Only xyz.com is working properly expect abc.com, pqr.com and also trying browse through IP address but getting ?502 bad gateway?. Thanks, Imran khan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241162#msg-241162 From contact at jpluscplusm.com Tue Jul 23 20:33:23 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 23 Jul 2013 21:33:23 +0100 Subject: Not listing proxy_pass port 8009 In-Reply-To: <6b0fa5a3087259faccdb268c839d0bbb.NginxMailingListEnglish@forum.nginx.org> References: <6b0fa5a3087259faccdb268c839d0bbb.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 23 July 2013 20:20, imran_kh wrote: > Thanks for the reply. I have never worked on Nginx server. So please help me > to resolve the issue. You've been told what the problem is 3 times already. It's nothing to do with nginx. The problem is that the process that nginx has been configured *to*talk*to* on port 8009 is *not* listening on port 8009. Fix that. J From ar at xlrs.de Tue Jul 23 22:03:08 2013 From: ar at xlrs.de (Axel) Date: Wed, 24 Jul 2013 00:03:08 +0200 Subject: Load Balancing and High Availability In-Reply-To: <6a0049258d12c417ccdae404fd296f0a.NginxMailingListEnglish@forum.nginx.org> References: <69d298ba9931de18e1d623767274de9a.NginxMailingListEnglish@forum.nginx.org> <6a0049258d12c417ccdae404fd296f0a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2932283.Pn3NbrNsg0@lxrosenski.pag> Hi, Am Dienstag, 23. Juli 2013, 11:06:46 schrieb toriacht: > Hi, > > IN answer to my own question I found this.. > > +------+ > Max Fails > Fall_timeout specifies the length of that the server is considered > inoperative. 
Once the time expires, new attempts to reach the server will > start up again. The default timeout value is 10 seconds. > +------+ this is the way I did it. I set max_fails=1 and fail_timeout in my upstream definition and in my location block proxy_next_upstream http_502 http_503 error; You can use any allowed http status code here Rgds, Axel From nginx-forum at nginx.us Tue Jul 23 23:41:10 2013 From: nginx-forum at nginx.us (imran_kh) Date: Tue, 23 Jul 2013 19:41:10 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: Message-ID: <9daec5992533b30e8495f2a9f1870781.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks for the reply, Actually openerp is also hosted on this server. Is this create ?502 bad gateway? error? Please find the error log and suggest me on this. #openerp-error.log 2013/07/23 18:21:15 [error] 1465#0: *196 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.10.10, server: pqr.com, request: "GET / HTTP/1.0", upstream: "http://127.0.0.1:8002/", host: "pqr.com" #error.log 2013/07/23 18:21:16 [error] 1465#0: *198 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.10.10, server: abc.com, request: "GET / HTTP/1.0", upstream: "http://127.0.0.1:8007/", host: "abc.com" Thanks, Imran Khan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241168#msg-241168 From nik.molnar at consbio.org Wed Jul 24 00:00:03 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 23 Jul 2013 17:00:03 -0700 Subject: Not listing proxy_pass port 8009 In-Reply-To: <9daec5992533b30e8495f2a9f1870781.NginxMailingListEnglish@forum.nginx.org> References: <9daec5992533b30e8495f2a9f1870781.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51EF1903.9060008@consbio.org> "502 Bad Gateway" almost always means something is wrong with the upstream server (i.e., Nginx is working fine, whatever it's proxying to is having problems) so look for a problem there. _Nik On 7/23/2013 4:41 PM, imran_kh wrote: > Hello, > > Thanks for the reply, > > Actually openerp is also hosted on this server. Is this create ?502 bad > gateway? error? Please find the error log and suggest me on this. > > #openerp-error.log > 2013/07/23 18:21:15 [error] 1465#0: *196 connect() failed (111: Connection > refused) while connecting to upstream, client: 10.10.10.10, server: pqr.com, > request: "GET / HTTP/1.0", upstream: "http://127.0.0.1:8002/", host: > "pqr.com" > > #error.log > 2013/07/23 18:21:16 [error] 1465#0: *198 connect() failed (111: Connection > refused) while connecting to upstream, client: 10.10.10.10, server: abc.com, > request: "GET / HTTP/1.0", upstream: "http://127.0.0.1:8007/", host: > "abc.com" > > Thanks, > Imran Khan. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241168#msg-241168 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From miguelmclara at gmail.com Wed Jul 24 00:07:29 2013 From: miguelmclara at gmail.com (Miguel C.) Date: Wed, 24 Jul 2013 01:07:29 +0100 Subject: Not listing proxy_pass port 8009 In-Reply-To: <51EF1903.9060008@consbio.org> References: <9daec5992533b30e8495f2a9f1870781.NginxMailingListEnglish@forum.nginx.org> <51EF1903.9060008@consbio.org> Message-ID: <13c67b4c-2101-4a6d-9842-5fc20614cc95@email.android.com> The log tells you much... although has was already said the problem is that something's wrong with whatever should be listening on those ports... 
Openerp is one of those and probably is not running or at least it's not listening on the port specified in nginx configuration... In any case the problem is not in nginx... find out what is supposed to be listening on those ports... I doubt you will have problems with nginx after that. Nikolas Stevenson-Molnar wrote: >"502 Bad Gateway" almost always means something is wrong with the >upstream server (i.e., Nginx is working fine, whatever it's proxying to >is having problems) so look for a problem there. > >_Nik > >On 7/23/2013 4:41 PM, imran_kh wrote: >> Hello, >> >> Thanks for the reply, >> >> Actually openerp is also hosted on this server. Is this create ?502 >bad >> gateway? error? Please find the error log and suggest me on this. >> >> #openerp-error.log >> 2013/07/23 18:21:15 [error] 1465#0: *196 connect() failed (111: >Connection >> refused) while connecting to upstream, client: 10.10.10.10, server: >pqr.com, >> request: "GET / HTTP/1.0", upstream: "http://127.0.0.1:8002/", host: >> "pqr.com" >> >> #error.log >> 2013/07/23 18:21:16 [error] 1465#0: *198 connect() failed (111: >Connection >> refused) while connecting to upstream, client: 10.10.10.10, server: >abc.com, >> request: "GET / HTTP/1.0", upstream: "http://127.0.0.1:8007/", host: >> "abc.com" >> >> Thanks, >> Imran Khan. >> >> Posted at Nginx Forum: >http://forum.nginx.org/read.php?2,241157,241168#msg-241168 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From nginx-forum at nginx.us Wed Jul 24 00:14:38 2013 From: nginx-forum at nginx.us (drinsnow) Date: Tue, 23 Jul 2013 20:14:38 -0400 Subject: ssl handshake fail when proxy between two tomcat with mutual authentication Message-ID: <9f165963da2af2f5fad4a68f555cef2a.NginxMailingListEnglish@forum.nginx.org> Hi, I've got a problem when setting up nginx as load balancer between two tomcats with mutual authentication. The system is like: Tomcat1 <--https-> Nginx <--https--> Tomcat2. Before adding nginx, the mutual authentication between tomcat1 and tomcat2 works fine, using cert/key and keystore/truststore. Now with nginx, links between tomcat1 and nginx is OK, but the SSL handshake between nginx and tomcat2 not work. Wonder how to assign the keystore/truststore stuff that needed when communicating with tomcat2, can't find related directive in nginx ssl module configuration. Any idea for this? Thanks! 
My nginx configuration is like: upstream backend { server 10.1.1.1:8443; server 10.1.1.2:8443; } server { listen 8443 ssl; server_name localhost; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_client_certificate /etc/nginx/ssl/ca.crt; ssl_ciphers ALL:!ADH:!kEDH:!SSLv2:!EXPORT40:!EXP:!LOW; ssl_verify_client on; ssl_verify_depth 2; location / { proxy_pass https://backend; } } And tomcat2 configuration is like: And the error log is: 2013/07/23 20:25:11 [error] 18116#0: *1 SSL_do_handshake() failed (SSL: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:SSL alert number 40) while SSL handshaking to upstream, client *** Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241171,241171#msg-241171 From nginx-forum at nginx.us Wed Jul 24 00:20:20 2013 From: nginx-forum at nginx.us (imran_kh) Date: Tue, 23 Jul 2013 20:20:20 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: <13c67b4c-2101-4a6d-9842-5fc20614cc95@email.android.com> References: <13c67b4c-2101-4a6d-9842-5fc20614cc95@email.android.com> Message-ID: <572251c36a09aeae11ba7b4b5da70905.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks for the prompt reply. I have scanned the listing ports in the servers. Please help me out to fix this issue. I never worked on Nginx server and am totally stuck. # sudo nmap localhost Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-23 20:15 EDT Nmap scan report for localhost (127.0.0.1) Host is up (0.000018s latency). Not shown: 991 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 80/tcp open http 631/tcp open ipp 5432/tcp open postgresql 5666/tcp open nrpe 8008/tcp open http 8009/tcp open ajp13 8090/tcp open unknown Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds # sudo nmap IP_Address Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-23 20:15 EDT Nmap scan report for hostname (IP_Address) Host is up (0.000019s latency). Not shown: 992 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 80/tcp open http 5666/tcp open nrpe 8008/tcp open http 8009/tcp open ajp13 8080/tcp open http-proxy 8090/tcp open unknown Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds Thanks, Imran Khan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241172#msg-241172 From nik.molnar at consbio.org Wed Jul 24 00:30:19 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 23 Jul 2013 17:30:19 -0700 Subject: Not listing proxy_pass port 8009 In-Reply-To: <572251c36a09aeae11ba7b4b5da70905.NginxMailingListEnglish@forum.nginx.org> References: <13c67b4c-2101-4a6d-9842-5fc20614cc95@email.android.com> <572251c36a09aeae11ba7b4b5da70905.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51EF201B.9030508@consbio.org> Note that the log entries from your previous email indicate upstream servers on ports 8002 and 8007. According to what you've posted here, there's nothing listening on either of those ports. _Nik On 7/23/2013 5:20 PM, imran_kh wrote: > Hello, > > Thanks for the prompt reply. > > I have scanned the listing ports in the servers. Please help me out to fix > this issue. > I never worked on Nginx server and am totally stuck. > > # sudo nmap localhost > Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-23 20:15 EDT > Nmap scan report for localhost (127.0.0.1) > Host is up (0.000018s latency). 
> Not shown: 991 closed ports > PORT STATE SERVICE > 21/tcp open ftp > 22/tcp open ssh > 80/tcp open http > 631/tcp open ipp > 5432/tcp open postgresql > 5666/tcp open nrpe > 8008/tcp open http > 8009/tcp open ajp13 > 8090/tcp open unknown > > Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds > # sudo nmap IP_Address > > Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-23 20:15 EDT > Nmap scan report for hostname (IP_Address) > Host is up (0.000019s latency). > Not shown: 992 closed ports > PORT STATE SERVICE > 21/tcp open ftp > 22/tcp open ssh > 80/tcp open http > 5666/tcp open nrpe > 8008/tcp open http > 8009/tcp open ajp13 > 8080/tcp open http-proxy > 8090/tcp open unknown > > Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds > > Thanks, > Imran Khan. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241172#msg-241172 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jul 24 00:44:09 2013 From: nginx-forum at nginx.us (imran_kh) Date: Tue, 23 Jul 2013 20:44:09 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: <51EF201B.9030508@consbio.org> References: <51EF201B.9030508@consbio.org> Message-ID: <9e2bbb7b1fb77014e098657f85992894.NginxMailingListEnglish@forum.nginx.org> Hello, Correct. So how should I resolve this issue? Thanks, Imran Khan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241175#msg-241175 From nik.molnar at consbio.org Wed Jul 24 01:15:55 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 23 Jul 2013 18:15:55 -0700 Subject: Not listing proxy_pass port 8009 In-Reply-To: <9e2bbb7b1fb77014e098657f85992894.NginxMailingListEnglish@forum.nginx.org> References: <51EF201B.9030508@consbio.org> <9e2bbb7b1fb77014e098657f85992894.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51EF2ACB.6050803@consbio.org> If 8009 is the desired port, then change the upstream server in your nginx conf to use port 8009. If 8002/8007 are the correct ports, then change the upstream server to listen on those ports. _Nik On 7/23/2013 5:44 PM, imran_kh wrote: > Hello, > > Correct. So how should I resolve this issue? > > Thanks, > Imran Khan. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241175#msg-241175 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jul 24 01:36:16 2013 From: nginx-forum at nginx.us (imran_kh) Date: Tue, 23 Jul 2013 21:36:16 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: <51EF2ACB.6050803@consbio.org> References: <51EF2ACB.6050803@consbio.org> Message-ID: Hello, Thanks but how can I resolve the error ?502 bad gateway? for IP Address? Getting this error while browsing site using IP Address. Thanks. 
Imran khan, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241177#msg-241177 From piotr at cloudflare.com Wed Jul 24 01:47:46 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 23 Jul 2013 18:47:46 -0700 Subject: Backend responding with 100 Continue results in the actual response being lost In-Reply-To: <9c6b1a1697d0ccfb0c28bc93e2497396.NginxMailingListEnglish@forum.nginx.org> References: <20130723112044.GF90722@mdounin.ru> <9c6b1a1697d0ccfb0c28bc93e2497396.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, I've made a patch for this issue a while ago: http://mailman.nginx.org/pipermail/nginx-devel/2012-December/003152.html Best regards, Piotr Sikora From nik.molnar at consbio.org Wed Jul 24 01:59:04 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Tue, 23 Jul 2013 18:59:04 -0700 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: <51EF2ACB.6050803@consbio.org> Message-ID: <51EF34E8.6050404@consbio.org> Are there supposed to be services running on ports 8002 and 8007? If so, then they don't seem to actually be running and you need to fix that (that's not nginx-related). If you actually meant to proxy to another port(s), then look at your nginx config, find where you're proxying to 8002 and 8007 and change those to the correct ports. _Nik On 7/23/2013 6:36 PM, imran_kh wrote: > Hello, > > Thanks but how can I resolve the error ?502 bad gateway? for IP Address? > Getting this error while browsing site using IP Address. > > Thanks. > Imran khan, > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241177#msg-241177 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jul 24 03:12:40 2013 From: nginx-forum at nginx.us (bkoski) Date: Tue, 23 Jul 2013 23:12:40 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: References: <20130522152034.GJ69760@mdounin.ru> Message-ID: I had a very similar problem using nginx to front rainbows/rails. Even though rails was definitely returning a JSON response, ngnix always responded with 200 OK, Content-Length: 0, and an empty body. The strange thing was that the error was related to the size of the incoming POST. When the request body was small everything worked fine, the full response from Rails appeared. But as soon as the request was >100kb the response was always blank. In my case, the logs were filled with errors like "readv() failed (104: Connection reset by peer) while reading upstream." That error led me to this post http://stackoverflow.com/questions/10393203/error-readv-failed-104-connection-reset-by-peer-while-reading-upstream. It turns out that if you do not read incoming POST data (in Rails, using request.body.read) this will cause the "connection reset" error which in turn results in a blank response body. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,241182#msg-241182 From mdounin at mdounin.ru Wed Jul 24 07:33:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Jul 2013 11:33:43 +0400 Subject: Backend responding with 100 Continue results in the actual response being lost In-Reply-To: References: <20130723112044.GF90722@mdounin.ru> <9c6b1a1697d0ccfb0c28bc93e2497396.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130724073343.GR90722@mdounin.ru> Hello! 
On Tue, Jul 23, 2013 at 06:47:46PM -0700, Piotr Sikora wrote: > Hello, > I've made a patch for this issue a while ago: > http://mailman.nginx.org/pipermail/nginx-devel/2012-December/003152.html I believe I've already reviewd this patch and explained why it's wrong. -- Maxim Dounin http://nginx.org/en/donation.html From piotr at cloudflare.com Wed Jul 24 07:45:16 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 24 Jul 2013 00:45:16 -0700 Subject: Backend responding with 100 Continue results in the actual response being lost In-Reply-To: <20130724073343.GR90722@mdounin.ru> References: <20130723112044.GF90722@mdounin.ru> <9c6b1a1697d0ccfb0c28bc93e2497396.NginxMailingListEnglish@forum.nginx.org> <20130724073343.GR90722@mdounin.ru> Message-ID: Hey Maxim, >> I've made a patch for this issue a while ago: >> http://mailman.nginx.org/pipermail/nginx-devel/2012-December/003152.html > > I believe I've already reviewd this patch and explained why it's > wrong. Yes, you did... But fixing it ended very low on my TODO list. I just pointed it to rstarkov, because it seems that he could use working patch right now. Best regards, Piotr Sikora From nginx-forum at nginx.us Wed Jul 24 11:36:20 2013 From: nginx-forum at nginx.us (spdyg) Date: Wed, 24 Jul 2013 07:36:20 -0400 Subject: SPDY + proxy cache static content failures In-Reply-To: <32F6D688-5016-451D-A43A-72FC22C5DBBB@gwynne.id.au> References: <32F6D688-5016-451D-A43A-72FC22C5DBBB@gwynne.id.au> Message-ID: <928247ac3646300bd8845120ca3aae9b.NginxMailingListEnglish@forum.nginx.org> David Gwynne Wrote: ------------------------------------------------------- > i am also experiencing this problem running 1.4.2. Hi David. The response I got from my Valentin off-list was that there is a known incompatibility with SPDY and proxy cache, due to the way proxy cache will kill the connection to prevent it sending the body response (but before it has had a chance to finish dealing with all the SPDY requests, I think). Only solution now is to either disable SPDY or proxy cache (I went with SPDY, of course). I wonder if a note in the documentation might be in order to say there are known issues. There is supposed to be a rewrite coming for proxy cache that will help, but not sure when it is likely to appear. Phil Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,241193#msg-241193 From grails at jmsd.co.uk Wed Jul 24 12:24:57 2013 From: grails at jmsd.co.uk (John Moore) Date: Wed, 24 Jul 2013 12:24:57 +0000 Subject: Prevent nginx caching empty pages? Message-ID: <4tph3nvcir9c.1gq09-oftx6b2h@elasticemail.com> Hi, I'm trying to diagnose a problem we have on a rather complex system with an nginx acting as a remote proxy server, where what appears to be a completely empty response is being returned, seemingly from the nginx cache. It's early days yet in my troubleshooting, so this may not actually be the problem, but just in case, I have two questions: 1. Is this actually possible? Will nginx actually cache an empty response? (I am not sure how an empty response could actually come from the back end without being marked with some kind of error response header, which would presumably prevent any caching, but that's a different issue). 2. If it is possible, is there any way to prevent it from happening? E.g., setting some kind of minimal cachable size for the cache? This is using the now rather ancient version 0.7.65, I'm afraid. 
John M From sb at nginx.com Wed Jul 24 12:37:30 2013 From: sb at nginx.com (Sergey Budnevitch) Date: Wed, 24 Jul 2013 16:37:30 +0400 Subject: Nginx 1.4.2 Centos Packages In-Reply-To: <47104e377e51ee12b4693c0aed6d5461.NginxMailingListEnglish@forum.nginx.org> References: <691571AF-9F6D-42FD-A37D-48EA42FE2EA7@waeme.net> <47104e377e51ee12b4693c0aed6d5461.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7517C510-9B2A-4AB6-A22A-BD63929F5DDD@nginx.com> On 23 Jul2013, at 18:10 , inspiron wrote: > here are those outputs : http://pastebin.com/0Epq6KGj > > the above output is for nginx 1.4.1 if i update to 1.4.2 i loose nearly half > of those lines in nginx -v output including geoip and etc. Your 1.4.1 package is probably from atomic repo. We will not include third-party modules and modules with additional dependencies. From nginx-forum at nginx.us Wed Jul 24 19:19:47 2013 From: nginx-forum at nginx.us (imran_kh) Date: Wed, 24 Jul 2013 15:19:47 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: Message-ID: <2cad4907846aa6193bac5a8d336e0f25.NginxMailingListEnglish@forum.nginx.org> Hello, I have observed that, Nginx configured on port 80 as per /etc/nginx/sites-enabled/default file but it is listening on port 8080 and 80. Please see the details for the same. server { listen 80; server_name example.com; location / { proxy_pass http://localhost:8009; send_timeout 6000; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } Port 8080 is listening on Public IP address. # sudo netstat -anop | grep :8080 tcp 0 0 Public_IP:8080 0.0.0.0:* LISTEN 18674/nginx off (0.00/0/0) Port 80 is listening on 0.0.0.0 # sudo netstat -anop | grep :80 tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 18674/nginx off (0.00/0/0) When I am trying to start the nginx service without init.d script getting below message. #sudo /usr/sbin/nginx [sudo] password for xyz: nginx: [warn] conflicting server name " example.com " on 0.0.0.0:80, ignored nginx: [warn] conflicting server name " example1.com " on 0.0.0.0:80, ignored nginx: [warn] conflicting server name "example.com " on 0.0.0.0:80, ignored nginx: [warn] conflicting server name "example2.com " on 0.0.0.0:80, ignored nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] still could not bind() Thanks, Imran Khan. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241203#msg-241203 From nik.molnar at consbio.org Wed Jul 24 19:36:32 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Wed, 24 Jul 2013 12:36:32 -0700 Subject: Not listing proxy_pass port 8009 In-Reply-To: <2cad4907846aa6193bac5a8d336e0f25.NginxMailingListEnglish@forum.nginx.org> References: <2cad4907846aa6193bac5a8d336e0f25.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51F02CC0.6030202@consbio.org> Go back over all your nginx conf files, making sure to look at included files as well... it seems that somewhere you have "listen 8080" and possibly overlapping "server_name" directives. If that fails, I'd suggest you backup then throw out your current config, then start simple and build it up one piece at a time, testing each addition. E.g., start with just a single "server" block, get it working, then add the next one, etc. etc. _Nik On 7/24/2013 12:19 PM, imran_kh wrote: > Hello, > > I have observed that, Nginx configured on port 80 as per > /etc/nginx/sites-enabled/default file but it is listening on port 8080 and > 80. Please see the details for the same. > > server { > listen 80; > server_name example.com; > > location / { > proxy_pass http://localhost:8009; > send_timeout 6000; > proxy_read_timeout 120; > proxy_connect_timeout 120; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; > } > } > > Port 8080 is listening on Public IP address. > # sudo netstat -anop | grep :8080 > tcp 0 0 Public_IP:8080 0.0.0.0:* LISTEN > 18674/nginx off (0.00/0/0) > > Port 80 is listening on 0.0.0.0 > # sudo netstat -anop | grep :80 > tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN > 18674/nginx off (0.00/0/0) > > When I am trying to start the nginx service without init.d script getting > below message. > > #sudo /usr/sbin/nginx > [sudo] password for xyz: > nginx: [warn] conflicting server name " example.com " on 0.0.0.0:80, > ignored > nginx: [warn] conflicting server name " example1.com " on 0.0.0.0:80, > ignored > nginx: [warn] conflicting server name "example.com " on 0.0.0.0:80, ignored > nginx: [warn] conflicting server name "example2.com " on 0.0.0.0:80, > ignored > nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to Public_IP:8080 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] still could not bind() > > Thanks, > Imran Khan. 
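A minimal starting block along the lines Nik suggests might look like the following; every name, port and path here is a placeholder to be replaced with your own values:

server {
    listen      80 default_server;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8009;
    }
}

Once that single block answers requests as expected, add the next server block, run "nginx -t" and reload, and re-test after each addition.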
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241203#msg-241203 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lists at ruby-forum.com Wed Jul 24 20:29:52 2013 From: lists at ruby-forum.com (Maik Unger) Date: Wed, 24 Jul 2013 22:29:52 +0200 Subject: =?UTF-8?Q?Hashes_/_Arrays_f=C3=BCr_=24arg=5FPARAMETER?= Message-ID: <261a71e64ba23ba24feb21405e134571@ruby-forum.com> Hello @all, Is it possible to define array / hashes in nginx? I have the following idea: nginx should check $arg_PARAMETER which is defined by an array. If the right element in the array, then die access is permitted, otherwise denied. example: the $arg_PARAMETER can have the following values: "ab" or "bc" or cd" if $arg_PARAMATER != array -> Access is denied, otherwise permitted I would appreciate about a solution. Regards, Maik -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Jul 24 20:36:28 2013 From: nginx-forum at nginx.us (imran_kh) Date: Wed, 24 Jul 2013 16:36:28 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: <51F02CC0.6030202@consbio.org> References: <51F02CC0.6030202@consbio.org> Message-ID: Hello, Thanks for the reply. Yes you are correct. I have tried to change the port from 8080 to 80 in /etc/nginx/conf.d/default.conf. Browse the Public IP address and xyz.com site, getting ?no handler found? error. File /etc/nginx/conf.d/default.conf content as follows. # sudo cat default.conf ## Basic reverse proxy server ## ## Apache (vm02) backend for www.example.com ## upstream apachephp { server Public_IP:8069; #Apache1 } ## Start www.example.com ## server { listen Public_IP:8080; #server_name www.example.com; #access_log /var/log/nginx/log/openerp; #error_log /var/log/nginx/log/openerp.log; #root /usr/share/nginx/html; #index index.html index.htm; ## send request back to apache1 ## location / { proxy_pass http://Public_IP:8069; #proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } File /etc/nginx/nginx.conf content as follows. user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } File /etc/nginx/sites-enabled/default contains as follows. 
server { listen 80; server_name abc.com; location / { proxy_pass http://localhost:8007; send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } server { listen 80; server_name pqr.com; client_max_body_size 200m; access_log /var/log/nginx/openerp-access.log; error_log /var/log/nginx/openerp-error.log; #ssl on; #ssl_certificate /etc/ssl/nginx/server.crt; #ssl_certificate_key /etc/ssl/nginx/server.key; #ssl_session_timeout 5m; #ssl_prefer_server_ciphers on; #ssl_protocols SSLv2 SSLv3 TLSv1; #ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; #send_timeout 10m; proxy_max_temp_file_size 0; client_header_timeout 10m; client_body_timeout 10m; send_timeout 10m; location /agromanager { send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_pass http://localhost:8003; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } location /web { send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_pass http://127.0.0.1:8005; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } location / { send_timeout 600; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_pass http://127.0.0.1:8002; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } server { listen 80; server_name xyz.com; location / { proxy_pass http://localhost:8010; send_timeout 6000; proxy_read_timeout 120; proxy_connect_timeout 120; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; } } Thanks, Imran Khan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241206#msg-241206 From nginx-forum at nginx.us Wed Jul 24 21:50:35 2013 From: nginx-forum at nginx.us (imran_kh) Date: Wed, 24 Jul 2013 17:50:35 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: Message-ID: <8ae7f76e3cebea09f5d1768938e27227.NginxMailingListEnglish@forum.nginx.org> Hello, Any suggestion or advice? Thanks, Imran Khan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241207#msg-241207 From nik.molnar at consbio.org Wed Jul 24 21:58:26 2013 From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar) Date: Wed, 24 Jul 2013 14:58:26 -0700 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: <51F02CC0.6030202@consbio.org> Message-ID: <51F04E02.9050701@consbio.org> That looks like an OpenERP problem: http://bit.ly/1aJm0DB _Nik On 7/24/2013 1:36 PM, imran_kh wrote: > Hello, > > Thanks for the reply. > > Yes you are correct. I have tried to change the port from 8080 to 80 in > /etc/nginx/conf.d/default.conf. > Browse the Public IP address and xyz.com site, getting ?no handler found? > error. > > File /etc/nginx/conf.d/default.conf content as follows. 
> # sudo cat default.conf > ## Basic reverse proxy server ## > ## Apache (vm02) backend for www.example.com ## > upstream apachephp { > server Public_IP:8069; #Apache1 > } > > ## Start www.example.com ## > server { > listen Public_IP:8080; > #server_name www.example.com; > > #access_log /var/log/nginx/log/openerp; > #error_log /var/log/nginx/log/openerp.log; > #root /usr/share/nginx/html; > #index index.html index.htm; > > ## send request back to apache1 ## > location / { > proxy_pass http://Public_IP:8069; > #proxy_next_upstream error timeout invalid_header http_500 http_502 > http_503 http_504; > proxy_redirect off; > proxy_buffering off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > } > } > > File /etc/nginx/nginx.conf content as follows. > > user www-data; > worker_processes 4; > pid /var/run/nginx.pid; > > events { > worker_connections 768; > # multi_accept on; > } > > http { > > ## > # Basic Settings > ## > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > # server_tokens off; > > # server_names_hash_bucket_size 64; > # server_name_in_redirect off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > ## > # Logging Settings > ## > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > > ## > # Gzip Settings > ## > > gzip on; > gzip_disable "msie6"; > > # gzip_vary on; > # gzip_proxied any; > # gzip_comp_level 6; > # gzip_buffers 16 8k; > # gzip_http_version 1.1; > # gzip_types text/plain text/css application/json > application/x-javascript text/xml application/xml application/xml+rss > text/javascript; > > ## > # nginx-naxsi config > ## > # Uncomment it if you installed nginx-naxsi > ## > > #include /etc/nginx/naxsi_core.rules; > > ## > # nginx-passenger config > ## > # Uncomment it if you installed nginx-passenger > ## > > #passenger_root /usr; > #passenger_ruby /usr/bin/ruby; > > ## > # Virtual Host Configs > ## > > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > } > > File /etc/nginx/sites-enabled/default contains as follows. 
> > server { > listen 80; > server_name abc.com; > > location / { > proxy_pass http://localhost:8007; > send_timeout 600; > proxy_read_timeout 120; > proxy_connect_timeout 120; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; > } > } > > server { > listen 80; > server_name pqr.com; > client_max_body_size 200m; > access_log /var/log/nginx/openerp-access.log; > error_log /var/log/nginx/openerp-error.log; > #ssl on; > #ssl_certificate /etc/ssl/nginx/server.crt; > #ssl_certificate_key /etc/ssl/nginx/server.key; > #ssl_session_timeout 5m; > #ssl_prefer_server_ciphers on; > #ssl_protocols SSLv2 SSLv3 TLSv1; > #ssl_ciphers > ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; > > #send_timeout 10m; > > proxy_max_temp_file_size 0; > > client_header_timeout 10m; > client_body_timeout 10m; > send_timeout 10m; > > location /agromanager { > send_timeout 600; > proxy_read_timeout 120; > proxy_connect_timeout 120; > proxy_pass http://localhost:8003; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; > } > > location /web { > send_timeout 600; > proxy_read_timeout 120; > proxy_connect_timeout 120; > proxy_pass http://127.0.0.1:8005; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; > } > > location / { > send_timeout 600; > proxy_read_timeout 120; > proxy_connect_timeout 120; > proxy_pass http://127.0.0.1:8002; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; > } > > > } > > server { > listen 80; > server_name xyz.com; > > location / { > proxy_pass http://localhost:8010; > send_timeout 6000; > proxy_read_timeout 120; > proxy_connect_timeout 120; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forward-For $proxy_add_x_forwarded_for; > } > } > > Thanks, > Imran Khan. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241206#msg-241206 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From glenn at zewt.org Wed Jul 24 22:03:38 2013 From: glenn at zewt.org (Glenn Maynard) Date: Wed, 24 Jul 2013 17:03:38 -0500 Subject: Incorrect redirect protocol when behind a reverse proxy Message-ID: Our nginx server is running on Heroku, which proxies SSL. This mostly works fine, but nginx has one problem with it: since it thinks the protocol is http, any redirects (such as trailing-slash redirects) go to http instead of https. The usual fix for this is X-Forwarded-Proto, but nginx doesn't support that yet, and I haven't found any way to configure it, eg. a configuration directive to override the protocol. Nginx seems to decide whether redirects should go to http or https entirely based on whether the connection has an SSL context associated (ngx_http_header_filter), so it doesn't look like there's any way to affect this in configuration. Is there any workaround? -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... 
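A pattern often used for the situation described above, sketched here with placeholder names, is to derive the effective scheme from the X-Forwarded-Proto header with a map and then reference that variable in any redirect written explicitly in the configuration. Note the assumption: this only helps for redirects you issue yourself; it does not change the redirects nginx generates internally, such as the automatic trailing-slash one.

map $http_x_forwarded_proto $client_scheme {
    default $scheme;   # direct traffic: use the scheme of the actual connection
    https   https;     # SSL terminated upstream: honour the proxy's header
}

server {
    listen 80;

    location /old/ {
        # hypothetical explicit redirect that follows the terminated scheme
        return 301 $client_scheme://$host/new/;
    }
}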
URL: From john at disqus.com Wed Jul 24 22:49:05 2013 From: john at disqus.com (John Watson) Date: Wed, 24 Jul 2013 15:49:05 -0700 Subject: cannot build variables_hash Message-ID: Upgraded from 1.2.9 to 1.4.1 and now started getting: [emerg] could not build the variables_hash, you should increase either variables_hash_max_size: 512 or variables_hash_bucket_size: 64 Same configuration and even dropped (2) 3rd party modules. nginx.conf and ./configure params: https://gist.github.com/dctrwatson/6075317 adding this to http block fixes it: variables_hash_max_size 1024; Any ideas? Or direction on debugging? Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jul 24 22:53:34 2013 From: nginx-forum at nginx.us (toriacht) Date: Wed, 24 Jul 2013 18:53:34 -0400 Subject: Load Balancing and High Availability In-Reply-To: <2932283.Pn3NbrNsg0@lxrosenski.pag> References: <2932283.Pn3NbrNsg0@lxrosenski.pag> Message-ID: <774cdbd22205834f57ab8ed76fdc3841.NginxMailingListEnglish@forum.nginx.org> Hi Axel, Thank you for the reply. I have pasted some of my nginx.conf file below. Can you conform if i'm setting proxy_next_upstream in correct location please? Also, is there a way to have fail_timeout increment per fail? i.e if it failed once try again in 30 secs as it might be minor issue, then if that fails try a again in 2 mins, then 4 mins etc? user nginx; worker_processes 1; error_log /var/log/nginx/error.log; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #set headers proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Queue-Start "t=${msec}000"; # switch to next upstream server in these scenarios proxy_next_upstream http_502 http_503 error; # load balancer # ip_hash provides sticky session upstream balancer { ip_hash; server 127.0.0.1:8180 max_fails=1 fail_timeout=2000s; server 127.0.0.1:8280 max_fails=1 fail_timeout=2000s; } #gzip on; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. 
include /etc/nginx/conf.d/*.conf; server { listen 80; server_name localhost; #charset koi8-r; access_log /var/log/nginx/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #rules for rest location /rest/ { #proxy_pass http://127.0.0.1:8080/MyApp/rest/; proxy_pass http://balancer/MyApp/rest/; } ---- ---- Many thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241152,241211#msg-241211 From francis at daoine.org Thu Jul 25 00:35:50 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 25 Jul 2013 01:35:50 +0100 Subject: cannot build variables_hash In-Reply-To: References: Message-ID: <20130725003550.GA6710@craic.sysops.org> On Wed, Jul 24, 2013 at 03:49:05PM -0700, John Watson wrote: Hi there, > Upgraded from 1.2.9 to 1.4.1 and now started getting: > > [emerg] could not build the variables_hash, you should increase either > variables_hash_max_size: 512 or variables_hash_bucket_size: 64 > adding this to http block fixes it: > variables_hash_max_size 1024; > > Any ideas? Or direction on debugging? What's the problem? You got an error message which asked you to change one of two directives; you checked the documentation for those directives which pointed you to another document which told you which one to change first; you changed that one; and the error state no longer applies. It looks perfect from here. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jul 25 00:36:52 2013 From: nginx-forum at nginx.us (naseeb0077) Date: Wed, 24 Jul 2013 20:36:52 -0400 Subject: Not listing proxy_pass port 8009 In-Reply-To: References: Message-ID: <5a3918fa927d63c5deb14620338b85a3.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks for the prompt reply. I have scanned the listing ports in the servers. Please help me out to fix this issue. I never worked on Nginx server and am totally stuck. # sudo nmap localhost Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-23 20:15 EDT Nmap scan report for localhost (127.0.0.1) Host is up (0.000018s latency). Not shown: 991 closed ports PORT STATE SERVICE 21/tcp open ftp http://www.health.com/health/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241157,241212#msg-241212 From nginx-forum at nginx.us Thu Jul 25 07:23:53 2013 From: nginx-forum at nginx.us (drook) Date: Thu, 25 Jul 2013 03:23:53 -0400 Subject: listen directive Message-ID: <39a1328c60978934bd147f0632f73ee3.NginxMailingListEnglish@forum.nginx.org> Hi. I've noticed that in configuratuion like http { server { server_name some.domain.tld; listen 1.1.1.1; } server { server_name another.domain.tld; listen 1.1.1.1; } server { server_name one_more.domain.tld *.domain.tld; listen 1.1.1.1; } server { server_name last.domain.tld; listen 80; } } weird thing happens: this somehow is similar to the apache directives, where you should use exactly the same "address:port" in "VirtualHost" clause as you used in "NameVirtualHost " directive. For example if you used *, you should use it everywhere, otherwise namevhost won't work. I found that at least on 1.2.1 similar thing happens: vhost "last.domain.tld" may not get any requests at all, they will be served by the *.domain.tld, and, if removed, by the default vhost. So I changed everything to the "listen 80" and it resolved the situation. Is it some bug I stepped on, or is this normal, or, may be I just see things that aren't there ? Thanks. 
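For comparison, a sketch of the consistent form that avoids this effect, assuming 1.1.1.1 is the address the host actually serves on: since "listen 80" on its own is equivalent to *:80, a request arriving on 1.1.1.1:80 is matched against the more specific 1.1.1.1:80 group of servers, and the *:80 server is never consulted for it.

http {
    server { listen 1.1.1.1:80; server_name some.domain.tld; }
    server { listen 1.1.1.1:80; server_name another.domain.tld; }
    server { listen 1.1.1.1:80; server_name one_more.domain.tld *.domain.tld; }
    server { listen 1.1.1.1:80; server_name last.domain.tld; }   # same listen as the rest, so name matching applies
}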
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241215,241215#msg-241215 From lists at ruby-forum.com Thu Jul 25 07:26:50 2013 From: lists at ruby-forum.com (Furious S.) Date: Thu, 25 Jul 2013 09:26:50 +0200 Subject: 404 not found afther reboot Message-ID: <6e758bbacb1ec0484b0de473471bdae1@ruby-forum.com> Hi all, I have a problem Well yesterday everthing was working fine the website etc etc etc but the system behind whmcs needed ioncube loaders so i did install everything right and reboot the server afther that and now im seeing 404 not found can someone help me ? -- Posted via http://www.ruby-forum.com/. From igor at sysoev.ru Thu Jul 25 07:29:53 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 25 Jul 2013 11:29:53 +0400 Subject: listen directive In-Reply-To: <39a1328c60978934bd147f0632f73ee3.NginxMailingListEnglish@forum.nginx.org> References: <39a1328c60978934bd147f0632f73ee3.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Jul 25, 2013, at 11:23 , drook wrote: > Hi. > > I've noticed that in configuratuion like > > http { > server { > server_name some.domain.tld; > listen 1.1.1.1; > } > server { > server_name another.domain.tld; > listen 1.1.1.1; > } > server { > server_name one_more.domain.tld *.domain.tld; > listen 1.1.1.1; > } > server { > server_name last.domain.tld; > listen 80; > } > } > > weird thing happens: this somehow is similar to the apache directives, where > you should use exactly the same "address:port" in "VirtualHost" clause as > you used in "NameVirtualHost " directive. For example if you > used *, you should use it everywhere, otherwise namevhost won't work. I > found that at least on 1.2.1 similar thing happens: vhost "last.domain.tld" > may not get any requests at all, they will be served by the *.domain.tld, > and, if removed, by the default vhost. So I changed everything to the > "listen 80" and it resolved the situation. > > Is it some bug I stepped on, or is this normal, or, may be I just see things > that aren't there ? http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers -- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Thu Jul 25 07:50:02 2013 From: nginx-forum at nginx.us (drook) Date: Thu, 25 Jul 2013 03:50:02 -0400 Subject: listen directive In-Reply-To: References: Message-ID: <055b085215487d71fc954a80c2fdd259.NginxMailingListEnglish@forum.nginx.org> Thanks, I have read this carefully, but this hasn't become more clear for me - all the examples contain the IP defined, what about undefined IP (which, as I understand, means "listen to this port everywhere") ? Does it receive less priority ? Plus, in my case server_name is defined too. I'm not complaining, I'm trying to understand better. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241215,241218#msg-241218 From francis at daoine.org Thu Jul 25 08:00:30 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 25 Jul 2013 09:00:30 +0100 Subject: listen directive In-Reply-To: <055b085215487d71fc954a80c2fdd259.NginxMailingListEnglish@forum.nginx.org> References: <055b085215487d71fc954a80c2fdd259.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130725080030.GB6710@craic.sysops.org> On Thu, Jul 25, 2013 at 03:50:02AM -0400, drook wrote: Hi there, > Thanks, I have read this carefully, but this hasn't become more clear for me > - all the examples contain the IP defined, what about undefined IP (which, > as I understand, means "listen to this port everywhere") ? Does it receive > less priority ? 
Plus, in my case server_name is defined too. I'm not > complaining, I'm trying to understand better. A request comes in on an ip:port. The "listen" directives in each server{} block are considered, to find the best-match one for that ip:port. Only server{} blocks with that "listen" directive are considered for the next stage. (Absent "listen", or "listen" without both a port and address, assume the default values for the missing parts.) After that, the server_name is considered, and the one best-match server_name from the many best-match "listen" servers is chosen. Is that any clearer? Can you suggest a wording for the manual that might have made it clearer to you on first reading? A consequence is that if you want every server{} to be potentially considered for server_name matching for all requests, you must include the same best-match "listen" directive in each server{}. Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jul 25 08:34:53 2013 From: nginx-forum at nginx.us (drook) Date: Thu, 25 Jul 2013 04:34:53 -0400 Subject: listen directive In-Reply-To: <20130725080030.GB6710@craic.sysops.org> References: <20130725080030.GB6710@craic.sysops.org> Message-ID: <33e0de4d53507a73ea0348d054df5ec2.NginxMailingListEnglish@forum.nginx.org> Thanks a lot, this is much clearer. Actually, 'absent "listen", or "listen" without both a port and address, assume the default values for the missing parts' explains everything, and I think the manual would be more clear mentioning this explicitly (still imo). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241215,241220#msg-241220 From nginx-forum at nginx.us Thu Jul 25 08:41:51 2013 From: nginx-forum at nginx.us (olfativo) Date: Thu, 25 Jul 2013 04:41:51 -0400 Subject: Use directive value inside nginx.conf Message-ID: <2cd999e401eaede11601f38bc8122fb0.NginxMailingListEnglish@forum.nginx.org> Hi! I need to send to my application through "fastcgi_param" the value of "client_max_body_size" directive, which is defined in http block in nginx.conf Is there any way to do this without moving the declaration of that directive from http block? So I have this: http { ... client_max_body_size 5m; ... } And need this: server { ... location xxx { fastcgi_param SOME_NAME ; } ... } Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241221,241221#msg-241221 From nginx-forum at nginx.us Thu Jul 25 08:52:38 2013 From: nginx-forum at nginx.us (shawnxzhou) Date: Thu, 25 Jul 2013 04:52:38 -0400 Subject: How to make the log file printed by access_log split Message-ID: <48735034f9788f65dd333523824a5652.NginxMailingListEnglish@forum.nginx.org> what's the limit of the size of log file, and what will happen when it reaches the limitation? if I want to split the log file by timeline, say start a new file on the beginning of an hour, how can I configure ngnix? thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241226,241226#msg-241226 From ru at nginx.com Thu Jul 25 09:13:12 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 25 Jul 2013 13:13:12 +0400 Subject: listen directive In-Reply-To: <33e0de4d53507a73ea0348d054df5ec2.NginxMailingListEnglish@forum.nginx.org> References: <20130725080030.GB6710@craic.sysops.org> <33e0de4d53507a73ea0348d054df5ec2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130725091312.GG55404@lo0.su> On Thu, Jul 25, 2013 at 04:34:53AM -0400, drook wrote: > Thanks a lot, this is much clearer. 
Actually, 'absent "listen", or "listen" > without both a port and address, assume > the default values for the missing parts' explains everything, and I think > the manual would be more clear mentioning this explicitly (still imo). The documentation http://nginx.org/en/docs/http/ngx_http_core_module.html#listen says, in particular: : If only _address_ is given, the port 80 is used. : : If directive is not present then either the *:80 is used if nginx : runs with superuser privileges, or *:8000 otherwise. From mdounin at mdounin.ru Thu Jul 25 09:29:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 13:29:13 +0400 Subject: How to make the log file printed by access_log split In-Reply-To: <48735034f9788f65dd333523824a5652.NginxMailingListEnglish@forum.nginx.org> References: <48735034f9788f65dd333523824a5652.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130725092913.GF90722@mdounin.ru> Hello! On Thu, Jul 25, 2013 at 04:52:38AM -0400, shawnxzhou wrote: > what's the limit of the size of log file, and what will happen when it > reaches the limitation? > if I want to split the log file by timeline, say start a new file on the > beginning of an hour, how can I configure ngnix? This isn't something nginx is expected to do. Instead, this is what your favorite log rotation program does, and then uses USR1 signal to instruct nginx to reopen log files, see [1]. Consult your OS documentation for more details, usually "man newsyslog" or "man logrotate" helps. [1] http://nginx.org/en/docs/control.html#logs -- Maxim Dounin http://nginx.org/en/donation.html From rkearsley at blueyonder.co.uk Thu Jul 25 09:39:28 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Thu, 25 Jul 2013 10:39:28 +0100 Subject: How to make the log file printed by access_log split In-Reply-To: <48735034f9788f65dd333523824a5652.NginxMailingListEnglish@forum.nginx.org> References: <48735034f9788f65dd333523824a5652.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51F0F250.7000907@blueyonder.co.uk> Hi There's no size limit, it will keep getting bigger until your disk is full Here's a script I use to rotate the log, run it from cron every hour hope it helps #!/bin/sh PID=`cat /usr/local/nginx/logs/nginx.pid` LOG="/usr/local/nginx/logs/access.log" NOW=$(date +"%Y-%m-%d-%H-%M") NEWLOG="${LOG}.${NOW} mv ${LOG} ${NEWLOG} kill -USR1 ${PID} gzip ${NEWLOG} On 25/07/13 09:52, shawnxzhou wrote: > what's the limit of the size of log file, and what will happen when it > reaches the limitation? > if I want to split the log file by timeline, say start a new file on the > beginning of an hour, how can I configure ngnix? > > thanks > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241226,241226#msg-241226 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From contact at jpluscplusm.com Thu Jul 25 09:58:23 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Jul 2013 10:58:23 +0100 Subject: 404 not found afther reboot In-Reply-To: <6e758bbacb1ec0484b0de473471bdae1@ruby-forum.com> References: <6e758bbacb1ec0484b0de473471bdae1@ruby-forum.com> Message-ID: On 25 Jul 2013 08:27, "Furious S." wrote: > I have a problem [snip no detail] > can someone help me ? Come on chap - what would /you/ do if someone sent you a bug report with so little information in it? /Think/ before you email a list! J -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From contact at jpluscplusm.com Thu Jul 25 10:03:55 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Jul 2013 11:03:55 +0100 Subject: =?UTF-8?Q?Re=3A_Hashes_/_Arrays_f=C3=BCr_=24arg=5FPARAMETER?= In-Reply-To: <261a71e64ba23ba24feb21405e134571@ruby-forum.com> References: <261a71e64ba23ba24feb21405e134571@ruby-forum.com> Message-ID: On 24 Jul 2013 21:30, "Maik Unger" wrote: > > Hello @all, > > Is it possible to define array / hashes in nginx? I have the following > idea: > > nginx should check $arg_PARAMETER which is defined by an array. If the > right element in the array, then die access is permitted, otherwise > denied. > > example: > > the $arg_PARAMETER can have the following values: > > "ab" or "bc" or cd" > > if $arg_PARAMATER != array -> Access is denied, otherwise permitted $arg_* are strings because they derive from the query string. They are not arrays. You can interpret these strings using any of the methods nginx gives you. > I would appreciate about a solution. I suggest you'll have to concoct a solution using nginx maps with regular expressions. That's a very powerful method of switching behaviours based on user input. Have a Google for their docs and for interesting ways of using them. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Thu Jul 25 10:11:02 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Jul 2013 11:11:02 +0100 Subject: Incorrect redirect protocol when behind a reverse proxy In-Reply-To: References: Message-ID: On 24 Jul 2013 23:03, "Glenn Maynard" wrote: > > Our nginx server is running on Heroku, which proxies SSL. What does this mean? Do you see SSL traffic, or do you mean heroku terminates the ssl leaving you with http connections only? > This mostly works fine, but nginx has one problem with it: since it thinks the protocol is http, any redirects (such as trailing-slash redirects) go to http instead of https. Show us some config that generates these redirects. > The usual fix for this is X-Forwarded-Proto, but nginx doesn't support that yet That doesn't make sense to me. What is there to support? You can just write your redirect directives using X-F-P instead of hard coding the scheme. > and I haven't found any way to configure it, eg. a configuration directive to override the protocol. > Nginx seems to decide whether redirects should go to http or https entirely based on whether the connection has an SSL context associated (ngx_http_header_filter), so it doesn't look like there's any way to affect this in configuration. You've gone into the code far too early IMHO. There's usually a way to change nginx's behaviour in config. > > Is there any workaround? Do all requests have x-f-p? 100%? Then just change your redirects to reference it. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Thu Jul 25 10:55:07 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Thu, 25 Jul 2013 14:55:07 +0400 Subject: cannot build variables_hash In-Reply-To: References: Message-ID: <613C34BB-BDDA-4352-84DF-232198BDD8BC@nginx.com> On Jul 25, 2013, at 2:49 AM, John Watson wrote: > Upgraded from 1.2.9 to 1.4.1 and now started getting: > > [emerg] could not build the variables_hash, you should increase either variables_hash_max_size: 512 or variables_hash_bucket_size: 64 > > Same configuration and even dropped (2) 3rd party modules. 
> > nginx.conf and ./configure params: https://gist.github.com/dctrwatson/6075317 > > adding this to http block fixes it: > variables_hash_max_size 1024; > > Any ideas? Or direction on debugging? Hi John, Any chance you could check it with 1.4.1+ and without _any_ 3rd party modules? > Thanks, > > John > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Thu Jul 25 11:34:34 2013 From: lists at ruby-forum.com (Maik Unger) Date: Thu, 25 Jul 2013 13:34:34 +0200 Subject: =?UTF-8?Q?Re=3A_Hashes_/_Arrays_f=C3=BCr_=24arg=5FPARAMETER?= In-Reply-To: <261a71e64ba23ba24feb21405e134571@ruby-forum.com> References: <261a71e64ba23ba24feb21405e134571@ruby-forum.com> Message-ID: <20e003b56a58411ac76fe321fc053677@ruby-forum.com> hi, i think that you don't understand me. i will define an array that will check against the $arg_PARAMETER. Example: Array with the following Values: array ("test"; "test2) Query-String: http://domain.com/?user=test $arg_user = "test" if the $arg_PARAMETER match with on of the array values, the access to the webseite is allow, otherwise the access is disallowed. -- Posted via http://www.ruby-forum.com/. From me at myconan.net Thu Jul 25 12:50:56 2013 From: me at myconan.net (edogawaconan) Date: Thu, 25 Jul 2013 21:50:56 +0900 Subject: =?UTF-8?Q?Re=3A_Hashes_/_Arrays_f=C3=BCr_=24arg=5FPARAMETER?= In-Reply-To: <20e003b56a58411ac76fe321fc053677@ruby-forum.com> References: <261a71e64ba23ba24feb21405e134571@ruby-forum.com> <20e003b56a58411ac76fe321fc053677@ruby-forum.com> Message-ID: On Thu, Jul 25, 2013 at 8:34 PM, Maik Unger wrote: > hi, > > i think that you don't understand me. > > i will define an array that will check against the $arg_PARAMETER. > > Example: > > Array with the following Values: > > array ("test"; "test2) > > Query-String: > > http://domain.com/?user=test > > $arg_user = "test" > > if the $arg_PARAMETER match with on of the array values, the access to > the webseite is allow, otherwise the access is disallowed. > Something like this should work: map $arg_user $allowed { default 0; test 1; test2 1; } -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From dennisml at conversis.de Thu Jul 25 13:05:13 2013 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Thu, 25 Jul 2013 15:05:13 +0200 Subject: Custom forced 503 page does not work Message-ID: <51F12289.3060804@conversis.de> Hi, I'm trying to deliver a custom 503 page for all requests the server receives but with the following config it doesn't work. server { listen 81; server_name localhost; error_log /var/log/nginx/error.log debug; root /usr/share/nginx/html; location / { return 503; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } I get a 503 response but the 50x.html page is not used even though it is present in /usr/share/nginx/html/50x.html. I see no error in the error log even in debug mode. What is wrong with this config? 
Regards, Dennis From contact at jpluscplusm.com Thu Jul 25 14:26:53 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Jul 2013 15:26:53 +0100 Subject: =?UTF-8?Q?Re=3A_Hashes_/_Arrays_f=C3=BCr_=24arg=5FPARAMETER?= In-Reply-To: <20e003b56a58411ac76fe321fc053677@ruby-forum.com> References: <261a71e64ba23ba24feb21405e134571@ruby-forum.com> <20e003b56a58411ac76fe321fc053677@ruby-forum.com> Message-ID: On 25 Jul 2013 12:34, "Maik Unger" wrote: > > hi, > > i think that you don't understand me. I understand you just fine. You want to use the nginx config language in a way it can't be, because it's not a general purpose language and the implementors didn't put in the specific feature you want to use. I've given you a pointer towards the nginx-y way of achieving the end result you've described. HTH, J -------------- next part -------------- An HTML attachment was scrubbed... URL: From glenn at zewt.org Thu Jul 25 14:42:49 2013 From: glenn at zewt.org (Glenn Maynard) Date: Thu, 25 Jul 2013 09:42:49 -0500 Subject: Incorrect redirect protocol when behind a reverse proxy In-Reply-To: References: Message-ID: On Thu, Jul 25, 2013 at 5:11 AM, Jonathan Matthews wrote: > What does this mean? Do you see SSL traffic, or do you mean heroku > terminates the ssl leaving you with http connections only? > Heroku handles SSL, and nginx sees only HTTP traffic. > > This mostly works fine, but nginx has one problem with it: since it > thinks the protocol is http, any redirects (such as trailing-slash > redirects) go to http instead of https. > > Show us some config that generates these redirects. > > > The usual fix for this is X-Forwarded-Proto, but nginx doesn't support > that yet > > That doesn't make sense to me. What is there to support? You can just > write your redirect directives using X-F-P instead of hard coding the > scheme. > I'm not hardcoding anything. Nginx is generating its own redirects. The case I'm seeing currently is ngx_http_static_module redirecting to add a trailing slashes to URLs. > You've gone into the code far too early IMHO. There's usually a way to > change nginx's behaviour in config. > I've gone into the code precisely to find out how to do that, since the documentation wasn't helping. I was surprised to discover that the protocol seems to be hardcoded. > Do all requests have x-f-p? 100%? Then just change your redirects to > reference it. > I don't have any redirects. Nginx is doing this on its own. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Thu Jul 25 15:53:25 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Jul 2013 16:53:25 +0100 Subject: Incorrect redirect protocol when behind a reverse proxy In-Reply-To: References: Message-ID: On 25 Jul 2013 15:43, "Glenn Maynard" wrote: > > On Thu, Jul 25, 2013 at 5:11 AM, Jonathan Matthews < contact at jpluscplusm.com> wrote: >> >> What does this mean? Do you see SSL traffic, or do you mean heroku terminates the ssl leaving you with http connections only? > > Heroku handles SSL, and nginx sees only HTTP traffic. >> >> > This mostly works fine, but nginx has one problem with it: since it thinks the protocol is http, any redirects (such as trailing-slash redirects) go to http instead of https. >> >> Show us some config that generates these redirects. >> >> > The usual fix for this is X-Forwarded-Proto, but nginx doesn't support that yet >> >> That doesn't make sense to me. What is there to support? 
You can just write your redirect directives using X-F-P instead of hard coding the scheme. > > I'm not hardcoding anything. Nginx is generating its own redirects. The case I'm seeing currently is ngx_http_static_module redirecting to add a trailing slashes to URLs On my phone's browser, searching for that module name doesn't bring me anything useful I'm afraid. Are you just serving local files off disk? I bet you have redirects configured somewhere, or a backend is generating them ;-) Please post your entire config. >> You've gone into the code far too early IMHO. There's usually a way to change nginx's behaviour in config. > > I've gone into the code precisely to find out how to do that, since the documentation wasn't helping. I was surprised to discover that the protocol seems to be hardcoded. >> >> Do all requests have x-f-p? 100%? Then just change your redirects to reference it. > > I don't have any redirects. Nginx is doing this on its own. In response to what class of request? What's common across them? J -------------- next part -------------- An HTML attachment was scrubbed... URL: From glenn at zewt.org Thu Jul 25 16:14:40 2013 From: glenn at zewt.org (Glenn Maynard) Date: Thu, 25 Jul 2013 11:14:40 -0500 Subject: Incorrect redirect protocol when behind a reverse proxy In-Reply-To: References: Message-ID: On Thu, Jul 25, 2013 at 10:53 AM, Jonathan Matthews wrote: > On my phone's browser, searching for that module name doesn't bring me > anything useful I'm afraid. Are you just serving local files off disk? > src/http/modules/ngx_http_static_module.c. This is where the trailing-slash redirects originate. > I bet you have redirects configured somewhere, or a backend is > generating them ;-) > > Please post your entire config. > I don't. It happens with a minimal configuration. events { } http { server { listen 10000; root "data"; } } mkdir -p data/test/, and then accessing "http://localhost:10000/test" redirects to "http://localhost:10000/test/". -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Jul 25 17:29:00 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 25 Jul 2013 21:29:00 +0400 Subject: Update nginx with Ubuntu PPA In-Reply-To: <14f193d1417e1440cc9ed4563ce36fbd.NginxMailingListEnglish@forum.nginx.org> References: <14f193d1417e1440cc9ed4563ce36fbd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201307252129.00920.vbart@nginx.com> On Tuesday 23 July 2013 12:24:38 JackB wrote: > openletter Wrote: > ------------------------------------------------------- > > > If you are using the apt-get upgrade or aptitude upgrade commands, the > > service will be restarted for you. > > This might be a little off topic, but how can one upgrade nginx on ubuntu > with the official ppa via apt without having a restart of nginx but an > upgrade instead? (/etc/init.d/nginx upgrade) > Please note, there is no "official ppa". Official nginx repositories for Ubuntu (and other Linux ditros) are here: http://nginx.org/en/linux_packages.html wbr, Valentin V. 
Bartenev -- http://nginx.org/en/donation.html From contact at jpluscplusm.com Thu Jul 25 18:41:27 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 25 Jul 2013 19:41:27 +0100 Subject: Incorrect redirect protocol when behind a reverse proxy In-Reply-To: References: Message-ID: On 25 July 2013 17:14, Glenn Maynard wrote: > On Thu, Jul 25, 2013 at 10:53 AM, Jonathan Matthews > wrote: >> >> On my phone's browser, searching for that module name doesn't bring me >> anything useful I'm afraid. Are you just serving local files off disk? > > src/http/modules/ngx_http_static_module.c. This is where the trailing-slash > redirects originate. >> >> I bet you have redirects configured somewhere, or a backend is generating >> them ;-) >> >> Please post your entire config. > > I don't. It happens with a minimal configuration. > > events { } > http { > server { > listen 10000; > root "data"; > } > } > > mkdir -p data/test/, and then accessing "http://localhost:10000/test" > redirects to "http://localhost:10000/test/". I've just got to a box and can ACK that. I can make that stop with a correctly configured try_files, which I would always choose to have set up, myself. That may not be a solution for you however. Here's a way I've just tested (on 1.4.2) that forces the trailing-slash redirects to incorporate a random HTTP header ("foo", here) as their scheme: # include your boilerplate as per previous email location / { location ~ "^(.*)[^/]$" { rewrite ^ $http_foo://$http_host$uri/ permanent; } } Or, supposing you have certain URIs which *can* end in not-a-trailing-slash: (also tested on 1.4.2) location / { if (-d $document_root$uri) { rewrite ^ $http_foo://$http_host$uri/ permanent; } } I suppose the question is then: what *other* classes of automatic redirects do you find yourself hitting, and can you deterministically isolate their URIs using either a location{} or if{}, so that you can pre-empt the auto redirect in order to incorporate the X-forwarded-proto header? HTH, J -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From jan.algermissen at nordsc.com Thu Jul 25 19:09:34 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Thu, 25 Jul 2013 21:09:34 +0200 Subject: .spec file for 1.4.2 RPM Message-ID: <9CC804FA-AA10-447C-8324-96F169C00AE7@nordsc.com> Hi, can anyone point me to the RPM .spec file for the binary RHEL 6 RPM? I need to create a binary RPM in a certain context and look for a quick starting point. Jan From sb at nginx.com Thu Jul 25 19:42:01 2013 From: sb at nginx.com (Sergey Budnevitch) Date: Thu, 25 Jul 2013 23:42:01 +0400 Subject: .spec file for 1.4.2 RPM In-Reply-To: <9CC804FA-AA10-447C-8324-96F169C00AE7@nordsc.com> References: <9CC804FA-AA10-447C-8324-96F169C00AE7@nordsc.com> Message-ID: <9BEDC1DD-5C83-4072-A8DC-1DB9A8DC2DF3@nginx.com> On 25 Jul2013, at 23:09 , Jan Algermissen wrote: > Hi, > > can anyone point me to the RPM .spec file for the binary RHEL 6 RPM? > > I need to create a binary RPM in a certain context and look for a quick starting point. 
Take it from srpm: http://nginx.org/packages/rhel/6/SRPMS/nginx-1.4.2-1.el6.ngx.src.rpm From jan.algermissen at nordsc.com Thu Jul 25 20:09:58 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Thu, 25 Jul 2013 22:09:58 +0200 Subject: .spec file for 1.4.2 RPM In-Reply-To: <9BEDC1DD-5C83-4072-A8DC-1DB9A8DC2DF3@nginx.com> References: <9CC804FA-AA10-447C-8324-96F169C00AE7@nordsc.com> <9BEDC1DD-5C83-4072-A8DC-1DB9A8DC2DF3@nginx.com> Message-ID: <5FC29583-7522-4BCD-98F0-851C193AF8A0@nordsc.com> On 25.07.2013, at 21:42, Sergey Budnevitch wrote: > > On 25 Jul2013, at 23:09 , Jan Algermissen wrote: > >> Hi, >> >> can anyone point me to the RPM .spec file for the binary RHEL 6 RPM? >> >> I need to create a binary RPM in a certain context and look for a quick starting point. > > Take it from srpm: http://nginx.org/packages/rhel/6/SRPMS/nginx-1.4.2-1.el6.ngx.src.rpm Ah, ok. Sorry for being n00b maybe ... so I take that .spec and delete all the nginx building stuff? Wouldn't it be simpler to just take the binary .spec and copy what I also need into that? Anyhow - I understand you to be saying that the installed files in the srpm are the exact same files that the rpm would install? Jan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jan.algermissen at nordsc.com Thu Jul 25 21:43:09 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Thu, 25 Jul 2013 23:43:09 +0200 Subject: .spec file for 1.4.2 RPM In-Reply-To: <5FC29583-7522-4BCD-98F0-851C193AF8A0@nordsc.com> References: <9CC804FA-AA10-447C-8324-96F169C00AE7@nordsc.com> <9BEDC1DD-5C83-4072-A8DC-1DB9A8DC2DF3@nginx.com> <5FC29583-7522-4BCD-98F0-851C193AF8A0@nordsc.com> Message-ID: <2E3871DE-6B6E-42A2-AD63-57E7C43DCDB8@nordsc.com> On 25.07.2013, at 22:09, Jan Algermissen wrote: > > Ah, ok. Sorry for being n00b maybe . Damn ... got it. The SRPM conatins the .spec file to build the RPM. Oh my - stupid me. Jan > Jan > > > >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From paulnpace at gmail.com Thu Jul 25 23:22:53 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Thu, 25 Jul 2013 16:22:53 -0700 Subject: Update nginx with Ubuntu PPA In-Reply-To: <201307252129.00920.vbart@nginx.com> References: <14f193d1417e1440cc9ed4563ce36fbd.NginxMailingListEnglish@forum.nginx.org> <201307252129.00920.vbart@nginx.com> Message-ID: On Thu, Jul 25, 2013 at 10:29 AM, Valentin V. Bartenev wrote: > On Tuesday 23 July 2013 12:24:38 JackB wrote: >> openletter Wrote: >> ------------------------------------------------------- >> >> > If you are using the apt-get upgrade or aptitude upgrade commands, the >> > service will be restarted for you. >> >> This might be a little off topic, but how can one upgrade nginx on ubuntu >> with the official ppa via apt without having a restart of nginx but an >> upgrade instead? (/etc/init.d/nginx upgrade) >> > > Please note, there is no "official ppa". > > Official nginx repositories for Ubuntu (and other Linux ditros) are here: > http://nginx.org/en/linux_packages.html > > wbr, Valentin V. Bartenev Yes, there is no official PPA. The PPA I and many others use is unofficial, but seems to be well maintained (thanks for that, whoever you are). 
Someone wanting to use the same unofficial repository may execute the following:

add-apt-repository ppa:nginx/stable
apt-get update
apt-get install nginx

If you want to use the devel version, then replace ppa:nginx/stable with ppa:nginx/development. If you don't have add-apt-repository, then apt-get install python-software-properties. From nginx-forum at nginx.us Fri Jul 26 00:04:29 2013 From: nginx-forum at nginx.us (nenad) Date: Thu, 25 Jul 2013 20:04:29 -0400 Subject: nginx 1.4.x file upload multipart ecoded Message-ID: <83758930f66508dbb8960557b0fca99c.NginxMailingListEnglish@forum.nginx.org> How can I handle multipart data uploads with the latest nginx release? The good old upload module won't compile anymore. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241277,241277#msg-241277 From nginx-forum at nginx.us Fri Jul 26 04:32:44 2013 From: nginx-forum at nginx.us (drook) Date: Fri, 26 Jul 2013 00:32:44 -0400 Subject: listen directive In-Reply-To: <20130725091312.GG55404@lo0.su> References: <20130725091312.GG55404@lo0.su> Message-ID: Actually, I had read this part of the manual carefully before creating this topic. I still think the quoted passage explains nothing about virtualhost handling. It doesn't explain that skipping the address or port will result in what I prefer to call "less prioritized vhosts". Maybe it's obvious from the parts you and Igor pointed at, but not to me, sorry. I understood the concept (at least I think I did), so no further questions left - thanks. I think you don't intend to discuss how to write the documentation. :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241215,241278#msg-241278 From nginx-forum at nginx.us Fri Jul 26 04:34:51 2013 From: nginx-forum at nginx.us (JackB) Date: Fri, 26 Jul 2013 00:34:51 -0400 Subject: Update nginx with Ubuntu PPA In-Reply-To: <201307252129.00920.vbart@nginx.com> References: <201307252129.00920.vbart@nginx.com> Message-ID: <788832df02119bdb22fe8a292218e6ec.NginxMailingListEnglish@forum.nginx.org> Valentin V. Bartenev Wrote: > > This might be a little off topic, but how can one upgrade nginx on ubuntu > > with the official ppa via apt without having a restart of nginx but an > > upgrade instead? (/etc/init.d/nginx upgrade) > Please note, there is no "official ppa". > > Official nginx repositories for Ubuntu (and other Linux ditros) are > here: > http://nginx.org/en/linux_packages.html Oh, I meant your repositories but called them a PPA. Sorry for that. Will there be a way to do an upgrade instead of an automatic restart when the binary/package is updated in the future? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241123,241279#msg-241279 From agentzh at gmail.com Fri Jul 26 05:03:53 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 25 Jul 2013 22:03:53 -0700 Subject: nginx 1.4.x file upload multipart ecoded In-Reply-To: <83758930f66508dbb8960557b0fca99c.NginxMailingListEnglish@forum.nginx.org> References: <83758930f66508dbb8960557b0fca99c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Thu, Jul 25, 2013 at 5:04 PM, nenad wrote: > How I can handle multipart data uploads with the latest nginx release?
> Take a look at ngx_lua module and the lua-resty-upload library: https://github.com/agentzh/lua-resty-upload Best regards, -agentzh From sb at nginx.com Fri Jul 26 09:30:18 2013 From: sb at nginx.com (Sergey Budnevitch) Date: Fri, 26 Jul 2013 13:30:18 +0400 Subject: Update nginx with Ubuntu PPA In-Reply-To: <788832df02119bdb22fe8a292218e6ec.NginxMailingListEnglish@forum.nginx.org> References: <201307252129.00920.vbart@nginx.com> <788832df02119bdb22fe8a292218e6ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2882059B-2B88-4D92-9D05-96CB5858E239@nginx.com> On 26 Jul2013, at 08:34 , JackB wrote: > Valentin V. Bartenev Wrote: >>> This might be a little off topic, but how can one upgrade nginx on > ubuntu >>> with the official ppa via apt without having a restart of nginx but an >>> upgrade instead? (/etc/init.d/nginx upgrade) > >> Please note, there is no "official ppa". >> >> Official nginx repositories for Ubuntu (and other Linux ditros) are >> here: >> http://nginx.org/en/linux_packages.html > > Oh, I meant your repositories but named it ppa. Sorry for that. > > Will there be a way of having an upgrade instead of an automatic restart in > case of binary/package updates in the future? Our package already calls '/etc/init.d/nginx upgrade' on update From nick at livejournalinc.com Fri Jul 26 09:33:42 2013 From: nick at livejournalinc.com (Nick Toseland) Date: Fri, 26 Jul 2013 10:33:42 +0100 Subject: Help with Keepalive on NGINX Message-ID: <51F24276.3000408@livejournalinc.com> Hi All, I am struggling to get keepalives to work on Nginx, I believe I have added the necessary configuration. Can you take a look and suggest where I may be going wrong or what I am missing? We have a nginx server that sits in front of a varnish server which sits in front of the back-end servers http { keepalive_timeout 120; keepalive_requests 500; upstream varnish_rand { server 172.21.1.1:80; keepalive 256; } server { server_name www.abc.com location / { proxy_pass http://varnish_rand; proxy_http_version 1.1; } } You can see that the nginx server is using HTTP1.1 to communicate with Varnish *varnishtop -i RxProtocol * list length 1 9.67 RxProtocol HTTP/1.1 However there nginx always send a "connection close" *varnishtop -c -i RxHeader -I "Connection"* list length 1 9.59 RxHeader Connection: close Thanks in advance Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jul 26 10:03:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Jul 2013 14:03:21 +0400 Subject: Help with Keepalive on NGINX In-Reply-To: <51F24276.3000408@livejournalinc.com> References: <51F24276.3000408@livejournalinc.com> Message-ID: <20130726100321.GM90722@mdounin.ru> Hello! On Fri, Jul 26, 2013 at 10:33:42AM +0100, Nick Toseland wrote: > Hi All, > > I am struggling to get keepalives to work on Nginx, I believe I have > added the necessary configuration. Can you take a look and suggest > where I may be going wrong or what I am missing? 
> > We have a nginx server that sits in front of a varnish server which > sits in front of the back-end servers > > http { > > keepalive_timeout 120; > keepalive_requests 500; > > upstream varnish_rand { > server 172.21.1.1:80; > keepalive 256; > } > > server { > server_name www.abc.com > > location / { > > proxy_pass http://varnish_rand; > proxy_http_version 1.1; > > } > } > > You can see that the nginx server is using HTTP1.1 to communicate > with Varnish > > *varnishtop -i RxProtocol * > list length 1 > > 9.67 RxProtocol HTTP/1.1 > > However there nginx always send a "connection close" > > *varnishtop -c -i RxHeader -I "Connection"* > > list length 1 > > 9.59 RxHeader Connection: close That's because nginx uses "Connection: close" by default, regardless of a protocol version used. If you want keepalive to work, you have to instruct nginx to don't send "Connection: close" using the proxy_set_header directive. For more information see documentation at http://nginx.org/r/keepalive. -- Maxim Dounin http://nginx.org/en/donation.html From ian.hobson at ntlworld.com Fri Jul 26 10:19:45 2013 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Fri, 26 Jul 2013 11:19:45 +0100 Subject: index and location not working together properly? Message-ID: <51F24D41.5070808@ntlworld.com> Hi all, Still fighting with my nginx configuration. What I want to achieve. If static file exists in "reseller" root, serve from "reseller" root else if static file file exists in "central" root, then serve from "Central" root else reply with 404 endif If php file exists in "reseller" root, serve from "reseller" root via FCGI else if php file file exists in "central" root, then serve from "Central" root via FCGI else reply with 404 endif if no query given serve "index.php" according to above rules. end if I have got everything working, except the last bit. If index.php exists ONLY in the central root, then I get a 403. I've tried rewrite. I've tried index. Tried "try_files $uri index.php". Below I have tried a special location to force things - it doesn't work either. Here is my config # reseller on anake # # This is the development version - in reseller and served without ssl! 
# server { server_name reseller.anake.hcs; listen 80; fastcgi_read_timeout 300; index index.php; root /home/ian/websites/reseller/htdocs; # if / then redirect to index.php location = / { # serve /index.php rewrite ^$ /index.php last; } # if local php file exits, serve with fcgi location ~ \.php$ { try_files $uri $uri/ @masterphp; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } # serve php file from master root location @masterphp { root /home/ian/websites/coachmaster3dev/htdocs; try_files $uri /index.php =404; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } # serve local static files if they exist try_files $uri @masterstatic; # switch to master set location @masterstatic { root /home/ian/websites/coachmaster3dev/htdocs; try_files $uri =404; } } reseller.anake.hcs/index.php results in if reseller/htdocs/index.php exists it is served - correctly if reseller/htdocs/index.php does not exist and coachmster3dev/htdocs/index.php exists, then the coachmaster3 version is served - correctly However reseller.anake.hcs results in if reseller/htdocs/index.php exists it is served - correctly if reseller/htdocs/index.php does not exist and coachmster3dev/htdocs/index.php exists, then I get a 403 Forbidden. It should serve the coachmater3dev version. I have checked many times, and there are NO permission problems on directories and files. So what is going wrong. It appears as if try_files ?? ?? @somewhere ; disables index. Thanks for your help. Ian p.s. Is there an alternate approach that I might try? -- Ian Hobson 31 Sheerwater, Northampton NN3 5HU, Tel: 01604 513875 Preparing eBooks for Kindle and ePub formats to give the best reader experience. From djczaski at gmail.com Fri Jul 26 11:50:47 2013 From: djczaski at gmail.com (djczaski) Date: Fri, 26 Jul 2013 07:50:47 -0400 Subject: nginx 1.4.x file upload multipart ecoded In-Reply-To: <83758930f66508dbb8960557b0fca99c.NginxMailingListEnglish@forum.nginx.org> References: <83758930f66508dbb8960557b0fca99c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have the same problem. The upload module was exactly what I needed. I don't understand the complexity of why it can't be made to work in 1.4.x+. On Thu, Jul 25, 2013 at 8:04 PM, nenad wrote: > How I can handle multipart data uploads with the latest nginx release? > > The good old upload module won^t compile anymore. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,241277,241277#msg-241277 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at styleflare.com Fri Jul 26 12:06:46 2013 From: david at styleflare.com (David J) Date: Fri, 26 Jul 2013 08:06:46 -0400 Subject: nginx 1.4.x file upload multipart ecoded In-Reply-To: References: <83758930f66508dbb8960557b0fca99c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I was looking at resty upload module. I was curious how to store the file on disk relative to the document root? For some reason it didn't work unless I specified the absolute path. Perhaps I am doing it wrong? On Jul 26, 2013 7:51 AM, "djczaski" wrote: > I have the same problem. The upload module was exactly what I needed. I > don't understand the complexity of why it can't be made to work in 1.4.x+. 
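For anyone stuck on the same 1.4.x incompatibility, a stock-nginx workaround that is sometimes used instead of the upload module is to let nginx spool the request body to a temporary file and hand the backend only the path. A minimal sketch, not taken from this thread: the location, header name and paths are illustrative, and the backend has to be written to open the file named in the header.

location /upload {
    client_max_body_size       100m;
    client_body_temp_path      /var/lib/nginx/body;
    # keep the whole body in a file; "clean" removes it after the request
    client_body_in_file_only   clean;

    # pass the path of the spooled file instead of re-sending the body
    proxy_set_header           X-Body-File     $request_body_file;
    proxy_set_header           Content-Length  "";
    proxy_pass_request_body    off;
    proxy_pass                 http://backend;
}

Whether this is acceptable depends on the application, since multipart parsing then moves to the backend, but it needs no third-party module.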
> > > On Thu, Jul 25, 2013 at 8:04 PM, nenad wrote: > >> How I can handle multipart data uploads with the latest nginx release? >> >> The good old upload module won^t compile anymore. >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,241277,241277#msg-241277 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick at livejournalinc.com Fri Jul 26 13:54:46 2013 From: nick at livejournalinc.com (Nick Toseland) Date: Fri, 26 Jul 2013 14:54:46 +0100 Subject: Help with Keepalive on NGINX In-Reply-To: <20130726100321.GM90722@mdounin.ru> References: <51F24276.3000408@livejournalinc.com> <20130726100321.GM90722@mdounin.ru> Message-ID: <51F27FA6.1030807@livejournalinc.com> Hi Maxim, Thanks for the reply, a great help. works as expected now :-) Have a good weekend Cheers Nick On Fri Jul 26 11:03:21 2013, Maxim Dounin wrote: > Hello! > > On Fri, Jul 26, 2013 at 10:33:42AM +0100, Nick Toseland wrote: > > > Hi All, > > > > I am struggling to get keepalives to work on Nginx, I believe I have > > added the necessary configuration. Can you take a look and suggest > > where I may be going wrong or what I am missing? > > > > We have a nginx server that sits in front of a varnish server which > > sits in front of the back-end servers > > > > http { > > > > keepalive_timeout 120; > > keepalive_requests 500; > > > > upstream varnish_rand { > > server 172.21.1.1:80; > > keepalive 256; > > } > > > > server { > > server_name www.abc.com > > > > location / { > > > > proxy_pass http://varnish_rand; > > proxy_http_version 1.1; > > > > } > > } > > > > You can see that the nginx server is using HTTP1.1 to communicate > > with Varnish > > > > *varnishtop -i RxProtocol * > > list length 1 > > > > 9.67 RxProtocol HTTP/1.1 > > > > However there nginx always send a "connection close" > > > > *varnishtop -c -i RxHeader -I "Connection"* > > > > list length 1 > > > > 9.59 RxHeader Connection: close > > That's because nginx uses "Connection: close" by default, > regardless of a protocol version used. If you want keepalive to > work, you have to instruct nginx to don't send "Connection: > close" using the proxy_set_header directive. > > For more information see documentation at > http://nginx.org/r/keepalive. > From mdounin at mdounin.ru Fri Jul 26 15:25:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Jul 2013 19:25:12 +0400 Subject: index and location not working together properly? In-Reply-To: <51F24D41.5070808@ntlworld.com> References: <51F24D41.5070808@ntlworld.com> Message-ID: <20130726152512.GS90722@mdounin.ru> Hello! On Fri, Jul 26, 2013 at 11:19:45AM +0100, Ian Hobson wrote: [...] > root /home/ian/websites/reseller/htdocs; > # if / then redirect to index.php > location = / { > # serve /index.php > rewrite ^$ /index.php last; The rewrite here does nothing as only URI "/" may appear here, and it's not matched by the "^$" pattern. You probably want to change it to rewrite ^ /index.php last; [...] 
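Applied to the configuration quoted above, the suggested change gives roughly the following block; only the rewrite pattern differs from the original, since "^" matches every URI, including "/":

location = / {
    # serve /index.php for the bare "/" request
    rewrite ^ /index.php last;
}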
-- Maxim Dounin http://nginx.org/en/donation.html From michaeljohnmitchell at gmail.com Fri Jul 26 17:22:56 2013 From: michaeljohnmitchell at gmail.com (Michael Mitchell) Date: Fri, 26 Jul 2013 10:22:56 -0700 Subject: can't get http authentication to work Message-ID: Hi, I'm trying to implement HTTP authentication as outlined in this article https://www.digitalocean.com/community/articles/how-to-set-up-http-authentication-with-nginx-on-ubuntu-12-10 I went through the early steps in the tutorial where it prompts you to create a password. That was fine. I then added the two auth_basic and auth_basic_user_file lines to the second location block in my nginx.conf (see below) and pushed it to my server (which necessarily restarts the server), but the http authentication isn't happening. I can access my demo rails app without problem. Any ideas what I might be doing wrong? Thanks if you can help upstream unicorn { server unix:/tmp/unicorn.remotepg.sock fail_timeout=0; } server { listen 80 default deferred; # server_name example.com; root /home/michael/apps/remotepg/current/public; location ^~ /assets/ { gzip_static on; expires max; add_header Cache-Control public; } try_files $uri/index.html $uri @unicorn; location @unicorn { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://unicorn; auth_basic "Restricted"; auth_basic_user_file /home/michael/apps/remotepg/current/public/.htpasswd; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Fri Jul 26 18:38:42 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 26 Jul 2013 11:38:42 -0700 Subject: nginx 1.4.x file upload multipart ecoded In-Reply-To: References: <83758930f66508dbb8960557b0fca99c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Fri, Jul 26, 2013 at 5:06 AM, David J wrote: > I was looking at resty upload module. I was curious how to store the file on > disk relative to the document root? > > For some reason it didn't work unless I specified the absolute path. > You can just obtain the document root in Lua via ngx.var.document_root, i.e., accessing Nginx's built-in variable $document_root: http://wiki.nginx.org/HttpCoreModule#.24document_root So it's trivial to construct a proper absolute path without assuming what the document root is in Lua. Regards, -agentzh From ian.hobson at ntlworld.com Fri Jul 26 19:50:29 2013 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Fri, 26 Jul 2013 20:50:29 +0100 Subject: index and location not working together properly? In-Reply-To: <20130726152512.GS90722@mdounin.ru> References: <51F24D41.5070808@ntlworld.com> <20130726152512.GS90722@mdounin.ru> Message-ID: <51F2D305.3080000@ntlworld.com> Hi Maxim, Thank you. That worked a treat. Regards Ian On 26/07/2013 16:25, Maxim Dounin wrote: > Hello! > > On Fri, Jul 26, 2013 at 11:19:45AM +0100, Ian Hobson wrote: > > [...] > >> root /home/ian/websites/reseller/htdocs; >> # if / then redirect to index.php >> location = / { >> # serve /index.php >> rewrite ^$ /index.php last; > The rewrite here does nothing as only URI "/" may appear here, and > it's not matched by the "^$" pattern. > > You probably want to change it to > > rewrite ^ /index.php last; > > [...] > -- Ian Hobson 31 Sheerwater, Northampton NN3 5HU, Tel: 01604 513875 Preparing eBooks for Kindle and ePub formats to give the best reader experience. 
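Returning to the keepalive thread above, a minimal sketch of the proxy_set_header change being referred to, layered on the configuration already posted (upstream name and address as in the original, everything else illustrative):

upstream varnish_rand {
    server 172.21.1.1:80;
    keepalive 256;
}

server {
    location / {
        proxy_http_version 1.1;
        # clear the header so nginx stops sending "Connection: close"
        # and upstream connections can actually be reused
        proxy_set_header Connection "";
        proxy_pass http://varnish_rand;
    }
}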
From nginx-forum at nginx.us Fri Jul 26 21:18:57 2013 From: nginx-forum at nginx.us (YesThatGuy) Date: Fri, 26 Jul 2013 17:18:57 -0400 Subject: Nginx changing settings on the fly? Message-ID: <616d7cb70d44ed4b11ec9c59600e2ce5.NginxMailingListEnglish@forum.nginx.org> We're currently using CrossRoads for our load balancing needs and are evaluating a switch to Nginx. We are excited by the existence of a pacemaker/heartbeat ocf script. But one of the needs that we have is that our web servers have somewhat volatile load averages. Some hits cause a tremendous amount of processing, most others cause very little. The nginx config option "weight" allows you to balance how much load to preferentially give to which host, but can this option be changed in value without restarting nginx? I would want to default to "1" and then have each logic server announce to the load balancer what weight should be used, in short intervals. Crossroads has a web-based admin interface that we use to set an equivalent value in crossroads so that busy servers are effectively "routed around" until their load drops to a more normal value. In Crossroads' case it would look something like: #! /bin/sh loadbalancer="crossroads.mycompany.com" servername="alphabits" weight="4.5" wget http://$loadbalancer/$servername/load/$weight Does nginx have a feature like this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241311,241311#msg-241311 From reallfqq-nginx at yahoo.fr Fri Jul 26 21:54:29 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 26 Jul 2013 17:54:29 -0400 Subject: Nginx changing settings on the fly? In-Reply-To: <616d7cb70d44ed4b11ec9c59600e2ce5.NginxMailingListEnglish@forum.nginx.org> References: <616d7cb70d44ed4b11ec9c59600e2ce5.NginxMailingListEnglish@forum.nginx.org> Message-ID: RTFM ^^ http://nginx.org/en/docs/beginners_guide.html#control http://nginx.org/en/docs/control.html The command described by the Beginner's Guide are available on the service file installed by the official package from the nginx.org repository. Have you even tried out Nginx properly before? That's one of the 1st question I have about configuration management... --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jul 26 22:33:29 2013 From: nginx-forum at nginx.us (YesThatGuy) Date: Fri, 26 Jul 2013 18:33:29 -0400 Subject: Nginx changing settings on the fly? In-Reply-To: References: Message-ID: <1bf1eec09f05ea5d0e006d859e3aa9ed.NginxMailingListEnglish@forum.nginx.org> B.R. thanks for not answering the question. Perhaps you'd do well to RTFM(essage)? Should I assume that that only way to set the weight parameter is by updating the config file and reloading the config? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241311,241313#msg-241313 From reallfqq-nginx at yahoo.fr Fri Jul 26 22:50:59 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 26 Jul 2013 18:50:59 -0400 Subject: Nginx changing settings on the fly? In-Reply-To: <1bf1eec09f05ea5d0e006d859e3aa9ed.NginxMailingListEnglish@forum.nginx.org> References: <1bf1eec09f05ea5d0e006d859e3aa9ed.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, On Fri, Jul 26, 2013 at 6:33 PM, YesThatGuy wrote: > B.R. thanks for not answering the question. Perhaps you'd do well to > RTFM(essage)? > ? No problem, you are welcome...? > > Should I assume that that only way to set the weight parameter is by > updating the config file and reloading the config? 
> If you RTFM and got interested in Nginx design and the way it deals with its configuration, you would have known that a master process loads the configuration and spawns workers which only job is to deal with requests.? So... Changing configuration would mean signaling the master process in some way... You could write configuration in a conf.d directory where Nginx loads *.conf files with a script and then automatically behind signal Nginx to reload the conf with the same script. > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,241311,241313#msg-241313 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > I am deeply sorry to have provided you with a RTFM answer that outraged you. Please accept my apologies and rest assured that won't occur anymore. Sometimes silence is golden. ? ? ?? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Jul 27 00:26:02 2013 From: nginx-forum at nginx.us (YesThatGuy) Date: Fri, 26 Jul 2013 20:26:02 -0400 Subject: Nginx changing settings on the fly? In-Reply-To: References: Message-ID: <2cea3b57db906450bf78ae0b469bb4d5.NginxMailingListEnglish@forum.nginx.org> > So... Changing configuration would mean signaling the master process in some way... Sounds like it wouldn't be a good environment for frequently updating config values, and it doesn't seem like it would be a simple "bolt-on" feature given the architecture involved. > I am deeply sorry to have provided you with a RTFM answer that outraged you.. > Please accept my apologies and rest assured that won't occur anymore. > Sometimes silence is golden. I've made the same mistake. My reply may have been un-necessarily short as well, and for that I apologize in kind. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241311,241316#msg-241316 From contact at jpluscplusm.com Sat Jul 27 08:54:15 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 27 Jul 2013 09:54:15 +0100 Subject: Nginx changing settings on the fly? In-Reply-To: <616d7cb70d44ed4b11ec9c59600e2ce5.NginxMailingListEnglish@forum.nginx.org> References: <616d7cb70d44ed4b11ec9c59600e2ce5.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 26 July 2013 22:18, YesThatGuy wrote: > We're currently using CrossRoads for our load balancing needs and are > evaluating a switch to Nginx. We are excited by the existence of a > pacemaker/heartbeat ocf script. But one of the needs that we have is that > our web servers have somewhat volatile load averages. Some hits cause a > tremendous amount of processing, most others cause very little. > > The nginx config option "weight" allows you to balance how much load to > preferentially give to which host, but can this option be changed in value > without restarting nginx? I would want to default to "1" and then have each > logic server announce to the load balancer what weight should be used, in > short intervals. I'd be looking at HAProxy for this. You can update weights of existing servers on the fly via a unix socket. Of course, your runtime state then diverges from your on-disk startup state, but that sounds like the mechanism you're looking for. 
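To make the reload-based approach described above concrete: one pattern is to keep the upstream definition in a small include file that an external script regenerates and then applies with a configuration reload. A sketch with purely illustrative names and weights:

# /etc/nginx/conf.d/backends.conf, rewritten by a script and then applied
# with "nginx -s reload" (a graceful replacement of the worker processes)
upstream app_servers {
    server 10.0.0.11:8080 weight=1;
    server 10.0.0.12:8080 weight=4;   # weights adjusted as reported load changes
}

A reload is cheap enough for periodic weight updates, though not for per-second changes, which is where the HAProxy socket approach mentioned above has the edge.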
J From nhadie at gmail.com Sun Jul 28 02:55:52 2013 From: nhadie at gmail.com (ron ramos) Date: Sun, 28 Jul 2013 10:55:52 +0800 Subject: block bot on uri with query_string Message-ID: Hi All, Been trying to block bots from accessing a URI that has a query_string "action=get_it", i tried below location ~* \?(action=get_it)$ { if ( $http_user_agent ~ (crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider)) { return 404; break; } } i just learned that location does not match query string, if i do the is_arg i cant do nested if, anyone able to do this before? TIA. Regards, Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Sun Jul 28 09:07:10 2013 From: me at myconan.net (edogawaconan) Date: Sun, 28 Jul 2013 18:07:10 +0900 Subject: block bot on uri with query_string In-Reply-To: References: Message-ID: On Sun, Jul 28, 2013 at 11:55 AM, ron ramos wrote: > Hi All, > > Been trying to block bots from accessing a URI that has a query_string > "action=get_it", i tried below > > > location ~* \?(action=get_it)$ { > > if ( $http_user_agent ~ > (crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider)) { > > return 404; > > break; > > } > > } > > i just learned that location does not match query string, if i do the is_arg > i cant do nested if, anyone able to do this before? > > TIA. > I don't remember the exact syntax but something like this should work: if ($arg_action = get_it) { set $no_bot 1; } if ($http_user_agent ~ ) { set $is_bot 1; } if ($no_bot$is_bot = 11) { return 404; } -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nhadie at gmail.com Sun Jul 28 10:21:21 2013 From: nhadie at gmail.com (ron ramos) Date: Sun, 28 Jul 2013 18:21:21 +0800 Subject: block bot on uri with query_string In-Reply-To: References: Message-ID: oh cool thanks i get what you mean. thanks for the help! Regards, Ron On Sun, Jul 28, 2013 at 5:07 PM, edogawaconan wrote: > On Sun, Jul 28, 2013 at 11:55 AM, ron ramos wrote: > > Hi All, > > > > Been trying to block bots from accessing a URI that has a query_string > > "action=get_it", i tried below > > > > > > location ~* \?(action=get_it)$ { > > > > if ( $http_user_agent ~ > > (crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider)) { > > > > return 404; > > > > break; > > > > } > > > > } > > > > i just learned that location does not match query string, if i do the > is_arg > > i cant do nested if, anyone able to do this before? > > > > TIA. > > > > I don't remember the exact syntax but something like this should work: > > if ($arg_action = get_it) { set $no_bot 1; } > if ($http_user_agent ~ ) { set $is_bot 1; } > if ($no_bot$is_bot = 11) { return 404; } > > -- > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From contact at jpluscplusm.com Sun Jul 28 12:50:28 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 28 Jul 2013 13:50:28 +0100 Subject: block bot on uri with query_string In-Reply-To: References: Message-ID: On 28 Jul 2013 10:08, "edogawaconan" wrote: > > On Sun, Jul 28, 2013 at 11:55 AM, ron ramos wrote: > > Hi All, > > > > Been trying to block bots from accessing a URI that has a query_string > > "action=get_it", i tried below > > > > > > location ~* \?(action=get_it)$ { > > > > if ( $http_user_agent ~ > > (crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider)) { > > > > return 404; > > > > break; > > > > } > > > > } > > > > i just learned that location does not match query string, if i do the is_arg > > i cant do nested if, anyone able to do this before? > > > > TIA. > > > > I don't remember the exact syntax but something like this should work: > > if ($arg_action = get_it) { set $no_bot 1; } > if ($http_user_agent ~ ) { set $is_bot 1; } > if ($no_bot$is_bot = 11) { return 404; } I would personally use a map{} which examined both variables and then a single if() to take action based on the map's result. I'd be interested in any official comment from nginx staff about the relative merits of these approaches ... J -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at disqus.com Mon Jul 29 21:44:46 2013 From: john at disqus.com (John Watson) Date: Mon, 29 Jul 2013 14:44:46 -0700 Subject: cannot build variables_hash In-Reply-To: <613C34BB-BDDA-4352-84DF-232198BDD8BC@nginx.com> References: <613C34BB-BDDA-4352-84DF-232198BDD8BC@nginx.com> Message-ID: Hi Andrew, Woops, completely forgot about doing that first before posting here. It seems the issue is v0.21 of http://wiki.nginx.org/NginxHttpSRCacheModulethat causes this issue. I'll file a bug with agentzh on his Github repo. Thanks for your help! On Thu, Jul 25, 2013 at 3:55 AM, Andrew Alexeev wrote: > On Jul 25, 2013, at 2:49 AM, John Watson wrote: > > Upgraded from 1.2.9 to 1.4.1 and now started getting: > > [emerg] could not build the variables_hash, you should increase either > variables_hash_max_size: 512 or variables_hash_bucket_size: 64 > > Same configuration and even dropped (2) 3rd party modules. > > nginx.conf and ./configure params: > https://gist.github.com/dctrwatson/6075317 > > adding this to http block fixes it: > variables_hash_max_size 1024; > > Any ideas? Or direction on debugging? > > > Hi John, > > Any chance you could check it with 1.4.1+ and without _any_ 3rd party > modules? > > Thanks, > > John > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jul 29 23:17:05 2013 From: nginx-forum at nginx.us (nenad) Date: Mon, 29 Jul 2013 19:17:05 -0400 Subject: nginx 1.4.x file upload multipart ecoded In-Reply-To: References: Message-ID: <4c6b8bd241dd1cbc5477c35d13f97a6d.NginxMailingListEnglish@forum.nginx.org> Thank you agentzh, it look as it can do the job. Do you know if there will be some performance related Issues with usage of the lua module? 
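A sketch of the map-plus-single-if approach suggested in the bot-blocking thread above, reusing the query argument and user-agent list from that thread; the map goes in the http context, the check in the location to protect:

map "$arg_action:$http_user_agent" $deny_bot {
    default  0;
    "~*^get_it:.*(crawl|googlebot|slurp|spider|bingbot|tracker|click|parser)"  1;
}

server {
    location / {
        if ($deny_bot) {
            return 404;
        }
        # normal request handling continues here
    }
}

Map values are computed only when the variable is used, and only one if() is needed, which avoids chaining several set/if pairs.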
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241277,241346#msg-241346 From agentzh at gmail.com Mon Jul 29 23:28:09 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 29 Jul 2013 16:28:09 -0700 Subject: nginx 1.4.x file upload multipart ecoded In-Reply-To: <4c6b8bd241dd1cbc5477c35d13f97a6d.NginxMailingListEnglish@forum.nginx.org> References: <4c6b8bd241dd1cbc5477c35d13f97a6d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Mon, Jul 29, 2013 at 4:17 PM, nenad wrote: > Do you know if there will be some performance related Issues with usage of > the lua module? > The ngx_lua module was created with performance in mind from day #1 and quite a few people have been using it in production to prevent developing custom nginx C modules. So if you run into any performance issues, just let me know. BTW, you're recommended to use LuaJIT 2.0+ with ngx_lua to maximize speed. Regards, -agentzh From nginx-forum at nginx.us Tue Jul 30 10:52:16 2013 From: nginx-forum at nginx.us (ludwigvan) Date: Tue, 30 Jul 2013 06:52:16 -0400 Subject: Problem with VPN IP address and Nginx In-Reply-To: <1373621190.74036.YahooMailNeo@web121605.mail.ne1.yahoo.com> References: <1373621190.74036.YahooMailNeo@web121605.mail.ne1.yahoo.com> Message-ID: <3ddc90c09720493383bbdcc465c09c43.NginxMailingListEnglish@forum.nginx.org> I have the same problem. Have you been able to find a solution to this? I believe it might need to be fixed on the vpn side. The setup works OK if the vpn server and nginx server are on different machines, but if they are on the same machine, it doesn't work for me. A more detailed description of the problem is here: http://serverfault.com/questions/527256/restricting-nginx-website-connection-to-vpn-users-on-the-same-server Ustun Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240709,241374#msg-241374 From mdounin at mdounin.ru Tue Jul 30 13:41:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Jul 2013 17:41:05 +0400 Subject: nginx-1.5.3 Message-ID: <20130730134105.GI2130@mdounin.ru> Changes with nginx 1.5.3 30 Jul 2013 *) Change in internal API: now u->length defaults to -1 if working with backends in unbuffered mode. *) Change: now after receiving an incomplete response from a backend server nginx tries to send an available part of the response to a client, and then closes client connection. *) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_spdy_module was used with the "client_body_in_file_only" directive. *) Bugfix: the "so_keepalive" parameter of the "listen" directive might be handled incorrectly on DragonFlyBSD. Thanks to Sepherosa Ziehau. *) Bugfix: in the ngx_http_xslt_filter_module. *) Bugfix: in the ngx_http_sub_filter_module. -- Maxim Dounin http://nginx.org/en/donation.html From kirpit at gmail.com Tue Jul 30 13:51:21 2013 From: kirpit at gmail.com (kirpit) Date: Tue, 30 Jul 2013 16:51:21 +0300 Subject: Can't get SPDY working with OpenSSL 1.0.1e Message-ID: Hi, I simply read every article on the net, followed the docs but simply can't get SPDY support working. It's running on Debian Squeeze (that I have to) so I upgraded OpenSSL to the version 1.0.1e from Wheezy repo. 
Downloaded the latest stable Nginx 1.4.2 then compiled with the configuration options: ./configure \ --with-ipv6 \ --with-http_ssl_module \ --with-http_spdy_module \ --with-http_gzip_static_module \ --with-http_geoip_module Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1: using OpenSSL library + using system zlib library Please note that it's listening on "IP:443 ssl spdy" as the server has more than one IP and SSL running websites. SSL works fine and compiling didn't complain about anything at all. # /usr/local/nginx/sbin/nginx -V nginx version: nginx/1.4.2 built by gcc 4.4.5 (Debian 4.4.5-8) TLS SNI support enabled configure arguments: --with-ipv6 --with-http_ssl_module --with-http_spdy_module --with-http_gzip_static_module --with-http_geoip_module Well, I get the warning message on boot: Starting nginx: nginx: [warn] nginx was built without OpenSSL NPN support, SPDY is not enabled for xxxx. Any help is much appreciated. Roy -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Jul 30 15:15:09 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 30 Jul 2013 19:15:09 +0400 Subject: Can't get SPDY working with OpenSSL 1.0.1e In-Reply-To: References: Message-ID: <201307301915.09256.vbart@nginx.com> On Tuesday 30 July 2013 17:51:21 kirpit wrote: > Hi, > > I simply read every article on the net, followed the docs but simply can't > get SPDY support working. > > It's running on Debian Squeeze (that I have to) so I upgraded OpenSSL to > the version 1.0.1e from Wheezy repo. Downloaded the latest stable Nginx > 1.4.2 then compiled with the configuration options: > > ./configure \ > --with-ipv6 \ > --with-http_ssl_module \ > --with-http_spdy_module \ > --with-http_gzip_static_module \ > --with-http_geoip_module > > Configuration summary > + using system PCRE library > + using system OpenSSL library > + md5: using OpenSSL library > + sha1: using OpenSSL library > + using system zlib library > > Please note that it's listening on "IP:443 ssl spdy" as the server has more > than one IP and SSL running websites. SSL works fine and compiling didn't > complain about anything at all. > > # /usr/local/nginx/sbin/nginx -V > nginx version: nginx/1.4.2 > built by gcc 4.4.5 (Debian 4.4.5-8) > TLS SNI support enabled > configure arguments: --with-ipv6 --with-http_ssl_module > --with-http_spdy_module --with-http_gzip_static_module > --with-http_geoip_module > > Well, I get the warning message on boot: > > Starting nginx: nginx: [warn] nginx was built without OpenSSL NPN support, > SPDY is not enabled for xxxx. > > Any help is much appreciated. You have upgraded OpenSSL binary, but its headers are still from the old version. Please look at the libssl-dev package to ensure that its version is in sync with your openssl. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From kirpit at gmail.com Tue Jul 30 15:25:27 2013 From: kirpit at gmail.com (kirpit) Date: Tue, 30 Jul 2013 18:25:27 +0300 Subject: Can't get SPDY working with OpenSSL 1.0.1e In-Reply-To: <201307301915.09256.vbart@nginx.com> References: <201307301915.09256.vbart@nginx.com> Message-ID: This was quite tricky! # apt-show-versions libssl-dev libssl-dev/wheezy upgradeable from 0.9.8o-4squeeze14 to 1.0.1e-2 Thanks a lot for the help mate. On Tue, Jul 30, 2013 at 6:15 PM, Valentin V. 
Bartenev wrote: > On Tuesday 30 July 2013 17:51:21 kirpit wrote: > > Hi, > > > > I simply read every article on the net, followed the docs but simply > can't > > get SPDY support working. > > > > It's running on Debian Squeeze (that I have to) so I upgraded OpenSSL to > > the version 1.0.1e from Wheezy repo. Downloaded the latest stable Nginx > > 1.4.2 then compiled with the configuration options: > > > > ./configure \ > > --with-ipv6 \ > > --with-http_ssl_module \ > > --with-http_spdy_module \ > > --with-http_gzip_static_module \ > > --with-http_geoip_module > > > > Configuration summary > > + using system PCRE library > > + using system OpenSSL library > > + md5: using OpenSSL library > > + sha1: using OpenSSL library > > + using system zlib library > > > > Please note that it's listening on "IP:443 ssl spdy" as the server has > more > > than one IP and SSL running websites. SSL works fine and compiling didn't > > complain about anything at all. > > > > # /usr/local/nginx/sbin/nginx -V > > nginx version: nginx/1.4.2 > > built by gcc 4.4.5 (Debian 4.4.5-8) > > TLS SNI support enabled > > configure arguments: --with-ipv6 --with-http_ssl_module > > --with-http_spdy_module --with-http_gzip_static_module > > --with-http_geoip_module > > > > Well, I get the warning message on boot: > > > > Starting nginx: nginx: [warn] nginx was built without OpenSSL NPN > support, > > SPDY is not enabled for xxxx. > > > > Any help is much appreciated. > > You have upgraded OpenSSL binary, but its headers are still from the old > version. Please look at the libssl-dev package to ensure that its version > is in sync with your openssl. > > wbr, Valentin V. Bartenev > > -- > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Tue Jul 30 18:49:35 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 30 Jul 2013 11:49:35 -0700 Subject: cannot build variables_hash In-Reply-To: References: <613C34BB-BDDA-4352-84DF-232198BDD8BC@nginx.com> Message-ID: Hello! On Mon, Jul 29, 2013 at 2:44 PM, John Watson wrote: > It seems the issue is v0.21 of http://wiki.nginx.org/NginxHttpSRCacheModule > that causes this issue. I'll file a bug with agentzh on his Github repo. > Just for the reference, there is no bugs in ngx_srcache. It's just that enabling many nginx modules (including standard ones and 3rd-party ones) exceeds the variables_hash_max_size limit (default value). See below for more details: https://github.com/agentzh/srcache-nginx-module/issues/21 Given the current rich ecosystem of nginx, maybe the nginx core should increase the default value of variables_hash_max_size? Not sure though. Regards, -agentzh From nginx-forum at nginx.us Tue Jul 30 18:53:45 2013 From: nginx-forum at nginx.us (Sylvia) Date: Tue, 30 Jul 2013 14:53:45 -0400 Subject: Can't get SPDY working with OpenSSL 1.0.1e In-Reply-To: References: Message-ID: <7b2478a89be1f671af8ac17fb607992a.NginxMailingListEnglish@forum.nginx.org> Hi. 
If you compile nginx yourself, its not necessary to update system libraries You can use openssl 1.0.1 source and use --with-openssl=/var/tmp/openssl-1.0.1e switch to configure options, where folder is the path where you unpacked the source, nginx will configure and compile openssl as static library while building nginx and then will link it into nginx ~GL Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241381,241386#msg-241386 From nginx-forum at nginx.us Tue Jul 30 19:17:37 2013 From: nginx-forum at nginx.us (keogh) Date: Tue, 30 Jul 2013 15:17:37 -0400 Subject: Nginx internal server errors static files first load only? Message-ID: <5fce7c9c0eed5298e6ac06f176a82444.NginxMailingListEnglish@forum.nginx.org> Hi Guys, Got a strange issue happening, I do my coding on Windows 8 x64 using Sublime Text 2.0.2, my sites run of a Ubuntu 13.04 VirtualBox guest over a bridged adaptor connection. Weird thing happens, if I save a static file (HTML, CSS etc) from Sublime on Windows to my Ubuntu vm and load it I get a 500 internal error (generic nginx one), hit refresh and all is well. However, If I do the same from Notepad++ I get no issues at all, the file will load first time on nginx. Both are set for EOL unix and UTF-8 and even changing this on both seems to bring the same issue? The nginx error log is vague, just saying its temporarily unavailable. I've posted this on the sublime forum too, as it seems to be sumin that is doing with the file upon saving? Even saving a .html in plain old notepad is fine. Does anyone have any ideas what this could be? How I can solve it? Or even find out more info on why nginx does like the file first time round? Also worth mentioning that no such issue occurs with Apache, that serves all files? Any help appreciated! Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241387,241387#msg-241387 From nginx-forum at nginx.us Tue Jul 30 19:20:31 2013 From: nginx-forum at nginx.us (keogh) Date: Tue, 30 Jul 2013 15:20:31 -0400 Subject: Nginx internal server errors static files first load only? In-Reply-To: <5fce7c9c0eed5298e6ac06f176a82444.NginxMailingListEnglish@forum.nginx.org> References: <5fce7c9c0eed5298e6ac06f176a82444.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4714744869135de8629df4c8c0d484ea.NginxMailingListEnglish@forum.nginx.org> To clarify above, when I say I change the EOL and encoding on both sublime and notepad++ nothing changes. If notepad++ is windows EOL and ANSI it is still fine loading first time, no matter what I set sublime too it's always failing first load Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241387,241388#msg-241388 From nginx-forum at nginx.us Tue Jul 30 21:26:04 2013 From: nginx-forum at nginx.us (keogh) Date: Tue, 30 Jul 2013 17:26:04 -0400 Subject: Nginx internal server errors static files first load only? In-Reply-To: <5fce7c9c0eed5298e6ac06f176a82444.NginxMailingListEnglish@forum.nginx.org> References: <5fce7c9c0eed5298e6ac06f176a82444.NginxMailingListEnglish@forum.nginx.org> Message-ID: This can be ignored now, I've found the issue. It was samba locking the file, adding "oplocks = no" to my smb.conf solved the problem. 
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241387,241389#msg-241389 From glenn at zewt.org Tue Jul 30 21:26:02 2013 From: glenn at zewt.org (Glenn Maynard) Date: Tue, 30 Jul 2013 16:26:02 -0500 Subject: Incorrect redirect protocol when behind a reverse proxy In-Reply-To: References: Message-ID: On Thu, Jul 25, 2013 at 1:41 PM, Jonathan Matthews wrote: > I've just got to a box and can ACK that. I can make that stop with a > correctly configured try_files, which I would always choose to have > set up, myself. That may not be a solution for you however. > > Here's a way I've just tested (on 1.4.2) that forces the > trailing-slash redirects to incorporate a random HTTP header ("foo", > here) as their scheme: > > # include your boilerplate as per previous email > location / { > location ~ "^(.*)[^/]$" { > rewrite ^ $http_foo://$http_host$uri/ permanent; > } > } > > Or, supposing you have certain URIs which *can* end in > not-a-trailing-slash: (also tested on 1.4.2) > > location / { > if (-d $document_root$uri) { > rewrite ^ $http_foo://$http_host$uri/ permanent; > } > } > > > I suppose the question is then: what *other* classes of automatic > redirects do you find yourself hitting, and can you deterministically > isolate their URIs using either a location{} or if{}, so that you can > pre-empt the auto redirect in order to incorporate the > X-forwarded-proto header? > Thanks, I'll give these approaches a try. I don't know where else this might happen, though. Hopefully at some point I'll be able to say something like "override_protocol $http_x_forwarded_proto;" to tell nginx which protocol it's really receiving a request on, since SSL "offloading" is fairly common these days (http://aws.amazon.com/elasticloadbalancing/, etc). -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kiran at kiranp.com Wed Jul 31 00:21:09 2013 From: kiran at kiranp.com (Kiran Pillarisetty) Date: Tue, 30 Jul 2013 17:21:09 -0700 Subject: File upload issue with Nginx (Reverse proxy+SSL negotiation) and Tomcat Message-ID: Configuration: ========== - Nginx as reverse proxy + SSL negotiation - Apache Tomcat. - nginx version: nginx/1.5.2 built by gcc 4.1.2 20080704 (Red Hat 4.1.2-54) TLS SNI support disabled configure arguments: --with-rtsig_module --with-select_module --with-poll_module --with-file-aio --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-mail --with-mail_ssl_module --with-cpp_test_module --with-pcre --with-libatomic --with-debug ========== Everything seems to work fine, except for the file upload. For some reason file upload never completes. With the configuration listed below, I am able to upload small files (4K). Upload fails on a 194K file. When I increase "client_body_buffer_size" to 256K, I can upload the 194K file, but a 500K file upload fails. Increasing "client_body_buffer_size" beyond 256K has no impact. **Note: When I access Tomcat directly and upload the 500K file, it finishes in a few milliseconds.** So, looks like something is wrong with Nginx configuration. Any suggestions are greatly appreciated. 
location / { root /xyz; proxy_pass http://127.0.0.1:9090; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Server $host; proxy_intercept_errors on; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 500m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 75; proxy_send_timeout 180; proxy_read_timeout 180; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; proxy_temp_file_write_size 64k; } I have tried adding several other parameters to nginx config (client_body_temp_path, proxy_temp_path, proxy_temp_file_write_size). They didn't seem to help. ==== ***** Further investigation revealed that we have problem uploading 196K file and upwards. 194K file works. "client_body_buffer_size" value is set to 256K. Nginx debug logs show the following in case of failue: =========== 2013/07/30 16:29:57 [debug] 14208#0: *1 recv: fd:11 2606 of 16384 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy status 200 "200 OK" 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Server: Apache-Coyote/1.1" 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Content-Type: text/html;charset=utf-8" 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Date: Tue, 30 Jul 2013 22:29:57 GMT" 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Connection: close" 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header done 2013/07/30 16:29:57 [debug] 14208#0: *1 xslt filter header 2013/07/30 16:29:57 [debug] 14208#0: *1 HTTP/1.1 200 OK^M Server: nginx/1.5.2^M Date: Tue, 30 Jul 2013 22:29:57 GMT^M Content-Type: text/html;charset=utf-8^M Transfer-Encoding: chunked^M Connection: keep-alive^M 2013/07/30 16:29:57 [debug] 14208#0: *1 write new buf t:1 f:0 000000001E61DAD8, pos 000000001E61DAD8, size: 168 file: 0, size: 0 2013/07/30 16:29:57 [debug] 14208#0: *1 http write filter: l:0 f:0 s:168 2013/07/30 16:29:57 [debug] 14208#0: *1 http cacheable: 0 2013/07/30 16:29:57 [debug] 14208#0: *1 posix_memalign: 000000001E62D450:4096 @16 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy filter init s:200 h:0 c:0 l:-1 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream process upstream 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe read upstream: 1 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe preread: 2465 2013/07/30 16:29:57 [debug] 14208#0: *1 readv: 1:13778 2013/07/30 16:29:57 [debug] 14208#0: *1 readv() not ready (11: Resource temporarily unavailable) 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe recv chain: -2 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe buf free s:0 t:1 f:0 000000001E61DBD0, pos 000000001E61DC5D, size: 2465 file: 0, size: 0 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe length: -1 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write downstream: 1 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write busy: 0 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write: out:0000000000000000, f:0 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe read upstream: 0 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe buf free s:0 t:1 f:0 000000001E61DBD0, pos 000000001E61DC5D, size: 2465 file: 0, size: 0 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe length: -1 2013/07/30 16:29:57 [debug] 14208#0: *1 event timer add: 11: 180000:1375223577332 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream request: "/upload/html?" 
2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream send request handler 2013/07/30 16:29:57 [debug] 14208#0: timer delta: 6 2013/07/30 16:29:57 [debug] 14208#0: posted events 0000000000000000 2013/07/30 16:29:57 [debug] 14208#0: worker cycle 2013/07/30 16:29:57 [debug] 14208#0: epoll timer: 179994 ================= I notice "http upstream send request handler" in above log snippet, where as in success case, I see this: 2013/07/30 16:29:44 [debug] 14208#0: *1 http upstream dummy handler Any idea what "http upstream send request handler" and "http upstream dummy handler" mean, and what they signify? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jul 31 02:31:36 2013 From: nginx-forum at nginx.us (badtzhou) Date: Tue, 30 Jul 2013 22:31:36 -0400 Subject: force range request on proxy server Message-ID: We used amazon S3 as upstream server. And it does not serve 'Accept-Ranges: bytes' as respond header. So the nginx proxy server will not serve range response even if the content has been completely cached on the proxy server. We tested on a upstream server that do serve 'Accept-Ranges: bytes' as respond header. And proxy server was working proper by servering range respond. Is that the default behavior for nginx proxy. Is there a way to overwrite it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241394,241394#msg-241394 From weiyue at taobao.com Wed Jul 31 02:49:06 2013 From: weiyue at taobao.com (=?gb2312?B?zsDUvQ==?=) Date: Wed, 31 Jul 2013 10:49:06 +0800 Subject: nginx-1.5.3 In-Reply-To: <20130730134105.GI2130@mdounin.ru> References: <20130730134105.GI2130@mdounin.ru> Message-ID: <03b801ce8d98$87083310$95189930$@com> > *) Change: now after receiving an incomplete response from a backend > server nginx tries to send an available part of the response to a > client, and then closes client connection. It is obviously different from previous nginx, but I wonder why nginx made this change. From agentzh at gmail.com Wed Jul 31 04:59:41 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 30 Jul 2013 21:59:41 -0700 Subject: nginx-1.5.3 In-Reply-To: <03b801ce8d98$87083310$95189930$@com> References: <20130730134105.GI2130@mdounin.ru> <03b801ce8d98$87083310$95189930$@com> Message-ID: Hello! On Tue, Jul 30, 2013 at 7:49 PM, ?? wrote: >> *) Change: now after receiving an incomplete response from a backend >> server nginx tries to send an available part of the response to a >> client, and then closes client connection. > > It is obviously different from previous nginx, but I wonder why nginx made > this change. > I originally proposed this change here: http://mailman.nginx.org/pipermail/nginx-devel/2012-September/002693.html It was just wrong that Nginx assumed that truncated upstream responses to be well formed and complete. Regards, -agentzh From nginx-forum at nginx.us Wed Jul 31 06:10:18 2013 From: nginx-forum at nginx.us (skchopperguy) Date: Wed, 31 Jul 2013 02:10:18 -0400 Subject: external authentication using a custom script In-Reply-To: References: Message-ID: <6f7fceb836a0cac932a57c955b7ca600.NginxMailingListEnglish@forum.nginx.org> CM Fields Wrote: ------------------------------------------------------- When I get external authentication working with > Nginx I would be happy to share the complete setup with the list. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi, I am looking to protect some directories with mysql auth. Would you mind sharing the working setup/script(s) you ended up with? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227538,241399#msg-241399 From nginx-forum at nginx.us Wed Jul 31 06:51:43 2013 From: nginx-forum at nginx.us (skchopperguy) Date: Wed, 31 Jul 2013 02:51:43 -0400 Subject: Custom forced 503 page does not work In-Reply-To: <51F12289.3060804@conversis.de> References: <51F12289.3060804@conversis.de> Message-ID: <319c98e60444a1e61412a8aa922489c8.NginxMailingListEnglish@forum.nginx.org> Hi, Nginx reads from top to bottom, just like you. In your example, you are returning 503 (an action) before instructing to use your custom error page. Just need to put your instructions first, then your actions...like so: error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location / { return 503; } -Skyler Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241261,241401#msg-241401 From mdounin at mdounin.ru Wed Jul 31 08:01:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 31 Jul 2013 12:01:04 +0400 Subject: File upload issue with Nginx (Reverse proxy+SSL negotiation) and Tomcat In-Reply-To: References: Message-ID: <20130731080104.GN2130@mdounin.ru> Hello! On Tue, Jul 30, 2013 at 05:21:09PM -0700, Kiran Pillarisetty wrote: > Configuration: > > ========== > - Nginx as reverse proxy + SSL negotiation > - Apache Tomcat. > > - nginx version: nginx/1.5.2 > built by gcc 4.1.2 20080704 (Red Hat 4.1.2-54) > TLS SNI support disabled > configure arguments: --with-rtsig_module --with-select_module > --with-poll_module --with-file-aio --with-http_ssl_module > --with-http_realip_module --with-http_addition_module > --with-http_xslt_module --with-http_image_filter_module > --with-http_geoip_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_mp4_module > --with-http_gzip_static_module --with-http_random_index_module --with-mail > --with-mail_ssl_module --with-cpp_test_module --with-pcre --with-libatomic > --with-debug > ========== > > > Everything seems to work fine, except for the file upload. For some reason > file upload never completes. With the configuration listed below, I am able > to upload small files (4K). Upload fails on a 194K file. When I increase > "client_body_buffer_size" to 256K, I can upload the 194K file, but a 500K > file upload fails. Increasing "client_body_buffer_size" beyond 256K has no > impact. > > **Note: When I access Tomcat directly and upload the 500K file, it finishes > in a few milliseconds.** > > So, looks like something is wrong with Nginx configuration. Any suggestions > are greatly appreciated. 
> > location / { > root /xyz; > proxy_pass http://127.0.0.1:9090; > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-Server $host; > proxy_intercept_errors on; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > client_max_body_size 500m; > client_body_buffer_size 128k; > proxy_buffering on; > proxy_connect_timeout 75; > proxy_send_timeout 180; > proxy_read_timeout 180; > proxy_buffer_size 128k; > proxy_buffers 4 256k; > proxy_busy_buffers_size 256k; > proxy_temp_file_write_size 64k; > > } > > I have tried adding several other parameters to nginx config > (client_body_temp_path, proxy_temp_path, proxy_temp_file_write_size). They > didn't seem to help. > > > ==== > > > ***** Further investigation revealed that we have problem uploading 196K > file and upwards. 194K file works. "client_body_buffer_size" value is set > to 256K. > > Nginx debug logs show the following in case of failue: > =========== > 2013/07/30 16:29:57 [debug] 14208#0: *1 recv: fd:11 2606 of 16384 > 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy status 200 "200 OK" > 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Server: > Apache-Coyote/1.1" > 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Content-Type: > text/html;charset=utf-8" > 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Date: Tue, 30 > Jul 2013 22:29:57 GMT" > 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Connection: > close" > 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header done > 2013/07/30 16:29:57 [debug] 14208#0: *1 xslt filter header > 2013/07/30 16:29:57 [debug] 14208#0: *1 HTTP/1.1 200 OK^M > Server: nginx/1.5.2^M > Date: Tue, 30 Jul 2013 22:29:57 GMT^M > Content-Type: text/html;charset=utf-8^M > Transfer-Encoding: chunked^M > Connection: keep-alive^M > > 2013/07/30 16:29:57 [debug] 14208#0: *1 write new buf t:1 f:0 > 000000001E61DAD8, pos 000000001E61DAD8, size: 168 file: 0, size: 0 > 2013/07/30 16:29:57 [debug] 14208#0: *1 http write filter: l:0 f:0 s:168 > 2013/07/30 16:29:57 [debug] 14208#0: *1 http cacheable: 0 > 2013/07/30 16:29:57 [debug] 14208#0: *1 posix_memalign: > 000000001E62D450:4096 @16 > 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy filter init s:200 h:0 > c:0 l:-1 > 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream process upstream > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe read upstream: 1 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe preread: 2465 > 2013/07/30 16:29:57 [debug] 14208#0: *1 readv: 1:13778 > 2013/07/30 16:29:57 [debug] 14208#0: *1 readv() not ready (11: Resource > temporarily unavailable) > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe recv chain: -2 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe buf free s:0 t:1 f:0 > 000000001E61DBD0, pos 000000001E61DC5D, size: 2465 file: 0, size: 0 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe length: -1 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write downstream: 1 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write busy: 0 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write: out:0000000000000000, > f:0 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe read upstream: 0 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe buf free s:0 t:1 f:0 > 000000001E61DBD0, pos 000000001E61DC5D, size: 2465 file: 0, size: 0 > 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe length: -1 > 2013/07/30 16:29:57 [debug] 14208#0: *1 event timer add: 11: > 180000:1375223577332 > 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream request: > 
"/upload/html?" > 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream send request handler > 2013/07/30 16:29:57 [debug] 14208#0: timer delta: 6 > 2013/07/30 16:29:57 [debug] 14208#0: posted events 0000000000000000 > 2013/07/30 16:29:57 [debug] 14208#0: worker cycle > 2013/07/30 16:29:57 [debug] 14208#0: epoll timer: 179994 > ================= >From a debug log fragment provided it's not clear what goes wrong, you may want to show full debug log. My best guess is that your backend tries to respond to a request before it actually read a request. In such a case nginx will stop sending a request (if there is anything left), but your backend needs it to complete the response. Hence timeout. > I notice "http upstream send request handler" in above log snippet, where > as in success case, I see this: > 2013/07/30 16:29:44 [debug] 14208#0: *1 http upstream dummy handler > > > Any idea what "http upstream send request handler" and "http upstream dummy > handler" mean, and what they signify? You may try looking into sources to find out. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Jul 31 08:17:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 31 Jul 2013 12:17:47 +0400 Subject: Custom forced 503 page does not work In-Reply-To: <319c98e60444a1e61412a8aa922489c8.NginxMailingListEnglish@forum.nginx.org> References: <51F12289.3060804@conversis.de> <319c98e60444a1e61412a8aa922489c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130731081747.GO2130@mdounin.ru> Hello! On Wed, Jul 31, 2013 at 02:51:43AM -0400, skchopperguy wrote: > Hi, > > Nginx reads from top to bottom, just like you. In your example, you are > returning 503 (an action) before instructing to use your custom error page. > Just need to put your instructions first, then your actions...like so: > > error_page 500 502 503 504 /50x.html; > location = /50x.html { root /usr/share/nginx/html; } > > location / { return 503; } No, config directives order doesn't matter in general. There are some exceptions (like rewrite module directives order, or order of locations with regular expressions), but the configuration in question is fine and has no problems with directives order. Most likely, the problem was somewhere else (e.g. conflicting server{} block in the configuration, or original message author just forgot to reload the configuration after changes). -- Maxim Dounin http://nginx.org/en/donation.html From yaoweibin at gmail.com Wed Jul 31 13:35:34 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Wed, 31 Jul 2013 21:35:34 +0800 Subject: [ANNOUNCE] Tengine-1.5.0 is released Message-ID: Hi folks, We are excited to announce that Tengine-1.5.0 (stable version) has been released! You can either checkout the source code from github: https://github.com/alibaba/tengine or download the tar ball directly: http://tengine.taobao.org/download/tengine-1.5.0.tar.gz This is the latest stable version of Tengine, in which we have added the non-buffering request body mechanism (Nginx has to buffer the whole request body before sending to a backend. And in some cases, Nginx has to save the request body to disk). This feature could reduce disk IO and system load greatly for uploading services. The ABI compatibility verification of DSO has been introduced. And we also added the trim module which could be used to remove white spaces and comments to decrease the size of a page. The full change log follows below: *) Feature: added ABI compatibility verification for DSO modules. 
(monadbobo)

*) Feature: added a non-buffering request body mechanism. Now the http proxy
   and fastcgi modules can send requests to backend servers as soon as they
   receive part of a request body. (yaoweibin)

*) Feature: added the trim module, which can remove unnecessary whitespace
   and comments to reduce the size of a page. (taoyuanyuan)

*) Feature: added the accept filter mechanism, which makes it possible to do
   some filter processing after accepting a new connection. (yzprofile)

*) Feature: the server banner in a default error page can now be replaced by
   the string specified in server_tag. (zhuzhaoyuan)

*) Bugfix: fixed a bug where the 'buffer' argument of the 'access_log'
   directive might be ignored. (cfsego)

*) Bugfix: fixed a bug where the session_sticky module didn't issue the
   session cookie in direct mode. (dinic)

For those who don't know Tengine, it is a free and open source distribution
of Nginx with some advanced features. See our website for more details:
http://tengine.taobao.org

Have fun!

Regards,

--
Weibin Yao
Developer @ Server Platform Team of Taobao

From mdounin at mdounin.ru  Wed Jul 31 14:03:23 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 31 Jul 2013 18:03:23 +0400
Subject: cannot build variables_hash
In-Reply-To:
References: <613C34BB-BDDA-4352-84DF-232198BDD8BC@nginx.com>
Message-ID: <20130731140323.GW2130@mdounin.ru>

Hello!

On Tue, Jul 30, 2013 at 11:49:35AM -0700, Yichun Zhang (agentzh) wrote:

> Hello!
>
> On Mon, Jul 29, 2013 at 2:44 PM, John Watson wrote:
> > It seems the issue is v0.21 of http://wiki.nginx.org/NginxHttpSRCacheModule
> > that causes this issue. I'll file a bug with agentzh on his Github repo.
> >
>
> Just for the reference, there are no bugs in ngx_srcache.
>
> It's just that enabling many nginx modules (including standard ones
> and 3rd-party ones) exceeds the variables_hash_max_size limit (default
> value). See below for more details:
>
> https://github.com/agentzh/srcache-nginx-module/issues/21
>
> Given the current rich ecosystem of nginx, maybe the nginx core should
> increase the default value of variables_hash_max_size? Not sure
> though.

How many variables are there in total in the configuration in question?
Standard nginx with almost all modules compiled in (and a few extra modules)
here has just 114 variables, and 512 looks big enough.

I suspect the real problem is hash collisions. In particular, this used to
hit people under qemu, where the cache line size of an emulated CPU is
detected as 32 by nginx, see here:

http://trac.nginx.org/nginx/ticket/352

Not sure how to properly address it though. Right now I think that
automatically doubling the bucket size if we weren't able to build the hash
might be a good idea.
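As a work-around on the user side it should be enough to raise the hash
parameters explicitly in the http{} block. Just a sketch, with purely
illustrative values rather than a recommendation:

    http {
        variables_hash_max_size    1024;
        variables_hash_bucket_size 128;
    }

If the build failure comes from collisions rather than from the sheer number
of variables, it's the bucket size that actually makes the difference.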
--
Maxim Dounin
http://nginx.org/en/donation.html

From nginx-forum at nginx.us  Wed Jul 31 21:44:20 2013
From: nginx-forum at nginx.us (automatix)
Date: Wed, 31 Jul 2013 17:44:20 -0400
Subject: Error 404 when the URL contains special characters (braces, hyphens etc.)
Message-ID:

In my (Zend Framework 2) application I have a catalog of cities with links to
sports pages:

page "Cities" (/catalog)
Madrid link: website.tld/catalog/Madrid
Berlin link: website.tld/catalog/Berlin
London link: website.tld/catalog/London

page "Sports in London" (/catalog/London)
Foo link: website.tld/catalog/London/Foo
Bar (Bar) link: website.tld/catalog/London/Bar (Bar)
Baz - Baz link: website.tld/catalog/London/Baz - Baz

The URLs are escaped and work well.
But when I try to reach a page with an unescaped special character like a
brace (e.g. "website.tld/catalog/Berlin/Jeu de Paume (Real Tennis)" instead of
"website.tld/catalog/Berlin/Jeu%20de%20Paume%20%28Real%20Tennis%29"), I get a
404 error. How can I fix it?

What I want to achieve is a behaviour à la Wikipedia -- you can use
"http://en.wikipedia.org/wiki/Signal_(electrical_engineering)" or
"http://en.wikipedia.org/wiki/Signal_%28electrical_engineering%29" and will
always get the correct page.

Additional info:

System properties: Debian 7, nginx 1.2.1, PHP 5.5.

nginx.conf:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

ax-common-vhost:

server {
    listen 80;
    server_name foo.loc bar.loc baz.loc;

    if ($host ~ ^(?<project>.+)\.(?<area>.+)\.loc$) {
        #set $project $1; # already set
        #set $area $2; # already set
        set $folder "$area/$project";
        #set $domain "$project.$area.loc"; # equal to $host
    }

    access_log /var/log/nginx/$area/$project.access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_min_length 1000;
    gzip_types text/plain text/xml application/xml;

    client_max_body_size 25m;

    root /var/www/$folder/public/;
    try_files $uri $uri/ /index.php?$args;
    index index.html index.php;

    location / {
        index index.html index.php;
        sendfile off;
    }

    location ~ (\.inc\.php|\.tpl|\.sql|\.tpl\.php|\.db)$ {
        deny all;
    }

    location ~ \.htaccess {
        deny all;
    }

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_cache off;
        #fastcgi_pass 127.0.0.1:9001;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_read_timeout 6000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param APPLICATION_ENV development;
        fastcgi_param HTTPS $https;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241421,241421#msg-241421

From pkiran at gmail.com  Wed Jul 31 21:54:36 2013
From: pkiran at gmail.com (Kiran Pillarisetty)
Date: Wed, 31 Jul 2013 14:54:36 -0700
Subject: File upload issue with Nginx (Reverse proxy+SSL negotiation) and Tomcat
In-Reply-To: <20130731080104.GN2130@mdounin.ru>
References: <20130731080104.GN2130@mdounin.ru>
Message-ID:

Hi Maxim,

Thank you for the response. I am attaching the full debug log files from
success and failure cases.

> ... In such a case nginx will stop sending a request
> (if there is anything left), but your backend
> needs it to complete the response. Hence timeout.

In such a scenario, how exactly could we prevent Nginx from stopping the
sending of the file?

Also, it is interesting to note that when we access the Tomcat port directly,
we can upload the file without an issue.

Thank you!

On Wed, Jul 31, 2013 at 1:01 AM, Maxim Dounin wrote:
> Hello!
>
> On Tue, Jul 30, 2013 at 05:21:09PM -0700, Kiran Pillarisetty wrote:
>> Configuration:
>>
>> ==========
>> - Nginx as reverse proxy + SSL negotiation
>> - Apache Tomcat.
>> >> - nginx version: nginx/1.5.2 >> built by gcc 4.1.2 20080704 (Red Hat 4.1.2-54) >> TLS SNI support disabled >> configure arguments: --with-rtsig_module --with-select_module >> --with-poll_module --with-file-aio --with-http_ssl_module >> --with-http_realip_module --with-http_addition_module >> --with-http_xslt_module --with-http_image_filter_module >> --with-http_geoip_module --with-http_sub_module --with-http_dav_module >> --with-http_flv_module --with-http_mp4_module >> --with-http_gzip_static_module --with-http_random_index_module --with-mail >> --with-mail_ssl_module --with-cpp_test_module --with-pcre --with-libatomic >> --with-debug >> ========== >> >> >> Everything seems to work fine, except for the file upload. For some reason >> file upload never completes. With the configuration listed below, I am able >> to upload small files (4K). Upload fails on a 194K file. When I increase >> "client_body_buffer_size" to 256K, I can upload the 194K file, but a 500K >> file upload fails. Increasing "client_body_buffer_size" beyond 256K has no >> impact. >> >> **Note: When I access Tomcat directly and upload the 500K file, it finishes >> in a few milliseconds.** >> >> So, looks like something is wrong with Nginx configuration. Any suggestions >> are greatly appreciated. >> >> location / { >> root /xyz; >> proxy_pass http://127.0.0.1:9090; >> proxy_redirect off; >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-Server $host; >> proxy_intercept_errors on; >> >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> client_max_body_size 500m; >> client_body_buffer_size 128k; >> proxy_buffering on; >> proxy_connect_timeout 75; >> proxy_send_timeout 180; >> proxy_read_timeout 180; >> proxy_buffer_size 128k; >> proxy_buffers 4 256k; >> proxy_busy_buffers_size 256k; >> proxy_temp_file_write_size 64k; >> >> } >> >> I have tried adding several other parameters to nginx config >> (client_body_temp_path, proxy_temp_path, proxy_temp_file_write_size). They >> didn't seem to help. >> >> >> ==== >> >> >> ***** Further investigation revealed that we have problem uploading 196K >> file and upwards. 194K file works. "client_body_buffer_size" value is set >> to 256K. 
>> >> Nginx debug logs show the following in case of failue: >> =========== >> 2013/07/30 16:29:57 [debug] 14208#0: *1 recv: fd:11 2606 of 16384 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy status 200 "200 OK" >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Server: >> Apache-Coyote/1.1" >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Content-Type: >> text/html;charset=utf-8" >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Date: Tue, 30 >> Jul 2013 22:29:57 GMT" >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header: "Connection: >> close" >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy header done >> 2013/07/30 16:29:57 [debug] 14208#0: *1 xslt filter header >> 2013/07/30 16:29:57 [debug] 14208#0: *1 HTTP/1.1 200 OK^M >> Server: nginx/1.5.2^M >> Date: Tue, 30 Jul 2013 22:29:57 GMT^M >> Content-Type: text/html;charset=utf-8^M >> Transfer-Encoding: chunked^M >> Connection: keep-alive^M >> >> 2013/07/30 16:29:57 [debug] 14208#0: *1 write new buf t:1 f:0 >> 000000001E61DAD8, pos 000000001E61DAD8, size: 168 file: 0, size: 0 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http write filter: l:0 f:0 s:168 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http cacheable: 0 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 posix_memalign: >> 000000001E62D450:4096 @16 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http proxy filter init s:200 h:0 >> c:0 l:-1 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream process upstream >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe read upstream: 1 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe preread: 2465 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 readv: 1:13778 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 readv() not ready (11: Resource >> temporarily unavailable) >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe recv chain: -2 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe buf free s:0 t:1 f:0 >> 000000001E61DBD0, pos 000000001E61DC5D, size: 2465 file: 0, size: 0 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe length: -1 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write downstream: 1 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write busy: 0 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe write: out:0000000000000000, >> f:0 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe read upstream: 0 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe buf free s:0 t:1 f:0 >> 000000001E61DBD0, pos 000000001E61DC5D, size: 2465 file: 0, size: 0 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 pipe length: -1 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 event timer add: 11: >> 180000:1375223577332 >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream request: >> "/upload/html?" >> 2013/07/30 16:29:57 [debug] 14208#0: *1 http upstream send request handler >> 2013/07/30 16:29:57 [debug] 14208#0: timer delta: 6 >> 2013/07/30 16:29:57 [debug] 14208#0: posted events 0000000000000000 >> 2013/07/30 16:29:57 [debug] 14208#0: worker cycle >> 2013/07/30 16:29:57 [debug] 14208#0: epoll timer: 179994 >> ================= > > From a debug log fragment provided it's not clear what goes wrong, > you may want to show full debug log. > > My best guess is that your backend tries to respond to a request > before it actually read a request. In such a case nginx will stop > sending a request (if there is anything left), but your backend > needs it to complete the response. Hence timeout. 
> >> I notice "http upstream send request handler" in above log snippet, where >> as in success case, I see this: >> 2013/07/30 16:29:44 [debug] 14208#0: *1 http upstream dummy handler >> >> >> Any idea what "http upstream send request handler" and "http upstream dummy >> handler" mean, and what they signify? > > You may try looking into sources to find out. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: NginxDebugLog-SuccessCaseWith194Kfile.log.zip Type: application/zip Size: 6023 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NginxDebugLog-FailureCaseWith196Kfile.log.zip Type: application/zip Size: 5575 bytes Desc: not available URL: From agentzh at gmail.com Wed Jul 31 23:04:17 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 31 Jul 2013 16:04:17 -0700 Subject: cannot build variables_hash In-Reply-To: <20130731140323.GW2130@mdounin.ru> References: <613C34BB-BDDA-4352-84DF-232198BDD8BC@nginx.com> <20130731140323.GW2130@mdounin.ru> Message-ID: Hello! On Wed, Jul 31, 2013 at 7:03 AM, Maxim Dounin wrote: > > How many variables are in total in the configuration in question? I've just checked. There are only 124 variables. > > I suspect the real problem is hash collisions. Yeah, it seems like that :) > Especially this used > to hit people under qemu where cache line size of an emulated CPU > is detected as 32 by nginx, see here: > Well, this is a modern Linux x86_64 system running on real hardware though :) > Not sure how to properly address it though. Right now I think > just automatically doubling bucket_size if we wasn't able to build > hash might be a good idea. > Yeah, this work-around looks good to me :) Best regards, -agentzh