From mdounin at mdounin.ru Wed Apr 1 00:18:24 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 Apr 2015 03:18:24 +0300 Subject: about proxy_request_buffering In-Reply-To: <95a28ed575a6583da2bf9f5e7f383a9c.NginxMailingListEnglish@forum.nginx.org> References: <95a28ed575a6583da2bf9f5e7f383a9c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150401001824.GM88631@mdounin.ru> Hello! On Sun, Mar 29, 2015 at 01:51:15AM -0400, cubicdaiya wrote: > Hello. > > Though I'm trying to apply 'proxy_request_buffering off;' for unbuffered > uploading, > The buffering still seems to be enabled from error.log. > > # nginx.conf > > server { > listen 443 ssl spdy; > server_name example.com; > > location /upload { > proxy_request_buffering off; > proxy_pass http://upload_backend; > } > } > > # error.log > > 2015/03/29 14:02:20 [warn] 6965#0: *1 a client request body is buffered to a > temporary file /etc/nginx/client_body_temp/0000000001, client: x.x.x.x, > server: example.com, request: "POST /upload HTTP/1.1", host: "example.com" > > > The warning above is not output when SPDY is not enabled. > > Is proxy_request_buffering always enabled when SPDY is enabled? Yes, it is not currently possible to switch off proxy_request_buffering when using SPDY. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Apr 1 06:39:49 2015 From: nginx-forum at nginx.us (George) Date: Wed, 01 Apr 2015 02:39:49 -0400 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? 
In-Reply-To: <0145D419-FA77-4CCE-8F99-009274A5E2ED@nginx.com> References: <0145D419-FA77-4CCE-8F99-009274A5E2ED@nginx.com> Message-ID: Thanks Sarah. Dug deeper and apparently those nginx reported header sites were behind Google Pagespeed's service, so that must have been why HTTP/2 was reported Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256561,257778#msg-257778 From black.fledermaus at arcor.de Wed Apr 1 07:06:47 2015 From: black.fledermaus at arcor.de (basti) Date: Wed, 01 Apr 2015 09:06:47 +0200 Subject: allow access to certain client addresses or use auth_basic In-Reply-To: <20150330192102.GT29618@daoine.org> References: <55190D3D.1060503@arcor.de> <20150330192102.GT29618@daoine.org> Message-ID: <551B9907.1080809@arcor.de> Thanks a lot! On 30.03.2015 21:21, Francis Daly wrote: > On Mon, Mar 30, 2015 at 10:45:49AM +0200, basti wrote: > > Hi there, > >> is there a way to do the following in nginx server or location config. >> >> 1. allow access to certain client addresses >> 2. if the ip is not in the list, allow access by ngx_http_auth_basic_module > Yes. > > http://nginx.org/r/satisfy > > f From nginx-forum at nginx.us Wed Apr 1 10:51:34 2015 From: nginx-forum at nginx.us (cubicdaiya) Date: Wed, 01 Apr 2015 06:51:34 -0400 Subject: about proxy_request_buffering In-Reply-To: <20150401001824.GM88631@mdounin.ru> References: <20150401001824.GM88631@mdounin.ru> Message-ID: Hello. > 2015-04-01 9:18 GMT+09:00 Maxim Dounin : > Yes, it is not currently possible to switch off proxy_request_buffering > when using SPDY. Thanks. My question was resolved.
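As the thread notes, the buffering warning only appears when SPDY is enabled on the listener. A minimal sketch of the same upload location on a plain TLS listener, where `proxy_request_buffering off` is honored (server and upstream names taken from the thread's example):

```nginx
# Without "spdy" on the listen socket, unbuffered uploads work
# (as of the nginx versions discussed in this thread).
server {
    listen 443 ssl;            # note: no "spdy" here
    server_name example.com;

    location /upload {
        proxy_request_buffering off;       # request body streamed to upstream
        proxy_pass http://upload_backend;
    }
}
```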
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257731,257784#msg-257784 From nginx-forum at nginx.us Wed Apr 1 11:10:00 2015 From: nginx-forum at nginx.us (patrickshan) Date: Wed, 01 Apr 2015 07:10:00 -0400 Subject: about proxy_request_buffering In-Reply-To: <20150401001824.GM88631@mdounin.ru> References: <20150401001824.GM88631@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Sun, Mar 29, 2015 at 01:51:15AM -0400, cubicdaiya wrote: > > > Hello. > > > > Though I'm trying to apply 'proxy_request_buffering off;' for > unbuffered > > uploading, > > The buffering still seems to be enabled from error.log. > > > > # nginx.conf > > > > server { > > listen 443 ssl spdy; > > server_name example.com; > > > > location /upload { > > proxy_request_buffering off; > > proxy_pass http://upload_backend; > > } > > } > > > > # error.log > > > > 2015/03/29 14:02:20 [warn] 6965#0: *1 a client request body is > buffered to a > > temporary file /etc/nginx/client_body_temp/0000000001, client: > x.x.x.x, > > server: example.com, request: "POST /upload HTTP/1.1", host: > "example.com" > > > > > > The warning above is not output when SPDY is not enabled. > > > > Is proxy_request_buffering always enabled when SPDY is enabled? > > Yes, it is not currently possible to switch off > proxy_request_buffering > when using SPDY. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks for confirming this. It would be nice if we can have this documented here: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering . 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257731,257785#msg-257785 From mdounin at mdounin.ru Wed Apr 1 12:55:24 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 Apr 2015 15:55:24 +0300 Subject: about proxy_request_buffering In-Reply-To: References: <20150401001824.GM88631@mdounin.ru> Message-ID: <20150401125524.GQ88631@mdounin.ru> Hello! On Wed, Apr 01, 2015 at 07:10:00AM -0400, patrickshan wrote: > Maxim Dounin Wrote: > > > On Sun, Mar 29, 2015 at 01:51:15AM -0400, cubicdaiya wrote: [...] > > > Is proxy_request_buffering always enabled when SPDY is enabled? > > > > Yes, it is not currently possible to switch off > > proxy_request_buffering > > when using SPDY. > > Thanks for confirming this. It would be nice if we can have this documented > here: > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering > . Rather, at http://nginx.org/en/docs/http/ngx_http_spdy_module.html#bugs. Yes, we'll consider this, thanks. -- Maxim Dounin http://nginx.org/ From cole.putnamhill at comcast.net Wed Apr 1 19:05:41 2015 From: cole.putnamhill at comcast.net (Cole Tierney) Date: Wed, 1 Apr 2015 15:05:41 -0400 Subject: 2 maps for one 1 variable? In-Reply-To: References: Message-ID: Hello, Is it possible to use more than one map directive with a single variable? I tried but it seems the second map over writes any value set by the 1st map even if there is no match in the 2nd map. I tried leaving out the default value in the second map. ? Cole From steve at greengecko.co.nz Wed Apr 1 19:23:24 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 02 Apr 2015 08:23:24 +1300 Subject: 2 maps for one 1 variable? In-Reply-To: References: Message-ID: <1427916204.3304.280.camel@steve-new> On Wed, 2015-04-01 at 15:05 -0400, Cole Tierney wrote: > Hello, > > Is it possible to use more than one map directive with a single variable? 
I tried but it seems the second map overwrites any value set by the 1st map even if there is no match in the 2nd map. I tried leaving out the default value in the second map. > > -- > Cole You can link 2 maps together by setting the default value for the second map as the result from the first map. Unique names will be required though. Does that fit with what you're trying to do? -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From cole.putnamhill at comcast.net Wed Apr 1 20:02:18 2015 From: cole.putnamhill at comcast.net (Cole Tierney) Date: Wed, 1 Apr 2015 16:02:18 -0400 Subject: 2 maps for one 1 variable? In-Reply-To: <1427916204.3304.280.camel@steve-new> References: <1427916204.3304.280.camel@steve-new> Message-ID: On Apr 1, 2015, at 3:23 PM, Steve Holdoway wrote: > > On Wed, 2015-04-01 at 15:05 -0400, Cole Tierney wrote: >> Hello, >> >> Is it possible to use more than one map directive with a single variable? I tried but it seems the second map overwrites any value set by the 1st map even if there is no match in the 2nd map. I tried leaving out the default value in the second map. >> >> -- >> Cole > You can link 2 maps together by setting the default value for the second > map as the result from the first map. Unique names will be required > though. > > Does that fit with what you're trying to do? That does work. Thanks! From cole.putnamhill at comcast.net Wed Apr 1 20:25:42 2015 From: cole.putnamhill at comcast.net (Cole Tierney) Date: Wed, 1 Apr 2015 16:25:42 -0400 Subject: shellshock probing In-Reply-To: <1427916204.3304.280.camel@steve-new> References: <1427916204.3304.280.camel@steve-new> Message-ID: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> Hello, I'm seeing lots of shellshock probing in my access logs. My server's not vulnerable, but my logs are filling up with 404s. The requests are for random CGI scripts.
The referer and user_agents are the same and always start with () { :; }; followed by curl or wget to a remote perl script piped to perl locally. I'd like to return 444 for these. I'm currently using a couple of maps to set a variable $drop. What would be the most efficient way to test for the initial "() { :; };" at the beginning of these request headers? This is what I have so far: map $http_referer $drop_referer { default 0; "~^\s*\(\s*\)\s*\{[^\}]*\}\s*" 1; } map $http_user_agent $drop { default $drop_referer; "~^\s*\(\s*\)\s*\{[^\}]*\}\s*" 1; } Or is there a better method to block these? -- Cole From nginx-forum at nginx.us Wed Apr 1 20:50:49 2015 From: nginx-forum at nginx.us (mex) Date: Wed, 01 Apr 2015 16:50:49 -0400 Subject: shellshock probing In-Reply-To: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> References: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> Message-ID: <953439a699e860e3c0c5374e7d922a36.NginxMailingListEnglish@forum.nginx.org> hi cole, if implementable you could use naxsi https://github.com/nbs-system/naxsi for this, there exists a rule to detect and block shellshock-exploit-attempts: MainRule "str:() {" "msg:Possible Remote code execution through Bash CVE-2014-6271" "mz:BODY|HEADERS" "s:$ATTACK:8" id:42000393 ; see -> http://spike.nginx-goodies.com/rules/view/42000393 there is also an extended ruleset available -> https://bitbucket.org/lazy_dogtown/doxi-rules cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257792,257796#msg-257796 From dmiller at amfes.com Wed Apr 1 20:55:18 2015 From: dmiller at amfes.com (Daniel Miller) Date: Wed, 01 Apr 2015 13:55:18 -0700 Subject: Preferred method for location blocks Message-ID: What is the difference between: location /admin { } vs. location ~ /admin(/.*) { } The first seems cleaner, and I assume runs faster - but do they process differently?
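For context, a sketch of how the two forms in the question are matched, based on the documented location-selection rules (paths are illustrative, not from the thread):

```nginx
# Prefix location: selected by the longest-matching-prefix rule.
# Matches /admin, /admin/users, and also /administrator.
location /admin {
    # handling here
}

# Regex location: regex locations are checked in order of appearance and,
# when one matches, it wins over a plain prefix match. This pattern is
# unanchored, so it also matches /foo/admin/bar; and because of the (/.*)
# group it does NOT match a bare /admin with no trailing segment.
location ~ /admin(/.*) {
    # handling here
}
```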
-- Daniel From cole.putnamhill at comcast.net Wed Apr 1 21:07:29 2015 From: cole.putnamhill at comcast.net (Cole Tierney) Date: Wed, 1 Apr 2015 17:07:29 -0400 Subject: shellshock probing In-Reply-To: <953439a699e860e3c0c5374e7d922a36.NginxMailingListEnglish@forum.nginx.org> References: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> <953439a699e860e3c0c5374e7d922a36.NginxMailingListEnglish@forum.nginx.org> Message-ID: <24E5EC25-5C6B-4CAA-8D1C-8802B4FDE325@comcast.net> Thanks mex, I'll check it out. > On Apr 1, 2015, at 4:50 PM, mex wrote: > > hi cole, > > if implementable you could use naxsi https://github.com/nbs-system/naxsi > for this, there exists a rule to detect and block > shellshock-exploit-attempts: > > MainRule "str:() {" "msg:Possible Remote code execution through Bash > CVE-2014-6271" "mz:BODY|HEADERS" "s:$ATTACK:8" id:42000393 ; > > see -> http://spike.nginx-goodies.com/rules/view/42000393 > > there is also an extended ruleset available > -> https://bitbucket.org/lazy_dogtown/doxi-rules > > cheers, > > mex From dmiller at amfes.com Wed Apr 1 21:12:57 2015 From: dmiller at amfes.com (Daniel Miller) Date: Wed, 01 Apr 2015 14:12:57 -0700 Subject: Set a PHP parameter for only one location Message-ID: I have a "standard" location block for my php directives... # Pass all .php files onto a php-fpm/php-fcgi server. location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass php; } But...I want to set a php_value for a specific directory. Is there a more elegant method than duplicating all the directives for the "global" php handler above for the directory?
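One possible pattern for this is a shared include file, as discussed later in the thread. A sketch only: the include file name, the /special/ directory, and the PHP_VALUE setting are made up for illustration; PHP-FPM treats the PHP_VALUE fastcgi parameter like a per-request php.ini override:

```nginx
# php-common.conf (hypothetical include file) would contain the shared lines:
#   try_files $uri =404;
#   include fastcgi_params;
#   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#   fastcgi_pass php;

# Regex locations are tried in order of appearance, so the more
# specific one must come before the generic \.php$ block.
location ~ ^/special/.+\.php$ {
    include php-common.conf;
    fastcgi_param PHP_VALUE "upload_max_filesize=64M";  # example value
}

location ~ \.php$ {
    include php-common.conf;
}
```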
-- Daniel From nginx-forum at nginx.us Wed Apr 1 21:17:14 2015 From: nginx-forum at nginx.us (mex) Date: Wed, 01 Apr 2015 17:17:14 -0400 Subject: shellshock probing In-Reply-To: <24E5EC25-5C6B-4CAA-8D1C-8802B4FDE325@comcast.net> References: <24E5EC25-5C6B-4CAA-8D1C-8802B4FDE325@comcast.net> Message-ID: <5f06f09d395e3d1f9a2f8f56f9cebc4a.NginxMailingListEnglish@forum.nginx.org> if you have questions on naxsi, feel free to join the naxsi-discuss - ml https://groups.google.com/forum/#!forum/naxsi-discuss cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257792,257801#msg-257801 From sarah at nginx.com Wed Apr 1 22:14:01 2015 From: sarah at nginx.com (Sarah Novotny) Date: Wed, 1 Apr 2015 15:14:01 -0700 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: <0145D419-FA77-4CCE-8F99-009274A5E2ED@nginx.com> Message-ID: <94A5C9E6-869C-4B4A-848F-4531112D7935@nginx.com> > On Mar 31, 2015, at 11:39 PM, George wrote: > > thanks Sarah > > dug deeper and apparently those nginx reported header sites were behind > Google Pagespeed's service so that must of been why HTTP/2 was reported That does seem like a likely reason. .s. From gmm at csdoc.com Wed Apr 1 22:39:40 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Thu, 02 Apr 2015 01:39:40 +0300 Subject: Preferred method for location blocks In-Reply-To: References: Message-ID: <551C73AC.9040701@csdoc.com> On 01.04.2015 23:55, Daniel Miller wrote: > What is the difference between: > > location /admin { > } > > vs. > > location ~ /admin(/.*) { > } > > > The first seems cleaner, and I assume runs faster - but do they process > differently? Yes, they process differently. 
http://nginx.org/en/docs/http/ngx_http_core_module.html#location http://nginx.org/en/docs/http/request_processing.html -- Best regards, Gena From gmm at csdoc.com Wed Apr 1 22:48:11 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Thu, 02 Apr 2015 01:48:11 +0300 Subject: Scaleable NGINX Configuration In-Reply-To: References: Message-ID: <551C75AB.1000809@csdoc.com> On 02.04.2015 0:12, Daniel Miller wrote: > I have a "standard" location block for my php directives... > > # Pass all .php files onto a php-fpm/php-fcgi server. > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > include fastcgi_params; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass php; > } > > But...I want to set a php_value for a specific directory. Is there a > more elegant method than duplicating all the directives for the "global" > php handler above for the directory? > A detailed answer to your question from Igor Sysoev, creator of nginx: in English: https://www.youtube.com/watch?v=YWRYbLKsS0I Scaleable NGINX Configuration in Russian: https://events.yandex.ru/lib/talks/2392/ Scaleable NGINX Configuration -- Best regards, Gena From francis at daoine.org Wed Apr 1 23:28:26 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 2 Apr 2015 00:28:26 +0100 Subject: Set a PHP parameter for only one location In-Reply-To: References: Message-ID: <20150401232826.GV29618@daoine.org> On Wed, Apr 01, 2015 at 02:12:57PM -0700, Daniel Miller wrote:
(I can't think of any requests where your "fastcgi_split_path_info" or "fastcgi_index" directives will do anything useful.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Apr 1 23:41:57 2015 From: nginx-forum at nginx.us (carnagel) Date: Wed, 01 Apr 2015 19:41:57 -0400 Subject: $skip_cache define home page In-Reply-To: <20150323225900.GK29618@daoine.org> References: <20150323225900.GK29618@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Sun, Mar 22, 2015 at 06:35:31AM -0400, carnagel wrote: > > Hi there, > > > I understand how to skip cache on cookies, POST, query strings, urls > > containing string etc > > How do you skip cache on urls containing strings? # Don't cache uris containing the following segments if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; } > > > But how do you define the home page itself in $skip_cache please? > > What url or urls is "the home page"? the root website page http://www.mysite.com/ > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I guess another way of asking my question "How do I define bypass fastcgi cache on a page with no url string, ie: the root website page?" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257535,257806#msg-257806 From reallfqq-nginx at yahoo.fr Thu Apr 2 08:39:15 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 2 Apr 2015 10:39:15 +0200 Subject: Set a PHP parameter for only one location In-Reply-To: <20150401232826.GV29618@daoine.org> References: <20150401232826.GV29618@daoine.org> Message-ID: Do not be afraid of copy-pasting, those few kB on disk/in memory will relieve you from pain during maintenance (and it is basically how you would manage your configuration using templates). 
https://youtu.be/YWRYbLKsS0I --- *B. R.* On Thu, Apr 2, 2015 at 1:28 AM, Francis Daly wrote: > On Wed, Apr 01, 2015 at 02:12:57PM -0700, Daniel Miller wrote: > > Hi there, > > > But...I want to set a php_value for a specific directory. Is there > > a more elegant method than duplicating all the directives for the > > "global" php handler above for the directory? > > I think that "duplicating" is the elegant way. > > I suppose that you could put the four useful lines of your config into an > external file, and "include" that in both your current and new locations. > > But I'd consider that "extra level of indirection" to be less elegant. > > (I can't think of any requests where your "fastcgi_split_path_info" or > "fastcgi_index" directives will do anything useful.) > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Apr 2 09:41:45 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 2 Apr 2015 11:41:45 +0200 Subject: Callback hitting hard Message-ID: Hello, I am facing a problem on a website which has been using AJAX callbacks to report JS errors. It seems there has been something wrong going on and a tremendous number of errors are being reported through the callback. You could say the website owner crafted its own DDoS vector. Error collection has been deactivated and our cache purged so new visitors are unaffected. However it seems an old version of the cache is stored somewhere in a specific network since its gateway is responsible for a huge amount of requests. The question is: - Knowing the current version of the page is OK - Knowing we isolated the callback calls (POST, not cached) in nginx to alleviate the pain previously inflicted to the backends Is there a way (HTTP Status code?)
to tell the page making these callbacks to refresh itself? I was thinking about 205, but I am unsure if it means what I think it does. Does it merely clear forms and other types of user input in the page? For now, we are serving 204, even though the JS script does not seem to like it (requests volume increased, but nginx can handle without problem of course). "The problem is now traffic volume..." --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Apr 2 10:13:27 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 2 Apr 2015 11:13:27 +0100 Subject: $skip_cache define home page In-Reply-To: References: <20150323225900.GK29618@daoine.org> Message-ID: <20150402101327.GW29618@daoine.org> On Wed, Apr 01, 2015 at 07:41:57PM -0400, carnagel wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > On Sun, Mar 22, 2015 at 06:35:31AM -0400, carnagel wrote: Hi there, > > > I understand how to skip cache on cookies, POST, query strings, urls > > > containing string etc > > > > How do you skip cache on urls containing strings? > > # Don't cache uris containing the following segments > if ($request_uri ~* > "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { > set $skip_cache 1; > } If that works, then if ($request_uri = "/") { set $skip_cache 1; } should work the same way. > I guess another way of asking my question "How do I define bypass fastcgi > cache on a page with no url string, ie: the root website page?"
location = / { # the configuration here is for the root website page } f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Apr 2 11:21:56 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 02 Apr 2015 07:21:56 -0400 Subject: shellshock probing In-Reply-To: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> References: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> Message-ID: <2cda77a803291bd78e4c8693da669414.NginxMailingListEnglish@forum.nginx.org> Cole Tierney Wrote: ------------------------------------------------------- > Or is there a better method to block these? Not really better but good enough :) map $http_referer $waffableref { default 0; ~*\{.*\:\; 1; } map $http_user_agent $waffableua { default 0; ~*\{.*\:\; 1; } map $waffableref$waffableua $waffable { default 0; ~1 1; } # Block shellshock: if ($waffable) { return 444; } # Drop'm from logging: map $waffable $loggable { default 1; ~1 0; } access_log /path/to/access.log combined if=$loggable; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257792,257814#msg-257814 From nginx-forum at nginx.us Thu Apr 2 11:39:11 2015 From: nginx-forum at nginx.us (patrickshan) Date: Thu, 02 Apr 2015 07:39:11 -0400 Subject: about proxy_request_buffering In-Reply-To: <20150401125524.GQ88631@mdounin.ru> References: <20150401125524.GQ88631@mdounin.ru> Message-ID: <7fb9ebf7ac923c5e57a37e49ceae70ad.NginxMailingListEnglish@forum.nginx.org> K. Thanks Maxim :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257731,257815#msg-257815 From reallfqq-nginx at yahoo.fr Thu Apr 2 12:32:57 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 2 Apr 2015 14:32:57 +0200 Subject: Callback hitting hard In-Reply-To: References: Message-ID: I found a way to mitigate it, if anyone interested. At first, I thought that trying to rate-limit would not provide results, as the JS callback could still retry, not caring about the returned code. 
It seems sending 503 after rate limiting (not having allowed any queue with the 'burst' parameter) is enough. --- *B. R.* On Thu, Apr 2, 2015 at 11:41 AM, B.R. wrote: > Hello, > > I am facing a problem on a website which has been using AJAX callbacks to > report JS errors. > It seems there has been sth wrong going on and a tremendous number of > errors are being reported through the callback. > You could say the website owner crafted it own DDoS vector. > > Errors collection has been deactivated and our cache purged so new > visitors are unaffected. > However it seems an old version of the cache is stored somewhere in a > specific network since its gateway is responsible for a huge amount of > requests. > > The question is: > - Knowing the current version of the page is OK > - Knowing we isolated the callback calls (POST, not cached) in nginx to > alleviate the pain previously inflicted to the backends > > Is there a way (HTTP Status code?) to tell the page making these callbacks > to refreh itself? > > I was thinking about 205, but I am unsure if it means what I think it > does. Does it merely clear forms and other types of user input in the page? > For now, We are serving 204, even though the JS script does not seem to > like it (requests volume increased, but nginx can handle without problem of > course). > ?The problem is now traffic volume...? > --- > *B. R.* > -------------- next part -------------- An HTML attachment was scrubbed... 
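The mitigation described above (immediate rejection from rate limiting, with no burst queue) might look roughly like the following sketch; the zone name, rate, callback URI, and upstream name are all assumptions, not from the thread:

```nginx
http {
    # One bucket per client address; rate chosen arbitrarily for illustration
    limit_req_zone $binary_remote_addr zone=callbacks:10m rate=1r/s;

    server {
        location = /report-js-error {   # hypothetical callback endpoint
            # No "burst" parameter: excess requests are rejected immediately
            limit_req zone=callbacks;
            limit_req_status 503;       # 503 is also the default status
            proxy_pass http://backend;
        }
    }
}
```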
URL: From cole.putnamhill at comcast.net Thu Apr 2 13:33:20 2015 From: cole.putnamhill at comcast.net (Cole Tierney) Date: Thu, 2 Apr 2015 09:33:20 -0400 Subject: shellshock probing In-Reply-To: <2cda77a803291bd78e4c8693da669414.NginxMailingListEnglish@forum.nginx.org> References: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> <2cda77a803291bd78e4c8693da669414.NginxMailingListEnglish@forum.nginx.org> Message-ID: > On Apr 2, 2015, at 7:21 AM, itpp2012 wrote: > > Cole Tierney Wrote: > ------------------------------------------------------- >> Or is there a better method to block these? > > Not really better but good enough :) > > map $http_referer $waffableref { > default 0; > ~*\{.*\:\; 1; > } > map $http_user_agent $waffableua { > default 0; > ~*\{.*\:\; 1; > } > map $waffableref$waffableua $waffable { > default 0; > ~1 1; > } > > # Block shellshock: > if ($waffable) { return 444; } > > # Drop'm from logging: > map $waffable $loggable { > default 1; > ~1 0; > } > > access_log /path/to/access.log combined if=$loggable; Thanks! I like the combined variables in the 3rd map. From reallfqq-nginx at yahoo.fr Thu Apr 2 14:48:22 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 2 Apr 2015 16:48:22 +0200 Subject: shellshock probing In-Reply-To: References: <653D5C0A-C36E-402A-8EB6-FC225B4F3248@comcast.net> <2cda77a803291bd78e4c8693da669414.NginxMailingListEnglish@forum.nginx.org> Message-ID: That is the power of the 'empty value = does nothing' logic. :o) --- *B. R.* On Thu, Apr 2, 2015 at 3:33 PM, Cole Tierney wrote: > > On Apr 2, 2015, at 7:21 AM, itpp2012 wrote: > > > > Cole Tierney Wrote: > > ------------------------------------------------------- > >> Or is there a better method to block these? 
> > > > Not really better but good enough :) > > > > map $http_referer $waffableref { > > default 0; > > ~*\{.*\:\; 1; > > } > > map $http_user_agent $waffableua { > > default 0; > > ~*\{.*\:\; 1; > > } > > map $waffableref$waffableua $waffable { > > default 0; > > ~1 1; > > } > > > > # Block shellshock: > > if ($waffable) { return 444; } > > > > # Drop'm from logging: > > map $waffable $loggable { > > default 1; > > ~1 0; > > } > > > > access_log /path/to/access.log combined if=$loggable; > > Thanks! I like the combined variables in the 3rd map. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 2 19:22:29 2015 From: nginx-forum at nginx.us (josephmc5) Date: Thu, 02 Apr 2015 15:22:29 -0400 Subject: undefined symbol: ldap_init_fd In-Reply-To: <5348F464.80307@laimbock.com> References: <5348F464.80307@laimbock.com> Message-ID: <3a4d03f868dbb5e3efece0ed54de7259.NginxMailingListEnglish@forum.nginx.org> Patrick Laimbock Wrote: ------------------------------------------------------- > On 04/11/2014 10:48 PM, allamm78 wrote: > > I successfully compile Nginx with Nginx-auth-ldap and when I start > Nginx , > > the worker process turn defunct and I see - > > > > nginx: worker process: symbol lookup error: nginx: worker process: > undefined > > symbol: ldap_init_fd > > > > in the error logs without able to utilize ldap, what could be wrong > here? > > I have never seen that error before. 
The revision that compiles and > works fine for me with nginx 1.4.7 and openldap-2.4.39 is this one: > > https://github.com/kvspb/nginx-auth-ldap/tree/ee45bc4898d70770e06af9fe > 0a8c0088b4cb9f26 > > HTH, > Patrick > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx allamm78, did you find a resolution? I'm having this issue with nginx 1.6.2 and the latest nginx-auth-ldap. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249240,257823#msg-257823 From nginx-forum at nginx.us Thu Apr 2 20:47:49 2015 From: nginx-forum at nginx.us (josephmc5) Date: Thu, 02 Apr 2015 16:47:49 -0400 Subject: undefined symbol: ldap_init_fd In-Reply-To: <3a4d03f868dbb5e3efece0ed54de7259.NginxMailingListEnglish@forum.nginx.org> References: <5348F464.80307@laimbock.com> <3a4d03f868dbb5e3efece0ed54de7259.NginxMailingListEnglish@forum.nginx.org> Message-ID: <01ddf485250623e0e58c8c76a2c1261d.NginxMailingListEnglish@forum.nginx.org> josephmc5 Wrote: ------------------------------------------------------- > Patrick Laimbock Wrote: > ------------------------------------------------------- > > On 04/11/2014 10:48 PM, allamm78 wrote: > > > I successfully compile Nginx with Nginx-auth-ldap and when I > start > > Nginx , > > > the worker process turn defunct and I see - > > > > > > nginx: worker process: symbol lookup error: nginx: worker > process: > > undefined > > > symbol: ldap_init_fd > > > > > > in the error logs without able to utilize ldap, what could be > wrong > > here? > > > > I have never seen that error before. 
The revision that compiles and > > > works fine for me with nginx 1.4.7 and openldap-2.4.39 is this one: > > > > > https://github.com/kvspb/nginx-auth-ldap/tree/ee45bc4898d70770e06af9fe > > > 0a8c0088b4cb9f26 > > > > HTH, > > Patrick > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > allamm78, > > did you find a resolution? I'm having this issue with nginx 1.6.2 and > the latest nginx-auth-ldap. The answer for me to fix this error was to upgrade openldap to the latest release (not version) https://bugzilla.redhat.com/show_bug.cgi?id=655133 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249240,257824#msg-257824 From nginx-forum at nginx.us Fri Apr 3 22:12:22 2015 From: nginx-forum at nginx.us (kevinff) Date: Fri, 03 Apr 2015 18:12:22 -0400 Subject: 1.7 major caching change? Message-ID: Hello! Until version 1.6.2 we had a configuration which was working fine to force people to hit the cache: proxy_set_header Cookie ""; proxy_hide_header Set-Cookie; proxy_ignore_headers Expires Cache-Control Set-Cookie; proxy_cache_valid 3m; and we use a specific proxy_cache_bypass so we BYPASS the cache by ourselves periodically to refresh the cache (under 3m of course). The thing is, after upgrading to 1.7, it is broken, the bypass still works, but there are some requests not hitting the cache. 03/Apr/2015:21:46:09 +0000 "GET / HTTP/1.1" 200 [...] BYPASS <-- me refreshing the cache [..] 03/Apr/2015:21:47:21 +0000 "GET / HTTP/1.1" 200 [...] EXPIRED <-- random visitor 03/Apr/2015:21:47:29 +0000 "GET / HTTP/1.1" 200 [...] EXPIRED <-- random visitor Not only it is not hitting the first cached page, but then it should even hit the previous visitor cache because we use proxy_cache_use_stale updating Are there some new headers we need to ignore in 1.7, or any other major change? 
I've looked at the cache folder and I see it creating different files with different filenames/paths but the KEY is the same.. I had tried it with 1.7.9 and we faced the issue so we reverted back to 1.6, I've just tried 1.7.11 and we still have the problem.. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257827,257827#msg-257827 From nginx-forum at nginx.us Fri Apr 3 22:24:33 2015 From: nginx-forum at nginx.us (kevinff) Date: Fri, 03 Apr 2015 18:24:33 -0400 Subject: 1.7 major caching change? In-Reply-To: References: Message-ID: Never mind, I think I found it: nginx takes into account the Vary header for caching since 1.7.7. Hope it helps somebody else. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257827,257828#msg-257828 From nginx-forum at nginx.us Sun Apr 5 13:25:37 2015 From: nginx-forum at nginx.us (nanochelandro) Date: Sun, 05 Apr 2015 09:25:37 -0400 Subject: How often ssl_stapling_file picks up an updated file? Message-ID: <44d70aa6c3b3ce721955d535e1394c27.NginxMailingListEnglish@forum.nginx.org> Hey all. Before I file a bug report I'd like to consult with the community to make sure whether I get the whole thing right. I use ssl_stapling_file and update that file daily. Today I discovered that one of my SSL websites returns an outdated OCSP response, not the one which is in the OCSP stapling file: > openssl s_client -connect xxxx:443 -tls1 -tlsextdebug -status ... Cert Status: good This Update: Mar 26 06:05:34 2015 GMT Next Update: Mar 28 06:05:34 2015 GMT Today is April 5. I checked the OCSP file, it's fresh (April 4), has correct permissions, readable by nginx, etc.
Cert Status: good
This Update: Apr  4 04:19:53 2015 GMT
Next Update: Apr  6 04:19:53 2015 GMT

I run a dozen SSL websites with ssl_stapling_file but never had to HUP nginx to pick up an updated file (or at least I never noticed the issue, even in Firefox, which is very picky regarding OCSP).

Is that a bug (1.7.11) or did I do it wrong all the time? :)

Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257831,257831#msg-257831

From reallfqq-nginx at yahoo.fr Sun Apr 5 19:16:43 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 5 Apr 2015 21:16:43 +0200
Subject: How often ssl_stapling_file picks up an updated file?
In-Reply-To: <44d70aa6c3b3ce721955d535e1394c27.NginxMailingListEnglish@forum.nginx.org>
References: <44d70aa6c3b3ce721955d535e1394c27.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

If nginx manages those files like the others (like logs), it (re)opens them on reload/restart. You might tweak your updating script to also send a HUP signal to nginx. It would be recommended to check the error log on reload, as errors (if any) will appear there.

You might also simply use the ssl_stapling directive, with which nginx will manage the cache of the received OCSP answer in memory by itself. Why aren't you using this method?
---
*B. R.*

On Sun, Apr 5, 2015 at 3:25 PM, nanochelandro wrote:

> Hey all.
> Before I file a bug report I'd like to consult with the community to make
> sure I get the whole thing right.
>
> I use ssl_stapling_file and update that file daily.
> Today I discovered that one of my SSL websites returns an outdated OCSP
> response, not the one in the OCSP stapling file:
>
> > openssl s_client -connect xxxx:443 -tls1 -tlsextdebug -status
> ...
> Cert Status: good
> This Update: Mar 26 06:05:34 2015 GMT
> Next Update: Mar 28 06:05:34 2015 GMT
>
> Today is April 5. I checked the OCSP file: it's fresh (April 4), has correct
> permissions, is readable by nginx, etc.
> Then I reloaded nginx (HUP) and boom: > > > openssl s_client -connect xxxx:443 -tls1 -tlsextdebug -status > ... > Cert Status: good > This Update: Apr 4 04:19:53 2015 GMT > Next Update: Apr 6 04:19:53 2015 GMT > > > I run a dozen of SSL websites with ssl_stapling_file but never had to HUP > nginx to pick up an updated file (or at least I never noticed the issue > (even in FireFox which is very picky regarding OCSP)). > > Is that a bug (1.7.11) or did I do it wrong all the time? :) > > Thanks. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257831,257831#msg-257831 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Apr 6 03:26:19 2015 From: nginx-forum at nginx.us (bughunter) Date: Sun, 05 Apr 2015 23:26:19 -0400 Subject: How to enable OCSP stapling when default server is self-signed? Message-ID: <1ecc20d791aac9c1adc6652d83da9785.NginxMailingListEnglish@forum.nginx.org> My web server is intentionally set up to only support virtual hosts and TLS SNI. I know that the latter eliminates some ancient web browsers but I don't care about those browsers. I want to enable OCSP stapling and it seems to be configured correctly in my test vhost (everything else about SSL already works fine - I get an A on the Qualys SSL Labs test) and there are no errors or warnings but "openssl s_client" always returns: "OCSP response: no response sent" Yes, I ran the s_client command multiple times to account for the nginx responder delay. I was testing OCSP stapling on just one of my domains. 
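For reference, the vhost under test follows the usual per-server stapling shape — a minimal sketch with illustrative paths and names, not the actual configuration from the pastebin:

```nginx
server {
    listen 443 ssl;
    server_name mydomain.org;  # illustrative name

    # The certificate file must contain the leaf plus intermediates.
    ssl_certificate     /etc/ssl/mydomain.org.chained.crt;
    ssl_certificate_key /etc/ssl/mydomain.org.key;

    ssl_stapling on;
    ssl_stapling_verify on;
    # Chain used to verify the OCSP response when ssl_stapling_verify is on.
    ssl_trusted_certificate /etc/ssl/mydomain.org.ca-chain.crt;

    resolver 127.0.0.1;  # needed so nginx can resolve the responder host
}
```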
Then I read that the 'default_server' SSL server also has to have OCSP stapling enabled for vhost OCSP stapling to work: https://gist.github.com/konklone/6532544 This is a huge problem if I want to enable OCSP for my vhosts because my 'default_server' certificate is self-signed (intentional) and running 'configtest' with 'ssl_stapling' options on the default server, of course, results in a warning: "nginx: [warn] "ssl_stapling" ignored, issuer certificate not found" Which indicates that it isn't enabled on the default server and subsequent s_client tests (after reloading the config, which, of course, issued the same warning a second time) on the test vhost confirm that there was still no OCSP stapling. It was a long-shot in the first place. So how do I enable OCSP stapling for my vhosts when the default server cert is self-signed? This seems like a potential bug in the nginx SSL module. Other useful info: Running nginx 1.6.2 (Stable) built from source. My 'resolver 127.0.0.1' line in my config points at a local BIND9 server that 'dig myvhostdomain.com @localhost' confirms is working just fine - so it isn't a DNS resolver issue as far as I can tell. The error logs are quiet other than the warning I got when I added the OCSP stapling options to 'default_server'. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,257833#msg-257833 From emailbuilder88 at yahoo.com Mon Apr 6 04:39:01 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Sun, 5 Apr 2015 21:39:01 -0700 Subject: Basic HTTP auth prompting too many times Message-ID: <1428295141.60002.YahooMailBasic@web142402.mail.bf1.yahoo.com> Hello, I have set up HTTP auth using the auth_pam module (although I'm not sure that module is the problem - it might be nginx problem). https://github.com/stogh/ngx_http_auth_pam_module/ All works great for a while---- After some time, browsers begin to prompt for authentication over and over again (I guess once for every image, stylesheet, script, etc?). 
Or maybe it is prompting because the credentials failed, but I don't think so, because if I hit cancel/ESC over and over again I can use the web page (I'm still authenticated), but none of the images or scripts have loaded.

The logs don't show any indication of what caused the prompts to start showing. Previously, I saw this error (in nginx's error log) associated with the situation:

Can't initialize threads: error 11

This looks a little like a MySQL error (I use pam_mysql behind auth_pam). I don't know if there is some bad code in auth_pam causing this(?). Restarting nginx fixed the prompting in this case.

However, today the prompting started again and the above error does NOT appear. I don't see any errors. Browser doesn't matter - tried it on firefox, mobile, whatever.

I will try to test with just the built-in basic auth, but that's not a long-term solution for me; I need pam/mysql behind the auth (lots of virtual users).

From emailbuilder88 at yahoo.com Mon Apr 6 04:55:40 2015
From: emailbuilder88 at yahoo.com (E.B.)
Date: Sun, 5 Apr 2015 21:55:40 -0700
Subject: Endless HTTP auth attempts without 5xx error? [was: Basic HTTP auth prompting too many times]
In-Reply-To: <1428295141.60002.YahooMailBasic@web142402.mail.bf1.yahoo.com>
Message-ID: <1428296140.32295.YahooMailBasic@web142401.mail.bf1.yahoo.com>

by the way, i changed to nginx basic_auth and when I enter wrong credentials, it allows me endless tries. i'm used to apache giving a 5xx page after three bad tries. i guess you could refresh that and try again in apache too, but endless tries without an error for nginx? is there a way to change this?

> I have set up HTTP auth using the auth_pam module (although
> I'm not sure that module is the problem - it might be nginx
> problem).
>
> https://github.com/stogh/ngx_http_auth_pam_module/
>
> All works great for a while----
>
> After some time, browsers begin to prompt for authentication over
> and over again (I guess once for every image, stylesheet, script, etc?).
> Or maybe it is prompting because the credentials failed, but I don't
> think so because if I hit cancel/ESC over and over again, I can use
> the web page (I'm still authenticated), but none of the images or
> scripts have loaded.
>
> The logs don't show any indication of what caused the prompts to
> start showing. Previously, I saw this error (in nginx's error log)
> associated with the situation:
>
> Can't initialize threads: error 11
>
> This looks a little like a MySQL error (I use pam_mysql behind
> auth_pam). I don't know if there is some bad code in auth_pam
> causing this(?). Restarting nginx fixed the prompting in this case.
>
> However, today, the prompting started again and the above error
> does NOT appear. I don't see any errors. Browser doesn't matter -
> tried it on firefox, mobile, whatever.
>
> I will try to test with just the built-in basic auth but that's not a long
> term solution for me, I need pam/mysql behind the auth (lot of virt
> users).

From emailbuilder88 at yahoo.com Mon Apr 6 05:09:13 2015
From: emailbuilder88 at yahoo.com (E.B.)
Date: Sun, 5 Apr 2015 22:09:13 -0700
Subject: Endless HTTP auth attempts without 5xx error? [was: Basic HTTP auth prompting too many times]
In-Reply-To: <1428296140.32295.YahooMailBasic@web142401.mail.bf1.yahoo.com>
Message-ID: <1428296953.12613.YahooMailBasic@web142405.mail.bf1.yahoo.com>

Sorry, I guess I meant 401 instead of 5xx? Well, same question though.

--------------------------------------------
by the way, i changed to nginx basic_auth and when I enter wrong
credentials, it allows me endless tries. i'm used to apache giving a 5xx
page after three bad tries. i guess you could refresh that and try again
in apache too, but endless tries without an error for nginx? is there a
way to change this?

> I have set up HTTP auth using the auth_pam module (although
> I'm not sure that module is the problem - it might be nginx
> problem).
>
> https://github.com/stogh/ngx_http_auth_pam_module/
>
> All works great for a while----
>
> After some time, browsers begin to prompt for authentication over
> and over again (I guess once for every image, stylesheet, script, etc?).
> Or maybe it is prompting because the credentials failed, but I don't
> think so because if I hit cancel/ESC over and over again, I can use
> the web page (I'm still authenticated), but none of the images or
> scripts have loaded.
>
> The logs don't show any indication of what caused the prompts to
> start showing. Previously, I saw this error (in nginx's error log)
> associated with the situation:
>
> Can't initialize threads: error 11
>
> This looks a little like a MySQL error (I use pam_mysql behind
> auth_pam). I don't know if there is some bad code in auth_pam
> causing this(?). Restarting nginx fixed the prompting in this case.
>
> However, today, the prompting started again and the above error
> does NOT appear. I don't see any errors. Browser doesn't matter -
> tried it on firefox, mobile, whatever.
>
> I will try to test with just the built-in basic auth but that's not a long
> term solution for me, I need pam/mysql behind the auth (lot of virt
> users).

From mdounin at mdounin.ru Mon Apr 6 19:20:58 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 6 Apr 2015 22:20:58 +0300
Subject: How to enable OCSP stapling when default server is self-signed?
In-Reply-To: <1ecc20d791aac9c1adc6652d83da9785.NginxMailingListEnglish@forum.nginx.org>
References: <1ecc20d791aac9c1adc6652d83da9785.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150406192058.GP88631@mdounin.ru>

Hello!
On Sun, Apr 05, 2015 at 11:26:19PM -0400, bughunter wrote: > My web server is intentionally set up to only support virtual hosts and TLS > SNI. I know that the latter eliminates some ancient web browsers but I > don't care about those browsers. > > I want to enable OCSP stapling and it seems to be configured correctly in my > test vhost (everything else about SSL already works fine - I get an A on the > Qualys SSL Labs test) and there are no errors or warnings but "openssl > s_client" always returns: > > "OCSP response: no response sent" > > Yes, I ran the s_client command multiple times to account for the nginx > responder delay. I was testing OCSP stapling on just one of my domains. > Then I read that the 'default_server' SSL server also has to have OCSP > stapling enabled for vhost OCSP stapling to work: > > https://gist.github.com/konklone/6532544 There is no such a requirement. > This is a huge problem if I want to enable OCSP for my vhosts because my > 'default_server' certificate is self-signed (intentional) and running > 'configtest' with 'ssl_stapling' options on the default server, of course, > results in a warning: > > "nginx: [warn] "ssl_stapling" ignored, issuer certificate not found" > > Which indicates that it isn't enabled on the default server and subsequent > s_client tests (after reloading the config, which, of course, issued the > same warning a second time) on the test vhost confirm that there was still > no OCSP stapling. It was a long-shot in the first place. This warning indicates that you've tried to enable OCSP Stapling for a server with a certificate whose issuer certificate cannot be found, therefore the "ssl_stapling" directive was ignored for the server. To avoid seeing the warning on each start, consider switching off ssl_stapling for the server{} block in question. > So how do I enable OCSP stapling for my vhosts when the default server cert > is self-signed? This seems like a potential bug in the nginx SSL module. 
Just enable ssl_stapling in appropriate server{} blocks.

--
Maxim Dounin
http://nginx.org/

From igal at lucee.org Mon Apr 6 19:23:50 2015
From: igal at lucee.org (Igal @ Lucee.org)
Date: Mon, 06 Apr 2015 12:23:50 -0700
Subject: SSL cert issues with mobile devices
Message-ID: <5522DD46.4050507@lucee.org>

I have an issue with my SSL certificate on some mobile devices, e.g. Safari on iPhone and Firefox on Android. Everything seems to be fine with desktop browsers as well as some mobile browsers (it works fine in Chrome on Android).

According to ssllabs.com the issue is with the Certificate Chain and/or the Certification Path:

This server's certificate chain is incomplete. Grade capped to B.

Certificates provided: 1 (1331 bytes)
Chain issues: *Incomplete*

Certification Paths
Path #1: Trusted
*1* Sent by server     www.mydomainname.com                          RSA 2048 bits (e 65537) / SHA256withRSA
*2* Extra download     Go Daddy Secure Certificate Authority - G2    RSA 2048 bits (e 65537) / SHA256withRSA
*3* In trust store     Go Daddy Root Certificate Authority - G2      Self-signed    RSA 2048 bits (e 65537) / SHA256withRSA

Here are my ssl settings:

server {
    ### other settings omitted
    listen localhost.mydomainname:443 ssl;

    ssl_certificate_key C:/ssl-certificates/mydomainname.key;  ## may be stored in certificate file (i.e. .pem)
    ssl_certificate C:/ssl-certificates/mydomainname.crt;  ## .crt or .pem
    ssl_trusted_certificate C:/ssl-certificates/gd_bundle-g2-g1.crt;
    ssl_stapling on;
    ssl_stapling_verify on;

    keepalive_timeout 70;  ## minimize ssl handshake overhead
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;  ## removes SSLv3 which is on by default and is vulnerable to POODLE attacks
    ssl_prefer_server_ciphers on;
}

How can I fix this? TIA!

--
Igal Sapir
Lucee Core Developer
Lucee.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From delphij at delphij.net Mon Apr 6 19:32:44 2015
From: delphij at delphij.net (Xin Li)
Date: Mon, 06 Apr 2015 12:32:44 -0700
Subject: SSL cert issues with mobile devices
In-Reply-To: <5522DD46.4050507@lucee.org>
References: <5522DD46.4050507@lucee.org>
Message-ID: <5522DF5C.8000302@delphij.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 04/06/15 12:23, Igal @ Lucee.org wrote:
> I have an issue with my SSL certificate on some mobile devices,
> e.g. Safari on iPhone and Firefox on Android. Everything seems to
> be fine with desktop browsers as well as some mobile browsers
> (works fine on Chrome on Android).
>
> According to ssllabs.com the issue is with the Certificate Chain
> and/or the Certification Path:
>
> This server's certificate chain is incomplete. Grade capped to B.
>
> Certificates provided 1 (1331 bytes) Chain issues *Incomplete*
[...]
> ssl_certificate C:/ssl-certificates/mydomainname.crt; ## .crt
> or .pem

You need to get a copy of your intermediate certificate authority's certificate (in your case, that Go Daddy Secure Certificate Authority - G2, probably https://certs.godaddy.com/repository/gdig2.crt; check https://certs.godaddy.com/repository to make sure) and concatenate it at the end of your mydomainname.crt.

This way you are presenting a chain of certificates (your certificate, then the intermediate certificate that has signed your certificate; you don't need to include the root certificate, as that would be a waste of bandwidth) to the client.

Cheers,
- --
Xin LI    https://www.delphij.net/
FreeBSD - The Power to Serve!
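The concatenation step is just appending one PEM file to another, leaf certificate first. A minimal sketch with stand-in file contents so the ordering is easy to see (the real files would hold PEM blocks, and the file names are illustrative):

```shell
# Stand-ins for the real PEM files; substitute your actual certificate paths.
printf 'LEAF CERT\n' > mydomainname.crt
printf 'INTERMEDIATE CERT\n' > gdig2.crt

# Leaf first, then intermediate(s); the root is deliberately omitted.
cat mydomainname.crt gdig2.crt > mydomainname.chained.crt

# Point the ssl_certificate directive at mydomainname.chained.crt afterwards.
cat mydomainname.chained.crt
```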
Live free or die -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.1.2 (FreeBSD) iQIcBAEBCgAGBQJVIt9cAAoJEJW2GBstM+nsXBoP/3QrCKtBezAh51tym2BeVyRA eShkpqu8Sfh+hzXJZYuLd/1l/IIW8H40LU8bdrXcuyZ6RUl/damrlb2z7qx7PXAU MRBb2YF7ohicdzwzcpMDSnpx+573WeSdDE6s9Ne/EPbotHr8meHv+L83O3qR+D9u b4kfhasvhRYz1rUgGXj/66h1S1ExTu1Pp9MJJfxZ1e/2l3TNkJLRE4A8flUy2rq4 rXzjiWupoyBWXtNem8t0o3Caag+7W6bvj9k8EqNDA87G575o8p4QuEt/ImoC82Bi ZHtcM9fGt5m6120DX7eTjfEcaaM9xfqACLrhtuBEQs4u1EPAZ1CwBkF2ONfW+sWZ qyZtjfVlvkiF4hWi0N2vAW9apsFGT6MA2cv4bBO7EQAjmTmJdZt3oV6VCzOdM8AZ cBbsm+jlc0LmGc2OWP59G+8loJYmI4dRPDkHB34TphavjrAfilewlEoLh3xNBT9b pkMU7R1Xa1DOxL5+xPhfJlFEHLQEzFc9T0e1BXtNVNw0WZrpRzRRMjVJKp1ei3Mf AyCoWUK2fIK7ZFMNCFu94G4S8KGhn092HqtjJKj/0ps9kTW5oPBdxf5WfFJvfEQy HU08D7MIDdBBMZEZxHGx9zMUvRU6Ip6Iu4puiq/0/mzr4LwKJMfEiFtpL9wlSRpm D599/YGgDOC/An2bAx89 =ItVn -----END PGP SIGNATURE----- From igal at lucee.org Mon Apr 6 20:37:57 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Mon, 06 Apr 2015 13:37:57 -0700 Subject: SSL cert issues with mobile devices In-Reply-To: <5522DF5C.8000302@delphij.net> References: <5522DD46.4050507@lucee.org> <5522DF5C.8000302@delphij.net> Message-ID: <5522EEA5.60907@lucee.org> Thank you Xin! I appended gdig2.crt to my domain's certificate, and commented out the ssl_trusted_certificate and the ssl_stapling directives, and it did the trick. Many thanks, Igal Sapir Lucee Core Developer Lucee.org On 4/6/2015 12:32 PM, Xin Li wrote: > On 04/06/15 12:23, Igal @ Lucee.org wrote: > > I have an issue with my SSL certificate on some mobile devices, > > e.g. Safari on iPhone and Firefox on Android. Everything seems to > > be fine with desktop browsers as well as some mobile browsers > > (works fine on Chrome on Android). > > > According to ssllabs.com the issue is with the Certificate Chain > > and/or the Certification Path: > > > This server's certificate chain is incomplete. Grade capped to B. > > > Certificates provided 1 (1331 bytes) Chain issues *Incomplete* > [...] 
> > ssl_certificate C:/ssl-certificates/mydomainname.crt; ## .crt > > or .pem > > You need to get a copy of your intermediate certificate authority's > certificate (in your case, that Go Daddy Secure Certificate Authority > - G2 or probably https://certs.godaddy.com/repository/gdig2.crt, check > https://certs.godaddy.com/repository to make sure) and concatnate it > at the end of your mydomainname.crt. > > This way you are presenting a chain of certificate (your certificate, > then intermediate certificate that have signed your certificate; you > don't need to include the root certificate as it's a waste of > bandwidth) to the client. > > Cheers, > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 7 04:26:23 2015 From: nginx-forum at nginx.us (bughunter) Date: Tue, 07 Apr 2015 00:26:23 -0400 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <20150406192058.GP88631@mdounin.ru> References: <20150406192058.GP88631@mdounin.ru> Message-ID: <2f79af7ba0865f6f2ef8a975e0ea4cc3.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Sun, Apr 05, 2015 at 11:26:19PM -0400, bughunter wrote: > > > My web server is intentionally set up to only support virtual hosts > and TLS > > SNI. I know that the latter eliminates some ancient web browsers > but I > > don't care about those browsers. 
> > > > I want to enable OCSP stapling and it seems to be configured > correctly in my > > test vhost (everything else about SSL already works fine - I get an > A on the > > Qualys SSL Labs test) and there are no errors or warnings but > "openssl > > s_client" always returns: > > > > "OCSP response: no response sent" > > > > Yes, I ran the s_client command multiple times to account for the > nginx > > responder delay. I was testing OCSP stapling on just one of my > domains. > > Then I read that the 'default_server' SSL server also has to have > OCSP > > stapling enabled for vhost OCSP stapling to work: > > > > https://gist.github.com/konklone/6532544 > > There is no such a requirement. > > > This is a huge problem if I want to enable OCSP for my vhosts > because my > > 'default_server' certificate is self-signed (intentional) and > running > > 'configtest' with 'ssl_stapling' options on the default server, of > course, > > results in a warning: > > > > "nginx: [warn] "ssl_stapling" ignored, issuer certificate not found" > > > > Which indicates that it isn't enabled on the default server and > subsequent > > s_client tests (after reloading the config, which, of course, issued > the > > same warning a second time) on the test vhost confirm that there was > still > > no OCSP stapling. It was a long-shot in the first place. > > This warning indicates that you've tried to enable OCSP Stapling > for a server with a certificate whose issuer certificate cannot be > found, therefore the "ssl_stapling" directive was ignored for the > server. To avoid seeing the warning on each start, consider > switching off ssl_stapling for the server{} block in question. As I explained, I enabled it as a long-shot. I was expecting to get a warning and I did. I have, of course, disabled it for the default server section. > > So how do I enable OCSP stapling for my vhosts when the default > server cert > > is self-signed? This seems like a potential bug in the nginx SSL > module. 
> > Just enable ssl_stapling in appropriate server{} blocks. As far as I can tell, I'm already doing that: http://pastebin.com/Ymb5hxDP Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,257850#msg-257850 From nginx-forum at nginx.us Tue Apr 7 13:22:17 2015 From: nginx-forum at nginx.us (khaled.benjannet) Date: Tue, 07 Apr 2015 09:22:17 -0400 Subject: Load Balancer many different site Message-ID: <82408db0372844e529658c46476bcd60.NginxMailingListEnglish@forum.nginx.org> Hello, I need to use only one nginx server to configure the load balancing for 3 differents plateforme. When I add the 3 config files under /etc/ngix/conf.d, the nginx take only the first configuration on the list. Please I need help to configure correctly nginx. Thanks in advance Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257875,257875#msg-257875 From mdounin at mdounin.ru Tue Apr 7 13:23:22 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Apr 2015 16:23:22 +0300 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <2f79af7ba0865f6f2ef8a975e0ea4cc3.NginxMailingListEnglish@forum.nginx.org> References: <20150406192058.GP88631@mdounin.ru> <2f79af7ba0865f6f2ef8a975e0ea4cc3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150407132322.GV88631@mdounin.ru> Hello! On Tue, Apr 07, 2015 at 12:26:23AM -0400, bughunter wrote: [...] > > > So how do I enable OCSP stapling for my vhosts when the default > > server cert > > > is self-signed? This seems like a potential bug in the nginx SSL > > module. > > > > Just enable ssl_stapling in appropriate server{} blocks. > > As far as I can tell, I'm already doing that: > > http://pastebin.com/Ymb5hxDP The configuration you are testing with seems to be overcomplicated. Nevertheless, it should work assuming correct certificates are supplied and OCSP responder works fine. What makes you think that it doesn't work? 
-- Maxim Dounin http://nginx.org/ From al-nginx at none.at Tue Apr 7 15:18:36 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 07 Apr 2015 17:18:36 +0200 Subject: Load Balancer many different site In-Reply-To: <82408db0372844e529658c46476bcd60.NginxMailingListEnglish@forum.nginx.org> References: <82408db0372844e529658c46476bcd60.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9e3671baaac62acc4a74fff3f2992b0e@none.at> Hi. Am 07-04-2015 15:22, schrieb khaled.benjannet: > Hello, > > I need to use only one nginx server to configure the load balancing for > 3 > differents plateforme. > > When I add the 3 config files under /etc/ngix/conf.d, the nginx take > only > the first configuration on the list. Please can you post: nginx -V cat /etc/nginx/nginx.conf ls -la /etc/ngix/conf.d/ lsb_release -a > Please I need help to configure correctly nginx. Thanks in advance Thanks too. > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257875,257875#msg-257875 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From maxim at nginx.com Tue Apr 7 15:39:47 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 07 Apr 2015 18:39:47 +0300 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> <54E4B31F.1020405@nginx.com> <193911151aafee54da2387b2106ddeb5.squirrel@deds.nl> <54E4B84E.90608@nginx.com> <54E4BCA8.7040901@nginx.com> Message-ID: <5523FA43.1040102@nginx.com> Hi, On 2/18/15 7:35 PM, rikske at deds.nl wrote: >> [...] >>> Hi Maxim, >>> >>> Understood. Apart from the fact what SPDY can mean for someone specific. >>> There is a flaw in and which prevents "tcp_nodelay" in Nginx 1.6. to >>> function correctly. >>> >>> How to fix that. 
>>>
>> There are several options:
>>
>> - you can backport the 1.7 diff to 1.6;
>>
>> - you can use 1.7 in production (this is what we actually recommend
>> to do; e.g. nginx-plus is currently based on the 1.7 branch).
>>
>> We will discuss merging this code to 1.6 for the next release but we
>> don't have a schedule for 1.6.3 yet.
[...]
>
> Hi Maxim,
>
> Thanks, your help is much appreciated.
> Regards,
>

This bugfix was merged to 1.6 and is part of the 1.6.3 release, which is out today.

--
Maxim Konovalov
http://nginx.com

From mdounin at mdounin.ru Tue Apr 7 16:20:27 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 7 Apr 2015 19:20:27 +0300
Subject: nginx-1.6.3
Message-ID: <20150407162027.GY88631@mdounin.ru>

Changes with nginx 1.6.3                    07 Apr 2015

    *) Feature: now the "tcp_nodelay" directive works with SPDY
       connections.

    *) Bugfix: in error handling. Thanks to Yichun Zhang and Daniil
       Bondarev.

    *) Bugfix: alerts "header already sent" appeared in logs if the
       "post_action" directive was used; the bug had appeared in 1.5.4.

    *) Bugfix: alerts "sem_post() failed" might appear in logs.

    *) Bugfix: in hash table handling. Thanks to Chris West.

    *) Bugfix: in integer overflow handling. Thanks to Régis Leroy.

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Apr 7 16:23:08 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 7 Apr 2015 19:23:08 +0300
Subject: nginx-1.7.12
Message-ID: <20150407162307.GC88631@mdounin.ru>

Changes with nginx 1.7.12                    07 Apr 2015

    *) Feature: now the "tcp_nodelay" directive works with SPDY
       connections.

    *) Feature: now thread pools can be used to read cache file headers.

    *) Bugfix: in the "proxy_request_buffering" directive.

    *) Bugfix: a segmentation fault might occur in a worker process when
       using thread pools on Linux.

    *) Bugfix: in error handling when using the "ssl_stapling" directive.
       Thanks to Filipe da Silva.

    *) Bugfix: in the ngx_http_spdy_module.
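The tcp_nodelay/SPDY item above is the fix discussed in the "200ms Delay With SPDY" thread earlier in this archive. Nothing new needs to be configured for it, since tcp_nodelay defaults to on — a sketch of the relevant server block:

```nginx
server {
    listen 443 ssl spdy;

    # tcp_nodelay defaults to "on"; as of 1.6.3 / 1.7.12 it is also
    # applied to SPDY connections, removing the Nagle-induced ~200ms
    # delay reported in the thread above.
    tcp_nodelay on;
}
```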
--
Maxim Dounin
http://nginx.org/

From yap7800 at gmail.com Tue Apr 7 16:59:10 2015
From: yap7800 at gmail.com (Max Yap)
Date: Tue, 07 Apr 2015 16:59:10 +0000
Subject: No subject
Message-ID:

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Apr 7 22:26:24 2015
From: nginx-forum at nginx.us (cyruspy)
Date: Tue, 07 Apr 2015 18:26:24 -0400
Subject: SSL protected page giving segfaults on load
Message-ID:

Hi! Has anybody seen core dumps caused by just loading a landing page?

This is the trace I'm seeing, on CentOS 6 with nginx-1.6.2-1.el6.ngx.x86_64:

Core was generated by `nginx: worker process '.
Program terminated with signal 11, Segmentation fault.
#0  ngx_ssl_new_session (ssl_conn=0xace2b0, sess=0xacf180) at src/event/ngx_event_openssl.c:1985
1985        shpool = (ngx_slab_pool_t *) shm_zone->shm.addr;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257898,257898#msg-257898

From vbart at nginx.com Wed Apr 8 02:41:29 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Wed, 08 Apr 2015 05:41:29 +0300
Subject: SSL protected page giving segfaults on load
In-Reply-To:
References:
Message-ID: <40816311.rhLB1g5DXR@vbart-laptop>

On Tuesday 07 April 2015 18:26:24 cyruspy wrote:
> Hi! Has anybody seen core dumps caused by just loading a landing page?
>
> This is the trace I'm seeing, on CentOS 6 with
> nginx-1.6.2-1.el6.ngx.x86_64:
>
> Core was generated by `nginx: worker process '.
> Program terminated with signal 11, Segmentation fault.
> #0  ngx_ssl_new_session (ssl_conn=0xace2b0, sess=0xacf180) at
> src/event/ngx_event_openssl.c:1985
> 1985        shpool = (ngx_slab_pool_t *) shm_zone->shm.addr;
>

http://trac.nginx.org/nginx/ticket/235

wbr, Valentin V. Bartenev

From emailbuilder88 at yahoo.com Wed Apr 8 05:00:34 2015
From: emailbuilder88 at yahoo.com (E.B.)
Date: Tue, 7 Apr 2015 22:00:34 -0700
Subject: Understanding locations (multiple locations that do different things)
Message-ID: <1428469234.64258.YahooMailBasic@web142402.mail.bf1.yahoo.com>

Hello,

I'm new to Nginx, coming from Apache. Now I'm struggling with how to apply multiple configs and rules to different locations (request types).

Easy example: a server/site has PHP support for all of its requests, but only one of its directories needs to have HTTP AUTH.

I had:

location ~* \.php$ {...PHP settings...}
location /admin {...HTTP AUTH settings...}

After reading about locations, now I understand that ONLY ONE gets used. Which means PHP was working fine but HTTP AUTH was only protecting the non-PHP files in /admin! Am I correct?

I know I can nest location blocks, but when I tested, it doesn't look like the settings in the outer block are inherited by the inner block, so the only advantage to nesting is just narrowing the request type handled by the block.

For example, if I nest "location /admin" inside the PHP block I still have to repeat the entire PHP setup parameters inside the /admin block. Plus I still need another /admin block outside the PHP block to have HTTP AUTH on the non-PHP files in /admin. This gets repetitive and messy.

So what's the smoothest way to have one or more location handlers that need to be *additive*, like having a global handler for PHP files in addition to handlers that are specific to the directory, like HTTP AUTH or other things?

TIA

From igor at sysoev.ru Wed Apr 8 06:16:51 2015
From: igor at sysoev.ru (Igor Sysoev)
Date: Wed, 8 Apr 2015 09:16:51 +0300
Subject: Understanding locations (multiple locations that do different things)
In-Reply-To: <1428469234.64258.YahooMailBasic@web142402.mail.bf1.yahoo.com>
References: <1428469234.64258.YahooMailBasic@web142402.mail.bf1.yahoo.com>
Message-ID: <0F1A323F-0A67-4929-974D-26B048D0189A@sysoev.ru>

On 08 Apr 2015, at 08:00, E.B. wrote:

> Hello,
>
> I'm new to Nginx, coming from Apache.
Now I'm struggling with
> how to apply multiple configs and rules to different locations (request
> types).
>
> Easy example: a server/site has PHP support for all of its requests,
> but only one of its directories needs to have HTTP AUTH.
>
> I had:
>
> location ~* \.php$ {...PHP settings...}
> location /admin {...HTTP AUTH settings...}
>
> After reading about locations, now I understand that ONLY ONE gets used.
> Which means PHP was working fine but HTTP AUTH was only protecting
> the non-PHP files in /admin! Am I correct?
>
> I know I can nest location blocks, but when I tested, it doesn't look like
> the settings in the outer block are inherited by the inner block, so the
> only advantage to nesting is just narrowing the request type handled by
> the block.
>
> For example, if I nest "location /admin" inside the PHP block I still have
> to repeat the entire PHP setup parameters inside the /admin block. Plus
> I still need another /admin block outside the PHP block to have HTTP
> AUTH on the non-PHP files in /admin. This gets repetitive and messy.
>
> So what's the smoothest way to have one or more location handlers
> that need to be *additive*, like having a global handler for PHP files in
> addition to handlers that are specific to the directory, like HTTP AUTH
> or other things?

Additive locations are good when you want to make small configurations even smaller. However, when such configurations grow, their maintenance becomes a nightmare: you have to look through the entire configuration to see how a tiny change will affect the whole.

People do not really want to write less; they want to spend less time. These are different things. With the right nginx configuration you will write more, but you will spend much less time whenever you change your configuration.

So the recommended way to configure your task is the following:

...all common PHP settings...

location /admin {
    ...AUTH settings...
    location ~* \.php$ {
        ...AUTH PHP settings...
    }
}

location ~* \.php$ {
    ...generic PHP settings...
}

You can also look at my presentation on this topic:
http://www.youtube.com/watch?v=YWRYbLKsS0I
http://sysoev.ru/tmp/nginx.conf.2014.16x9.pdf

--
Igor Sysoev
http://nginx.com

From emailbuilder88 at yahoo.com Wed Apr 8 06:18:28 2015
From: emailbuilder88 at yahoo.com (E.B.)
Date: Tue, 7 Apr 2015 23:18:28 -0700
Subject: Basic HTTP auth prompting too many times
In-Reply-To: <1428295141.60002.YahooMailBasic@web142402.mail.bf1.yahoo.com>
Message-ID: <1428473908.39432.YahooMailBasic@web142406.mail.bf1.yahoo.com>

> After some time, browsers begin to prompt for authentication over
> and over again (I guess once for every image, stylesheet, script, etc?).
> Or maybe it is prompting because the credentials failed, but I don't
> think so because if I hit cancel/ESC over and over again, I can use
> the web page (I'm still authenticated), but none of the images or
> scripts have loaded.

Oh, I think this is caused by misunderstanding how locations work.

I had:

location ~* \.php$ {...PHP settings...}
location /admin {...HTTP AUTH settings...}

After reading about locations now I understand that ONLY ONE gets used.
Which means PHP was working fine but HTTP AUTH was only protecting
the non-PHP files in /admin! Am I correct?

So the problem in this case is caused when the browser forgets the auth
credentials: the PHP files are not protected, so it displayed the cached
PHP file OK, but all the images and script files cause a prompt for auth.
Many prompts, over and over.

From justinbeech at gmail.com Wed Apr 8 06:23:59 2015
From: justinbeech at gmail.com (jb)
Date: Wed, 8 Apr 2015 16:23:59 +1000
Subject: limit_rate for POST targets ?
Message-ID:

Is there a module that does throttled reading for POST and works the same
way as limit_rate does for GET (per stream, X bytes per second)?

I got some kind of throttle effect by putting a usleep() into the
os/unix/ngx_recv.c file reader, but I want to use something that works the
same way as limit_rate.
thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 8 06:30:12 2015 From: nginx-forum at nginx.us (bughunter) Date: Wed, 08 Apr 2015 02:30:12 -0400 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <20150407132322.GV88631@mdounin.ru> References: <20150407132322.GV88631@mdounin.ru> Message-ID: <5c501739e08261589b9701f9d6c9822e.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Apr 07, 2015 at 12:26:23AM -0400, bughunter wrote: > > [...] > > > > > So how do I enable OCSP stapling for my vhosts when the default > > > server cert > > > > is self-signed? This seems like a potential bug in the nginx > SSL > > > module. > > > > > > Just enable ssl_stapling in appropriate server{} blocks. > > > > As far as I can tell, I'm already doing that: > > > > http://pastebin.com/Ymb5hxDP > > The configuration you are testing with seems to be > overcomplicated. Nevertheless, it should work assuming correct > certificates are supplied and OCSP responder works fine. What > makes you think that it doesn't work? Running the 'openssl s_client' command only returns "OCSP response: no response sent" as evidenced here (I've replaced the actual domain with "mydomain.org" in the command): # openssl s_client -servername mydomain.org -connect mydomain.org:443 -tls1 -tlsextdebug -status CONNECTED(00000003) TLS server extension "server name" (id=0), len=0 TLS server extension "renegotiation info" (id=65281), len=1 0001 - TLS server extension "EC point formats" (id=11), len=4 0000 - 03 00 01 02 .... TLS server extension "session ticket" (id=35), len=0 TLS server extension "heartbeat" (id=15), len=1 0000 - 01 . OCSP response: no response sent ... Also, the Qualys SSL labs test indicates OCSP support in the certificate but no OCSP stapling for the server. 
ssl_certificate /var/www/mydomain.org/mydomain.org.chain.pem; That contains the signed certificate, intermediate CA cert, and root CA cert (in that order). PEM format. ssl_certificate_key /var/www/mydomain.org/mydomain.org.key.pem; That contains the private key. PEM format. ssl_trusted_certificate /var/www/root.certs.pem; That contains the intermediate CA cert and root CA cert (in that order). PEM format. And the OCSP responder itself is working fine because Firefox is working fine (for the moment) and I can also ping the OCSP responder and access the OCSP responder directly using the URL in the certificate from the server that nginx sits on. The CA's OCSP responder went down for a few hours a couple of days ago, which caused my browser (Firefox) to freak out and deny access to my own website. At that point I went about figuring out setting up OCSP stapling to prevent the issue from reoccurring in the future. The certificate has the v3 OCSP extension in it and it points at a valid location. There aren't any errors in the nginx logs about attempts to retrieve OCSP responses and failing. There are no errors, warnings, or notices during startup of nginx. I've reloaded and restarted nginx many times, rebooted the whole system one time, and run the "openssl s_client" command a bunch of times after each "long-shot" configuration adjustment (and reverted shortly after back to the config you saw in the pastebin). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,257906#msg-257906 From themic1st at gmail.com Wed Apr 8 12:55:13 2015 From: themic1st at gmail.com (Mic Tremblay) Date: Wed, 8 Apr 2015 08:55:13 -0400 Subject: league of angel Message-ID: need some help on my dev.acc i cant acces on with the nginx server please help thank.. best regard themic -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Apr 8 15:27:52 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Apr 2015 18:27:52 +0300 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <5c501739e08261589b9701f9d6c9822e.NginxMailingListEnglish@forum.nginx.org> References: <20150407132322.GV88631@mdounin.ru> <5c501739e08261589b9701f9d6c9822e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150408152751.GI88631@mdounin.ru> Hello! On Wed, Apr 08, 2015 at 02:30:12AM -0400, bughunter wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Tue, Apr 07, 2015 at 12:26:23AM -0400, bughunter wrote: > > > > [...] > > > > > > > So how do I enable OCSP stapling for my vhosts when the default > > > > server cert > > > > > is self-signed? This seems like a potential bug in the nginx > > SSL > > > > module. > > > > > > > > Just enable ssl_stapling in appropriate server{} blocks. > > > > > > As far as I can tell, I'm already doing that: > > > > > > http://pastebin.com/Ymb5hxDP > > > > The configuration you are testing with seems to be > > overcomplicated. Nevertheless, it should work assuming correct > > certificates are supplied and OCSP responder works fine. What > > makes you think that it doesn't work? > > Running the 'openssl s_client' command only returns "OCSP response: no > response sent" as evidenced here (I've replaced the actual domain with > "mydomain.org" in the command): > > # openssl s_client -servername mydomain.org -connect mydomain.org:443 -tls1 > -tlsextdebug -status > CONNECTED(00000003) > TLS server extension "server name" (id=0), len=0 > TLS server extension "renegotiation info" (id=65281), len=1 > 0001 - > TLS server extension "EC point formats" (id=11), len=4 > 0000 - 03 00 01 02 .... > TLS server extension "session ticket" (id=35), len=0 > TLS server extension "heartbeat" (id=15), len=1 > 0000 - 01 . > OCSP response: no response sent > ... 
Note that a connection with a Certificate Status Request will only return a
status if it is already loaded. If there is no OCSP status available in the
worker process, nginx will return no OCSP status, but will initiate a
request to the OCSP responder. That is, it may take a while before an OCSP
status becomes available - even if everything works fine.

[...]

> And the OCSP responder itself is working fine because Firefox is working
> fine (for the moment) and I can also ping the OCSP responder and access the
> OCSP responder directly using the URL in the certificate from the server

Note that this doesn't really indicate anything: there are two forms of OCSP
requests, POST and GET. And Firefox uses POST, while nginx uses GET. Given
the fact that the responder was completely broken just a few days ago - it's
quite possible that it's still broken for GETs in some cases.

> that nginx sits on. The CA's OCSP responder went down for a few hours a
> couple of days ago, which caused my browser (Firefox) to freak out and deny
> access to my own website. At that point I went about figuring out setting
> up OCSP stapling to prevent the issue from reoccurring in the future. The
> certificate has the v3 OCSP extension in it and it points at a valid
> location. There aren't any errors in the nginx logs about attempts to
> retrieve OCSP responses and failing. There are no errors, warnings, or
> notices during startup of nginx. I've reloaded and restarted nginx many
> times, rebooted the whole system one time, and run the "openssl s_client"
> command a bunch of times after each "long-shot" configuration adjustment
> (and reverted shortly after back to the config you saw in the pastebin).

I would recommend the following:

- test a trivial config with a single server{} block with the certificate
  and "ssl_stapling on", nothing more; this should rule out problems related
  to OCSP response verification, as well as the default vs. non-default
  server problems you've described.
- try using the debugging log to see what happens at a low level in nginx
  (see http://nginx.org/en/docs/debugging_log.html), and tcpdump to see
  what happens on the wire between nginx and the OCSP responder.

--
Maxim Dounin
http://nginx.org/

From al-nginx at none.at Wed Apr 8 15:38:38 2015
From: al-nginx at none.at (Aleksandar Lazic)
Date: Wed, 08 Apr 2015 17:38:38 +0200
Subject: league of angel
In-Reply-To:
References:
Message-ID:

Dear Mic.

Please can you post:

nginx -V
cat /etc/nginx/nginx.conf
ls -la /etc/nginx/conf.d/
lsb_release -a

and please make your request more specific!

What is 'dev.acc'?
Where is nginx running?
How do you request nginx?

And much more!

BR Aleks

Am 08-04-2015 14:55, schrieb Mic Tremblay:

> need some help on my dev.acc i cant acces on with the nginx server please
> help thank.. best regard themic
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx [1]

Links:
------
[1] http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kworthington at gmail.com Wed Apr 8 16:54:34 2015
From: kworthington at gmail.com (Kevin Worthington)
Date: Wed, 8 Apr 2015 12:54:34 -0400
Subject: [nginx-announce] nginx-1.6.3
In-Reply-To: <20150407162031.GZ88631@mdounin.ru>
References: <20150407162031.GZ88631@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.6.3 for Windows http://goo.gl/kOynSa (32-bit and
64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.
Announcements are also available via:

Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin

--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Apr 7, 2015 at 12:20 PM, Maxim Dounin wrote:

> Changes with nginx 1.6.3                                   07 Apr 2015
>
>     *) Feature: now the "tcp_nodelay" directive works with SPDY
>        connections.
>
>     *) Bugfix: in error handling.
>        Thanks to Yichun Zhang and Daniil Bondarev.
>
>     *) Bugfix: alerts "header already sent" appeared in logs if the
>        "post_action" directive was used; the bug had appeared in 1.5.4.
>
>     *) Bugfix: alerts "sem_post() failed" might appear in logs.
>
>     *) Bugfix: in hash table handling.
>        Thanks to Chris West.
>
>     *) Bugfix: in integer overflow handling.
>        Thanks to Régis Leroy.
>
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From black.fledermaus at arcor.de Thu Apr 9 12:12:32 2015
From: black.fledermaus at arcor.de (basti)
Date: Thu, 09 Apr 2015 14:12:32 +0200
Subject: combine basic auth and ip whitelisting load ip dynamicly
Message-ID: <55266CB0.20200@arcor.de>

Hello,

I have a config to combine basic auth and IP whitelisting like this:

# combine basic auth and ip whitelisting
# http://serverfault.com/questions/242218/how-to-disable-http-basic-auth-in-nginx-for-a-specific-ip-range
satisfy any;
include /etc/nginx/myips;
deny all;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
# end combine basic auth and ip whitelisting

In the past I only used static IPs and that worked very well. Now I use
dynamic IPs and want to add IPs to the "myips" file by script.
Is it possible for nginx to load the file dynamically at runtime, or must I
always reload nginx when an IP is added?

From daniel at mostertman.org Thu Apr 9 18:03:37 2015
From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=)
Date: Thu, 09 Apr 2015 20:03:37 +0200
Subject: Linux package for Debian "jessie" (8.x)
Message-ID: <5526BEF9.90007@mostertman.org>

Hi,

I've been using the mainline packages with the instructions on
http://nginx.org/en/linux_packages.html for a while now, and the Debian
section mentions to replace "codename" with the actual codename. On April
25th 2015, Debian will release their new version, codename "jessie". I've
been running jessie for as long as it has been testing, and it is now
frozen. The packages created for Debian codename wheezy (current) work
perfectly and without any issues on jessie (upcoming). Perhaps the release
team could also release this version for jessie? It requires no
modifications, at least not for the mainline version.

Currently I'm just using the wheezy line on jessie:

deb http://nginx.org/packages/mainline/debian/ wheezy nginx

Kind regards,

Daniël Mostertman

From nginx-forum at nginx.us Thu Apr 9 18:09:32 2015
From: nginx-forum at nginx.us (blason)
Date: Thu, 09 Apr 2015 14:09:32 -0400
Subject: Site should not be accessed through IP
Message-ID:

Hi Guys,

I have my nginx box deployed as a reverse proxy serving more than 10 sites.
But when I browse through the IP, the first site configured gets accessed.
I don't want anyone to access the sites through the IP; the sites should be
accessible only by FQDN.

So anyone trying to access a site using the IP should receive a "host not
found" or maybe an error like "connection reset".

Can we do that in nginx?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257927,257927#msg-257927

From Sebastian.Stabbert at heg.com Thu Apr 9 18:12:00 2015
From: Sebastian.Stabbert at heg.com (Sebastian Stabbert)
Date: Thu, 9 Apr 2015 20:12:00 +0200
Subject: Site should not be accessed through IP
In-Reply-To:
References:
Message-ID: <0A4381FD-82ED-4108-AC92-09C2E4CB48C8@heg.com>

Configure a site first which does the "default" handling and has no content?

Cheers,
Sebastian

Am 09.04.2015 um 20:09 schrieb blason :

> Hi Guys,
>
> I have my nginx box deployed as a reverse proxy serving more than 10
> sites. But when I browse through the IP, the first site configured gets
> accessed. I don't want anyone to access the sites through the IP; the
> sites should be accessible only by FQDN.
>
> So anyone trying to access a site using the IP should receive a "host not
> found" or maybe an error like "connection reset".
>
> Can we do that in nginx?
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257927,257927#msg-257927
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

--
Sebastian Stabbert
Systemadministrator

Host Europe GmbH is a company of HEG

Telefon: +49 2203 1045-7362

-----------------------------------------------------------------------
Host Europe GmbH - http://www.hosteurope.de
Welserstraße 14 - 51149 Köln - Germany
HRB 28495 Amtsgericht Köln
Geschäftsführer: Tobias Mohr, Patrick Pulvermüller

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 204 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From nginx-forum at nginx.us Thu Apr 9 18:13:06 2015
From: nginx-forum at nginx.us (FinalX)
Date: Thu, 09 Apr 2015 14:13:06 -0400
Subject: Site should not be accessed through IP
In-Reply-To:
References:
Message-ID: <1376b4d4e25130eb6e1272cf6b0ccc7a.NginxMailingListEnglish@forum.nginx.org>

You could use an extra host config with a default_server, like so:

server {
    listen 80 default_server;
    server_name _;
    return 444;
}

You can find this example on http://nginx.org/en/docs/http/server_names.html

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257927,257928#msg-257928

From nginx-forum at nginx.us Thu Apr 9 18:30:34 2015
From: nginx-forum at nginx.us (blason)
Date: Thu, 09 Apr 2015 14:30:34 -0400
Subject: Site should not be accessed through IP
In-Reply-To: <1376b4d4e25130eb6e1272cf6b0ccc7a.NginxMailingListEnglish@forum.nginx.org>
References: <1376b4d4e25130eb6e1272cf6b0ccc7a.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi FinalX,

You mean I should create a .conf file named default_server and add this
there? Or would you please tell me where I should add the above stanza?
Sorry, I am a novice with nginx and would just like to know more about this.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257927,257930#msg-257930

From daniel at mostertman.org Thu Apr 9 18:38:36 2015
From: daniel at mostertman.org (Daniel Mostertman)
Date: Thu, 9 Apr 2015 20:38:36 +0200
Subject: Site should not be accessed through IP
In-Reply-To:
References: <1376b4d4e25130eb6e1272cf6b0ccc7a.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

If you're using a 1-config-per-site setup, then yes, you could. It
completely depends on your setup as to where you need to place it. You can
put it in any existing file that already has a server directive in there.
Just make sure none of the other server configs/files have the
default_server in their listen directive.

The server name of _ just makes sure it won't conflict with any existing
name, as hostnames are not allowed to have underscores in them. The
default_server is special: it makes sure that any request that does not have
a matching name in the rest of the config will end up there. So not just the
IP that you asked for, but also any other website name that is not in the
config.

The 444 status code is just to return a "no response" kind of thing. If you
want, you can even have a default site there, telling users there is no site
at that address, with a fancy text and/or logo instead.

On Apr 9, 2015 8:30 PM, "blason" wrote:

> Hi FinalX,
>
> You mean I should create a .conf file named default_server and add this
> there? Or would you please tell me where I should add the above stanza?
> Sorry, I am a novice with nginx and would just like to know more about
> this.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,257927,257930#msg-257930
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emailbuilder88 at yahoo.com Thu Apr 9 19:57:10 2015
From: emailbuilder88 at yahoo.com (E.B.)
Date: Thu, 9 Apr 2015 12:57:10 -0700
Subject: HTTP AUTH (auth_pam module) Can't initialize threads: error 11
Message-ID: <1428609430.99152.YahooMailBasic@web142404.mail.bf1.yahoo.com>

Hello,

Using the auth_pam module to implement HTTP AUTH:

https://github.com/stogh/ngx_http_auth_pam_module/

Once in a while authentication seems to stop working across all browsers
and users. The error that shows in the Nginx error log file when a browser
tries to authenticate is:

Can't initialize threads: error 11

(Verbatim, the error has no timestamp or anything else)

Restarting Nginx fixes the problem for some time (days?). Next time I'll
try a reload instead.

Searching for that error doesn't turn up too much, except that it might be
a MySQL error(?)

Can anyone help? Is the author Sergio on this list?

From nginx-forum at nginx.us Thu Apr 9 22:33:12 2015
From: nginx-forum at nginx.us (nikita.tovstoles)
Date: Thu, 09 Apr 2015 18:33:12 -0400
Subject: nginx_cache entry evicted ~ 10 min after write despite Cache-Control in the future
Message-ID: <0937efcd53feb8e859ccfe74ff039394.NginxMailingListEnglish@forum.nginx.org>

Using nginx 1.2.7.

Trying to figure out what is removing cache entries, and why, about 10 min
after insert (or last read - not yet sure) when the Cache-Control +
Last-Modified is nearly 24 hours in the future. Are my config / response
headers to blame, or something else?

E.g. I would expect the following entry to remain until 22:24:45 Apr 10,
2015 - yet it disappears from the nginx cache dir about 10 min into its
existence - i.e.
at around 22:35 Apr 9: [clabs at lb1 cache]$ head 5c395f3ff23eaa0fae58000e5cdbb30a 8Pyote/1-?&U?@< KEY: /ge/ge.js HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Last-Modified: Thu, 09 Apr 2015 22:24:45 GMT Cache-Control: max-age=86400, public Content-Type: text/javascript;charset=UTF-8 Content-Language: en-US Content-Length: 153734 Date: Thu, 09 Apr 2015 22:24:45 GMT ...and config: proxy_cache_path /opt/clabs/nginx/tmp/cache keys_zone=static:1000m; server { server_name OMITTED; listen 80; proxy_cache static; proxy_cache_key "$uri"; location / { proxy_pass http://search-cluster; add_header "X-ECR-Nginx-Cache" $upstream_cache_status; add_header "X-ECR-Domain" "static"; } } thanks -nikita Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257933,257933#msg-257933 From francis at daoine.org Thu Apr 9 22:52:24 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Apr 2015 23:52:24 +0100 Subject: nginx_cache entry evicted ~ 10 min after write despite Cache-Control in the future In-Reply-To: <0937efcd53feb8e859ccfe74ff039394.NginxMailingListEnglish@forum.nginx.org> References: <0937efcd53feb8e859ccfe74ff039394.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150409225224.GB29618@daoine.org> On Thu, Apr 09, 2015 at 06:33:12PM -0400, nikita.tovstoles wrote: Hi there, > Trying to figure out what and why is removing cache entries about 10 min > after insert (or last read - not yet sure) when the Cache-Control + > Last-Modified is nearly 24 hours in the future. Are my config / response > headers to blame or something else? http://nginx.org/r/proxy_cache_path > proxy_cache_path /opt/clabs/nginx/tmp/cache keys_zone=static:1000m; Look for "inactive". (That's my guess, at any rate.) 
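As a sketch of what that would mean for the config quoted above - assuming the 10-minute default of the "inactive" parameter is indeed the culprit, and with an illustrative 24h value - the cache path declaration could carry an explicit inactivity window:

```nginx
# Hypothetical variant of the proxy_cache_path line from the config above.
# "inactive=24h" keeps an entry for up to 24 hours after its last access.
# Note that "inactive" evicts by access time, while Cache-Control only
# governs freshness, so the two limits are enforced independently.
proxy_cache_path /opt/clabs/nginx/tmp/cache keys_zone=static:1000m inactive=24h;
```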
f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Thu Apr 9 22:53:55 2015
From: nginx-forum at nginx.us (nikita.tovstoles)
Date: Thu, 09 Apr 2015 18:53:55 -0400
Subject: nginx_cache entry evicted ~ 10 min after write despite Cache-Control in the future
In-Reply-To: <0937efcd53feb8e859ccfe74ff039394.NginxMailingListEnglish@forum.nginx.org>
References: <0937efcd53feb8e859ccfe74ff039394.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8cf0a4a7162e9db40b4bc7815b629dd3.NginxMailingListEnglish@forum.nginx.org>

Replying to my own question - it looks like proxy_cache_path's inactive
param is to blame - it defaults to 10 min per the docs. Can I disable this
param (by setting inactive=0?) to rely solely on HTTP response headers?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257933,257934#msg-257934

From oyljerry at gmail.com Fri Apr 10 08:05:37 2015
From: oyljerry at gmail.com (Jerry OELoo)
Date: Fri, 10 Apr 2015 16:05:37 +0800
Subject: What is official way to upload nginx.conf on server
Message-ID:

Hi,

I have nginx running on a server. When I modify a nginx.conf setting, I
need to use sftp to upload the file to the server, and as nginx.conf is in
the /usr/local/ folder, it needs root privilege.

So currently, I upload the file to the /home/doc folder, then use 'sudo mv'
to deploy nginx.conf. Then I reload nginx from the command line.

I want to know what is the recommended way to do this? Thanks!

--
Rejoice, I Desire!

From francis at daoine.org Fri Apr 10 10:09:32 2015
From: francis at daoine.org (Francis Daly)
Date: Fri, 10 Apr 2015 11:09:32 +0100
Subject: What is official way to upload nginx.conf on server
In-Reply-To:
References:
Message-ID: <20150410100932.GC29618@daoine.org>

On Fri, Apr 10, 2015 at 04:05:37PM +0800, Jerry OELoo wrote:

Hi there,

> So currently, I upload the file to the /home/doc folder, then use 'sudo
> mv' to deploy nginx.conf. Then I reload nginx from the command line.
>
> I want to know what is the recommended way to do this? Thanks!
I think that there isn't one, because nginx does not care how files get
where they get. All nginx cares about is that the config file that it is
told to read is readable.

Your site or system administrator may have their own policy -- edit the
file in-place; "sudo mv" as you do; or use a configuration management
system to deploy the newest tested revision of the config.

f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Fri Apr 10 10:36:37 2015
From: nginx-forum at nginx.us (stanojr)
Date: Fri, 10 Apr 2015 06:36:37 -0400
Subject: strange behavior for cache manager
In-Reply-To: <7131f249e636925c63e69bdc7c4f6187.NginxMailingListEnglish@forum.nginx.org>
References: <7131f249e636925c63e69bdc7c4f6187.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <94620e11c1099cd822965b32c045e81f.NginxMailingListEnglish@forum.nginx.org>

Can confirm this bug, we have the same problem. But I don't know yet how to
reproduce it. Nothing strange in the logs; error_log is set to notice level.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256168,257943#msg-257943

From nginx-forum at nginx.us Fri Apr 10 16:45:21 2015
From: nginx-forum at nginx.us (cloud devops)
Date: Fri, 10 Apr 2015 12:45:21 -0400
Subject: Nginx reverse proxy multi upstream (multiples sites)
Message-ID: <6f323fd818928ade8772eb94b3d1493d.NginxMailingListEnglish@forum.nginx.org>

Hello,

Here is my situation: I will have one frontend server running nginx, and
multiple backend servers running apache or tomcat with different
applications. I am NOT trying to do any load balancing. What I need to do
is set up nginx to proxy connections to specific servers based on a
specific IP of nginx. I'm using nginx 1.6.1.

I tried the configuration below, but I have trouble with the location
configuration; I need a dynamic configuration that does not impact the
backend servers.
cat nginx.conf : worker_processes 1; worker_rlimit_nofile 100000; error_log logs/error.log warn; pid /home/nginx/logs/nginx.pid; events { worker_connections 10240; use epoll; multi_accept on; } http{ client_max_body_size 2048m; include /home/nginx/naxsi/naxsi_config/naxsi_core.rules; #include /etc/nginx/naxsi.rules; include /home/nginx/conf/mime.types; default_type application/octet-stream; server_tokens off; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; include /home/nginx/conf/conf.d/*.conf; #include /etc/nginx/sites-enabled/*.conf; } cat conf.d/combined.conf : upstream market.cloud.com { server 10.1.0.16; server 10.1.0.60 backup; } upstream panel.cloud.com { server 10.1.0.12; server 10.1.0.51 backup; } server { location / { proxy_pass http://market.cloud.com; } location /panel { proxy_pass http://panel.cloud.com; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257950,257950#msg-257950 From francis at daoine.org Fri Apr 10 20:59:18 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Apr 2015 21:59:18 +0100 Subject: Nginx reverse proxy multi upstream (multiples sites) In-Reply-To: <6f323fd818928ade8772eb94b3d1493d.NginxMailingListEnglish@forum.nginx.org> References: <6f323fd818928ade8772eb94b3d1493d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150410205918.GE29618@daoine.org> On Fri, Apr 10, 2015 at 12:45:21PM -0400, cloud devops wrote: Hi there, > Here is my situation: I will have one frontend server running nginx, and > multiple backends servers running apache or tomcat with different > applications. I am NOT trying to do any load balancing. What I need to do is > setup nginx to proxy connections to specific servers based on a specific IP > of nginx. What request do you make that does not give the response that you want? 
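For what it's worth, one common pattern for a setup like the one described above is to route on the Host header with one server block per site, rather than by path prefix - a sketch reusing the upstream names from the config above, assuming each site has its own DNS name pointing at the frontend:

```nginx
# One server{} per site: nginx selects the block whose server_name matches
# the request's Host header, so no /panel path prefix is needed and the
# backends see unmodified URLs.
server {
    server_name market.cloud.com;
    location / {
        proxy_pass http://market.cloud.com;   # upstream defined above
    }
}

server {
    server_name panel.cloud.com;
    location / {
        proxy_pass http://panel.cloud.com;    # upstream defined above
    }
}
```

Here proxy_pass resolves each name against the upstream{} blocks of the same name, since nginx checks upstream names before DNS.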
f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Fri Apr 10 21:09:33 2015
From: nginx-forum at nginx.us (cloud devops)
Date: Fri, 10 Apr 2015 17:09:33 -0400
Subject: Nginx reverse proxy multi upstream (multiples sites)
In-Reply-To: <20150410205918.GE29618@daoine.org>
References: <20150410205918.GE29618@daoine.org>
Message-ID: <3625ff4d0a56804bfe9851817827cb72.NginxMailingListEnglish@forum.nginx.org>

When I request http://panel.cloud.com I get the first site, which is on the
first upstream. If I request http://panel.cloud.com/panel, it is redirected
to the home page of the second site, as configured, but I cannot navigate
the site because the URL is changed.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257950,257952#msg-257952

From justinbeech at gmail.com Fri Apr 10 21:22:47 2015
From: justinbeech at gmail.com (jb)
Date: Sat, 11 Apr 2015 07:22:47 +1000
Subject: limit_rate for POST targets ?
In-Reply-To:
References:
Message-ID:

Anyone? No ideas? How would I go about getting such a feature added?

I imagine it would be much the same code, just applied to reading the
request body rather than writing it. And since it is core functionality,
I'm not sure a version for POST should be an extension.

On Wed, Apr 8, 2015 at 4:23 PM, jb wrote:

> Is there a module that does throttled reading for POST and works the same
> way as limit_rate does for GET (per stream, X bytes per second).
>
> I got some kind of throttle effect by putting a usleep() into the
> os/unix/ngx_recv.c file reader, but I want to use something that works the
> same way as limit_rate.
>
> thanks
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From francis at daoine.org Fri Apr 10 21:23:46 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Apr 2015 22:23:46 +0100 Subject: Nginx reverse proxy multi upstream (multiples sites) In-Reply-To: <3625ff4d0a56804bfe9851817827cb72.NginxMailingListEnglish@forum.nginx.org> References: <20150410205918.GE29618@daoine.org> <3625ff4d0a56804bfe9851817827cb72.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150410212346.GF29618@daoine.org> On Fri, Apr 10, 2015 at 05:09:33PM -0400, cloud devops wrote: Hi there, > When I make http://panel.cloud.com I have the first site which is on the > first stream > Si I make http://panel.cloud.com/panel, in this case it was redirected to > the home page of the second site as the configuration is done but i can not > navigate on the site because the URL is changed. I'm afraid I do not fully understand. "panel.cloud.com" resolves to your nginx machine, yes? You do "curl -i http://panel.cloud.com/", and you expect to get the response from 10.1.0.16 (upstream market.cloud.com) for "/"? And that is what you get? You do "curl -i http://panel.cloud.com/panel", and you expect to get the response from 10.1.0.12 for "/panel"? What response do you get? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Apr 10 22:29:38 2015 From: nginx-forum at nginx.us (cloud devops) Date: Fri, 10 Apr 2015 18:29:38 -0400 Subject: Nginx reverse proxy multi upstream (multiples sites) In-Reply-To: <20150410212346.GF29618@daoine.org> References: <20150410212346.GF29618@daoine.org> Message-ID: <11f3596253972c8dd76c3ce0219ecdc3.NginxMailingListEnglish@forum.nginx.org> Excuse me for the mail that was not clear, I don't find the correct technical term to use. Yes "market.cloud.com" and "panel.cloud.com" resolve to the nginx server. 
When I do "curl -i http://panel.cloud.com/", I get the response from 10.1.0.16 and it works fine because it is handled by location /. When I do "curl -i http://panel.cloud.com/panel", I get the response from 10.1.0.12 but with /panel in the URL. So after I get the home page I can't go to the other pages, because the URL contains "/panel". My issue is pointing the nginx server at many backend servers; nginx requires different locations, which makes it a problem to navigate the different backend servers. Is there a solution for using nginx as a reverse proxy for many backend servers (different sites)? Thank you in advance.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257950,257955#msg-257955
From nginx-forum at nginx.us Fri Apr 10 23:09:43 2015 From: nginx-forum at nginx.us (avi9526) Date: Fri, 10 Apr 2015 19:09:43 -0400 Subject: alias vs root in regex location Message-ID:
Trying to serve the nagios web GUI through nginx. Part of the config:
location ~ "^/nagios/(.+?\.php)(/.*)?$"
{
alias "/usr/share/nagios3/htdocs/";
try_files $1 $uri $uri/ /nagios/index.php;
#rewrite "^/nagios/(.+?\.php)(/.*)?$" /$1$2 break;
auth_basic "Authorization required to access Nagios";
auth_basic_user_file htpasswd;
# PHP-FPM Settings
include avi9526/php-fpm;
}
location ~* ^/cgi-bin/nagios3/(.+?\.cgi)$
{
alias "/usr/lib/cgi-bin/nagios3/";
try_files $1 $uri $uri/ =404;
#rewrite "^/cgi-bin/nagios3/(.+?\.cgi)$" /$1 break;
auth_basic "Authorization required to access Nagios";
auth_basic_user_file htpasswd;
fastcgi_param AUTH_USER $remote_user;
fastcgi_param REMOTE_USER $remote_user;
include avi9526/fcgiwrap;
}
It seems to be working. If I change "alias" to "root" and uncomment the "rewrite" directive for both the php and cgi scripts, it works as well. So, what is the difference? And why do most online manuals suggest using "root" followed by a rewrite?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257956,257956#msg-257956
From emailbuilder88 at yahoo.com Sat Apr 11 03:38:15 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Fri, 10 Apr 2015 20:38:15 -0700 Subject: HTTP AUTH (auth_pam module) Can't initialize threads: error 11 In-Reply-To: <1428609430.99152.YahooMailBasic@web142404.mail.bf1.yahoo.com> Message-ID: <1428723495.6431.YahooMailBasic@web142405.mail.bf1.yahoo.com>
> Using the auth_pam module to implement HTTP AUTH: > > https://github.com/stogh/ngx_http_auth_pam_module/ > > Once in a while authentication seems to stop working across all browsers > and users. The error that shows in the Nginx error log file when a browser > tries to authenticate is: > > Can't initialize threads: error 11 > > (Verbatim, the error has no timestamp or anything else) > > Restarting Nginx fixes the problem for some time (days?). > Next time I'll try a reload instead.
A reload also fixed the problem. But it's not possible to use software that breaks once a day. Can anyone please help?
> Searching for that error doesn't turn up too much, except that it > might be a MySQL error(?) > > Can anyone help?
Is the author Sergio on this list?
From francis at daoine.org Sat Apr 11 07:25:31 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 11 Apr 2015 08:25:31 +0100 Subject: Nginx reverse proxy multi upstream (multiples sites) In-Reply-To: <11f3596253972c8dd76c3ce0219ecdc3.NginxMailingListEnglish@forum.nginx.org> References: <20150410212346.GF29618@daoine.org> <11f3596253972c8dd76c3ce0219ecdc3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150411072531.GG29618@daoine.org>
On Fri, Apr 10, 2015 at 06:29:38PM -0400, cloud devops wrote: Hi there,
> Yes "market.cloud.com" and "panel.cloud.com" resolve to the nginx server.
> My issue is to point the nginx server to many backend server, the nginx > requires to have different location which cause probleme to navigate in the > different backen server.
Is there a solution to work with nginx as a reverse > proxy for many backend servers (differents sites) nginx needs some way of knowing which backend server to use, for each individual request. The simplest is probably just to use the host in the request: == server { server_name market.cloud.com; location / { proxy_pass http://market.cloud.com; } } server { server_name panel.cloud.com; location / { proxy_pass http://panel.cloud.com; } } == If that is not appropriate, then the next most straightforward is probably to change the backend servers so that all of the content on 10.1.0.12 is available below "/panel/", and all of the content on 10.1.0.16 is below "/market/" (or some other unique prefix) and use *that* as the way that nginx can decide which backend to use. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Apr 11 10:06:05 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 11 Apr 2015 11:06:05 +0100 Subject: alias vs root in regex location In-Reply-To: References: Message-ID: <20150411100605.GH29618@daoine.org> On Fri, Apr 10, 2015 at 07:09:43PM -0400, avi9526 wrote: Hi there, > Trying to serve nagios web GUI through nginx. > location ~ "^/nagios/(.+?\.php)(/.*)?$" > { > alias "/usr/share/nagios3/htdocs/"; > try_files $1 $uri $uri/ /nagios/index.php; > location ~* ^/cgi-bin/nagios3/(.+?\.cgi)$ > { > alias "/usr/lib/cgi-bin/nagios3/"; > try_files $1 $uri $uri/ =404; > it's seems to be working. If i change "alias" to "root" and uncomment > "rewrite" directive for both php and cgi scripts it will work as well. So, > what the difference? http://nginx.org/r/root and http://nginx.org/r/alias. Note that "alias" and "try_files" do have some unobvious interactions, especially in a regex location. See http://trac.nginx.org/nginx/ticket/97 for some details. Your configuration works because of your non-typical "try_files" directive. 
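As an illustration of the root/alias difference discussed here (a minimal sketch with hypothetical paths, not taken from the original thread), the two directives map the same request differently:

```nginx
# Alternative configurations; not to be used together in one server block.

# With "root", the full request URI is appended to the path:
location /nagios/ {
    root /usr/share/nagios3/htdocs;
    # GET /nagios/a.php -> /usr/share/nagios3/htdocs/nagios/a.php
}

# With "alias", the part matching the location prefix is replaced:
location /nagios/ {
    alias /usr/share/nagios3/htdocs/;
    # GET /nagios/a.php -> /usr/share/nagios3/htdocs/a.php
}
```

This is why "root"-based setups often need a rewrite to strip the URI prefix first, while "alias" does the stripping itself.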
I suspect that not all of the four arguments are ever used -- but it does depend on the incoming request, the files on the file system, and how you want them to be matched together.
> And why most online manuals suggests to use "root" > followed by rewrite?
I don't see that. I see some suggesting root and rewrite; and some suggesting root; and some suggesting alias. I suspect that it depends on where things are installed.
f -- Francis Daly francis at daoine.org
From reallfqq-nginx at yahoo.fr Sat Apr 11 11:18:54 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 11 Apr 2015 13:18:54 +0200 Subject: Avoid logging specific user agents requests Message-ID:
Hello, Following: http://trac.nginx.org/nginx/ticket/713
How does one avoid logging (i.e. set 'access_log off') requests from specific user agents? Using 'if' would mean using 'return' inside (as is advised with that directive, rather than continuing normal processing). Using 'map', as shown in ticket #713, does not work as expected. Thanks, --- *B. R.*
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From reallfqq-nginx at yahoo.fr Sat Apr 11 11:24:37 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 11 Apr 2015 13:24:37 +0200 Subject: limit_rate for POST targets ? In-Reply-To: References: Message-ID:
The docs suggest using 'if', even though it is uncertain under which conditions this directive works/should be used or not... You could either set the logic in 'if' or use a map matching the $request_method variable and setting an intermediary variable which will be used in turn by 'if' to set the limit_rate feature or not. That is the most efficient way of doing that I can think of. --- *B. R.*
On Fri, Apr 10, 2015 at 11:22 PM, jb wrote: > Anyone? no ideas? > > how would i go about getting such a feature added. I imagine it would be > much the same code, just applied to reading the request body rather than > writing it.
And since it is core functionality I'm not sure one for POST > should be an extension? > > On Wed, Apr 8, 2015 at 4:23 PM, jb wrote: > >> Is there a module that does throttled reading for POST and works the same >> way as limit_rate does for GET (per stream, X bytes per second). >> >> I got some kind of throttle effect by putting a usleep() into the >> os/unix/ngx_recv.c file reader, but I want to use something that works the >> same way as limit_rate. >> >> thanks >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx
> -------------- next part -------------- An HTML attachment was scrubbed... URL:
From justinbeech at gmail.com Sat Apr 11 11:50:17 2015 From: justinbeech at gmail.com (Justin) Date: Sat, 11 Apr 2015 21:50:17 +1000 Subject: limit_rate for POST targets ? In-Reply-To: References: Message-ID: <58F2D3C0-B001-4CE8-8774-FE80D5524819@gmail.com>
limit_rate only applies to GET (and works great): "response transmission to a client". It does not apply to POST (reading data from a client).
> On 11 Apr 2015, at 9:24 pm, B.R. wrote: > > The docs suggest using 'if', even though it is uncertain under which conditions this directive works/should be used or not... > > You could either set the logic in 'if' or use a map matching the $request_method variable and setting an intermediary variable which will be used in turn by 'if' to set the limit_rate feature or not. > That is the most efficient way of doing that I can think of. > --- > B. R. > >> On Fri, Apr 10, 2015 at 11:22 PM, jb wrote: >> Anyone? no ideas? >> >> how would i go about getting such a feature added. I imagine it would be much the same code, just applied to reading the request body rather than writing it. And since it is core functionality I'm not sure one for POST should be an extension?
>> >>> On Wed, Apr 8, 2015 at 4:23 PM, jb wrote: >>> Is there a module that does throttled reading for POST and works the same way as limit_rate does for GET (per stream, X bytes per second). >>> >>> I got some kind of throttle effect by putting a usleep() into the os/unix/ngx_recv.c file reader, but I want to use something that works the same way as limit_rate. >>> >>> thanks >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From nginx-forum at nginx.us Sat Apr 11 12:10:01 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 11 Apr 2015 08:10:01 -0400 Subject: Avoid logging specific user agents requests In-Reply-To: References: Message-ID: <9dbca036242071eb102f5642da354a94.NginxMailingListEnglish@forum.nginx.org>
Page 18 from the nginx for Windows documentation 1.3:
map $request_uri $loggablevhts {
default 1;
/ngxvtstatus 0; # zero=do not log
/vtsvalues.js 0; # zero=do not log
/vtsvalues-eop.js 0; # zero=do not log
/ngxvtstatus/format/json 0; # zero=do not log
}
map $remote_addr $lcladdrvhts {
default 1;
~^(127.0.0.*)$ 0; # zero=do not log
}
# don't log vhts entries when request is local or from management interface
map $loggablevhts$lcladdrvhts $loggable {
default 0;
~1 1;
}
access_log /path/to/access.log combined if=$loggable;
'A request will not be logged if the (IF) condition evaluates to "0" or an empty string.'
Two simple "maps", which are then combined and tested in the third "map", which is used in the IF evaluation of the log directive. Tweak, change, add your own stuff with $request.
See also nginx-simple-WAF.conf in the nginx for Windows release archives.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257961,257964#msg-257964
From nginx-forum at nginx.us Sat Apr 11 12:13:47 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 11 Apr 2015 08:13:47 -0400 Subject: limit_rate for POST targets ? In-Reply-To: <58F2D3C0-B001-4CE8-8774-FE80D5524819@gmail.com> References: <58F2D3C0-B001-4CE8-8774-FE80D5524819@gmail.com> Message-ID: <3d42edd058e8eeed7ddee58e05d8bf02.NginxMailingListEnglish@forum.nginx.org>
Lua would be a way to go, e.g. https://github.com/fanhattan/lua-resty-rate-limit
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257905,257965#msg-257965
From justinbeech at gmail.com Sat Apr 11 12:25:44 2015 From: justinbeech at gmail.com (Justin) Date: Sat, 11 Apr 2015 22:25:44 +1000 Subject: limit_rate for POST targets ? In-Reply-To: <3d42edd058e8eeed7ddee58e05d8bf02.NginxMailingListEnglish@forum.nginx.org> References: <58F2D3C0-B001-4CE8-8774-FE80D5524819@gmail.com> <3d42edd058e8eeed7ddee58e05d8bf02.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6328BFCB-B707-40AF-B565-335843391D8A@gmail.com>
Hmm, that is rate limiting in req/s; I am looking for an exact limit_rate equivalent, which is bytes/second.
> On 11 Apr 2015, at 10:13 pm, itpp2012 wrote: > > Lua would be a way to go, > e.g. https://github.com/fanhattan/lua-resty-rate-limit > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257905,257965#msg-257965 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx
From reallfqq-nginx at yahoo.fr Sat Apr 11 13:18:46 2015 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 11 Apr 2015 15:18:46 +0200 Subject: Avoid logging specific user agents requests In-Reply-To: <9dbca036242071eb102f5642da354a94.NginxMailingListEnglish@forum.nginx.org> References: <9dbca036242071eb102f5642da354a94.NginxMailingListEnglish@forum.nginx.org> Message-ID:
I tend to use official documentation only, and I run servers, not Windows. However, thanks for the pointer: the answer might be that 'if' parameter... however, it is not available in the stable branch yet. I will see to it. Thanks again! --- *B. R.*
On Sat, Apr 11, 2015 at 2:10 PM, itpp2012 wrote:
> Page 18 from the nginx for Windows documentation 1.3:
> > map $request_uri $loggablevhts {
> default 1;
> /ngxvtstatus 0; # zero=do not log
> /vtsvalues.js 0; # zero=do not log
> /vtsvalues-eop.js 0; # zero=do not log
> /ngxvtstatus/format/json 0; # zero=do not log
> }
> map $remote_addr $lcladdrvhts {
> default 1;
> ~^(127.0.0.*)$ 0; # zero=do not log
> }
> > # don't log vhts entries when request is local or from management interface
> map $loggablevhts$lcladdrvhts $loggable {
> default 0;
> ~1 1;
> }
> > access_log /path/to/access.log combined if=$loggable;
> > 'A request will not be logged if the (IF) condition evaluates to "0" or an > empty string.'
> > Two simple "maps", which are then combined and tested in the third "map", which > is > used in the > IF evaluation of the log directive.
> > Tweak, change, add your own stuff with $request.
> > See also nginx-simple-WAF.conf in the nginx for Windows release archives.
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257961,257964#msg-257964
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Sat Apr 11 13:21:30 2015 From: nginx-forum at nginx.us (Arno0x0x) Date: Sat, 11 Apr 2015 09:21:30 -0400 Subject: auth_request + php-fpm + POST request Message-ID: <7b7a4a890072d7057d9a6be2c2bef891.NginxMailingListEnglish@forum.nginx.org>
Hi, I'm using the auth_request module to enable custom (2fa) authentication to protect my whole website, no matter which applications I host on it. So the auth_request directive is set at the "server" level.
The authentication subrequest works fine, except for client POST requests, where the php auth script hangs forever until I get a timeout in the nginx error.log: "*1 upstream timed out (110: Connection timed out) while reading response header from upstream"
It took me a while to work out why, but my guess, from the debug trace I created, is that the PHP script sees both a "content-length" and a "content-type" in the HTTP headers, while the request body is never sent to the auth script (there's no need for it anyway; all I need is the cookies).
I had to trick the config to make it work, and that's what I'm sharing here, but I'd like to know if there's a more "standard" way to deal with this.
My nginx.conf file is standard, and here are the bits from my "sites-available" config file:
-----------------------------------------------------------------------------------------
server {
listen 443;
server_name www.example.eu;
ssl on;
ssl_certificate /etc/nginx/ssl/www.exemple.eu.crt;
ssl_certificate_key /etc/nginx/ssl/www.exemple.eu.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
root var/www/exemple.eu;
index index.php index.html index.htm;
auth_request /twofactorauth/auth/auth.php;
error_page 401 = @error401;
location @error401 {
return 302 $scheme://$host/twofactorauth/login/login.html;
}
location / {
try_files $uri $uri/ /index.html;
}
location ~ \.php$ {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
}
location = /twofactorauth/auth/auth.php {
fastcgi_pass unix:/var/run/php5-fpm.sock;
include fastcgi.conf;
fastcgi_param REQUEST_METHOD "GET";
}
location /twofactorauth/login/ {
auth_request off;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
}
}
}
-----------------------------------------------------------------------------------------
See the trick? The auth.php script is forced to use the "GET" method even when the client used a POST request. By the way, I didn't manage to get the whole auth_request config working using all the "proxy_pass" stuff, so I used a straight call to the auth.php script. Any ideas are welcome. Cheers Arno0x0x
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257968,257968#msg-257968
From nginx-forum at nginx.us Sun Apr 12 16:21:19 2015 From: nginx-forum at nginx.us (numroo) Date: Sun, 12 Apr 2015 12:21:19 -0400 Subject: How to enable OCSP stapling when default server is self-signed?
In-Reply-To: <20150406192058.GP88631@mdounin.ru> References: <20150406192058.GP88631@mdounin.ru> Message-ID: <03e1376c00f87965effb8bed16321584.NginxMailingListEnglish@forum.nginx.org>
>> Yes, I ran the s_client command multiple times to account for the nginx >> responder delay. I was testing OCSP stapling on just one of my domains. >> Then I read that the 'default_server' SSL server also has to have OCSP >> stapling enabled for vhost OCSP stapling to work: >> >> https://gist.github.com/konklone/6532544 > >There is no such a requirement.
I have the same problem here.
openssl s_client -servername ${WEBSITE} -connect ${WEBSITE}:443 -tls1 -tlsextdebug -status|grep OCSP
Always returns the following on all virtual hosts, no matter how many times I try: OCSP response: no response sent
But as soon as I disable my self-signed default host and restart Nginx, I get a successful response on the second request on all CA-signed hosts: OCSP Response Status: successful (0x0)
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,257974#msg-257974
From nginx-forum at nginx.us Sun Apr 12 18:41:37 2015 From: nginx-forum at nginx.us (numroo) Date: Sun, 12 Apr 2015 14:41:37 -0400 Subject: Core Dumps on 1.7.12 with SPDY Message-ID: <164856bf819af34e7da0ca571cc79e91.NginxMailingListEnglish@forum.nginx.org>
Hello, I'm running Nginx installed from the nginx.org repos on an Ubuntu Server 14.04. There are about a dozen different sites running on this server, mostly using a PHP-FPM backend. Since the update to 1.7.12 I have had frequent core dumps (every few minutes a series of two to four crashes). I tried a lot, disabling features, sites, and config options, without success. The only thing I could tell was that two distinct sites seemed to be generating these crashes: one running Wordpress, the other running Owncloud. Other PHP sites run fine. No crashes as long as neither the WP nor the OC site is enabled. Today, on a hunch, I disabled SPDY on the two problematic hosts.
I have not had a single crash since then. But what's even stranger: SPDY is still active on those hosts. It might be that SPDY is still active due to shared IP and port configurations with other hosts on IPv4. But I don't share IPv6 addresses between hosts.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257977,257977#msg-257977
From luky-37 at hotmail.com Sun Apr 12 22:28:44 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 13 Apr 2015 00:28:44 +0200 Subject: Core Dumps on 1.7.12 with SPDY In-Reply-To: <164856bf819af34e7da0ca571cc79e91.NginxMailingListEnglish@forum.nginx.org> References: <164856bf819af34e7da0ca571cc79e91.NginxMailingListEnglish@forum.nginx.org> Message-ID:
> Hello > > I'm running Nginx installed from the nginx.org repos on a Ubuntu Server > 14.04. > There are about a dozen different sites running on this server, mostly using > PHP-FPM backend.
They are going to need a backtrace from that coredump: http://wiki.nginx.org/Debugging#Core_dump Lukas
From ahaitoute at rinis.nl Mon Apr 13 07:13:22 2015 From: ahaitoute at rinis.nl (Abdelouahed Haitoute) Date: Mon, 13 Apr 2015 09:13:22 +0200 Subject: handling different two way ssl-request via a proxy system Message-ID: <7A82D689-71DC-436A-ACD7-B50AE3776684@rinis.nl>
Hello, Currently we've got the following situation in our production environment:
Clients --HTTP--> Apache --HTTPS TWO-WAY SSL VIA PROXY--> HTTPS SERVERS
Just to be clear, the following services are used during this flow: http client (firefox, chrome, curl, wget, etc.)
--> Apache --> Squid --> HTTPS services of other parties on the internet, supporting two-way ssl
We've implemented this using the following configuration on the apache service:
LoadModule ssl_module modules/mod_ssl.so
Listen *:3128
SSLProxyEngine On
SSLProxyVerify require
SSLProxyVerifyDepth 10
SSLProxyMachineCertificateFile /etc/httpd/certs/client.pem
SSLProxyCACertificateFile /etc/httpd/certs/ca.crt
RewriteEngine On
RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [NC,P]
ProxyPreserveHost On
ProxyPass / https://$1/
ProxyPassReverse / https://$1/
ProxyRemote https http://192.168.68.102:3128
We're trying to replace the apache service with nginx. I've installed nginx 1.7.12 on CentOS 6.6 and have set up two-way SSL in a development environment:
http client --> Nginx 1.7.12 --> https two-way ssl directly --> https.example.com
server {
listen 3128;
location / {
#this enables client verification
proxy_ssl_verify on;
proxy_ssl_verify_depth 3;
#client certificate for upstream server
proxy_ssl_certificate /etc/nginx/certs/client.crt;
#client key generated from upstream cert
proxy_ssl_certificate_key /etc/nginx/certs/client.key;
proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt;
proxy_pass https://https.example.com:443/; # Specifying "https" causes NGINX to
# encrypt the traffic
}
}
There are two things I haven't managed in the development environment, because I don't know how:
1. Making Nginx 1.7.12 use a proxy system, because our policy is to communicate with the outside world through one.
2. Making the configuration as variable as possible, so that Nginx 1.7.12 handles all the different http client requests to different https servers and sends them on as two-way SSL https. Currently it only handles requests for https.example.com.
Any help is welcome. Abdelouahed
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Apr 13 07:50:50 2015 From: nginx-forum at nginx.us (jinwon42) Date: Mon, 13 Apr 2015 03:50:50 -0400 Subject: My site is vulnerable to the SSL FREAK attacks. Message-ID: <09071df4423baccb6318f1d0be1ff30c.NginxMailingListEnglish@forum.nginx.org>
My site is vulnerable to the SSL FREAK attack; I have a configuration problem. My setup is: I want all requests to go "http" --> "https", but some locations should go "https" --> "http".
All locations: https
/companyBrand.do: http only
What's the problem?
---------------------------------------------------------------------------------------------------
map $request_uri $example_org_preferred_proto {
default "https";
~^/mobile/rsvPayOnlyResult2.do "http";
~^/kor/cartel.do "http";
}
server {
listen 443 ssl;
listen 80;
server_name www.test.com;
charset utf-8;
#ssl on;
ssl_certificate D:/nginx-1.7.10/ssl/cert.pem;
ssl_certificate_key D:/nginx-1.7.10/ssl/nopasswd.pem;
ssl_verify_client off;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers AES256-SHA:HIGH:!EXPORT:!eNULL:!ADH:RC4+RSA;
ssl_prefer_server_ciphers on;
error_page 400 /error/error.html;
error_page 403 /error/error.html;
error_page 404 /error/error.html;
if ($scheme != $example_org_preferred_proto) {
return 301 $example_org_preferred_proto://$server_name$request_uri;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_buffering off;
proxy_connect_timeout 60;
proxy_read_timeout 60;
proxy_pass http://wwwtestcom;
proxy_ssl_session_reuse off;
}
}
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257984,257984#msg-257984
From nginx-forum at nginx.us Mon Apr 13 08:16:56 2015 From: nginx-forum at nginx.us (HUMing) Date: Mon, 13 Apr 2015 04:16:56 -0400 Subject: How does Nginx handle
the request of the upstream server when it is marked as `down`? Message-ID: <5ed1f5d02fd099164748cf6a95b3df71.NginxMailingListEnglish@forum.nginx.org>
For a simple Nginx configuration like this:
upstream myservers {
server 127.0.0.1:3000;
server 127.0.0.1:3001;
}
server {
listen 80;
location / {
proxy_pass http://myservers;
}
}
I have two questions related to zero downtime of the application:
If I change the configuration to mark the first server, server 127.0.0.1:3000, as down, I assume that no new requests will go to that server, but what about a request currently being handled by that upstream server? Can Nginx still return a valid response to the end user for that request?
If I remove the first server, server 127.0.0.1:3000, and reload the configuration, what about a request currently being handled by this upstream server?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257985,257985#msg-257985
From nginx-forum at nginx.us Mon Apr 13 09:01:22 2015 From: nginx-forum at nginx.us (rolf1316) Date: Mon, 13 Apr 2015 05:01:22 -0400 Subject: How to apply concurrent connection limit ? In-Reply-To: References: Message-ID: <7b89e43ea9cb6a7474c247da7bf810f4.NginxMailingListEnglish@forum.nginx.org>
Hello, a quick question (I'm a newbie in this forum and in Nginx): is there a way for nginx to limit connections per workstation? Let's say I allow only 5 workstations at a time to connect among 20 workstations; is that possible?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257572,257986#msg-257986
From nginx-forum at nginx.us Mon Apr 13 09:53:33 2015 From: nginx-forum at nginx.us (rolf1316) Date: Mon, 13 Apr 2015 05:53:33 -0400 Subject: setting max active connection In-Reply-To: <20c658eb545550de27a7655a4884b2b8.NginxMailingListEnglish@forum.nginx.org> References: <20c658eb545550de27a7655a4884b2b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <16a8fa299a06cb1f997cd2a8eaff2ac5.NginxMailingListEnglish@forum.nginx.org>
Is it possible to edit the limit of active connections in nginx? For example, to change it to 5 active connections at a time?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240989,257987#msg-257987
From nginx-forum at nginx.us Mon Apr 13 11:10:57 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 13 Apr 2015 07:10:57 -0400 Subject: My site is vulnerable to the SSL FREAK attacks. In-Reply-To: <09071df4423baccb6318f1d0be1ff30c.NginxMailingListEnglish@forum.nginx.org> References: <09071df4423baccb6318f1d0be1ff30c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <097bd9aa74df278a2a7700269912521b.NginxMailingListEnglish@forum.nginx.org>
jinwon42 Wrote: ------------------------------------------------------- > my site is vulnerable to the SSL FREAK attacks. > > ssl_protocols SSLv3 TLSv1; > ssl_ciphers AES256-SHA:HIGH:!EXPORT:!eNULL:!ADH:RC4+RSA;
Try these:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:ECDH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!eNULL:!MD5:!DSS:!EXP:!ADH:!LOW:!MEDIUM;
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257984,257989#msg-257989
In-Reply-To: <03e1376c00f87965effb8bed16321584.NginxMailingListEnglish@forum.nginx.org> References: <20150406192058.GP88631@mdounin.ru> <03e1376c00f87965effb8bed16321584.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150413115739.GA88631@mdounin.ru> Hello! On Sun, Apr 12, 2015 at 12:21:19PM -0400, numroo wrote: > >> Yes, I ran the s_client command multiple times to account for the nginx > >> responder delay. I was testing OCSP stapling on just one of my domains. > >> Then I read that the 'default_server' SSL server also has to have OCSP > >> stapling enabled for vhost OCSP stapling to work: > >> > >> https://gist.github.com/konklone/6532544 > > > >There is no such a requirement. > > I have the same problem here. > > openssl s_client -servername ${WEBSITE} -connect ${WEBSITE}:443 -tls1 > -tlsextdebug -status|grep OCSP > > Always returns the following on all virtual hosts no matter on how many > times I try: > OCSP response: no response sent > > But as soon that I disable my self-signed default host and restart Nginx, I > get a successfull repsonse on the second request on all CA signed hosts: > OCSP Response Status: successful (0x0) As previously suggested, tests with trivial config and debugging log may help to find out what goes wrong. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Apr 13 12:46:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Apr 2015 15:46:38 +0300 Subject: auth_request + php-fpm + POST request In-Reply-To: <7b7a4a890072d7057d9a6be2c2bef891.NginxMailingListEnglish@forum.nginx.org> References: <7b7a4a890072d7057d9a6be2c2bef891.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150413124638.GD88631@mdounin.ru> Hello! On Sat, Apr 11, 2015 at 09:21:30AM -0400, Arno0x0x wrote: > Hi, > > I'm using the auth_request module to enable custom (2fa) authentication to > protect my whole website, no matter the various applications I host on this > website. 
So the auth_request directive is set at the "server" level. > > The authentication subrequest works fine, except for client POST requests > where the php auth script holds forever until I get a timeout in the nginx > error.log : > "*1 upstream timed out (110: Connection timed out) while reading response > header from upstream" > > It took me a while guessing why, but my guess is, from the debug trace I > created, that the PHP script sees both a "content-length" and "content-type" > in the HTTP headers, but the request body is not being sent to the auth > scripts (there's no need anyway, all I need is the cookies). > > I had to trick the config to make it work, and that's what I'm sharing here, > but I'd like to know if there's a more "standard" way to deal with this. The recommended way can be seen in the example configuration in the documentation: location = /auth { proxy_pass ... proxy_pass_request_body off; proxy_set_header Content-Length ""; } Similar approach should work for fastcgi too, but you'll have to avoid sending the CONTENT_LENGTH fastcgi param instead of the Content-Length header. http://nginx.org/en/docs/http/ngx_http_auth_request_module.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Apr 13 13:06:42 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Apr 2015 16:06:42 +0300 Subject: How does Nginx handle the request of the upstream server when it is marked as `down`? In-Reply-To: <5ed1f5d02fd099164748cf6a95b3df71.NginxMailingListEnglish@forum.nginx.org> References: <5ed1f5d02fd099164748cf6a95b3df71.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150413130641.GF88631@mdounin.ru> Hello! 
On Mon, Apr 13, 2015 at 04:16:56AM -0400, HUMing wrote:

> For a simple Nginx configuration like this:
>
> upstream myservers {
>     server 127.0.0.1:3000;
>     server 127.0.0.1:3001;
> }
> server {
>     listen 80;
>     location / {
>         proxy_pass http://myservers;
>     }
> }
>
> I have two questions related to zero downtime of the application:
>
> If I change the configuration to mark the first server, server 127.0.0.1:3000,
> as down, I assume that no new request will go to that server, but what about
> the current request that is handled by the upstream server? Can Nginx
> still return a valid response to the end user for that request?
>
> If I remove the first server, server 127.0.0.1:3000, and reload the
> configuration, what about the current request that is handled by this
> upstream server?

Both removal of the server and marking it down are equivalent as long as
you are using round-robin balancing (there is a difference when using
ip_hash and hash balancers, as "down" implies less remapping when
temporarily disabling a server). In both cases the server won't be used
by new worker processes to handle new requests, and old worker processes
will gracefully terminate upon completion of previously started requests.

See here for details:
http://nginx.org/en/docs/control.html#reconfiguration

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Mon Apr 13 13:42:11 2015
From: nginx-forum at nginx.us (Arno0x0x)
Date: Mon, 13 Apr 2015 09:42:11 -0400
Subject: auth_request + php-fpm + POST request
In-Reply-To: <20150413124638.GD88631@mdounin.ru>
References: <20150413124638.GD88631@mdounin.ru>
Message-ID: 

Hi Maxim,

Thanks for your answer. I'd rather do as you said than change the method
from POST to GET.

As per your recommended example, I never managed to make it work
(proxy_pass stuff): I went into some resolver issue, and then into some
infinite loop on internal requests. So I gave up.
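For what it's worth, the fastcgi variant Maxim mentioned would presumably look something like the sketch below — untested here, and the socket and script paths are placeholders:

```nginx
location = /auth {
    internal;
    # do not forward the POST body to the auth script
    fastcgi_pass_request_body off;
    # list the params explicitly and leave CONTENT_LENGTH out,
    # so PHP does not wait for a body that never arrives
    fastcgi_param SCRIPT_FILENAME /path/to/auth.php;  # placeholder path
    fastcgi_param REQUEST_METHOD GET;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_pass unix:/var/run/php5-fpm.sock;         # placeholder socket
}
```

Client request headers such as Cookie are still passed to the script as HTTP_* params, which is all the auth check needs here.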
Regards,

Arno0x0x

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257968,257996#msg-257996

From reallfqq-nginx at yahoo.fr Mon Apr 13 16:20:49 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 13 Apr 2015 18:20:49 +0200
Subject: setting max active connection
In-Reply-To: <16a8fa299a06cb1f997cd2a8eaff2ac5.NginxMailingListEnglish@forum.nginx.org>
References: <20c658eb545550de27a7655a4884b2b8.NginxMailingListEnglish@forum.nginx.org> <16a8fa299a06cb1f997cd2a8eaff2ac5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
---
*B. R.*

On Mon, Apr 13, 2015 at 11:53 AM, rolf1316 wrote:

> is it possible to edit the limit of active connections in nginx? Like
> change it to 5 active connections at a time?
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,240989,257987#msg-257987
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr Mon Apr 13 17:03:48 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 13 Apr 2015 19:03:48 +0200
Subject: limit_rate for POST targets ?
In-Reply-To: <6328BFCB-B707-40AF-B565-335843391D8A@gmail.com>
References: <58F2D3C0-B001-4CE8-8774-FE80D5524819@gmail.com> <3d42edd058e8eeed7ddee58e05d8bf02.NginxMailingListEnglish@forum.nginx.org> <6328BFCB-B707-40AF-B565-335843391D8A@gmail.com>
Message-ID: 

I do not get (aha) where you saw limit_rate only applies to the GET
method... But yeah, limit_rate applies to responses.

Rate limiting only properly applies to the sender, in your case the client,
which is the sole entity able to properly craft its requests to contain a
specified amount of data per time period.
The only thing you can limit on intermediaries/receiver is
connections/packets, because those are network-related structures which are
trivial to handle/buffer.
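To illustrate that last point, the connection- and request-based limits nginx does offer are configured along these lines (the zone names, sizes and rates below are arbitrary examples):

```nginx
http {
    # shared-memory zones keyed by client address
    limit_conn_zone $binary_remote_addr zone=addr_conn:10m;
    limit_req_zone  $binary_remote_addr zone=addr_req:10m rate=10r/s;

    server {
        location /upload {
            limit_conn addr_conn 5;             # max 5 concurrent connections per IP
            limit_req  zone=addr_req burst=20;  # 10 req/s per IP, excess queued up to 20
            # ... proxy_pass / fastcgi_pass etc.
        }
    }
}
```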
Rate-limiting on a transmitting/receiving end requires buffering content
(not envelope, so that means application logic/DPI), and re-crafting
forwarded/processed content into suitable network envelopes.
Way too expensive/dangerous/demanding.

You can limit incoming transmissions in nginx based on connections
(limit_conn) or requests (limit_req).
You can limit incoming transmissions at TCP level in firewalls such as
iptables based on connections and/or packets.

My 2 cents,
---
*B. R.*

On Sat, Apr 11, 2015 at 2:25 PM, Justin wrote:

> hmm that is rate limiting req/s
>
> i am looking for an exact limit_rate equivalent - which is bytes/second.
>
> > On 11 Apr 2015, at 10:13 pm, itpp2012 wrote:
> >
> > Lua would be a way to go,
> > ea. https://github.com/fanhattan/lua-resty-rate-limit
> >
> > Posted at Nginx Forum:
> > http://forum.nginx.org/read.php?2,257905,257965#msg-257965
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org Mon Apr 13 19:26:55 2015
From: francis at daoine.org (Francis Daly)
Date: Mon, 13 Apr 2015 20:26:55 +0100
Subject: handling different two way ssl-request via a proxy system
In-Reply-To: <7A82D689-71DC-436A-ACD7-B50AE3776684@rinis.nl>
References: <7A82D689-71DC-436A-ACD7-B50AE3776684@rinis.nl>
Message-ID: <20150413192655.GK29618@daoine.org>

On Mon, Apr 13, 2015 at 09:13:22AM +0200, Abdelouahed Haitoute wrote:

Hi there,

> Currently we've got the following situation in our production environment:
>
> Clients --HTTP--> Apache --HTTPS TWO-WAY SSL VIA PROXY--> HTTPS SERVERS
> We're trying to replace the apache service by using nginx.

nginx does not talk to a proxy. nginx is not a proxy.
nginx may not be the right tool for your system.

	f
-- 
Francis Daly        francis at daoine.org

From vbart at nginx.com Mon Apr 13 21:04:04 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 14 Apr 2015 00:04:04 +0300
Subject: Core Dumps on 1.7.12 with SPDY
In-Reply-To: <164856bf819af34e7da0ca571cc79e91.NginxMailingListEnglish@forum.nginx.org>
References: <164856bf819af34e7da0ca571cc79e91.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <57168341.ZmiCvM16Dj@vbart-laptop>

On Sunday 12 April 2015 14:41:37 numroo wrote:
> Hello
>
> I'm running Nginx installed from the nginx.org repos on an Ubuntu Server
> 14.04.
> There are about a dozen different sites running on this server, mostly
> using a PHP-FPM backend.
>
> Since the update to 1.7.12 I had frequent core dumps (every few minutes a
> series of two to four crashes).
> I tried a lot by disabling features, sites, and config options, without
> success.

Have you tried to file a bug report with the core dump, or at least a full
backtrace, provided?

> The only thing I could tell is that two distinct sites seem to be
> generating these crashes.
> One runs Wordpress, the other Owncloud. Other PHP sites run fine.
> No crashes as long as neither of the WP or OC sites are enabled.
>
> Today, out of a hunch, I disabled SPDY on the two problematic hosts.
>
> I have not had a single crash since then.
> But what's even stranger: SPDY is still active on those hosts.
> It might be that SPDY is still active due to shared IP & PORT
> configurations with other hosts on IPv4.
> But I don't share IPv6 addresses between hosts.
>
> A quote from the documentation: "The spdy parameter allows accepting SPDY
> connections on this port."

So, yes, it's always enabled on an ip:port basis.

wbr, Valentin V. Bartenev

From getyounext at gmail.com Mon Apr 13 21:19:34 2015
From: getyounext at gmail.com (Joseph Gates)
Date: Mon, 13 Apr 2015 15:19:34 -0600
Subject: 499s % in production traffic.
Message-ID: Hello NGINX Community, - It is my understanding 499 is a client side response code indicating the remote user prematurely closed the connection without finishing the transaction. - I have a production environment which is reporting 0.3%-0.5% of total traffic accounting to this 499 response code with pages already optimized to deliver as fast as we can. Id like to ask if somebody can contribute what the experience with this response code has been in terms of percentage, and if this range is considered a safe minimal low to be expected in a production site. Thanks JG -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinbeech at gmail.com Tue Apr 14 01:02:55 2015 From: justinbeech at gmail.com (jb) Date: Tue, 14 Apr 2015 11:02:55 +1000 Subject: limit_rate for POST targets ? In-Reply-To: References: <58F2D3C0-B001-4CE8-8774-FE80D5524819@gmail.com> <3d42edd058e8eeed7ddee58e05d8bf02.NginxMailingListEnglish@forum.nginx.org> <6328BFCB-B707-40AF-B565-335843391D8A@gmail.com> Message-ID: It is true that the best way to rate limit is the sender. But in the event where the sender is a myriad different browsers, that isn't an option. There is no control at the POST level to throttle an upload. There isn't really any good firewall tool for traffic shaping incoming data per tcp stream either. You can traffic shape to a port, but not as easily per stream. For a particular application I'd like to simulate the effect of uploading at a specific rate. Since I got the desired effect by crudely limiting the rate at which nginx reads its input socket (with usleep) it seemed possible that a mirror of the limit_rate code for sending could be applied to reading as well. It wouldn't be ideal, but if the server was dedicated to this and not other things, I'm not sure it would have any disastrous effects? thanks -Justin On Tue, Apr 14, 2015 at 3:03 AM, B.R. wrote: > I do not get (aha) where you saw limit_rate only applies to the GET > method... 
> But yeah limit_rate applies to resposnes. > > Rate limiting only properly applies to sender, in your case the client, > which is the sole entity ablte to properly craft its requests to contain a > specified amount of data/time period. > ?The only thi?ng you can limit on intermediaries/receiver is > connections/packets, because it is network-related structures which are > trivial to handle/buffer. > > Rate-limiting on a transmitting/receiving end requires buffering content > (not envelope, so that means application logic/DPI), and re-crafting > forwarded/processed content into suitable network envelopes. > Way too expensive/dangerous/demanding. > > You can limit incoming transmissions in nginx based on connections > (limit_conn) or requests (limit_req). > You can limit incoming transmissions at TCP level in firewalls surch as > iptables based on connections and/or packets. > > ?My 2 cents,? > --- > *B. R.* > > On Sat, Apr 11, 2015 at 2:25 PM, Justin wrote: > >> hmm that is rate limiting req/s >> >> i am looking for an exact limit_rate equivalent - which is bytes/second. >> >> > On 11 Apr 2015, at 10:13 pm, itpp2012 wrote: >> > >> > Lua would be a way to go, >> > ea. https://github.com/fanhattan/lua-resty-rate-limit >> > >> > Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,257905,257965#msg-257965 >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Apr 14 01:28:01 2015 From: nginx-forum at nginx.us (jinwon42) Date: Mon, 13 Apr 2015 21:28:01 -0400 Subject: My site is vulnerable to the SSL FREAK attacks. In-Reply-To: <097bd9aa74df278a2a7700269912521b.NginxMailingListEnglish@forum.nginx.org> References: <09071df4423baccb6318f1d0be1ff30c.NginxMailingListEnglish@forum.nginx.org> <097bd9aa74df278a2a7700269912521b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <67db766979f2f4ef71a6be8dec25e277.NginxMailingListEnglish@forum.nginx.org> same error. site is vulnerable to the SSL FREAK attacks. openssl version is the problem? openssl version is 1.02 what's problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257984,258010#msg-258010 From dewanggaba at xtremenitro.org Tue Apr 14 02:25:19 2015 From: dewanggaba at xtremenitro.org (Dewangga) Date: Tue, 14 Apr 2015 09:25:19 +0700 Subject: My site is vulnerable to the SSL FREAK attacks. In-Reply-To: <67db766979f2f4ef71a6be8dec25e277.NginxMailingListEnglish@forum.nginx.org> References: <09071df4423baccb6318f1d0be1ff30c.NginxMailingListEnglish@forum.nginx.org> <097bd9aa74df278a2a7700269912521b.NginxMailingListEnglish@forum.nginx.org> <67db766979f2f4ef71a6be8dec25e277.NginxMailingListEnglish@forum.nginx.org> Message-ID: <552C7A8F.3050008@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello! What linux distribution do you use? On el6 I use openssl-1.0.1e-30.el6 On el7 I use openssl-1.0.1e-42.el7.4.x86_64 https://kb.iweb.com/entries/90860777-Security-vulnerabilities-in-OpenSSL - -FREAK-CVE-2015-0204-and-more Red hat using their own packages versioning (CMIIW), it might be vary with your linux distro. And how do you test your site againts FREAK attacks? On 4/14/2015 08:28, jinwon42 wrote: > same error. site is vulnerable to the SSL FREAK attacks. > > openssl version is the problem? openssl version is 1.02 > > what's problem? 
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,257984,258010#msg-258010 > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (MingW32) iQEcBAEBAgAGBQJVLHqOAAoJEF1+odKB6YIxupIIAI+bCplE9ixsIb1SDAXDJriC MDBc8RfCM72V/a6Lm6FpFxd1mJiMYs93zSGNkD34VrkHRABAf0DrL3tMD276dn3G r/9QtrHYfw9A78p/6juZVsQ6tPWPcRPRvRFdXp1M8KUO64pR8JgWCrIxoFAzwNJ0 jj+UMElZAo4+xFsEXndHlRBb4BGb5nOXkG9cXkN9PvjEX3g4EDeAViayqZnJtxCd yORGa4cWgld+HOxPWCSd3rHrxLy9rCaudhhFKPqX+ziRSX4Eq85r9dAnxHxzKg3b kDn3w8ixpc/CqaRA0DvANMB2xc9IXGAR7P/rOkw5MyGO3Foh9w6JVQcsH1JCipU= =AKo2 -----END PGP SIGNATURE----- From nginx-forum at nginx.us Tue Apr 14 05:59:44 2015 From: nginx-forum at nginx.us (jinwon42) Date: Tue, 14 Apr 2015 01:59:44 -0400 Subject: My site is vulnerable to the SSL FREAK attacks. In-Reply-To: <552C7A8F.3050008@xtremenitro.org> References: <552C7A8F.3050008@xtremenitro.org> Message-ID: <363897f2aef74eefb243df502ea62f7a.NginxMailingListEnglish@forum.nginx.org> sorry. my server is windows server. windows + nginx1.7.10 + tomcat Openssl 1.02 updates have been completed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257984,258012#msg-258012 From niteshnarayanlalleo at gmail.com Tue Apr 14 06:20:20 2015 From: niteshnarayanlalleo at gmail.com (nitesh narayan lal) Date: Tue, 14 Apr 2015 11:50:20 +0530 Subject: Using NGINX in non-fork mode Message-ID: Hi, I am using a single worker process and master_process off in nginx.conf. Now as per my understanding, the flow of operation would be something like: NGiNX master process will be created which will spawn a single worker_process using fork and then master process gets killed. Is that correct ? If yes then is it possible to avoid forking. Pthreads is just now been introduced as a testing feature in NGiNX, so I don't want to use it. Is there any other way also to avoid fork? 
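The setup described above boils down to these top-level directives in nginx.conf (a minimal development-only sketch):

```nginx
# single-process mode: everything runs in the master process
# (intended for development/debugging, not production)
master_process off;
worker_processes 1;

events {
    worker_connections 1024;
}
```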
-- 
Regards
Nitesh Narayan Lal
http://www.niteshnarayanlal.org/

From igor at sysoev.ru Tue Apr 14 06:34:00 2015
From: igor at sysoev.ru (Igor Sysoev)
Date: Tue, 14 Apr 2015 09:34:00 +0300
Subject: Using NGINX in non-fork mode
In-Reply-To: 
References: 
Message-ID: <07B343C1-864E-46DE-8508-FCB735F1BE57@sysoev.ru>

On 14 Apr 2015, at 09:20, nitesh narayan lal wrote:
> Hi,
>
> I am using a single worker process and master_process off in nginx.conf.
> Now as per my understanding, the flow of operation would be something like:
> NGiNX master process will be created, which will spawn a single
> worker_process using fork, and then the master process gets killed.
> Is that correct?
>
> If yes, then is it possible to avoid forking?
> Pthreads has just now been introduced as a testing feature in NGiNX, so
> I don't want to use it. Is there any other way to avoid fork?

When master_process is set to off, the master process does not fork any
child processes and all requests are processed by the master process.
However, this mode is intended only for development, not for production.
There are issues with graceful reload, etc.

-- 
Igor Sysoev
http://nginx.com

From luky-37 at hotmail.com Tue Apr 14 07:33:13 2015
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Tue, 14 Apr 2015 09:33:13 +0200
Subject: My site is vulnerable to the SSL FREAK attacks.
In-Reply-To: <363897f2aef74eefb243df502ea62f7a.NginxMailingListEnglish@forum.nginx.org>
References: <552C7A8F.3050008@xtremenitro.org>, <363897f2aef74eefb243df502ea62f7a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

> my server is windows server.
>
> windows + nginx1.7.10 + tomcat
> Openssl 1.02 updates have been completed.

How? Are you recompiling nginx on your own? The nginx binary comes
bundled with openssl; I am not sure you are able to update openssl on
your own.

Get nginx 1.7.12, it bundles with openssl-1.0.1m.
Lukas From nginx-forum at nginx.us Tue Apr 14 08:26:52 2015 From: nginx-forum at nginx.us (jinwon42) Date: Tue, 14 Apr 2015 04:26:52 -0400 Subject: My site is vulnerable to the SSL FREAK attacks. In-Reply-To: References: Message-ID: <101d013e75ee3f9ef95563580ecc5a9f.NginxMailingListEnglish@forum.nginx.org> sorry i was update nginx-1.7.12 version. but, same error. windows + nginx1.7.12 + tomcat. my site is no HSTS.Is this a problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257984,258016#msg-258016 From luky-37 at hotmail.com Tue Apr 14 09:10:43 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 14 Apr 2015 11:10:43 +0200 Subject: My site is vulnerable to the SSL FREAK attacks. In-Reply-To: <101d013e75ee3f9ef95563580ecc5a9f.NginxMailingListEnglish@forum.nginx.org> References: , <101d013e75ee3f9ef95563580ecc5a9f.NginxMailingListEnglish@forum.nginx.org> Message-ID: > i was update nginx-1.7.12 version. > but, same error. What error? How exactly do you come to the conclusion that your site is vulnerable? From nginx-forum at nginx.us Tue Apr 14 09:39:54 2015 From: nginx-forum at nginx.us (jinwon42) Date: Tue, 14 Apr 2015 05:39:54 -0400 Subject: My site is vulnerable to the SSL FREAK attacks. In-Reply-To: References: Message-ID: <9cd1348d1963cbb1ab88dea9f2e450a6.NginxMailingListEnglish@forum.nginx.org> i testing this site, "https://tools.keycdn.com/freak" result message : Vulnerable! The domain www.ktkumhorent.com:443 is vulnerable to the SSL FREAK attacks. Do I need to reboot server? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257984,258019#msg-258019 From luky-37 at hotmail.com Tue Apr 14 09:54:25 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 14 Apr 2015 11:54:25 +0200 Subject: My site is vulnerable to the SSL FREAK attacks. 
In-Reply-To: <9cd1348d1963cbb1ab88dea9f2e450a6.NginxMailingListEnglish@forum.nginx.org> References: , <9cd1348d1963cbb1ab88dea9f2e450a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: > i testing this site, "https://tools.keycdn.com/freak" > > result message : Vulnerable! The domain www.ktkumhorent.com:443 is > vulnerable to the SSL FREAK attacks. Right, also see: https://www.ssllabs.com/ssltest/analyze.html?d=ktkumhorent.com Your site is extremely vulnerable, it even allows SSLv2, very weak ciphers, and is generally vulnerable to a huge number of old issues that are supposed to be fixed a long time ago. It does not match your configuration, so there must be a different proxy, software or other MITM acting as HTTPS server in between. Check your network. From nginx-forum at nginx.us Tue Apr 14 18:22:46 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 14 Apr 2015 14:22:46 -0400 Subject: [ANN] Windows nginx 1.7.12.1 Lizard Message-ID: <15d1a73c1d72d3f5426f3584afc0d6df.NginxMailingListEnglish@forum.nginx.org> 17:02 14-4-2015 nginx 1.7.12.1 Lizard White Rabbit: We need a lazard with a liddle... a lad... can you help us? Bill: At your service, gov'nor. Dodo: Bill, my lad. Have you ever been down a chimney? Bill: Why, gov'nor, I've been down more chimneys... Dodo: Excellent, excellent. Now just hop down the chimney and pull that monster out of there. Bill: Righto, gov'nor... Monster? Aaaaah! The nginx Lizard release is here! 
Based on nginx 1.7.12 (10-4-2015) with;
+ nginx-module-vts (upgraded 10-4-2015)
+ vhts can now display different languages for an outsourced NOC,
  see /conf/vhts/vtsvalues-xy.js (default is English)
+ enlarged memory thread work space
+ fixes for fastcgi/proxy cache expire file management
+ imported 3 patches from nginx 1.7.7.1 WhiteRabbit which we thought
  were fixed in the original nginx source tree
+ lua-nginx-module v0.9.16 (upgraded 20-3-2015)
+ Naxsi WAF v0.53-3 (upgraded 20-3-2015)
+ Source changes back ported
+ Source changes add-on's back ported
+ Changes for nginx_basic: Source changes back ported
* Scheduled release: yes
* Additional specifications: see 'Feature list'

Builds can be found here: http://nginx-win.ecsds.eu/
Follow releases https://twitter.com/nginx4Windows

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258039,258039#msg-258039

From sarah at nginx.com Tue Apr 14 18:31:32 2015
From: sarah at nginx.com (Sarah Novotny)
Date: Tue, 14 Apr 2015 11:31:32 -0700
Subject: Igor's post about nginx from this morning
Message-ID: <60DFB178-9DD3-4450-89E3-64FDB641AD55@nginx.com>

Hello All,

If you haven't seen Igor's post from today yet, it's worth a read. Here's a
teaser --

> The next 12 months herald some major new features for NGINX Open Source.
> The stories about NGINX and JavaScript will be realized - I have a working
> prototype of a JavaScript VM that is highly optimized for NGINX's unique
> requirements and we've begun the task of embedding it within NGINX Open
> Source.
>
> Our community of module developers is vital to the success of NGINX in the
> open source world. We know it's not as easy as it could be to develop for
> NGINX, and to address that situation, we're beginning the implementation of
> a pluggable module API in the next couple of months. Our goal is to make it
> simpler for our developer community to create and distribute modules for
> NGINX, giving users more choice and flexibility to extend the NGINX open
> source core.
> We're also establishing a developer relations team to help support the
> community in this transition.
>
> You may already have read our plan to support HTTP/2 in NGINX Open Source.
> We appreciate how important it is to our users that we continue to support
> the innovations that others are making in our space, and our HTTP/2 support
> will of course build on the successful SPDY implementation that has been
> used by a number of sites.

The rest is here:
http://nginx.com/blog/nginx-open-source-reflecting-back-and-looking-ahead/

As always, NGINX's continued growth and success is thanks to you all in the
user and developer communities. If you have any questions about the
developer relations team, please feel free to reach out.

Sarah

From nginx-forum at nginx.us Wed Apr 15 07:44:06 2015
From: nginx-forum at nginx.us (pregunton)
Date: Wed, 15 Apr 2015 03:44:06 -0400
Subject: epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream
Message-ID: 

Hello,

We have a problem with nginx on a Drupal site. The images placed in the
domain root are not visible in Chrome, but in Firefox, for example, they
are fine.

This is the error in the nginx virtual host error log:

2015/04/15 03:59:26 [info] 3853#0: *104563 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: xxx.xx.xxx.xxx, server: domain.com, request: "GET /sites/default/files/styles/slideshow__1170x450_/public/xxx.jpg?itok=M4PPaqtP HTTP/1.1", upstream: "fastcgi://unix:/tmp/php5-fpm.sock:", host: "domain.com", referrer: "http://domain.com/"

This is a server with nginx + php-fpm. I know this error was posted before,
but in Russian.

Any ideas please?

Thanks,
pregunton.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258050,258050#msg-258050 From a.portnov at ism-ukraine.com Wed Apr 15 09:40:52 2015 From: a.portnov at ism-ukraine.com (Aleksey Portnov) Date: Wed, 15 Apr 2015 09:40:52 +0000 Subject: using same cookie in map for 2 independent sites Message-ID: <7B29E79534A00243AF7D95A3D81DC84A974B737E@corp-exch03.corp.ism.nl> Hello! I've got beta and live site listening on different ip on the same server. Both sites using same 'if' in 'server' statement: server { listen 1.1.1.1:80; server_name live; set $mage_run_code default; if ($cookie_store_code = a1) { set $mage_run_code kiosk_a1; } if ($cookie_store_code = b2) { set $mage_run_code kiosk1_b2; } if ($cookie_store_code = c3) { set $mage_run_code kiosk2_c3; } } server { listen 2.2.2.2:80; server_name live; set $mage_run_code default; if ($cookie_store_code = a1) { set $mage_run_code kiosk_a1; } if ($cookie_store_code = b2) { set $mage_run_code kiosk1_b2; } if ($cookie_store_code = c3) { set $mage_run_code kiosk2_c3; } } I want to replace if with map. Changes seem obvious: map $cookie_store_code $mage_run_code_live { default default; a1 kiosk_a1; b2 kiosk1_b2; c3 kiosk2_c3; } map $cookie_store_code $mage_run_code_beta { default default; a1 kiosk_a1; b2 kiosk1_b2; c3 kiosk2_c3; } server { listen 1.1.1.1:80; server_name live; set $mage_run_code $mage_run_code_live; } server { listen 2.2.2.2:80; server_name beta; set $mage_run_code $mage_run_code_beta; } The only thing disquiets me in this solution: 'if's are in 'server' statement and 'map' has a global context 'http'. So, does setting cookie for beta site have any impacts on live site and vice versa? Or this solution provides independence of working live and beta regarding setting cookies? Found in http://openresty.org/download/agentzh-nginx-tutorials-en.html --------------------- even though the scope of Nginx variables is the entire configuration, each request does have its own version of all those variables' containers. 
Requests do not interfere with each other even if they are referencing a variable with the same name. This is very much like local variables in C/C++ function bodies. Each invocation of the C/C++ function does use its own version of those local variables (on the stack). --------------------- Can someone approve it? It it's true my config is correct. -- Sincerely yours, Aleksey Portnov | System Administrator | ISM eCompany | T +38 098 92 32 432 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 15 13:31:21 2015 From: nginx-forum at nginx.us (cobain86) Date: Wed, 15 Apr 2015 09:31:21 -0400 Subject: nginx reload sometimes not working(worker processes still alive) Message-ID: <3954e73926a23eead6541261577ab96c.NginxMailingListEnglish@forum.nginx.org> hi i'm not sure if this is an nginx or pagespeed or maybe linux problem we're using nginx on RedHat Linux 2.6.32-504.8.1.el6.x86_64 #1 SMP Fri Dec 19 12:09:25 EST 2014 x86_64 x86_64 x86_64 GNU/Linux nginx1.6.2 with pagespeed 1.9.32.3beta: nginx version: nginx/1.6.2 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) TLS SNI support enabled configure arguments: --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --with-http_stub_status_module --with-http_geoip_module --http-client-body-temp-path=/var/cache/nginx/client_body_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_tempmake --add-module=/tmp/nginx-1.6.2/ngx_pagespeed-1.9.32.3-beta our problem is, since we have added the new pagespeed version, our reload wont work properly. we're using the following command to reload our nginx. "nginx -s reload" sometimes it happens that the master process is killed, but the worker process of the old master process are still running. 
So we are not able to start a new master process while the old workers are still there. We have no idea why this happens. the message log file shows the following error: Apr 15 11:20:51 kernel: nginx[11423]: segfault at 7 ip 000000000047bbe3 sp 00007fffdd7740f0 error 4 in nginx[400000+957000] Apr 15 11:20:51 init: nginx main process (11423) killed by SEGV signal Apr 15 11:20:51 init: nginx main process ended, respawning Apr 15 11:20:55 init: nginx main process (6642) terminated with status 1 Apr 15 11:20:55 init: nginx main process ended, respawning i have also make/installed pagespeed and nginx again, but no solution for that. does anybody has that problem? or has an idea what i to do? Regards Steven Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258055,258055#msg-258055 From mdounin at mdounin.ru Wed Apr 15 14:20:10 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Apr 2015 17:20:10 +0300 Subject: nginx reload sometimes not working(worker processes still alive) In-Reply-To: <3954e73926a23eead6541261577ab96c.NginxMailingListEnglish@forum.nginx.org> References: <3954e73926a23eead6541261577ab96c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150415142009.GR88631@mdounin.ru> Hello! 
On Wed, Apr 15, 2015 at 09:31:21AM -0400, cobain86 wrote: > hi > i'm not sure if this is an nginx or pagespeed or maybe linux problem > > we're using nginx on RedHat > Linux 2.6.32-504.8.1.el6.x86_64 #1 SMP Fri Dec 19 12:09:25 EST 2014 x86_64 > x86_64 x86_64 GNU/Linux > > nginx1.6.2 with pagespeed 1.9.32.3beta: > nginx version: nginx/1.6.2 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) > TLS SNI support enabled > configure arguments: --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/subsys/nginx --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --with-http_ssl_module > --with-http_stub_status_module --with-http_geoip_module > --http-client-body-temp-path=/var/cache/nginx/client_body_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_tempmake > --add-module=/tmp/nginx-1.6.2/ngx_pagespeed-1.9.32.3-beta > > our problem is, since we have added the new pagespeed version, our reload > wont work properly. > we're using the following command to reload our nginx. > > "nginx -s reload" > > sometimes it happens that the master process is killed, but the worker > process of the old master process are still running. > So we are not able to start a new master process while the old workers are > still there. > We have no idea why this happens. > > the message log file shows the following error: > Apr 15 11:20:51 kernel: nginx[11423]: segfault at 7 ip 000000000047bbe3 sp > 00007fffdd7740f0 error 4 in nginx[400000+957000] > Apr 15 11:20:51 init: nginx main process (11423) killed by SEGV signal > Apr 15 11:20:51 init: nginx main process ended, respawning > Apr 15 11:20:55 init: nginx main process (6642) terminated with status 1 > Apr 15 11:20:55 init: nginx main process ended, respawning > > i have also make/installed pagespeed and nginx again, but no solution for > that. > > does anybody has that problem? 
or has an idea what I should do?

From your description it looks like a pagespeed problem. For additional
details you may try obtaining a stack trace, see some hints here:

http://wiki.nginx.org/Debugging

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Wed Apr 15 16:58:20 2015
From: nginx-forum at nginx.us (sip)
Date: Wed, 15 Apr 2015 12:58:20 -0400
Subject: Nginx config for wordpress in subfolder with rewrites for main folder
Message-ID: <083a16e478d8e0ac61b9aff95d9afbf8.NginxMailingListEnglish@forum.nginx.org>

I have a vBulletin site in the root and a WordPress blog in /blog/. The
root nginx rewrites all work as they should, utilising the dbseo plugin for
vBulletin to make search-engine-friendly URLs.

If I turn on permalinks in WordPress, I get a 404 error no matter what I
try.

Could anyone advise what the change should be to be able to use a different
set of rewrite rules in just the blog folder? I am hoping to have other
WordPress installs for different purposes, so this will hopefully explain
things for the future also.
Current domain.conf for nginx ------------------------------------------- server { listen 80; #listen [::]:80 default ipv6only=on; server_name www.mydomain.com mydomain.com; root /home/username/domains/mydomain.com/public_html; access_log /home/username/domains/mydomain.com/logs/access.log; error_log /home/username/domains/mydomain.com/logs/error.log; index index.php index.html index.htm; error_page 404 /404.html; location / { try_files $uri $uri/ /dbseo.php; } location ~ ^((?!dbseo).)*\.php$ { rewrite ^/(.*)$ /dbseo.php last; } # Pass PHP scripts to PHP-FPM location ~ \.php$ { try_files $uri =403; fastcgi_split_path_info ^(/blog)(/.*)$; fastcgi_pass unix:/var/run/php5-fpm-username.sock; include fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # Enable browser cache for CSS / JS location ~* \.(?:css|js)$ { expires 30d; add_header Pragma "public"; add_header Cache-Control "public"; add_header Vary "Accept-Encoding"; } # Enable browser cache for static files location ~* \.(?:ico|jpg|jpeg|gif|png|bmp|webp|tiff|svg|svgz|pdf|mp3|flac|ogg|mid|midi|wav|mp4|webm|mkv|og$ expires 60d; add_header Pragma "public"; add_header Cache-Control "public"; } # Deny access to hidden files location ~ (^|/)\. { deny all; } # Prevent logging of favicon and robot request errors location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { log_not_found off; access_log off; } } -------------------------------------------- I have tried a load of different options to get this to work. I have posted these on stackoverflow, which is possibly simpler than posting again here. That's here: http://stackoverflow.com/questions/29518899/nginx-config-for-wordpress-in-subfolder-with-rewrites-for-main-folder Any help would be most appreciated. I think I just need the basics of how to ignore / include certain rules for certain pages. How do I make /blog/ use only its own rules and ignore everything else?
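[Editor's note: one common pattern for this situation is sketched below. This is a hedged, untested sketch only — the socket path is copied from the config above, and the try_files fallback is the usual WordPress one, not anything confirmed for this site.]

```nginx
# Hypothetical /blog/ block, placed alongside the existing "location /".
# nginx prefers the longest matching prefix, and the "^~" modifier stops
# regex locations (the dbseo ones) from being considered for /blog/ URIs.
location ^~ /blog/ {
    index index.php;
    try_files $uri $uri/ /blog/index.php?$args;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm-username.sock;
        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

The "^~" is what keeps the site-wide "location ~ ^((?!dbseo).)*\.php$" regex from rewriting blog permalinks to /dbseo.php.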
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258060,258060#msg-258060 From nginx-forum at nginx.us Wed Apr 15 17:17:36 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 15 Apr 2015 13:17:36 -0400 Subject: Nginx config for wordpress in subfolder with rewrites for main folder In-Reply-To: <083a16e478d8e0ac61b9aff95d9afbf8.NginxMailingListEnglish@forum.nginx.org> References: <083a16e478d8e0ac61b9aff95d9afbf8.NginxMailingListEnglish@forum.nginx.org> Message-ID: sip Wrote: ------------------------------------------------------- > how do i make /blog/ only use its own rules and ignore everything > else.? Have you tried a location block for /blog/ at a position before other blocks start handling things ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258060,258062#msg-258062 From themic1st at gmail.com Thu Apr 16 01:58:54 2015 From: themic1st at gmail.com (Mic Tremblay) Date: Wed, 15 Apr 2015 21:58:54 -0400 Subject: unsubrile me pls ty Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjums07 at gmail.com Thu Apr 16 06:13:47 2015 From: sjums07 at gmail.com (Nikolaj Schomacker) Date: Thu, 16 Apr 2015 06:13:47 +0000 Subject: unsubrile me pls ty In-Reply-To: References: Message-ID: You can go to this page to unsubscribe http://mailman.nginx.org/mailman/listinfo/nginx If you have forgotten your password (which was sent to you in the first mail you received) there's also instructions on getting that back :) On Thu, Apr 16, 2015, 03:59 Mic Tremblay wrote: > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 16 06:49:44 2015 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 16 Apr 2015 02:49:44 -0400 Subject: Nginx on Windows how to know the correct MP4 buffer sizes ? 
Message-ID: So i do video streaming of large videos 2GB+ in size how do i know what buffer sizes to use in my nginx config or should i leave it at default ? http://nginx-win.ecsds.eu/ http://nginx.org/en/docs/http/ngx_http_mp4_module.html location ~ \.mp4$ { mp4; #mp4_buffer_size ?; #mp4_max_buffer_size ?; limit_rate_after 2m; limit_rate 1m; expires max; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258067,258067#msg-258067 From gyb997 at gmail.com Thu Apr 16 10:02:16 2015 From: gyb997 at gmail.com (cruze guo) Date: Thu, 16 Apr 2015 18:02:16 +0800 Subject: Helloo,do anyone has this situation? Message-ID: I compile the nginx with this configure. --prefix=/home/svn/nginx --user=svn --add-module=../ngx_devel_kit-master --add-module=../srcache-nginx-module-master --add-module=../redis2-nginx-module-master --add-module=../set-misc-nginx-module-master --add-module=../echo-nginx-module-master --add-module=../ngx_http_redis-0.3.7 --add-module=../lua-nginx-module-0.9.13 --with-debug I want to use nginx +redis for caching the svn webdav method. I also patch some code for support the webdav http method,propfind ,but in this situation,it's not import. The svn client use chunked encode for http client request .When I use the TortoiseSVN 1.8.11 to test my cache system. 
I get this read: 21, 00007FFFF1964830, 2048, 131072 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 133120 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 135168 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 137216 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 139264 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 141312 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 143360 2015/04/08 13:08:58 [debug] 16486#0: *1 access phase: 8 2015/04/08 13:08:58 [debug] 16486#0: *1 lua access handler, uri:"/ps/se/branches" c:1 2015/04/08 13:08:58 [debug] 16486#0: *1 http client request body preread 120 2015/04/08 13:08:58 [debug] 16486#0: *1 http request body chunked filter 2015/04/08 13:08:58 [debug] 16486#0: *1 http body chunked buf t:1 f:0 0000000000746440, pos 0000000000746609, size: 120 file: 0, size: 0 <== 120 IS NOT ENOUGH FOR REQUEST !!!!! 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 31 s:0 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 32 s:1 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 63 s:1 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 0D s:1 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 0A s:3 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 3C s:4 2015/04/08 13:08:58 [debug] 16486#0: *1 http body chunked buf t:1 f:0 0000000000746440, pos 0000000000746681, size: 0 file: 0, size: 0 2015/04/08 13:08:58 [debug] 16486#0: *1 http body new buf t:1 f:0 000000000074660E, pos 000000000074660E, size: 115 file: 0, size: 0 2015/04/08 13:08:58 [debug] 16486#0: *1 malloc: 00007FD8C33DC010:1048576 <=== SO MALLOC NEW BUF but ,the (struct ngx_http_request_s) 's write_event_handler will be set to ngx_http_request_empty_handler. 
in ngx_http_request_body.c function ngx_http_read_client_request_body r->write_event_handler = ngx_http_request_empty_handler; It mean nothing will handle the next step,when you read all client request body!! I want to know how to take the request body buffer bigger ? or,can I use this ugly patch to solve this problem? for struct ngx_http_request_s { ......... ngx_http_event_handler_pt read_event_handler; ngx_http_event_handler_pt write_event_handler; ngx_http_event_handler_pt write_event_handler_back; <=== ADD this ........ } it's ugly but it's useful. From nginx-forum at nginx.us Thu Apr 16 11:04:55 2015 From: nginx-forum at nginx.us (petestorey) Date: Thu, 16 Apr 2015 07:04:55 -0400 Subject: Prevent caching of 301 redirects Message-ID: Hi I'm running nginx as a reverse proxy for a website, and we have an occasional problem where one of the back end servers occasionally sends an errant 301 with caching headers as the home page for the site. nginx then caches this and responds to any requests with this redirect to the wrong place. I've rtfm and clearly need to use proxy_no_cache and proxy_cache_bypass, but I'm none the wiser on how to actually format it for this, or indeed whether it's possible to control based on the http response code? thanks Pete Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258089,258089#msg-258089 From mdounin at mdounin.ru Thu Apr 16 13:42:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Apr 2015 16:42:20 +0300 Subject: Helloo,do anyone has this situation? In-Reply-To: References: Message-ID: <20150416134220.GT88631@mdounin.ru> Hello! On Thu, Apr 16, 2015 at 06:02:16PM +0800, cruze guo wrote: [...] > but ,the (struct ngx_http_request_s) 's write_event_handler will be > set to ngx_http_request_empty_handler. 
> > in ngx_http_request_body.c > function ngx_http_read_client_request_body > r->write_event_handler = ngx_http_request_empty_handler; > > It mean nothing will handle the next step,when you read all client > request body!! Write event handler is used to handle write events, and there is no need to set it unless nginx is writing something. While nginx is reading a request body, it only needs read event handler to process new data from a client. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Apr 16 14:13:33 2015 From: nginx-forum at nginx.us (173279834462) Date: Thu, 16 Apr 2015 10:13:33 -0400 Subject: canonicalization of $uri with "/?.*" content Message-ID: <66cc4f544ffdf93082f875e29c66dba3.NginxMailingListEnglish@forum.nginx.org> The last security audit revealed the following: V:Wed Apr 15 20:58:19 2015 - 200 for GET: /?mod=node&nid=some_thing&op=view V:Wed Apr 15 20:58:43 2015 - 200 for GET: /?Open V:Wed Apr 15 20:58:43 2015 - 200 for GET: /?OpenServer V:Wed Apr 15 20:59:16 2015 - 200 for GET: /?sql_debug=1 V:Wed Apr 15 20:59:40 2015 - 200 for GET: /?=PHPB8B5F2A0-3C92-11d3-A3A9-4C7B08C10000 V:Wed Apr 15 20:59:40 2015 - 200 for GET: /?=PHPE9568F36-D428-11d2-A769-00AA001ACF42 V:Wed Apr 15 20:59:40 2015 - 200 for GET: /?=PHPE9568F34-D428-11d2-A769-00AA001ACF42 V:Wed Apr 15 20:59:40 2015 - 200 for GET: /?=PHPE9568F35-D428-11d2-A769-00AA001ACF42 V:Wed Apr 15 20:59:43 2015 - 200 for GET: /?PageServices V:Wed Apr 15 20:59:43 2015 - 200 for GET: /?wp-cs-dump V:Wed Apr 15 21:03:06 2015 - 200 for GET: /?D=A V:Wed Apr 15 21:04:58 2015 - 200 for GET: /?_CONFIG[files][functions_page]=http://example.com/rfiinc.txt? V:Wed Apr 15 21:08:00 2015 - 200 for GET: /?-s V:Wed Apr 15 21:08:09 2015 - 200 for GET: /?q[]=x V:Wed Apr 15 21:08:41 2015 - 200 for GET: /?sc_mode=edit V:Wed Apr 15 21:09:30 2015 - 200 for GET: /?admin In plain words, there is an infinite amount of $request_uri that returns the content of the canonical address. 
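[Editor's note: the audited variants differ only in the query string. nginx matches location blocks (and sets $uri) from the path alone; everything after the "?" is carried separately in $args, which is why a site answers 200 for all of them. A tiny shell illustration of that split, using sample URIs from the audit:]

```shell
# Strip everything from the first "?" onward, the way nginx separates
# $uri from $args; every audit variant collapses to the same path.
for u in '/?mod=node&nid=some_thing&op=view' '/?Open' '/?sql_debug=1'; do
  printf '%s\n' "${u%%\?*}"
done
```

All three iterations print "/", the one canonical path the audit tool was probing.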
You can test your own domain "example.com": canonical: http://example.com/ unwanted variants: http://example.com/?mod=node&nid=some_thing&op=view http://example.com/?Open http://example.com/?OpenServer ... Is there an nginx parameter to normalize this type of $uri? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258101,258101#msg-258101 From nginx-forum at nginx.us Thu Apr 16 16:42:15 2015 From: nginx-forum at nginx.us (oamakarov) Date: Thu, 16 Apr 2015 12:42:15 -0400 Subject: proxy_pass to specific location Message-ID: <3cdaff838231f3187b3643995c3fb433.NginxMailingListEnglish@forum.nginx.org> Hello everyone! I have next configuration of my nginx: ## first backend ## upstream first { server 192.168.1.12:8080; server 192.168.1.13:8080; sticky; } ## second backend ## upstream second { server 192.168.1.14:8080; } ## config ## server { listen 192.168.1.11:443 ssl spdy; server_name domain.com ssl on; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers kEECDH+AES128:kEECDH:kEDH:-3DES:kRSA+AES128:kEDH+3DES:DES-CBC3-SHA:!RC4:!aNULL:!eNULL:!MD5:!EXPORT:!LOW:!SEED:!CAMELLIA:!IDEA:!PSK:!SRP:!SSLv2; ssl_prefer_server_ciphers on; ssl_certificate /etc/nginx/ssl/domain.crt; ssl_certificate_key /etc/nginx/ssl/domain.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; set $root_path '/var/www/domain'; root $root_path; access_log /var/log/nginx/domain.access.log main; error_log /var/log/nginx/domain.error.log warn; index index.html charset utf-8; location / { proxy_set_header Accept-Encoding ""; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-By $server_addr:$server_port; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://first; ## send traffic to SECOND backend if ip is 1.2.3.4 ## if ( $remote_addr ~* 1.2.3.4 ) { proxy_pass http://second; } proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; location /notification { 
proxy_pass http://first/notification; } } If I go through the backend: http://192.168.1.12(13):8080/notification - I'm getting the correct answer. But when I go to https://domain.com/notification I have a 404 error and nothing is proxied. Please help me get the right conf for the NOTIFICATION location. Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258115,258115#msg-258115 From emailbuilder88 at yahoo.com Thu Apr 16 17:42:20 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Thu, 16 Apr 2015 10:42:20 -0700 Subject: HTTP AUTH (auth_pam module) Can't initialize threads: error 11 In-Reply-To: <1428723495.6431.YahooMailBasic@web142405.mail.bf1.yahoo.com> Message-ID: <1429206140.62585.YahooMailBasic@web142404.mail.bf1.yahoo.com> Please, anyone help me? (Also forgot to mention, pam_mysql is the pam module being used) > > Using the auth_pam module to implement HTTP AUTH: > > > > https://github.com/stogh/ngx_http_auth_pam_module/ > > > > Once in a while authentication seems to stop working across all browsers > > and users. The error that shows in the Nginx error log file when a browser > > tries to authenticate is: > > > > Can't initialize threads: error 11 > > > > (Verbatim, the error has no timestamp or anything else) > > > > Restarting Nginx fixes the problem for some time (days?). > > Next time I'll try a reload instead. > > Reload also fixed the problem. > > But it's not possible to use software that breaks once a day. > Can anyone please help? > > > > Searching for that error doesn't turn up too much, except that it > > might be a MySQL error(?) > > > > Can anyone help? Is the author Sergio on this list? From gyb997 at gmail.com Fri Apr 17 02:26:14 2015 From: gyb997 at gmail.com (cruze guo) Date: Fri, 17 Apr 2015 10:26:14 +0800 Subject: Helloo,do anyone has this situation? In-Reply-To: <20150416134220.GT88631@mdounin.ru> References: <20150416134220.GT88631@mdounin.ru> Message-ID: Ok, you are right.
In this situation nginx must read the whole client request body, but the next step will not continue. So I back up the write_event_handler to write_event_handler_back, and when nginx has read the whole request body I restore the handler, like this: write_event_handler = write_event_handler_back; By changing to this, the next step can continue. Is this a bug in nginx? 2015-04-16 21:42 GMT+08:00 Maxim Dounin : > Hello! > > On Thu, Apr 16, 2015 at 06:02:16PM +0800, cruze guo wrote: > > [...] > >> but ,the (struct ngx_http_request_s) 's write_event_handler will be >> set to ngx_http_request_empty_handler. >> >> in ngx_http_request_body.c >> function ngx_http_read_client_request_body >> r->write_event_handler = ngx_http_request_empty_handler; >> >> It mean nothing will handle the next step,when you read all client >> request body!! > > Write event handler is used to handle write events, and there is no need > to set it unless nginx is writing something. While nginx is reading > a request body, it only needs read event handler to process new data > from a client. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Apr 17 04:59:51 2015 From: nginx-forum at nginx.us (daveyfx) Date: Fri, 17 Apr 2015 00:59:51 -0400 Subject: proxy_pass to upstreams and then 404 location Message-ID: Hello all - I'm attempting to do the following in nginx and having difficulty with the last step in this succession. 1) In / location, proxy_pass to Django upstream. 2) proxy_intercept_errors is on in this block and does a proxy_pass to PHP upstream if 404 is returned. 3) In PHP location block (internal), proxy_intercept_errors is on and if a 404 is returned, goes to 404 location block. 4) The 404 location block should proxy_pass to the same Django upstream, but pass /404/ to the Django app.
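[Editor's note: one way to sketch step 4 — strictly a hedged guess at the intent, with the upstream name reused from the post and the "@notfound" label being hypothetical — is to hop through a named location that rewrites the URI before proxying, so the upstream receives /404/ instead of the original bad URL:]

```nginx
# Hypothetical: in the PHP block, direct 404s to a named location
# rather than a URI, then rewrite so the Django upstream sees /404/.
error_page 404 = @notfound;

location @notfound {
    rewrite ^ /404/ break;
    proxy_pass http://django;
}
```

With "break", the rewritten URI is what proxy_pass forwards, while the address in the client's browser is left untouched.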
Everything but the final 404 proxy_pass is working fine. I can send a "bad" URL that will return a 404 from Django and PHP upstreams, but when I tail the Django app logs, the "bad" URL is what is sent to the upstream when I would like /404/ sent to the upstream. I do not need the URL to be re-written to domain.com/404/ in the client browser. Thank you in advance for any help/recommendations. location / { log_not_found off; expires 1m; proxy_intercept_errors on; proxy_pass http://django; error_page 404 = @php; } location @django { internal; log_not_found off; proxy_redirect off; proxy_pass http://django$uri$is_args$args; } location @php { internal; log_not_found off; include conf.d/proxypass.conf; proxy_redirect off; proxy_intercept_errors on; proxy_pass http://php$uri$is_args$args; error_page 404 = /404/; } location /404/ { log_not_found off; proxy_pass http://django; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258132,258132#msg-258132 From francis at daoine.org Fri Apr 17 07:14:20 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Apr 2015 08:14:20 +0100 Subject: canonicalization of $uri with "/?.*" content In-Reply-To: <66cc4f544ffdf93082f875e29c66dba3.NginxMailingListEnglish@forum.nginx.org> References: <66cc4f544ffdf93082f875e29c66dba3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150417071420.GP29618@daoine.org> On Thu, Apr 16, 2015 at 10:13:33AM -0400, 173279834462 wrote: Hi there, > canonical: > http://example.com/ > > unwanted variants: > http://example.com/?mod=node&nid=some_thing&op=view > http://example.com/?Open > http://example.com/?OpenServer > ... > > Is there an nginx parameter to normalize this type of $uri? When I request http://example.com/?Open, what response do you want to send me? Does == location = / { if ($is_args) { return 301 /; } } == cause your right thing to happen? 
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Apr 17 13:10:34 2015 From: nginx-forum at nginx.us (173279834462) Date: Fri, 17 Apr 2015 09:10:34 -0400 Subject: canonicalization of $uri with "/?.*" content In-Reply-To: <20150417071420.GP29618@daoine.org> References: <20150417071420.GP29618@daoine.org> Message-ID: <76fff7a9f04052104cd287abd664522e.NginxMailingListEnglish@forum.nginx.org> >When I request http://example.com/?Open, what response do you want to send me? 301 to /: this would do the canonicalization, > location = / { if ($is_args) { return 301 /; } } 404: this would correspond to reality, > location = / { if ($is_args) { return 404; } } However, if one compiled nginx without the scripting engines, shouldn't it return 404 by default, instead of returning 200 while ignoring $uri's content? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258101,258153#msg-258153 From mdounin at mdounin.ru Fri Apr 17 13:33:01 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Apr 2015 16:33:01 +0300 Subject: Helloo,do anyone has this situation? In-Reply-To: References: <20150416134220.GT88631@mdounin.ru> Message-ID: <20150417133301.GG88631@mdounin.ru> Hello! On Fri, Apr 17, 2015 at 10:26:14AM +0800, cruze guo wrote: > Ok ,you are right . > in this situation, the nginx must read all client request body, BUT > the next step must be handled will > not continue. > > So , I backup the write_event_handler to the write_event_handler_back > and when nginx read all request body I restore the handler. > like this: > > write_event_handler = write_event_handler_back; > > By changing to this,the next strp can continue. It's a BUG for nginx? If you think there is a bug - please provide steps to reproduce it. For now it looks like you are trying to do something wrong in your own code. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Apr 17 13:51:30 2015 From: nginx-forum at nginx.us (nooske) Date: Fri, 17 Apr 2015 09:51:30 -0400 Subject: Get the request_body in a handler Message-ID: <64ae80aa25b353aa0982c4617bbf83f9.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to make a module that will get the body of http requests and print it in my log. I tried to access the variable r->request_body (r is a ngx_http_request_t *) and it's always empty. I also know that the content is saved in a temp file, so maybe I can get it but I don't know also how to find the name of this temp file. Do you have any idea ? Thank you :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258156,258156#msg-258156 From justinbeech at gmail.com Fri Apr 17 22:23:37 2015 From: justinbeech at gmail.com (jb) Date: Sat, 18 Apr 2015 08:23:37 +1000 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? Message-ID: Is there a variable for bytes read ? $content_length is what should be read, but if the request is terminated early, it is wrong. $request_length is not right either, it is logging 459 bytes on a 9mb upload. thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Apr 17 23:47:47 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 18 Apr 2015 02:47:47 +0300 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: References: Message-ID: <2562336.LbqLVeUpmV@vbart-laptop> On Saturday 18 April 2015 08:23:37 jb wrote: > Is there a variable for bytes read ? > > $content_length is what should be read, but if the request is terminated > early, it is wrong. > $request_length is not right either, it is logging 459 bytes on a 9mb > upload. > $request_length should work. wbr, Valentin V. 
Bartenev From justinbeech at gmail.com Sat Apr 18 00:08:25 2015 From: justinbeech at gmail.com (jb) Date: Sat, 18 Apr 2015 10:08:25 +1000 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: <2562336.LbqLVeUpmV@vbart-laptop> References: <2562336.LbqLVeUpmV@vbart-laptop> Message-ID: thanks and this is a popular answer on stack exchange but no, it does not work, because aborted requests have read less bytes -- $request_length reports how many bytes SHOULD have been read, but in the case of any problem, abort by client, or whatever, this is not how many bytes were actually read.. For accounting purposes, I'd want an exact mirror to $bytes_sent ... otherwise math does not add up :( I think there should be a $bytes_recd .. On Sat, Apr 18, 2015 at 9:47 AM, Valentin V. Bartenev wrote: > On Saturday 18 April 2015 08:23:37 jb wrote: > > Is there a variable for bytes read ? > > > > $content_length is what should be read, but if the request is terminated > > early, it is wrong. > > $request_length is not right either, it is logging 459 bytes on a 9mb > > upload. > > > > $request_length should work. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinbeech at gmail.com Sat Apr 18 00:24:17 2015 From: justinbeech at gmail.com (jb) Date: Sat, 18 Apr 2015 10:24:17 +1000 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: References: <2562336.LbqLVeUpmV@vbart-laptop> Message-ID: And maybe I am approaching this the wrong way? can you comment.. I want an nginx upload target for POST that reads the content, discards it, and reports the amount read. 
This is what I have in essence: location ~* "/upload" { limit_except POST OPTIONS { deny all; } client_max_body_size 0; add_header Cache-Control "max-age=0, no-cache, no-store, must-revalidate"; keepalive_timeout 0; add_header Content-Type 'text/html'; return 200 '$content_length bytes'; } The uploads are being done with XHR on the browser side. It works, however randomly (less than 1% of cases), browsers fail during the POST: they return xhr with readyState 4 but status 0, and only a partial upload progress recorded. On the server side, no error is generated in error_log, and access_log reports status 200. $request_length is very short, just the header of the upload. I am wondering if this is mis-use of upload handling by nginx. However I do not want to setup an upstream server to receive the POST content, it is to be discarded anyway. Is there a more correct way to handle POST within nginx, with a response after all data is read, but without an upstream server? thanks. On Sat, Apr 18, 2015 at 10:08 AM, jb wrote: > thanks and this is a popular answer on stack exchange but no, it does not > work, because aborted requests have read less bytes -- $request_length > reports how many bytes SHOULD have been read, but in the case of any > problem, abort by client, or whatever, this is not how many bytes were > actually read.. > > For accounting purposes, I'd want an exact mirror to $bytes_sent ... > otherwise math does not add up :( I think there should be a $bytes_recd .. > > On Sat, Apr 18, 2015 at 9:47 AM, Valentin V. Bartenev > wrote: > >> On Saturday 18 April 2015 08:23:37 jb wrote: >> > Is there a variable for bytes read ? >> > >> > $content_length is what should be read, but if the request is terminated >> > early, it is wrong. >> > $request_length is not right either, it is logging 459 bytes on a 9mb >> > upload. >> > >> >> $request_length should work. >> >> wbr, Valentin V. 
Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sat Apr 18 00:55:29 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 18 Apr 2015 03:55:29 +0300 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: References: <2562336.LbqLVeUpmV@vbart-laptop> Message-ID: <3479891.47TOx0Ucvi@vbart-laptop> On Saturday 18 April 2015 10:08:25 jb wrote: > thanks and this is a popular answer on stack exchange but no, it does not > work, because aborted requests have read less bytes -- $request_length > reports how many bytes SHOULD have been read, but in the case of any > problem, abort by client, or whatever, this is not how many bytes were > actually read.. [..] No, it reports how many bytes has been received by nginx. If it's not, I guess you have some 3rd-party modules or patches. wbr, Valentin V. Bartenev From vbart at nginx.com Sat Apr 18 01:10:31 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 18 Apr 2015 04:10:31 +0300 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: References: Message-ID: <2726116.3UHeRisyv4@vbart-laptop> On Saturday 18 April 2015 10:24:17 jb wrote: > And maybe I am approaching this the wrong way? can you comment.. > > I want an nginx upload target for POST that reads the content, discards it, > and reports the amount read. This is what I have in essence: > > location ~* "/upload" { > limit_except POST OPTIONS { deny all; } > client_max_body_size 0; > add_header Cache-Control "max-age=0, no-cache, no-store, must-revalidate"; > keepalive_timeout 0; > add_header Content-Type 'text/html'; This just adds duplicate "Content-Type" header. 
See the "default_type" directive: http://nginx.org/r/default_type > return 200 '$content_length bytes'; > } > > The uploads are being done with XHR on the browser side. It works, however > randomly (less than 1% of cases), browsers fail during the POST: they > return xhr with readyState 4 but status 0, and only a partial upload > progress recorded. > On the server side, no error is generated in error_log, and access_log > reports status 200. > > $request_length is very short, just the header of the upload. > > I am wondering if this is mis-use of upload handling by nginx. However I do > not want to setup an upstream server to receive the POST content, it is to > be discarded anyway. > > Is there a more correct way to handle POST within nginx, with a response > after all data is read, but without an upstream server? > [..] Oh, ok I see what happens. Indeed, the $request_length isn't accounted if you discard request body this way. The workaround can be using proxy_pass to nginx itself. You don't need to pass body however: http://nginx.org/r/proxy_pass_request_body wbr, Valentin V. Bartenev From justinbeech at gmail.com Sat Apr 18 01:23:54 2015 From: justinbeech at gmail.com (jb) Date: Sat, 18 Apr 2015 11:23:54 +1000 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: <2726116.3UHeRisyv4@vbart-laptop> References: <2726116.3UHeRisyv4@vbart-laptop> Message-ID: gotcha, I saw the discarded body thing in the debug log. ok thanks, um, how do you proxy_pass to nginx itself ? can you give an example ? just proxy_pass http://127.0.0.1/ and proxy_pass_request_body off what about my return 200 "$content_length bytes" line still keep that? On Sat, Apr 18, 2015 at 11:10 AM, Valentin V. Bartenev wrote: > On Saturday 18 April 2015 10:24:17 jb wrote: > > And maybe I am approaching this the wrong way? can you comment.. > > > > I want an nginx upload target for POST that reads the content, discards > it, > > and reports the amount read. 
This is what I have in essence: > > > > location ~* "/upload" { > > limit_except POST OPTIONS { deny all; } > > client_max_body_size 0; > > add_header Cache-Control "max-age=0, no-cache, no-store, > must-revalidate"; > > keepalive_timeout 0; > > add_header Content-Type 'text/html'; > > This just adds duplicate "Content-Type" header. > See the "default_type" directive: http://nginx.org/r/default_type > > > > return 200 '$content_length bytes'; > > } > > > > The uploads are being done with XHR on the browser side. It works, > however > > randomly (less than 1% of cases), browsers fail during the POST: they > > return xhr with readyState 4 but status 0, and only a partial upload > > progress recorded. > > On the server side, no error is generated in error_log, and access_log > > reports status 200. > > > > $request_length is very short, just the header of the upload. > > > > I am wondering if this is mis-use of upload handling by nginx. However I > do > > not want to setup an upstream server to receive the POST content, it is > to > > be discarded anyway. > > > > Is there a more correct way to handle POST within nginx, with a response > > after all data is read, but without an upstream server? > > > [..] > > Oh, ok I see what happens. Indeed, the $request_length isn't accounted if > you discard request body this way. > > The workaround can be using proxy_pass to nginx itself. You don't need > to pass body however: http://nginx.org/r/proxy_pass_request_body > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinbeech at gmail.com Sat Apr 18 01:44:07 2015 From: justinbeech at gmail.com (jb) Date: Sat, 18 Apr 2015 11:44:07 +1000 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? 
In-Reply-To: References: <2726116.3UHeRisyv4@vbart-laptop> Message-ID: ok I figured it out, I proxy_pass to nginx but I still have the same issue with aborted connection and bytes read :) Here is my custom_log format: ... $body_bytes_sent rl=$request_length cl=$content_length ... Here is an example POST using proxy_pass to 127.0.0.1 ... "POST /upload HTTP/1.1" 200 26 rl=456 cl=9885416 rt=15.874 ... You can see request_length is only 456 bytes, and content_length is 9.9mb however the request was aborted after 15 seconds and some OTHER number of bytes were read, some number between those two figures. 9885416 was the number given by the client. thanks On Sat, Apr 18, 2015 at 11:23 AM, jb wrote: > gotcha, I saw the discarded body thing in the debug log. ok thanks, > um, how do you proxy_pass to nginx itself ? > > can you give an example ? > just proxy_pass http://127.0.0.1/ > and proxy_pass_request_body off > > what about my return 200 "$content_length bytes" line still keep that? > > > On Sat, Apr 18, 2015 at 11:10 AM, Valentin V. Bartenev > wrote: > >> On Saturday 18 April 2015 10:24:17 jb wrote: >> > And maybe I am approaching this the wrong way? can you comment.. >> > >> > I want an nginx upload target for POST that reads the content, discards >> it, >> > and reports the amount read. This is what I have in essence: >> > >> > location ~* "/upload" { >> > limit_except POST OPTIONS { deny all; } >> > client_max_body_size 0; >> > add_header Cache-Control "max-age=0, no-cache, no-store, >> must-revalidate"; >> > keepalive_timeout 0; >> > add_header Content-Type 'text/html'; >> >> This just adds duplicate "Content-Type" header. >> See the "default_type" directive: http://nginx.org/r/default_type >> >> >> > return 200 '$content_length bytes'; >> > } >> > >> > The uploads are being done with XHR on the browser side. 
It works, >> however >> > randomly (less than 1% of cases), browsers fail during the POST: they >> > return xhr with readyState 4 but status 0, and only a partial upload >> > progress recorded. >> > On the server side, no error is generated in error_log, and access_log >> > reports status 200. >> > >> > $request_length is very short, just the header of the upload. >> > >> > I am wondering if this is mis-use of upload handling by nginx. However >> I do >> > not want to setup an upstream server to receive the POST content, it is >> to >> > be discarded anyway. >> > >> > Is there a more correct way to handle POST within nginx, with a response >> > after all data is read, but without an upstream server? >> > >> [..] >> >> Oh, ok I see what happens. Indeed, the $request_length isn't accounted if >> you discard request body this way. >> >> The workaround can be using proxy_pass to nginx itself. You don't need >> to pass body however: http://nginx.org/r/proxy_pass_request_body >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sat Apr 18 02:06:20 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 18 Apr 2015 05:06:20 +0300 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: References: <2726116.3UHeRisyv4@vbart-laptop> Message-ID: <5602364.30AkGBVpL4@vbart-laptop> On Saturday 18 April 2015 11:23:54 jb wrote: > gotcha, I saw the discarded body thing in the debug log. ok thanks, > um, how do you proxy_pass to nginx itself ? > > can you give an example ? > just proxy_pass http://127.0.0.1/ > and proxy_pass_request_body off > > what about my return 200 "$content_length bytes" line still keep that? > [..] 
events {}

http {
    log_format lengths $request_length;

    server {
        location / {
            proxy_pass http://unix:nginx.sock:;
            proxy_pass_request_body off;

            proxy_set_header X-Response "$content_length bytes";
            proxy_set_header Content-Length "";

            access_log logs/lengths.log lengths;
        }
    }

    server {
        listen unix:nginx.sock;
        return 200 $http_x_response;
    }
}

% telnet 127.0.0.1 8000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
POST / HTTP/1.0
Content-Length: 1000

bbbbbbbbbbbbbbbbbbbbbbbb
bbbbbbbbbbbbbbbbbbbbbbbb
bbbbbbbbbbbbbbbbbbbbbbbb
bbbbbbbbbbbbbbbbbbbbbbbb
^]
telnet> close
Connection closed.
% cat logs/lengths.log
145

--
wbr, Valentin V. Bartenev

From nginx-forum at nginx.us Sat Apr 18 02:09:13 2015
From: nginx-forum at nginx.us (gariac)
Date: Fri, 17 Apr 2015 22:09:13 -0400
Subject: dificulty finding local static content
Message-ID: <2ee0b95644c9ffeba343f59e0e63d75b.NginxMailingListEnglish@forum.nginx.org>

Total newb here, so apologies in advance. (Silly me, I could have set up
Apache, but I decided to learn something new.)

I have an existing website hosted on a server running Apache. I am setting up
a new server, this one being virtual and bare bones, i.e. nothing installed
but Linux. The OS is Debian. The host company is ovh.com. I installed nginx
from the repo rather than compiling.

Per the default Debian installation, the website is found at
/usr/share/nginx/www. The html files are located in the www directory. The
index.html renders fine. However, I can't follow links in the page. I haven't
changed the dns yet, but the website can be found by going to 167.114.155.30.
External links work fine, as demonstrated on this page:
http://167.114.155.30/archived_playback.html

The "default" file is as follows:

------------
/etc/nginx/sites-available# cat default
# You may add here your
# server {
# ...
# } # statements for each of your virtual hosts to this file ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. ## server { listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ $uri.html =404; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix 
of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri $uri/ =404; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri $uri/ =404; # } #} ------------------------- Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258175,258175#msg-258175 From justinbeech at gmail.com Sat Apr 18 02:17:33 2015 From: justinbeech at gmail.com (jb) Date: Sat, 18 Apr 2015 12:17:33 +1000 Subject: logging variables -- $bytes_sent .. where is $bytes_read ? In-Reply-To: <5602364.30AkGBVpL4@vbart-laptop> References: <2726116.3UHeRisyv4@vbart-laptop> <5602364.30AkGBVpL4@vbart-laptop> Message-ID: thanks, the part where you massage the content lengths I was missing -- or had no clue would be needed. On Sat, Apr 18, 2015 at 12:06 PM, Valentin V. Bartenev wrote: > On Saturday 18 April 2015 11:23:54 jb wrote: > > gotcha, I saw the discarded body thing in the debug log. ok thanks, > > um, how do you proxy_pass to nginx itself ? > > > > can you give an example ? > > just proxy_pass http://127.0.0.1/ > > and proxy_pass_request_body off > > > > what about my return 200 "$content_length bytes" line still keep that? > > > [..] 
> > events {} > > http { > log_format lengths $request_length; > > server { > location / { > proxy_pass http://unix:nginx.sock:; > proxy_pass_request_body off; > > proxy_set_header X-Response "$content_length bytes"; > proxy_set_header Content-Length ""; > > access_log logs/lengths.log lengths; > } > } > > server { > listen unix:nginx.sock; > return 200 $http_x_response; > } > } > > % telnet 127.0.0.1 8000 > Trying 127.0.0.1... > Connected to 127.0.0.1. > Escape character is '^]'. > POST / HTTP/1.0 > Content-Length: 1000 > > bbbbbbbbbbbbbbbbbbbbbbbb > bbbbbbbbbbbbbbbbbbbbbbbb > bbbbbbbbbbbbbbbbbbbbbbbb > bbbbbbbbbbbbbbbbbbbbbbbb > ^] > telnet> close > Connection closed. > % cat logs/lengths.log > 145 > > -- > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Apr 18 08:22:30 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 18 Apr 2015 09:22:30 +0100 Subject: dificulty finding local static content In-Reply-To: <2ee0b95644c9ffeba343f59e0e63d75b.NginxMailingListEnglish@forum.nginx.org> References: <2ee0b95644c9ffeba343f59e0e63d75b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150418082230.GQ29618@daoine.org> On Fri, Apr 17, 2015 at 10:09:13PM -0400, gariac wrote: Hi there, > Default file local per the debian installation is the website is found at > /usr/share/nginx/www. The html files are located in the www directory. The > index.html renders fine. However, I can't follow links in the page. What links on the page can't you follow? Pick any one. What request do you make / what response do you get / what response do you want? I went to the "/" url and followed the nellis.html link with no obvious problems. 
f
--
Francis Daly francis at daoine.org

From francis at daoine.org Sat Apr 18 08:27:27 2015
From: francis at daoine.org (Francis Daly)
Date: Sat, 18 Apr 2015 09:27:27 +0100
Subject: canonicalization of $uri with "/?.*" content
In-Reply-To: <76fff7a9f04052104cd287abd664522e.NginxMailingListEnglish@forum.nginx.org>
References: <20150417071420.GP29618@daoine.org> <76fff7a9f04052104cd287abd664522e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150418082727.GR29618@daoine.org>

On Fri, Apr 17, 2015 at 09:10:34AM -0400, 173279834462 wrote:

Hi there,

> 301 to /: this would do the canonicalization,
>
> location = / { if ($is_args) { return 301 /; } }
>
> 404: this would correspond to reality,
>
> location = / { if ($is_args) { return 404; } }
>
> However, if one compiled nginx without the scripting engines, shouldn't it
> return 404 by default, instead of returning 200 while ignoring $uri's
> content?

I'd say "no". If you want your instance to care more about $query_string
than the default, you can configure it to, for example as above.

Cheers,

f
--
Francis Daly francis at daoine.org

From nginx-forum at nginx.us Sat Apr 18 20:08:54 2015
From: nginx-forum at nginx.us (nicocolt)
Date: Sat, 18 Apr 2015 16:08:54 -0400
Subject: rewrite rules issue
Message-ID:

Hello,

I have an issue with a rewrite rule that redirects to a subdomain. Here it is:

if ($http_host = "subdomain.domain.fr") {
    rewrite ^(?!/\b(subpath|stats|error)\b)/(.*)$ /subpath/$2 last;
}

If in my browser I write host.domain.fr/admin (without last /), then I'm
redirected to host.domain.fr/host.domain/admin/ (with a 404 error, of course).
But if I write host.domain/admin/ (with last /), then all works fine.

I don't understand what the configuration issue is. I want to have the same
behaviour with or without the last /.

Any help would be helpful.
Best regards, Nico Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258184,258184#msg-258184 From francis at daoine.org Sat Apr 18 21:03:16 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 18 Apr 2015 22:03:16 +0100 Subject: rewrite rules issue In-Reply-To: References: Message-ID: <20150418210316.GS29618@daoine.org> On Sat, Apr 18, 2015 at 04:08:54PM -0400, nicocolt wrote: Hi there, > if ($http_host = "subdomain.domain.fr") { > rewrite ^(?!/\b(subpath|stats|error)\b)/(.*)$ /subpath/$2 last; > } > > if in my browser i write: > host.domain.fr/admin (without last /), then I'm redirected to > host.domain.fr/host.domain/admin/ (with a 404 error of course) Are you reporting that when you have the three lines above in your config, you get this behaviour; and when you remove those three lines from your config, you do not get this behaviour? Because that seems strange. Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Apr 18 21:31:59 2015 From: nginx-forum at nginx.us (ywarnier) Date: Sat, 18 Apr 2015 17:31:59 -0400 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: <3841706c7d09ea19e6b3baeb9391b66f.NginxMailingListEnglish@forum.nginx.org> <7757d25fc59ff89f5e7f6d46f9f29261.NginxMailingListEnglish@forum.nginx.org> Message-ID: Just an update: as of today, even Debian provides libssl1.0.0:1.0.1e-2+deb7u16 which still generates these error logs, so it looks like the only way is still to fallback to libssl1.0.0:1.0.1e-2+deb7u12. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,258186#msg-258186 From nginx-forum at nginx.us Sat Apr 18 23:20:57 2015 From: nginx-forum at nginx.us (GuiPoM) Date: Sat, 18 Apr 2015 19:20:57 -0400 Subject: Connection timeout from work, working anywhere else Message-ID: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org> Hello, nginx is the HTTP server that is provided with my home automation software, jeedom. It is installed on my Raspberry Pi2. 
Everything is working fine: I can connect from my internal network and from
outside. But when connecting from work, I get an error (the browser displays
a certificate issue ...). I had a look at the logs, and here are the
corresponding entries in /usr/share/nginx/www/jeedom/log/nginx.error:

2015/04/16 18:38:27 [error] 2151#0: *38577 upstream timed out (110: Connection timed out) while reading upstream, client: 109.99.99.99, server: , request: "GET /socket.io/?EIO=3&transport=websocket&sid=qY83EqT2TTpBpiTzAAIn HTTP/1.1", upstream: "http://127.0.0.1:8070/socket.io/?EIO=3&transport=websocket&sid=qY83EqT2TTpBpiTzAAIn", host: "truc:4321"

2015/04/16 18:42:05 [error] 2150#0: *39108 upstream prematurely closed connection while reading response header from upstream, client: 109.99.99.99, server: , request: "GET /socket.io/?EIO=3&transport=polling&t=1429202405657-73&sid=lxHkscjGEw5l1p2YAAI8 HTTP/1.1", upstream: "http://127.0.0.1:8070/socket.io/?EIO=3&transport=polling&t=1429202405657-73&sid=lxHkscjGEw5l1p2YAAI8", host: "truc:4321", referrer: "https://truc:4321/jeedom/index.php?v=d&p=history"

Of course, 109.99.99.99 is my remote IP and "truc:4321" my domain name /
port! I am not sure this log alone will help, but many thanks if you are kind
enough to help me understand and fix my issue. I can do more tests and
provide other logs; just let me know what I should do.

Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258187,258187#msg-258187

From nginx-forum at nginx.us Sun Apr 19 00:05:42 2015
From: nginx-forum at nginx.us (xuhdev)
Date: Sat, 18 Apr 2015 20:05:42 -0400
Subject: Disable caching the names in /etc/hosts in reverse proxy?
Message-ID: <048256abe36c6fa12c513c9044b8ed97.NginxMailingListEnglish@forum.nginx.org>

I'm using Nginx to act as a reverse proxy, where the backend is a name
defined in /etc/hosts. However, Nginx does not react to the changes made in
/etc/hosts until restarted.
Is it possible to disable caching the names in /etc/hosts in reverse proxy?

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258188,258188#msg-258188

From nginx-forum at nginx.us Sun Apr 19 06:49:26 2015
From: nginx-forum at nginx.us (gariac)
Date: Sun, 19 Apr 2015 02:49:26 -0400
Subject: dificulty finding local static content
In-Reply-To: <2ee0b95644c9ffeba343f59e0e63d75b.NginxMailingListEnglish@forum.nginx.org>
References: <2ee0b95644c9ffeba343f59e0e63d75b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <01b28a45238343cfa7ec234a61732019.NginxMailingListEnglish@forum.nginx.org>

Argh! Pilot error! Once I cleared the browser cache, the website works.
Sorry to waste your time. Next time I post, I'll try to have a real problem.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258175,258189#msg-258189

From nginx-forum at nginx.us Sun Apr 19 08:06:55 2015
From: nginx-forum at nginx.us (nicocolt)
Date: Sun, 19 Apr 2015 04:06:55 -0400
Subject: rewrite rules issue
In-Reply-To: <20150418210316.GS29618@daoine.org>
References: <20150418210316.GS29618@daoine.org>
Message-ID: <93174a2c28289486b307d63f6676e9fd.NginxMailingListEnglish@forum.nginx.org>

Hello,

Thanks for your reply. If I remove the three lines, I get a 404 error. As a
reminder, and to make things a little clearer :), here is the configuration.

Here is the directory path:

/var/www/domain.fr/web/subdomain/directory

My nginx conf is:

root /var/www/domain.fr/web;

if ($http_host = "subdomain.domain.fr") {
    rewrite ^(?!/\b(subdomain|stats|error)\b)/(.*)$ /subdomain/$2 last;
}

The if statement is before all of the location directives.
After purging my browser cache, this is what I get: when I reach
http://subdomain.domain.fr/directory I'm redirected to
http://subdomain.domain.fr/subdomain/directory (this is not what I want; I
want http://subdomain.domain.fr/directory).

Thanks for your help,

Best regards,
Nico

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258184,258190#msg-258190

From francis at daoine.org Sun Apr 19 11:04:38 2015
From: francis at daoine.org (Francis Daly)
Date: Sun, 19 Apr 2015 12:04:38 +0100
Subject: rewrite rules issue
In-Reply-To: <93174a2c28289486b307d63f6676e9fd.NginxMailingListEnglish@forum.nginx.org>
References: <20150418210316.GS29618@daoine.org> <93174a2c28289486b307d63f6676e9fd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150419110438.GU29618@daoine.org>

On Sun, Apr 19, 2015 at 04:06:55AM -0400, nicocolt wrote:

Hi there,

> Here is the directory path:
>
> /var/www/domain.fr/web/subdomain/directory
>
> My nginx conf is:
>
> root /var/www/domain.fr/web;
>
> if ($http_host = "subdomain.domain.fr") {
>     rewrite ^(?!/\b(subdomain|stats|error)\b)/(.*)$ /subdomain/$2 last;
> }
>
> The if statement is before all of the location directives.
>
> After purging my browser cache, this is what I get: when I reach
> http://subdomain.domain.fr/directory I'm redirected to
> http://subdomain.domain.fr/subdomain/directory (this is not what I want; I
> want http://subdomain.domain.fr/directory)

Ok, this makes a bit more sense now: your request actually might use the
configuration shown.

Are you sure that you have copy-pasted exactly the response? I would have
expected a redirect to http://subdomain.domain.fr/subdomain/directory/ -- but
the actual redirect comes from the configuration that you have not yet shown.

Your rewrite says that you *do* want the request to be handled like
http://subdomain.domain.fr/subdomain/directory, so I'm still not sure what
the actual full end intention is.
It does look to me like

    server_name subdomain.domain.fr;
    root /var/www/domain.fr/web/subdomain;

is probably what you really want, but I guess there is some reason why you
don't just use that?

Good luck with it,

f
--
Francis Daly francis at daoine.org

From francis at daoine.org Sun Apr 19 11:05:20 2015
From: francis at daoine.org (Francis Daly)
Date: Sun, 19 Apr 2015 12:05:20 +0100
Subject: dificulty finding local static content
In-Reply-To: <01b28a45238343cfa7ec234a61732019.NginxMailingListEnglish@forum.nginx.org>
References: <2ee0b95644c9ffeba343f59e0e63d75b.NginxMailingListEnglish@forum.nginx.org> <01b28a45238343cfa7ec234a61732019.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150419110520.GV29618@daoine.org>

On Sun, Apr 19, 2015 at 02:49:26AM -0400, gariac wrote:

Hi there,

> Argh! Pilot error! Once I cleared the browser cache, the website works.

Good that you found the working system. And thanks for following up, so that
the list knows there is no ongoing problem.

Cheers,

f
--
Francis Daly francis at daoine.org

From artemrts at ukr.net Sun Apr 19 11:14:03 2015
From: artemrts at ukr.net (wishmaster)
Date: Sun, 19 Apr 2015 14:14:03 +0300
Subject: nginx_slowfs_cache
Message-ID: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com>

Hi,

Today, after upgrading from nginx version 1.6.x to 1.7.x, I got a
segmentation fault. After a short investigation the culprit was found: it is
the nginx_slowfs_cache module by FRiCKLE.

Has anybody had the same issue? Is this module obsolete?
Cheers,
Vitaliy

From rainer at ultra-secure.de Sun Apr 19 12:53:14 2015
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Sun, 19 Apr 2015 14:53:14 +0200
Subject: nginx_slowfs_cache
In-Reply-To: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com>
References: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com>
Message-ID: <7C88D7F7-7BAD-47AB-A0B0-A88AF8EFD8EE@ultra-secure.de>

> Am 19.04.2015 um 13:14 schrieb wishmaster :
>
> Hi,
>
> Today, after upgrading from nginx version 1.6.x to 1.7.x, I got a
> segmentation fault. After a short investigation the culprit was found: it
> is the nginx_slowfs_cache module by FRiCKLE.
>
> Has anybody had the same issue? Is this module obsolete?

Can you describe your use-case for it? And whether you saw a
performance-boost from it, compared to other alternatives?

I wouldn't say it's useless these days, but I view it as a bit "exotic".
Read this from the official website:

About
=====
`ngx_slowfs_cache` is an `nginx` module which allows caching of static files
(served using the `root` directive). This enables one to create fast caches
for files stored on slow filesystems, for example:

- storage: network disks, cache: local disks,
- storage: 7.2K SATA drives, cache: 15K SAS drives in RAID0.

**WARNING! There is no point in using this module when the cache is placed
on the same speed disk(s) as the origin.**
====

I use a RAM disk for this cache. Yes, it is fast enough.
Do you know any alternatives?

From rainer at ultra-secure.de Sun Apr 19 13:15:00 2015
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Sun, 19 Apr 2015 15:15:00 +0200
Subject: nginx_slowfs_cache
In-Reply-To: <1429448935.634585618.9l9vvzxc@frv34.fwdcdn.com>
References: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com> <7C88D7F7-7BAD-47AB-A0B0-A88AF8EFD8EE@ultra-secure.de> <1429448935.634585618.9l9vvzxc@frv34.fwdcdn.com>
Message-ID: <0CA9AA18-28ED-42C6-AA51-0EF7CF7A1D3C@ultra-secure.de>

> Am 19.04.2015 um 15:12 schrieb wishmaster :
>
> `ngx_slowfs_cache` is an `nginx` module which allows caching of static
> files (served using the `root` directive). This enables one to create fast
> caches for files stored on slow filesystems, for example:
>
> - storage: network disks, cache: local disks,
> - storage: 7.2K SATA drives, cache: 15K SAS drives in RAID0.
>
> **WARNING! There is no point in using this module when the cache is placed
> on the same speed disk(s) as the origin.**
> ====
>
> I use a RAM disk for this cache. Yes, it is fast enough.
> Do you know any alternatives?

I've briefly toyed with it myself, at some point.

What is your "slow" filesystem?
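For readers who have not used the module under discussion: per the FRiCKLE README quoted above, such a cache is wired up roughly as below. This is an untested sketch — the zone name and paths are invented placeholders, and the directive names come from the module's documentation rather than from this thread:

```nginx
http {
    # Copies of files served from the slow "root" get cached here
    # (e.g. a RAM disk or a fast local disk).
    slowfs_cache_path /mnt/fastcache/nginx levels=1:2 keys_zone=fastcache:10m;
    slowfs_temp_path  /mnt/fastcache/tmp 1 2;

    server {
        location / {
            root               /mnt/slow-storage/www;  # the slow origin
            slowfs_cache       fastcache;
            slowfs_cache_key   $uri;
            slowfs_cache_valid 1d;
        }
    }
}
```

As the README's warning says, this only pays off when the cache path sits on genuinely faster storage than the origin.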
From justinbeech at gmail.com Sun Apr 19 13:16:58 2015 From: justinbeech at gmail.com (jb) Date: Sun, 19 Apr 2015 23:16:58 +1000 Subject: nginx_slowfs_cache In-Reply-To: <1429448935.634585618.9l9vvzxc@frv34.fwdcdn.com> References: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com> <7C88D7F7-7BAD-47AB-A0B0-A88AF8EFD8EE@ultra-secure.de> <1429448935.634585618.9l9vvzxc@frv34.fwdcdn.com> Message-ID: At least in my experience unless your most used static files exceed in size your available RAM, or are changing, they are effectively cached by the OS anyway. So storing them on a ram disk is really doing the same or worse job than just letting the OS store them and serve them from its file cache memory pages. Plus the OS has the advantage of knowing which are less frequently used and can be purged. On Sun, Apr 19, 2015 at 11:12 PM, wishmaster wrote: > > > > --- Original message --- > From: "Rainer Duffner" > Date: 19 April 2015, 15:53:29 > > > > > > > > Am 19.04.2015 um 13:14 schrieb wishmaster : > > > > > > Hi, > > > > > > Today after upgrading from nginx version 1.6.x to 1.7.x I have got a > segmentation fault. After short investigation the culprit was found. It is > module by Frikle - nginx_slowfs_cache. > > > > > > Is anybody has the same issue? Is this module is obsolete? > > > > > > > > Can you describe your use-case for it? > > > > And whether you saw a performance-boost from it, compared to other > alternatives? > > > > > > I wouldn?t say it?s useless these days, but I view it as a bit ?exotic?. > > Read this from official website: > > About > ===== > `ngx_slowfs_cache` is `nginx` module which allows caching of static files > (served using `root` directive). This enables one to create fast caches > for files stored on slow filesystems, for example: > > - storage: network disks, cache: local disks, > - storage: 7,2K SATA drives, cache: 15K SAS drives in RAID0. > > **WARNING! 
There is no point in using this module when cache is placed
> on the same speed disk(s) as origin.**
> ====
>
> I use RAM disk for this cache. Yes, it is fast enough.
> Do you know any alternatives?

From rainer at ultra-secure.de Sun Apr 19 13:24:04 2015
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Sun, 19 Apr 2015 15:24:04 +0200
Subject: nginx_slowfs_cache
In-Reply-To:
References: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com> <7C88D7F7-7BAD-47AB-A0B0-A88AF8EFD8EE@ultra-secure.de> <1429448935.634585618.9l9vvzxc@frv34.fwdcdn.com>
Message-ID: <7632CCBF-4B6C-42CA-8543-A3233346AF1B@ultra-secure.de>

> Am 19.04.2015 um 15:16 schrieb jb :
>
> At least in my experience unless your most used static files exceed in
> size your available RAM, or are changing, they are effectively cached by
> the OS anyway.

Normally, yes. Hence the reason why phk wrote Varnish, when he saw what
squid was (and still is) doing...

But is that the case with NFS, too? I thought there was some caching, too.
But I'm not sure.

> So storing them on a ram disk is really doing the same or worse job than
> just letting the OS store them and serve them from its file cache memory
> pages. Plus the OS has the advantage of knowing which are less frequently
> used and can be purged.

Yep, that's why I was asking.

If his data-set was _very_ big (in the large multi-TB region) and he had a
couple of small SSDs to cache stuff, while at the same time the size of the
SSDs was about the size of the most requested files, it /could/ make sense.

But OTOH, you could also just install FreeBSD and use the SSDs as L2ARC and
let the OS do the rest ;-)

Even the usefulness of L2ARC is often questioned by people familiar with the
matter... OS caching is _very_ hard to beat.
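The page-cache effect jb describes is easy to observe from a shell. A rough illustration (file size and paths are arbitrary choices, and the timings say nothing about nginx itself) reads the same file twice; the second `cat` is normally served from the OS page cache rather than the disk:

```shell
# Create a throwaway 16 MB file and read it twice.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16 2>/dev/null
sync

time cat "$f" > /dev/null   # first read: may touch the disk
time cat "$f" > /dev/null   # second read: normally from the page cache

rm -f "$f"
```

On Linux, running `echo 3 > /proc/sys/vm/drop_caches` as root between `sync` and the first read makes the cold/warm contrast much clearer.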
From artemrts at ukr.net Sun Apr 19 13:24:06 2015
From: artemrts at ukr.net (wishmaster)
Date: Sun, 19 Apr 2015 16:24:06 +0300
Subject: nginx_slowfs_cache
In-Reply-To: <0CA9AA18-28ED-42C6-AA51-0EF7CF7A1D3C@ultra-secure.de>
References: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com> <7C88D7F7-7BAD-47AB-A0B0-A88AF8EFD8EE@ultra-secure.de> <1429448935.634585618.9l9vvzxc@frv34.fwdcdn.com> <0CA9AA18-28ED-42C6-AA51-0EF7CF7A1D3C@ultra-secure.de>
Message-ID: <1429449843.800776214.pwg3549w@frv34.fwdcdn.com>

--- Original message ---
From: "Rainer Duffner"
Date: 19 April 2015, 16:15:19

> > Am 19.04.2015 um 15:12 schrieb wishmaster :
> >
> > `ngx_slowfs_cache` is an `nginx` module which allows caching of static
> > files (served using the `root` directive). This enables one to create
> > fast caches for files stored on slow filesystems, for example:
> >
> > - storage: network disks, cache: local disks,
> > - storage: 7.2K SATA drives, cache: 15K SAS drives in RAID0.
> >
> > **WARNING! There is no point in using this module when the cache is
> > placed on the same speed disk(s) as the origin.**
> > ====
> >
> > I use a RAM disk for this cache. Yes, it is fast enough.
> > Do you know any alternatives?
>
> I've briefly toyed with it myself, at some point.
>
> What is your "slow" filesystem?

SATA II single disk, UFS.
From rainer at ultra-secure.de Sun Apr 19 13:32:24 2015
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Sun, 19 Apr 2015 15:32:24 +0200
Subject: nginx_slowfs_cache
In-Reply-To: <1429449843.800776214.pwg3549w@frv34.fwdcdn.com>
References: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com> <7C88D7F7-7BAD-47AB-A0B0-A88AF8EFD8EE@ultra-secure.de> <1429448935.634585618.9l9vvzxc@frv34.fwdcdn.com> <0CA9AA18-28ED-42C6-AA51-0EF7CF7A1D3C@ultra-secure.de> <1429449843.800776214.pwg3549w@frv34.fwdcdn.com>
Message-ID:

> Am 19.04.2015 um 15:24 schrieb wishmaster :
>
>> I've briefly toyed with it myself, at some point.
>>
>> What is your "slow" filesystem?
>
> SATA II single disk, UFS.

Just let the OS do its work.

https://openconnect.itp.netflix.com/software/index.html

AFAIK, almost all of the changes Netflix made to improve performance for
their use-case are now back in the tree and available on stock FreeBSD 10.1
with little or no tuning. I assume the same is true for improvements made to
nginx.

I'd upgrade to FreeBSD 10.1 and max out the RAM. No need to go ZFS.

From nginx-forum at nginx.us Sun Apr 19 13:53:43 2015
From: nginx-forum at nginx.us (nicocolt)
Date: Sun, 19 Apr 2015 09:53:43 -0400
Subject: rewrite rules issue
In-Reply-To: <20150419110438.GU29618@daoine.org>
References: <20150419110438.GU29618@daoine.org>
Message-ID: <45c488304a5f12e26545cac2081f0990.NginxMailingListEnglish@forum.nginx.org>

Hello,

You're right: my rewrite says that my request is handled like
http://subdomain.domain.fr/subdomain/directory/. But that is not what I
want. I want my request handled like this:
http://subdomain.domain.fr/directory/, and I don't want to see the
intermediate path.
I don't understand how to proceed in order to see only
http://subdomain.domain.fr/directory/ instead of
http://subdomain.domain.fr/subdomain/directory/.

Thanks for your help!

Best regards,
Nico

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258184,258201#msg-258201

From nginx-forum at nginx.us Sun Apr 19 22:08:35 2015
From: nginx-forum at nginx.us (rPawel)
Date: Sun, 19 Apr 2015 18:08:35 -0400
Subject: Intermittent SSL Handshake issues on Ubuntu 12.04 and Nginx
Message-ID: <52cca5ce82d343a94b12381b0d143351.NginxMailingListEnglish@forum.nginx.org>

Hi Guys,

I originally posted my issue on askubuntu, but I think this will be a better
place:
http://askubuntu.com/questions/611418/intermittent-ssl-handshake-issues-on-ubuntu-12-04-and-nginx

Original post
--------------------------------

# In simple terms

I am having issues with https handshakes. I am currently using nginx, but it
is most likely not an nginx issue.

# Behaviour

Web clients such as browsers will sometimes present "SSL connection error"
(Chrome). Apache benchmark will spit out several error lines and will report
around 1-10% failures. The errors below will appear in random order, but the
first one is more common.

(1) Benchmarking mysite.net (be patient)...SSL read failed (1) - closing connection
128494120003296:error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac:s3_pkt.c:486:

(2) SSL read failed (1) - closing connection
128494120003296:error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac:s3_pkt.c:1262:SSL alert number 20

# Server setup

Ubuntu: Ubuntu 12.04 64bit with all updates and patches installed, server
restarted.
Nginx:

nginx/1.6.3 - from nginx.org (deb http://nginx.org/packages/ubuntu/ precise
nginx)

OpenSSL dynamically linked:

# ldd `which nginx` | grep ssl
libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0
(0x00007f3065569000)

# strings /lib/x86_64-linux-gnu/libssl.so.1.0.0 | grep "^OpenSSL "
OpenSSL 1.0.1 14 Mar 2012

Nginx server config (with limited ciphers)
OpenSSL:

1.0.1 14 Mar 2012

#dpkg -s libssl1.0.0
Version: 1.0.1-4ubuntu5.25

#Workarounds

So far, the only workaround I found is to narrow down the available
ciphers. Instead of using the Mozilla Intermediate set, I found these would
work without any issues:

ssl_ciphers
'ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';

The second option is to downgrade to stock nginx (1.1.19-1ubuntu0.7)

#Things I tried

Because I am mainly using nginx as a proxy / load balancer, I tried
replacing nginx with HA-Proxy 1.5. Unfortunately I got the same problem.

I tried compiling nginx 1.6.3 with openssl 1.0.1m - no change.

An on-line https/ssl validity tester did not find any issues with any of
the certificates.

Disabling other nginx sites did not help either.

#Things I noticed

Interestingly this problem does not occur when using apache benchmark from
the server itself or its immediate neighbours, but it does happen when
connecting from outside of the data centre. Apparently the DC guys (coreix)
claim not to have any DDOS prevention system in front of the servers which
would cause such an issue.

This issue is happening mainly on one of the https domains and is very
sporadic for the remaining two - hosted on the same box.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258204,258204#msg-258204

From francis at daoine.org Sun Apr 19 22:50:42 2015
From: francis at daoine.org (Francis Daly)
Date: Sun, 19 Apr 2015 23:50:42 +0100
Subject: rewrite rules issue
In-Reply-To: <45c488304a5f12e26545cac2081f0990.NginxMailingListEnglish@forum.nginx.org>
References: <20150419110438.GU29618@daoine.org>
 <45c488304a5f12e26545cac2081f0990.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150419225042.GX29618@daoine.org>

On Sun, Apr 19, 2015 at 09:53:43AM -0400, nicocolt wrote:

Hi there,

> I don't understand how to proceed in order to see only
> http://subdomain.domain.fr/directory/ instead of
> http://subdomain.domain.fr/subdomain/directory/

I'll still suggest

root /var/www/domain.fr/web/subdomain;

in a separate server{} block, and then have /stats/ and /error/ locations
which do whatever else you want them to do.

Good luck with it,

f
-- 
Francis Daly francis at daoine.org

From karthika.sathish at siemens.com Mon Apr 20 05:23:37 2015
From: karthika.sathish at siemens.com (Sathish, Karthika IN BLR STS)
Date: Mon, 20 Apr 2015 10:53:37 +0530
Subject: license information of component nginx-1.7.9
Message-ID: 

Hello experts,

I am planning to use the component nginx-1.7.9 and came across a file
ngx_md5.c (nginx-1.7.9.tar\nginx-1.7.9\src\core\ngx_md5.c). Can I know
under which license this file is licensed?

With best regards,
Karthika Sathish

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us Mon Apr 20 07:36:25 2015
From: nginx-forum at nginx.us (camexin)
Date: Mon, 20 Apr 2015 03:36:25 -0400
Subject: FTP reverse proxy
Message-ID: <94b2fd1a8047edc4edede4da1db01601.NginxMailingListEnglish@forum.nginx.org>

Hi all,

I'm discovering Nginx. Can this product work as an FTP reverse proxy? If
not, do you know a product with this function?
Thanks

François

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258207,258207#msg-258207

From nginx-forum at nginx.us Mon Apr 20 08:12:11 2015
From: nginx-forum at nginx.us (nicocolt)
Date: Mon, 20 Apr 2015 04:12:11 -0400
Subject: rewrite rules issue
In-Reply-To: <20150419225042.GX29618@daoine.org>
References: <20150419225042.GX29618@daoine.org>
Message-ID: <0c47f93336b460f9bfef00f36a331852.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

I have set up a new server block, but I'm still facing the initial problem.

So let me re-explain it.

Now I have:

server name stuff.domain.fr
root /var/www/domain.fr/web/subdomain;

In the subdomain directory I have a foo directory:
/var/www/domain.fr/web/subdomain/foo/

So now in my browser, when I reach

stuff.domain.fr, I see the page index.html in the subdomain folder.

But if I reach

stuff.domain.fr/foo, I'm redirected to stuff.domain.fr/subdomain/foo/ -> 404
stuff.domain.fr/foo/, I see the page index.html in the foo folder

Well, I'm lost, two days trying to make this work. It seems that the
trailing slash has an impact on how the request is handled by nginx.

Thanks for your help

Best regards,

Nico

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258184,258208#msg-258208

From francis at daoine.org Mon Apr 20 13:05:41 2015
From: francis at daoine.org (Francis Daly)
Date: Mon, 20 Apr 2015 14:05:41 +0100
Subject: rewrite rules issue
In-Reply-To: <0c47f93336b460f9bfef00f36a331852.NginxMailingListEnglish@forum.nginx.org>
References: <20150419225042.GX29618@daoine.org>
 <0c47f93336b460f9bfef00f36a331852.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150420130541.GY29618@daoine.org>

On Mon, Apr 20, 2015 at 04:12:11AM -0400, nicocolt wrote:

Hi there,

> I have set up a new server block, but I'm still facing the initial
> problem.
> 
> So let me re-explain it.
> 
> Now I have:
> 
> server name stuff.domain.fr
> root /var/www/domain.fr/web/subdomain;

On your test system which shows this behaviour, what is the complete
copy-paste version of the server{} configuration that you are using?

If I use

==
server {
server_name stuff.domain.fr;
root /var/www/domain.fr/web/subdomain;
}
==

then "curl -i -H Host:stuff.domain.fr http://127.0.0.1/foo" does not show
me the problem that you report.

f
-- 
Francis Daly francis at daoine.org

From shahzaib.cb at gmail.com Mon Apr 20 13:18:00 2015
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Mon, 20 Apr 2015 18:18:00 +0500
Subject: open socket #84 left in connection
Message-ID: 

Hi,

We're using nginx to upload and serve video files of around 1GB in size via
http. We've been receiving complaints from some customers that uploading
has issues and sometimes users are unable to upload videos successfully.
The server is installed with Nginx-1.4.7+php-fpm, ffmpeg, MP4Box. On
checking the nginx logs, we didn't get anything but the following messages:

2015/04/16 16:49:53 [alert] 15077#0: open socket #81 left in connection 49
2015/04/16 16:49:53 [alert] 15084#0: open socket #48 left in connection 19
2015/04/16 16:49:53 [alert] 15077#0: open socket #84 left in connection 51
2015/04/16 16:49:53 [alert] 15084#0: open socket #52 left in connection 21
2015/04/16 16:49:53 [alert] 15077#0: open socket #87 left in connection 53
2015/04/16 16:49:53 [alert] 15079#0: open socket #81 left in connection 46
2015/04/16 16:49:53 [alert] 15084#0: open socket #53 left in connection 22

Here is the nginx.conf

user nginx;
worker_processes 16;
worker_rlimit_nofile 300000; #2 filehandlers for each connection
error_log /usr/local/nginx/logs/error.log crit;
#access_log logs/access.log;
#pid logs/nginx.pid;


events {
worker_connections 6000;
use epoll;
}
http {
include mime.types;
default_type application/octet-stream;
client_max_body_size 3000M;
client_body_buffer_size 2000M;
sendfile_max_chunk 128k;
client_header_buffer_size 256k;
large_client_header_buffers 4 256k;
output_buffers 1 512k;
server_tokens off; #Conceals nginx version
access_log /usr/local/nginx/logs/access.log ;
access_log off;
sendfile off;
ignore_invalid_headers on;
client_header_timeout 60m;
client_body_timeout 60m;
send_timeout 60m;
reset_timedout_connection on;

keepalive_timeout 15;
include "/usr/local/nginx/conf/vhosts/*.conf";
error_page 404 = /thumb.php;
error_page 403 /forbidden.html;
}

Can anyone help me with this?

Regards.
Shahzaib
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Mon Apr 20 13:54:55 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 20 Apr 2015 16:54:55 +0300
Subject: license information of component nginx-1.7.9
In-Reply-To: 
References: 
Message-ID: <20150420135455.GD32429@mdounin.ru>

Hello!

On Mon, Apr 20, 2015 at 10:53:37AM +0530, Sathish, Karthika IN BLR STS wrote:

> Hello experts,
> 
> I am planning to use the component nginx-1.7.9 and came across a
> file ngx_md5.c (nginx-1.7.9.tar\nginx-1.7.9\src\core\ngx_md5.c).
> Can I know under which license this file is licensed?

This code is believed to be in the public domain, much like the
original work it is based on.

-- 
Maxim Dounin
http://nginx.org/

From shahzaib.cb at gmail.com Mon Apr 20 14:23:45 2015
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Mon, 20 Apr 2015 19:23:45 +0500
Subject: open socket #84 left in connection
In-Reply-To: 
References: 
Message-ID: 

I have also enabled debug logging and found 'Resource temporarily
unavailable' messages.
Below is the reference sample : 2015/04/20 18:41:29 [debug] 12917#0: *2711 event timer add: 18: 15000:1429537304304 2015/04/20 18:41:29 [debug] 12917#0: *2711 post event 0000000000A372D0 2015/04/20 18:41:29 [debug] 12917#0: posted event 0000000000A372D0 2015/04/20 18:41:29 [debug] 12917#0: *2711 delete posted event 0000000000A372D0 2015/04/20 18:41:29 [debug] 12917#0: *2711 http keepalive handler 2015/04/20 18:41:29 [debug] 12917#0: *2711 malloc: 0000000000ACF6C0:262144 2015/04/20 18:41:29 [debug] 12917#0: *2711 recv: fd:18 -1 of 262144 2015/04/20 18:41:29 [debug] 12917#0: *2711 recv() not ready (11: Resource temporarily unavailable) 2015/04/20 18:41:29 [debug] 12917#0: *2711 free: 0000000000ACF6C0 2015/04/20 18:41:29 [debug] 12917#0: posted event 0000000000000000 2015/04/20 18:41:29 [debug] 12917#0: worker cycle 2015/04/20 18:41:29 [debug] 12917#0: accept mutex locked 2015/04/20 18:41:29 [debug] 12917#0: epoll timer: 15000 2015/04/20 18:41:29 [debug] 12913#0: epoll: fd:18 ev:0004 d:00007F58B7D89238 2015/04/20 18:41:29 [debug] 12913#0: *2698 http run request: "/files/videos/2015/04/15/14290705507373d-360.mp4?" Could that be the issue ? On Mon, Apr 20, 2015 at 6:18 PM, shahzaib shahzaib wrote: > Hi, > > We're using nginx to upload and serve videos files around 1GB of file > size via http. We've been receiving complains from some customers that > uploading has some issue and sometimes user are unable to upload videos > successfully. Server is installed with Nginx-1.4.7+php-fpm, ffmpeg, MP4Box. 
> On checking the nginx logs, we didn't got anything but Following messages : > > 2015/04/16 16:49:53 [alert] 15077#0: open socket #81 left in connection 49 > 2015/04/16 16:49:53 [alert] 15084#0: open socket #48 left in connection 19 > 2015/04/16 16:49:53 [alert] 15077#0: open socket #84 left in connection 51 > 2015/04/16 16:49:53 [alert] 15084#0: open socket #52 left in connection 21 > 2015/04/16 16:49:53 [alert] 15077#0: open socket #87 left in connection 53 > 2015/04/16 16:49:53 [alert] 15079#0: open socket #81 left in connection 46 > 2015/04/16 16:49:53 [alert] 15084#0: open socket #53 left in connection 22 > > Here is the nginx.conf > > user nginx; > worker_processes 16; > worker_rlimit_nofile 300000; #2 filehandlers for each connection > error_log /usr/local/nginx/logs/error.log crit; > #access_log logs/access.log; > #pid logs/nginx.pid; > > > events { > worker_connections 6000; > use epoll; > } > http { > include mime.types; > default_type application/octet-stream; > client_max_body_size 3000M; > client_body_buffer_size 2000M; > sendfile_max_chunk 128k; > client_header_buffer_size 256k; > large_client_header_buffers 4 256k; > output_buffers 1 512k; > server_tokens off; #Conceals nginx version > access_log /usr/local/nginx/logs/access.log ; > access_log off; > sendfile off; > ignore_invalid_headers on; > client_header_timeout 60m; > client_body_timeout 60m; > send_timeout 60m; > reset_timedout_connection on; > > keepalive_timeout 15; > include "/usr/local/nginx/conf/vhosts/*.conf"; > error_page 404 = /thumb.php; > error_page 403 /forbidden.html; > } > > If anyone can help me with this ? > > Regards. > Shahzaib > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Apr 20 17:09:45 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 20 Apr 2015 19:09:45 +0200 Subject: Disable caching the names in /etc/hosts in reverse proxy? 
In-Reply-To: <048256abe36c6fa12c513c9044b8ed97.NginxMailingListEnglish@forum.nginx.org>
References: <048256abe36c6fa12c513c9044b8ed97.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

There is no 'cache' per se, but rather remembered DNS results, much like
you do when you deal with domain names to avoid putting too high a burden
on NS resolvers.

nginx resolves names on start or reload. The commercial version added a
feature to update names periodically (the resolve option of the server
directive in the upstream module). However, FOSS nginx is stuck with
'static' name resolution.

The commercial version is said to be full-featured, having new features the
FOSS version does not have and does not need... questionable when you see
such 'degraded' features in FOSS, which the commercial version handles the
right way. Political/economical choices, I suppose.
---
*B. R.*

On Sun, Apr 19, 2015 at 2:05 AM, xuhdev wrote:

> I'm using Nginx to act as a reverse proxy, where the backend is a name
> defined in /etc/hosts. However, Nginx does not react to the changes made in
> /etc/hosts until restarted. Is it possible to disable caching the names in
> /etc/hosts in reverse proxy?
> 
> Thanks
> 
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,258188,258188#msg-258188
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us Mon Apr 20 17:34:51 2015
From: nginx-forum at nginx.us (daBee)
Date: Mon, 20 Apr 2015 13:34:51 -0400
Subject: VHost Guidance
Message-ID: 

Hi folks.

Brand new to nginx. I'm trying to run 3 vhosts on my workstation to get
familiar with nginx.
alpha bravo charlie I'm using bravo in the main nginx.conf pointing to /var/www/alpha/ bravo and charlie are in settings/vhosts.conf into /var/www/bravo and /var/www/charlie Upon trying to start nginx (installed via homebrew on OSX), I get the following: HQ:~ rich$ sudo nginx nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) nginx: [emerg] still could not bind() Not too sure what that means. I'm sure it's a directive that's not entered for localhost. Anyway, any leadership appreciated as to how to get this thing going. Cheers Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258229,258229#msg-258229 From mdounin at mdounin.ru Mon Apr 20 17:43:36 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Apr 2015 20:43:36 +0300 Subject: Intermittent SSL Handshake issues on Ubuntu 12.04 and Nginx In-Reply-To: <52cca5ce82d343a94b12381b0d143351.NginxMailingListEnglish@forum.nginx.org> References: <52cca5ce82d343a94b12381b0d143351.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150420174336.GL32429@mdounin.ru> Hello! On Sun, Apr 19, 2015 at 06:08:35PM -0400, rPawel wrote: > Hi Guys, > > I posted originally my issue on askubuntu but I think this will be a better > place > > http://askubuntu.com/questions/611418/intermittent-ssl-handshake-issues-on-ubuntu-12-04-and-nginx. > > Original post > -------------------------------- > > # In simple terms > > I am having issues with https handshakes. I am currently using nginx but it > is most likely not an nginx issue. 
> > # Behaviour > > Web clients such as browsers will sometimes present "SSL connection error" > (Chrome) > > Apache benchmark will spit out several error lines and will report around > 1-10% failures. Errors below will appear in random order but the first one > is more common. > > (1) Benchmarking mysite.net (be patient)...SSL read failed (1) - closing > connection > 128494120003296:error:1408F119:SSL routines:SSL3_GET_RECORD:decryption > failed or bad record mac:s3_pkt.c:486: > > (2) SSL read failed (1) - closing connection > 128494120003296:error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad > record mac:s3_pkt.c:1262:SSL alert number 20 > > # Server setup > Ubuntu: > > Ubuntu 12.04 64bit with all updates and patches installed, server > restarted. > Nginx: > > nginx/1.6.3 - from nginx.org (deb http://nginx.org/packages/ubuntu/ precise > nginx) > > OpenSSL dynamically linked: > > # ldd `which nginx` | grep ssl > libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 > (0x00007f3065569000) > > # strings /lib/x86_64-linux-gnu/libssl.so.1.0.0 | grep "^OpenSSL " > OpenSSL 1.0.1 14 Mar 2012 > > Nginx server config (with limited cyphers) > OpenSSL: > > 1.0.1 14 Mar 2012 > > #dpkg -s libssl1.0.0 > Version: 1.0.1-4ubuntu5.25 This looks similar to this ticket (turned out to be a bug in OpenSSL, see comments for details): http://trac.nginx.org/nginx/ticket/215 Try upgrading to OpenSSL 1.0.1h or newer to see if it helps. Alternatively, make sure the OpenSSL package you are using includes the fix in question. [...] -- Maxim Dounin http://nginx.org/ From semenukha at gmail.com Mon Apr 20 17:51:35 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Mon, 20 Apr 2015 13:51:35 -0400 Subject: VHost Guidance In-Reply-To: References: Message-ID: <7332841.CJ25rMlJao@tornado> You already have an application listening on port 8080. You can find it using netstat(1). On Monday, April 20, 2015 01:34:51 PM daBee wrote: > Hi folks. > > Brand new to nginx. 
I'm trying to run 3 vhosts on my workstation to get > familiar with nginx. > > alpha > bravo > charlie > > I'm using bravo in the main nginx.conf pointing to /var/www/alpha/ > > bravo and charlie are in settings/vhosts.conf into /var/www/bravo and > /var/www/charlie > > Upon trying to start nginx (installed via homebrew on OSX), I get the > following: > > HQ:~ rich$ sudo nginx > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] still could not bind() > > Not too sure what that means. I'm sure it's a directive that's not entered > for localhost. > > Anyway, any leadership appreciated as to how to get this thing going. > > Cheers > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258229,258229#msg-258229 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Best regards, Styopa Semenukha. From devel at jasonwoods.me.uk Mon Apr 20 17:52:13 2015 From: devel at jasonwoods.me.uk (Jason Woods) Date: Mon, 20 Apr 2015 18:52:13 +0100 Subject: Disable caching the names in /etc/hosts in reverse proxy? In-Reply-To: <048256abe36c6fa12c513c9044b8ed97.NginxMailingListEnglish@forum.nginx.org> References: <048256abe36c6fa12c513c9044b8ed97.NginxMailingListEnglish@forum.nginx.org> Message-ID: <47AA0B28-C72B-415F-AAF7-7C443D0D8A4A@jasonwoods.me.uk> > On 19 Apr 2015, at 01:05, xuhdev wrote: > > I'm using Nginx to act as a reverse proxy, where the backend is a name > defined in /etc/hosts. However, Nginx does not react to the changes made in > /etc/hosts until restarted. 
> Is it possible to disable caching the names in
> /etc/hosts in reverse proxy?

As B.R. mentioned, the DNS lookup (via /etc/hosts or real DNS) is only done
at startup and reload.

You can change the behaviour to dynamic lookup by specifying a resolver in
the server block and then using variables in the proxy_pass. For example:

server {
resolver 8.8.8.8;
location / {
proxy_pass http://reverse_host$uri$is_args$args;
}
}

Because of the variables Nginx can't predict at startup what to perform a
lookup for. As a result it will perform the DNS lookup at request time. The
lookup response is then cached for the DNS TTL period.

CPU may be a little higher I guess but I haven't noticed anything even on
high load clusters.

This behaviour is alluded to in the documentation's last couple of
paragraphs for proxy_pass at:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass

Jason

From nginx-forum at nginx.us Mon Apr 20 17:54:59 2015
From: nginx-forum at nginx.us (daBee)
Date: Mon, 20 Apr 2015 13:54:59 -0400
Subject: VHost Guidance
In-Reply-To: <7332841.CJ25rMlJao@tornado>
References: <7332841.CJ25rMlJao@tornado>
Message-ID: 

Hi there. This is my port scan from 8000 to 8080:

Port Scan has started…

Port Scanning host: 127.0.0.1

Port Scan has completed…


I've never set anything up to use that port.


Styopa Semenukha Wrote:
-------------------------------------------------------
> You already have an application listening on port 8080. You can find
> it using
> netstat(1).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258229,258233#msg-258233

From devel at jasonwoods.me.uk Mon Apr 20 18:01:09 2015
From: devel at jasonwoods.me.uk (Jason Woods)
Date: Mon, 20 Apr 2015 19:01:09 +0100
Subject: VHost Guidance
In-Reply-To: 
References: <7332841.CJ25rMlJao@tornado>
Message-ID: <8B881A46-FEBF-490B-9A57-3B4D4D519182@jasonwoods.me.uk>

Try a port scan of your network-assigned IP and not 127.0.0.1.
If something listens on 192.168.0.10:8080 for example, which is not
127.0.0.1, it will block requests for listening on "all interfaces"
(0.0.0.0) which Nginx is trying to do, because one or more interfaces are
in use for that port.

netstat -ln
will tell you all listening ports and interfaces on most OSes and is always
preferred over port scans, as it gives direct information and does not rely
on a successful connection that a firewall may block.

Hope it helps

Jason

> On 20 Apr 2015, at 18:54, daBee wrote:
> 
> Hi there. This is my port scan from 8000 to 8080:
> 
> Port Scan has started…
> 
> Port Scanning host: 127.0.0.1
> 
> Port Scan has completed…
> 
> 
> I've never set anything up to use that port.
> 
> 
> Styopa Semenukha Wrote:
> -------------------------------------------------------
>> You already have an application listening on port 8080. You can find
>> it using
>> netstat(1).
> 
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258229,258233#msg-258233
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Mon Apr 20 18:08:44 2015
From: nginx-forum at nginx.us (daBee)
Date: Mon, 20 Apr 2015 14:08:44 -0400
Subject: VHost Guidance
In-Reply-To: <8B881A46-FEBF-490B-9A57-3B4D4D519182@jasonwoods.me.uk>
References: <8B881A46-FEBF-490B-9A57-3B4D4D519182@jasonwoods.me.uk>
Message-ID: <9eb24fb12c3177a242d801ad65104e8a.NginxMailingListEnglish@forum.nginx.org>

Hi there. Tried 192.168.1.4 (hard assigned), 127.0.0.1, localhost and
0.0.0.0: nothing shows up. Tried netstat -ln and nothing showing "8080" in
any return.

Jason Woods Wrote:
-------------------------------------------------------
> Try a port scan of your network-assigned IP and not 127.0.0.1.
> 
> If something listens on 192.168.0.10:8080 for example, which is not
> 127.0.0.1, it will block requests for listening on "all interfaces"
> (0.0.0.0) which Nginx is trying to do, because one or more interfaces
> are in use for that port.
> 
> netstat -ln
> will tell you all listening ports and interfaces on most OSes and is
> always preferred over port scans, as it gives direct information and
> does not rely on a successful connection that a firewall may block.
> 
> Hope it helps
> 
> Jason

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258229,258235#msg-258235

From jacklinkers at gmail.com Mon Apr 20 18:10:25 2015
From: jacklinkers at gmail.com (JACK LINKERS)
Date: Mon, 20 Apr 2015 20:10:25 +0200
Subject: VHost Guidance
In-Reply-To: 
References: 
Message-ID: 

Hi,

I'm not sure what you did, but the nginx.conf file is only used to
configure the http server, not vhosts.

There are two ways to set up vhosts (called server blocks in nginx):

A. Use the default config file (usually located @
/etc/nginx/sites-available/default) and add all your vhosts there. The
default file is self-explanatory.

B. You can use a different file for every server block (vhost) by making a
copy of the default file:
/etc/nginx/sites-available/vhost1.tld
/etc/nginx/sites-available/vhost2.tld
...

I think the conflict you are facing is due to the default config file;
delete the symbolic link found in the /etc/nginx/sites-enabled/ folder.

Hope this helps

Some interesting reading when starting with nginx:
http://nginx.org/en/docs/beginners_guide.html
http://blog.martinfjordvald.com/2010/07/nginx-primer/

2015-04-20 19:34 GMT+02:00 daBee :

> Hi folks.
> 
> Brand new to nginx. I'm trying to run 3 vhosts on my workstation to get
> familiar with nginx.
> > alpha > bravo > charlie > > I'm using bravo in the main nginx.conf pointing to /var/www/alpha/ > > bravo and charlie are in settings/vhosts.conf into /var/www/bravo and > /var/www/charlie > > Upon trying to start nginx (installed via homebrew on OSX), I get the > following: > > HQ:~ rich$ sudo nginx > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:8080 failed (48: Address already in use) > nginx: [emerg] still could not bind() > > Not too sure what that means. I'm sure it's a directive that's not entered > for localhost. > > Anyway, any leadership appreciated as to how to get this thing going. > > Cheers > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258229,258229#msg-258229 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Apr 20 18:11:48 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 20 Apr 2015 14:11:48 -0400 Subject: Disable caching the names in /etc/hosts in reverse proxy? In-Reply-To: References: Message-ID: B.R. Wrote: ------------------------------------------------------- > nginx resolves names on start or reload. > The commercial version added a feature to update names periodically > (resolve > option of the server directive in the upstream module > ). > However, FOSS nginx is stuck with 'static' names resolution. For an alternative method look here, https://groups.google.com/forum/#!topic/openresty-en/wt_9m7GvROg sources at the end of that thread. 
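Combining the suggestions in this thread, a minimal sketch of per-request resolution (the backend name and resolver address below are placeholders, not from the thread). One caveat worth noting: nginx's resolver performs DNS queries itself and does not read /etc/hosts, so hosts-file entries would still need a local DNS daemon (dnsmasq, for instance) sitting in front of it.

```nginx
server {
    listen 80;

    # nginx caches each answer for the record's TTL; "valid" overrides
    # that, here forcing a re-query every 30 seconds.
    resolver 127.0.0.1 valid=30s;

    location / {
        # A variable in proxy_pass defers the lookup to request time
        # instead of resolving once when the configuration is loaded.
        set $backend "backend.example.com";
        proxy_pass http://$backend$uri$is_args$args;
    }
}
```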
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258188,258236#msg-258236

From piotr at cloudflare.com Mon Apr 20 20:51:23 2015
From: piotr at cloudflare.com (Piotr Sikora)
Date: Mon, 20 Apr 2015 13:51:23 -0700
Subject: nginx_slowfs_cache
In-Reply-To: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com>
References: <1429441646.910777337.zt9ngr4w@frv34.fwdcdn.com>
Message-ID: 

Hi,

> Today after upgrading from nginx version 1.6.x to 1.7.x I have got a segmentation fault. After short investigation the culprit was found. It is module by Frikle - nginx_slowfs_cache.
> 
> Is anybody has the same issue? Is this module is obsolete?

It's not obsolete, but it's not actively maintained either... I'll take a
look later this week and fix it.

But as others already suggested, don't try to beat the OS at caching in
RAM. The original use-case for this module was local cache for files served
from NFS storage.

Best regards,
Piotr Sikora

From nginx-forum at nginx.us Tue Apr 21 02:26:43 2015
From: nginx-forum at nginx.us (jackyfkc)
Date: Mon, 20 Apr 2015 22:26:43 -0400
Subject: upstream prematurely closed connection while reading response header from upstream
Message-ID: <8711592812ad09a160c47da4212b30bb.NginxMailingListEnglish@forum.nginx.org>

Recently I tried to implement an nginx extension module to save the request
body directly to a redis list (via the RPUSH command). However, I got a 502
Bad Gateway error!

The error occurs after ngx_http_redis_create_request is called and before
ngx_http_redis_process_header() is called [ nginx/logs/error.log ]

Does anyone know what the problem is?
Here is the related log from [nginx/logs/error.log]:

2015/04/20 13:56:54 [debug] 5759#0: *1 http process request line
2015/04/20 13:56:54 [debug] 5759#0: *1 http request line: "GET /test HTTP/1.1"
2015/04/20 13:56:54 [debug] 5759#0: *1 http uri: "/test"
2015/04/20 13:56:54 [debug] 5759#0: *1 http args: ""
2015/04/20 13:56:54 [debug] 5759#0: *1 http exten: ""
2015/04/20 13:56:54 [debug] 5759#0: *1 http process request header line
2015/04/20 13:56:54 [debug] 5759#0: *1 http header: "User-Agent: Wget/1.15 (linux-gnu)"
2015/04/20 13:56:54 [debug] 5759#0: *1 http header: "Accept: */*"
2015/04/20 13:56:54 [debug] 5759#0: *1 http header: "Host: localhost:8989"
2015/04/20 13:56:54 [debug] 5759#0: *1 http header: "Connection: Keep-Alive"
2015/04/20 13:56:54 [debug] 5759#0: *1 http header done
2015/04/20 13:56:54 [debug] 5759#0: *1 event timer del: 3: 1429509474481
2015/04/20 13:56:54 [debug] 5759#0: *1 test location: "/"
2015/04/20 13:56:54 [debug] 5759#0: *1 test location: "general/test"
2015/04/20 13:56:54 [debug] 5759#0: *1 test location: "test"
2015/04/20 13:56:54 [debug] 5759#0: *1 using configuration "/test"
2015/04/20 13:56:54 [debug] 5759#0: *1 http cl:-1 max:1048576
2015/04/20 13:56:54 [debug] 5759#0: *1 generic phase: 1
2015/04/20 13:56:54 [debug] 5759#0: *1 generic phase: 2
2015/04/20 13:56:54 [debug] 5759#0: *1 access phase: 3
2015/04/20 13:56:54 [debug] 5759#0: *1 access phase: 4
2015/04/20 13:56:54 [debug] 5759#0: *1 post access phase: 5
2015/04/20 13:56:54 [debug] 5759#0: *1 posix_memalign: 00000000012627D0:4096 @16
2015/04/20 13:56:54 [debug] 5759#0: *1 http init upstream, client timer: 0
2015/04/20 13:56:54 [debug] 5759#0: *1 epoll add event: fd:3 op:3 ev:80002005
2015/04/20 13:56:54 [debug] 5759#0: *1 http cleanup add: 0000000001262788
2015/04/20 13:56:54 [debug] 5759#0: *1 get rr peer, try: 1
2015/04/20 13:56:54 [debug] 5759#0: *1 socket 9
2015/04/20 13:56:54 [debug] 5759#0: *1 epoll add connection: fd:9 ev:80002005
2015/04/20 13:56:54 [debug] 5759#0: *1 connect to 127.0.0.1:6379, fd:9 #2
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream connect: -2
2015/04/20 13:56:54 [debug] 5759#0: *1 posix_memalign: 000000000125B640:128 @16
2015/04/20 13:56:54 [debug] 5759#0: *1 event timer add: 9: 60000:1429509474482
2015/04/20 13:56:54 [debug] 5759#0: *1 http finalize request: -4, "/test?" a:1, c:2
2015/04/20 13:56:54 [debug] 5759#0: *1 http request count:2 blk:0
2015/04/20 13:56:54 [debug] 5759#0: timer delta: 1
2015/04/20 13:56:54 [debug] 5759#0: posted events 0000000000000000
2015/04/20 13:56:54 [debug] 5759#0: worker cycle
2015/04/20 13:56:54 [debug] 5759#0: epoll timer: 60000
2015/04/20 13:56:54 [debug] 5759#0: epoll: fd:3 ev:0004 d:00007F39455B51B0
2015/04/20 13:56:54 [debug] 5759#0: *1 http run request: "/test?"
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream check client, write event:1, "/test"
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream recv(): -1 (11: Resource temporarily unavailable)
2015/04/20 13:56:54 [debug] 5759#0: epoll: fd:9 ev:0004 d:00007F39455B5280
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream request: "/test?"
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream send request handler
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream send request
2015/04/20 13:56:54 [debug] 5759#0: *1 chain writer buf fl:0 s:30
2015/04/20 13:56:54 [debug] 5759#0: *1 chain writer in: 0000000001262B88
2015/04/20 13:56:54 [debug] 5759#0: *1 writev: 30
2015/04/20 13:56:54 [debug] 5759#0: *1 chain writer out: 0000000000000000
2015/04/20 13:56:54 [debug] 5759#0: *1 event timer del: 9: 1429509474482
2015/04/20 13:56:54 [debug] 5759#0: *1 event timer add: 9: 60000:1429509474482
2015/04/20 13:56:54 [debug] 5759#0: timer delta: 0
2015/04/20 13:56:54 [debug] 5759#0: posted events 0000000000000000
2015/04/20 13:56:54 [debug] 5759#0: worker cycle
2015/04/20 13:56:54 [debug] 5759#0: epoll timer: 60000
2015/04/20 13:56:54 [debug] 5759#0: epoll: fd:9 ev:0005 d:00007F39455B5280
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream request: "/test?"
2015/04/20 13:56:54 [debug] 5759#0: *1 http upstream process header
2015/04/20 13:56:54 [debug] 5759#0: *1 recv: fd:9 0 of 0
2015/04/20 13:56:54 [error] 5759#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /test HTTP/1.1", upstream: "redis://127.0.0.1:6379", host: "localhost:8989"

===============================================================================

The Nginx configuration is as follows:

redis_pass 127.0.0.1:6379;
redis_db 3;
redis_key candice;

===============================================================================

Here is the source code of the module.
static ngx_int_t
ngx_http_redis_handler(ngx_http_request_t *r)
{
    ngx_http_redis_loc_conf_t  *rlcf;
    ngx_http_upstream_t        *u;
    ngx_int_t                   rc;

    /* set up upstream structure */
    if (ngx_http_upstream_create(r) != NGX_OK) {
        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                      "ngx_http_upstream_create() failed");
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    u = r->upstream;
    ngx_str_set(&u->schema, "redis://");

    rlcf = ngx_http_get_module_loc_conf(r, ngx_http_redis_module);
    u->conf = &rlcf->upstream;

    /* attach the callback functions */
    u->create_request = ngx_http_redis_create_request;
    u->reinit_request = ngx_http_redis_reinit_request;
    u->process_header = ngx_http_redis_process_header;
    u->finalize_request = ngx_http_redis_finalize_request;

    rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init);

    if (rc > NGX_HTTP_SPECIAL_RESPONSE) {
        return rc;
    }

    return NGX_DONE;
}

===============================================================================

/* callbacks - To send command "SELECT <db>\r\nRPUSH <key> <data>\r\n" to redis */
static ngx_int_t
ngx_http_redis_create_request(ngx_http_request_t *r)
{
    ngx_http_redis_loc_conf_t  *rlcf;
    ngx_chain_t                *cl, *body;
    ngx_buf_t                  *buf, *b;

    /* Do not forget to change the following offset
       when modifying the query string */
    ngx_str_t  query = ngx_string("SELECT %ui\r\nRPUSH %V ");
    size_t     len = query.len - 5;

    rlcf = ngx_http_get_module_loc_conf(r, ngx_http_redis_module);

    len += (rlcf->db > 9 ? 2 : 1) + rlcf->key.len;

    /* Create temporary buffer for request with size len. */
    buf = ngx_create_temp_buf(r->pool, len);
    if (buf == NULL) {
        return NGX_ERROR;
    }

    ngx_snprintf(buf->pos, len, (char *) query.data, rlcf->db, &rlcf->key);
    buf->last = buf->pos + len;

    cl = ngx_alloc_chain_link(r->pool);
    if (cl == NULL) {
        return NGX_ERROR;
    }

    cl->buf = buf;
    cl->next = NULL;

    body = r->upstream->request_bufs;
    r->upstream->request_bufs = cl;

    while (body) {
        b = ngx_alloc_buf(r->pool);
        if (b == NULL) {
            return NGX_ERROR;
        }

        ngx_memcpy(b, body->buf, sizeof(ngx_buf_t));

        cl->next = ngx_alloc_chain_link(r->pool);
        if (cl->next == NULL) {
            return NGX_ERROR;
        }

        cl = cl->next;
        cl->buf = b;
        body = body->next;
    }

    *cl->buf->last++ = CR;
    *cl->buf->last++ = LF;

    cl->next = NULL;

    return NGX_OK;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258239,258239#msg-258239

From nginx-forum at nginx.us Tue Apr 21 04:00:15 2015
From: nginx-forum at nginx.us (daBee)
Date: Tue, 21 Apr 2015 00:00:15 -0400
Subject: VHost Guidance
In-Reply-To:
References:
Message-ID: <52488eb5778a2cd8a404333523a25039.NginxMailingListEnglish@forum.nginx.org>

OK, thanks for the guidance. I will go have a look. I was using two texts that somehow convinced me it was done this way. I'll read up with the online documentation.

Cheers

shiroweb Wrote:
-------------------------------------------------------
> Hi, I'm not sure what you did, but the nginx.conf file is only used to
> configure the http server, not vhosts.
>
> There are 2 ways to set up vhosts (called server blocks in NginX):
>
> A. Use the default config file (usually located @
> /etc/nginx/sites-available/default) and add all your vhosts there. The
> default file is self explanatory
>
> B. You can use a different file for every server block (vhost) by making a
> copy of the default file:
> /etc/nginx/sites-available/vhost1.tld
> /etc/nginx/sites-available/vhost2.tld
> ....
>
> I think the conflict you are facing is due to the default config file;
> delete the symbolic link found in the /etc/nginx/sites-enabled/ folder.
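To illustrate option B described above, a minimal per-site file might look like this (the domain and root path are made-up placeholders, not from the thread):

```nginx
# /etc/nginx/sites-available/vhost1.tld -- one server block (vhost) per file
server {
    listen 80;
    server_name vhost1.tld www.vhost1.tld;

    root /var/www/vhost1.tld;
    index index.html index.htm;
}
```

Such a file is then enabled by symlinking it into sites-enabled and reloading nginx, e.g. `ln -s /etc/nginx/sites-available/vhost1.tld /etc/nginx/sites-enabled/` followed by `nginx -s reload` (assuming the Debian-style sites-available/sites-enabled layout described above).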
>
> Hope this helps
>
> Some interesting readings when starting with NginX:
> http://nginx.org/en/docs/beginners_guide.html
> http://blog.martinfjordvald.com/2010/07/nginx-primer/

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258229,258240#msg-258240

From gyb997 at gmail.com Tue Apr 21 06:45:37 2015
From: gyb997 at gmail.com (cruze guo)
Date: Tue, 21 Apr 2015 14:45:37 +0800
Subject: Helloo,do anyone has this situation?
In-Reply-To:
References:
Message-ID:

Please give me some time, I will provide a detailed document about this "bug"....

2015-04-16 18:02 GMT+08:00 cruze guo :
> I compiled nginx with this configure:
> --prefix=/home/svn/nginx --user=svn
> --add-module=../ngx_devel_kit-master
> --add-module=../srcache-nginx-module-master
> --add-module=../redis2-nginx-module-master
> --add-module=../set-misc-nginx-module-master
> --add-module=../echo-nginx-module-master
> --add-module=../ngx_http_redis-0.3.7
> --add-module=../lua-nginx-module-0.9.13 --with-debug
>
> I want to use nginx + redis for caching the svn webdav methods.
> I also patched some code to support the WebDAV http method PROPFIND,
> but in this situation it's not important.
>
> The svn client uses chunked encoding for the http client request. When I
> use TortoiseSVN 1.8.11 to test my cache system,
> I get this:
>
> read: 21, 00007FFFF1964830, 2048, 131072
> 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 133120
> 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 135168
> 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 137216
> 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 139264
> 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 141312
> 2015/04/08 13:08:58 [debug] 16486#0: *1 read: 21, 00007FFFF1964830, 2048, 143360
> 2015/04/08 13:08:58 [debug] 16486#0: *1 access phase: 8
> 2015/04/08 13:08:58 [debug] 16486#0: *1 lua access handler,
> uri:"/ps/se/branches" c:1
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http client request body preread 120
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http request body chunked filter
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http body chunked buf t:1 f:0
> 0000000000746440, pos 0000000000746609, size: 120 file: 0, size: 0 <==
> 120 IS NOT ENOUGH FOR REQUEST !!!!!
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 31 s:0
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 32 s:1
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 63 s:1
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 0D s:1
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 0A s:3
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http chunked byte: 3C s:4
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http body chunked buf t:1 f:0
> 0000000000746440, pos 0000000000746681, size: 0 file: 0, size: 0
> 2015/04/08 13:08:58 [debug] 16486#0: *1 http body new buf t:1 f:0
> 000000000074660E, pos 000000000074660E, size: 115 file: 0, size: 0
> 2015/04/08 13:08:58 [debug] 16486#0: *1 malloc:
> 00007FD8C33DC010:1048576 <=== SO MALLOC NEW BUF
>
> But the (struct ngx_http_request_s)'s write_event_handler will be
> set to ngx_http_request_empty_handler.
>
> In ngx_http_request_body.c,
> function ngx_http_read_client_request_body:
> r->write_event_handler = ngx_http_request_empty_handler;
>
> It means nothing will handle the next step once all of the client
> request body has been read!!
> I want to know how to make the request body buffer bigger,
> or can I use this ugly patch to solve this problem?
>
> for struct ngx_http_request_s {
> .........
> ngx_http_event_handler_pt read_event_handler;
> ngx_http_event_handler_pt write_event_handler;
> ngx_http_event_handler_pt write_event_handler_back; <=== ADD this
> ........
> }
>
> it's ugly but it's useful.

From shahzaib.cb at gmail.com Tue Apr 21 07:06:38 2015
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Tue, 21 Apr 2015 12:06:38 +0500
Subject: open socket #84 left in connection
In-Reply-To:
References:
Message-ID:

Hi,

The problem was with monit, which kept restarting nginx persistently.

Thanks !!

On Mon, Apr 20, 2015 at 7:23 PM, shahzaib shahzaib wrote:

> I have also enabled debug logging and found 'Resource temporarily
> unavailable' messages.
> Below is the reference sample:
>
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 event timer add: 18:
> 15000:1429537304304
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 post event 0000000000A372D0
> 2015/04/20 18:41:29 [debug] 12917#0: posted event 0000000000A372D0
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 delete posted event
> 0000000000A372D0
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 http keepalive handler
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 malloc: 0000000000ACF6C0:262144
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 recv: fd:18 -1 of 262144
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 recv() not ready (11: Resource
> temporarily unavailable)
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 free: 0000000000ACF6C0
> 2015/04/20 18:41:29 [debug] 12917#0: posted event 0000000000000000
> 2015/04/20 18:41:29 [debug] 12917#0: worker cycle
> 2015/04/20 18:41:29 [debug] 12917#0: accept mutex locked
> 2015/04/20 18:41:29 [debug] 12917#0: epoll timer: 15000
> 2015/04/20 18:41:29 [debug] 12913#0: epoll: fd:18 ev:0004
> d:00007F58B7D89238
> 2015/04/20 18:41:29 [debug] 12913#0: *2698 http run request:
> "/files/videos/2015/04/15/14290705507373d-360.mp4?"
>
> Could that be the issue ?
>
> On Mon, Apr 20, 2015 at 6:18 PM, shahzaib shahzaib
> wrote:
>
>> Hi,
>>
>> We're using nginx to upload and serve video files of around 1GB in size
>> via http. We've been receiving complaints from some customers that
>> uploading has some issues and sometimes users are unable to upload videos
>> successfully. The server is installed with Nginx-1.4.7+php-fpm, ffmpeg, MP4Box.
>> On checking the nginx logs, we didn't get anything but the following messages:
>>
>> 2015/04/16 16:49:53 [alert] 15077#0: open socket #81 left in connection 49
>> 2015/04/16 16:49:53 [alert] 15084#0: open socket #48 left in connection 19
>> 2015/04/16 16:49:53 [alert] 15077#0: open socket #84 left in connection 51
>> 2015/04/16 16:49:53 [alert] 15084#0: open socket #52 left in connection 21
>> 2015/04/16 16:49:53 [alert] 15077#0: open socket #87 left in connection 53
>> 2015/04/16 16:49:53 [alert] 15079#0: open socket #81 left in connection 46
>> 2015/04/16 16:49:53 [alert] 15084#0: open socket #53 left in connection 22
>>
>> Here is the nginx.conf
>>
>> user nginx;
>> worker_processes 16;
>> worker_rlimit_nofile 300000; #2 filehandlers for each connection
>> error_log /usr/local/nginx/logs/error.log crit;
>> #access_log logs/access.log;
>> #pid logs/nginx.pid;
>>
>> events {
>>     worker_connections 6000;
>>     use epoll;
>> }
>>
>> http {
>>     include mime.types;
>>     default_type application/octet-stream;
>>     client_max_body_size 3000M;
>>     client_body_buffer_size 2000M;
>>     sendfile_max_chunk 128k;
>>     client_header_buffer_size 256k;
>>     large_client_header_buffers 4 256k;
>>     output_buffers 1 512k;
>>     server_tokens off; #Conceals nginx version
>>     access_log /usr/local/nginx/logs/access.log ;
>>     access_log off;
>>     sendfile off;
>>     ignore_invalid_headers on;
>>     client_header_timeout 60m;
>>     client_body_timeout 60m;
>>     send_timeout 60m;
>>     reset_timedout_connection on;
>>
>>     keepalive_timeout 15;
>>     include "/usr/local/nginx/conf/vhosts/*.conf";
>>     error_page 404 = /thumb.php;
>>     error_page 403 /forbidden.html;
>> }
>>
>> If anyone can help me with this ?
>>
>> Regards.
>> Shahzaib

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Apr 21 13:26:59 2015
From: nginx-forum at nginx.us (nicocolt)
Date: Tue, 21 Apr 2015 09:26:59 -0400
Subject: rewrite rules issue
In-Reply-To: <20150420130541.GY29618@daoine.org>
References: <20150420130541.GY29618@daoine.org>
Message-ID: <25a69345c6480a90db60d1de2c9ba09d.NginxMailingListEnglish@forum.nginx.org>

Hello Francis,

server {
    listen *:80;

    server_name domain.fr www.domain.fr subdomain.domain.fr;

    root /var/www/domain.fr/web;

    if ($http_host = "subdomain.domain.fr") {
        rewrite ^(?!/\b(bar|stats|error)\b)/(.*)$ /bar/$2 last;
    }

    index index.html index.htm index.php index.cgi index.pl index.xhtml;

    error_page 400 /error/400.html;
    error_page 401 /error/401.html;
    error_page 403 /error/403.html;
    error_page 404 /error/404.html;
    error_page 405 /error/405.html;
    error_page 500 /error/500.html;
    error_page 502 /error/502.html;
    error_page 503 /error/503.html;
    recursive_error_pages on;

    location = /error/400.html { internal; }
    location = /error/401.html { internal; }
    location = /error/403.html { internal; }
    location = /error/404.html { internal; }
    location = /error/405.html { internal; }
    location = /error/500.html { internal; }
    location = /error/502.html { internal; }
    location = /error/503.html { internal; }

    error_log /var/log/ispconfig/httpd/domain.fr/error.log;
    access_log /var/log/ispconfig/httpd/domain.fr/access.log combined;

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /stats/ {
        index index.html index.php;
        auth_basic "Members Only";
        auth_basic_user_file /var/www/clients/client0/web5/web/stats/.htpasswd_stats;
    }

    location ^~ /awstats-icon {
        alias /usr/share/awstats/icon;
    }

    location ~ \.php$ {
        try_files /95c54bbc57ae02e6bba619001d015e75.htm @php;
    }

    location @php {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/lib/php5-fpm/web5.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }
}

So if the host is subdomain.domain.fr, then the url is rewritten to subdomain.domain.fr/bar/

Now, if in bar I have foo:

if I try to reach http://subdomain.domain.fr/foo/ then OK
if I try to reach http://subdomain.domain.fr/foo then I'm redirected to http://subdomain.domain.fr/bar/foo/ NOT ok

Thanks for your help !

Best regards,
Nico

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258184,258257#msg-258257

From lists at ruby-forum.com Tue Apr 21 13:40:40 2015
From: lists at ruby-forum.com (Devika R.)
Date: Tue, 21 Apr 2015 15:40:40 +0200
Subject: maximum number of descriptors supported by select() is 1024 while connecting to upstream
Message-ID: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com>

I am getting the following error in my nginx logs and I don't think it's related to the worker connections. I am on a Windows 7 machine.

I think the problem is to do with the proxy. Any idea how to resolve this?
ERROR LOG:
2015/04/21 09:32:28 [error] 9304#11016: *41311 maximum number of descriptors supported by select() is 1024 while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /content/fci/en/home/index.html HTTP/1.1", upstream: "http://127.0.0.1:3000/content/fci/en/home/index.html", host: "localhost:4502"

NGINX.CONF:
server {
    listen 4501;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    root C:/D_DRIVE/Installables/IEA/nginx-1.6.2/html;
    index index.html index.htm;

    include C:/D_DRIVE/Installables/IEA/nginx-1.6.2/conf.d/include/falcon.conf;

    location ~* ^.+\.(jpeg|gif|png|jpg|js|css|ico|woff|svg|ttf|json) {
        proxy_pass http://localhost:4502;
    }

    location / {
        proxy_redirect off;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffers 32 4k;
        proxy_set_header authoring true;
        proxy_pass http://localhost:3000;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root C:/D_DRIVE/Installables/IEA/nginx-1.6.2/html;
    }
}

--
Posted via http://www.ruby-forum.com/.

From mdounin at mdounin.ru Tue Apr 21 13:49:37 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 21 Apr 2015 16:49:37 +0300
Subject: maximum number of descriptors supported by select() is 1024 while connecting to upstream
In-Reply-To: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com>
References: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com>
Message-ID: <20150421134937.GT32429@mdounin.ru>

Hello!

On Tue, Apr 21, 2015 at 03:40:40PM +0200, Devika R. wrote:

> I am getting the following error in my nginx logs and I don't think it's
> related to the worker connections. I am on a Windows 7 machine.
>
> I think the problem is to do with the proxy. Any idea how to resolve this?
>
> ERROR LOG:
> 2015/04/21 09:32:28 [error] 9304#11016: *41311 maximum number of
> descriptors supported by select() is 1024 while connecting to upstream,
> client: 127.0.0.1, server: localhost, request: "GET
> /content/fci/en/home/index.html HTTP/1.1", upstream:
> "http://127.0.0.1:3000/content/fci/en/home/index.html", host:
> "localhost:4502"

Quoting http://nginx.org/en/docs/windows.html:

: A worker can handle no more than 1024 simultaneous connections.

You may try recompiling nginx with a bigger FD_SETSIZE, see
http://nginx.org/en/docs/howto_build_on_win32.html.

But in general it's a better idea to switch to UNIX if you are going to
use nginx in production.

--
Maxim Dounin
http://nginx.org/

From al-nginx at none.at Tue Apr 21 13:49:35 2015
From: al-nginx at none.at (Aleksandar Lazic)
Date: Tue, 21 Apr 2015 15:49:35 +0200
Subject: maximum number of descriptors supported by select() is 1024 while connecting to upstream
In-Reply-To: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com>
References: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com>
Message-ID:

Dear Devika.

On 21-04-2015 15:40, Devika R. wrote:

> I am getting the following error in my nginx logs and I don't think it's
> related to the worker connections. I am on a Windows 7 machine.
>
> I think the problem is to do with the proxy. Any idea how to resolve this?
>
> ERROR LOG:
> 2015/04/21 09:32:28 [error] 9304#11016: *41311 maximum number of
> descriptors supported by select() is 1024 while connecting to upstream,
> client: 127.0.0.1, server: localhost, request: "GET
> /content/fci/en/home/index.html HTTP/1.1", upstream:
> "http://127.0.0.1:3000/content/fci/en/home/index.html", host:
> "localhost:4502"

Have you read this right? ;-)

http://nginx.org/en/docs/windows.html

Due to this and some other known issues, this version of nginx for Windows is considered to be a beta version.

Known issues

A worker can handle no more than 1024 simultaneous connections.

BR Aleks

From nginx-forum at nginx.us Tue Apr 21 14:15:51 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 21 Apr 2015 10:15:51 -0400
Subject: maximum number of descriptors supported by select() is 1024 while connecting to upstream
In-Reply-To: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com>
References: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com>
Message-ID: <87cf0fa8518d1b9c5aad716eeff5bcba.NginxMailingListEnglish@forum.nginx.org>

Devika R. Wrote:
-------------------------------------------------------
> I am getting the following error in my nginx logs and I don't think it's
> related to the worker connections. I am on a Windows 7 machine.
>
> I think the problem is to do with the proxy. Any idea how to resolve this?
>
> ERROR LOG:
> 2015/04/21 09:32:28 [error] 9304#11016: *41311 maximum number of
> descriptors supported by select() is 1024 while connecting to

Try this version: http://nginx-win.ecsds.eu/

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258259,258263#msg-258263

From mdounin at mdounin.ru Tue Apr 21 15:19:39 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 21 Apr 2015 18:19:39 +0300
Subject: nginx-1.8.0
Message-ID: <20150421151939.GU32429@mdounin.ru>

Changes with nginx 1.8.0                              21 Apr 2015

    *) 1.8.x stable branch.

--
Maxim Dounin
http://nginx.org/

From grantksupport at operamail.com Tue Apr 21 15:29:42 2015
From: grantksupport at operamail.com (grantksupport at operamail.com)
Date: Tue, 21 Apr 2015 08:29:42 -0700
Subject: Building nginx 1.8.0, linking to local-install of Openssl, 'nginx -V' still reports "built with" *system* openssl. why?
Message-ID: <1429630182.2987470.256667905.533C3E70@webmail.messagingengine.com>

I'm building nginx 1.8.0 on linux/64.

I have openssl 1.0.2a built locally, and installed into /usr/local/ssl

	which openssl
	/usr/local/ssl/bin/openssl

I've configured the nginx build with

	./configure \
	...
	--with-cc-opt='...
	-I/usr/local/ssl/include -I/usr/local/include' \
	--with-ld-opt='-L/usr/local/ssl/lib64 -Wl,-rpath,/usr/local/ssl/lib64 -lssl -lcrypto -ldl -lz' \
	--with-http_ssl_module \
	...

Checking after build/install, the intended ssl libs ARE correctly linked:

	ldd objs/nginx | egrep -i "ssl|crypto"
	libssl.so.1.0.0 => /usr/local/ssl/lib64/libssl.so.1.0.0 (0x00007f9cedd2b000)
	libcrypto.so.1.0.0 => /usr/local/ssl/lib64/libcrypto.so.1.0.0 (0x00007f9ced8e8000)

But 'nginx -V' references BOTH the system-installed OpenSSL 1.0.1k-fips, and 'my' OpenSSL 1.0.2a:

	nginx -V
	nginx version: nginx/1.8.0
	built with OpenSSL 1.0.1k-fips 8 Jan 2015 (running with OpenSSL 1.0.2a 19 Mar 2015)
	TLS SNI support enabled
	configure arguments: ...

I want to ensure that the system-installed OpenSSL 1.0.1k-fips is completely UNinvolved. What needs to change in the build/config to make sure that it's not?

grant

From kworthington at gmail.com Tue Apr 21 15:57:59 2015
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 21 Apr 2015 11:57:59 -0400
Subject: [nginx-announce] nginx-1.8.0
In-Reply-To: <20150421151946.GV32429@mdounin.ru>
References: <20150421151946.GV32429@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.8.0 for Windows http://goo.gl/A3tH0N (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Apr 21, 2015 at 11:19 AM, Maxim Dounin wrote:

> Changes with nginx 1.8.0                              21 Apr 2015
>
>     *) 1.8.x stable branch.
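On grant's "built with" question above: nginx takes the "built with" string from the OPENSSL_VERSION_TEXT macro in whichever openssl/opensslv.h the compiler found at build time, while the parenthesised "running with" string is reported by the library actually loaded at run time; seeing 1.0.1k-fips there suggests the system headers were picked up during compilation even though the rpath'd 1.0.2a library wins at run time. One way to take the system copy out of the picture entirely is to hand configure an OpenSSL source tree, which nginx then builds and links statically. A sketch (the source path is a placeholder, not from the thread):

```shell
# nginx builds and statically links this exact OpenSSL copy;
# --with-openssl expects an *unpacked source* directory, not an install prefix
./configure \
    --with-http_ssl_module \
    --with-openssl=/usr/local/src/openssl-1.0.2a
make && make install
```

With this approach no -I/-L/-rpath juggling is needed for OpenSSL, since neither the system headers nor the system shared libraries are consulted.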
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vikrant.thakur at gmail.com Tue Apr 21 18:56:02 2015
From: vikrant.thakur at gmail.com (vikrant singh)
Date: Tue, 21 Apr 2015 11:56:02 -0700
Subject: Question on Nginx Proxy Installation
Message-ID:

Hello,

The following is a problem I am facing while using nginx as a proxy; please provide me feedback and ideas to solve it.

I have a zookeeper based service discovery setup. For some of the services I want to use nginx as a proxy. The flow I am planning to set up is like this: a client connects to nginx with some user id; based on the userid, the nginx handler needs to do a "lookup in ZK" and find a service instance, then pass the request to that service using "proxy_pass".

The problem I am facing is how to do this "lookup" along with proxy_pass. I understand that I may need to register my handler in one of the phases explained here: http://wiki.nginx.org/Phases . Most likely it will be "rewrite", but that only allows regex. How can I invoke functionality bundled in a jar?

Please note I am also aware of http://nginx-clojure.github.io/ which allows you to execute java handlers through nginx, but I am not really sure how to use it with "proxy_pass".

Thanks in advance for your help.

Best Regards,
Vikrant

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at ruby-forum.com Tue Apr 21 20:28:13 2015
From: lists at ruby-forum.com (Devika R.)
Date: Tue, 21 Apr 2015 22:28:13 +0200
Subject: maximum number of descriptors supported by select() is 1024 while connecting to upstream
In-Reply-To: <87cf0fa8518d1b9c5aad716eeff5bcba.NginxMailingListEnglish@forum.nginx.org>
References: <930f5d231f7961a9e4d37366dca5c3d7@ruby-forum.com> <87cf0fa8518d1b9c5aad716eeff5bcba.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <6d70e86c6dadbb79fb4f938a37561a68@ruby-forum.com>

itpp2012 wrote in post #1172401:
> Devika R. Wrote:
> -------------------------------------------------------
>> I am getting the following error in my nginx logs and I don't think it's
>> related to the worker connections. I am on a Windows 7 machine.
>>
>> I think the problem is to do with the proxy. Any idea how to resolve this?
>>
>> ERROR LOG:
>> 2015/04/21 09:32:28 [error] 9304#11016: *41311 maximum number of
>> descriptors supported by select() is 1024 while connecting to
>
> Try this version: http://nginx-win.ecsds.eu/
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,258259,258263#msg-258263

I don't think the problem is due to the number of worker threads, as every other 'server' config block seems to work perfectly fine. So I think I have somehow messed up the configuration, wherein it's going in a loop or something, and hence wanted to know if anyone has faced the same thing.

--
Posted via http://www.ruby-forum.com/.

From francis at daoine.org Tue Apr 21 20:48:54 2015
From: francis at daoine.org (Francis Daly)
Date: Tue, 21 Apr 2015 21:48:54 +0100
Subject: rewrite rules issue
In-Reply-To: <25a69345c6480a90db60d1de2c9ba09d.NginxMailingListEnglish@forum.nginx.org>
References: <20150420130541.GY29618@daoine.org> <25a69345c6480a90db60d1de2c9ba09d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150421204854.GA29618@daoine.org>

On Tue, Apr 21, 2015 at 09:26:59AM -0400, nicocolt wrote:

Hi there,

> server {
>     listen *:80;
>
>     server_name domain.fr www.domain.fr subdomain.domain.fr;

This doesn't match what I thought you had done.

> So if the host is subdomain.domain.fr, then the url is rewritten to
> subdomain.domain.fr/bar/
>
> Now, if in bar I have foo:
>
> if I try to reach http://subdomain.domain.fr/foo/ then OK
> if I try to reach http://subdomain.domain.fr/foo then I'm redirected to
> http://subdomain.domain.fr/bar/foo/ NOT ok

I suspect I'm not going to be able to give you the answer you want, so I'll leave it for someone else.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From tfransosi at gmail.com Tue Apr 21 21:59:18 2015
From: tfransosi at gmail.com (Thiago Farina)
Date: Tue, 21 Apr 2015 18:59:18 -0300
Subject: nginx php cgi interpreter
Message-ID:

Hi all,

I'm just trying to configure nginx to use php, but it seemed too complicated.

Why is it so complicated to tell nginx to use the php-cgi interpreter [1]? Compared to mongoose, where it is just a matter of setting the cgi_interpreter variable [2] to the path of php-cgi.

[1] - https://www.linode.com/docs/websites/nginx/nginx-and-phpfastcgi-on-ubuntu-12-04-lts-precise-pangolin
[2] - cesanta.com/docs/Options.html

Best regards,

--
Thiago Farina

From r1ch+nginx at teamliquid.net Wed Apr 22 00:02:10 2015
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Wed, 22 Apr 2015 02:02:10 +0200
Subject: nginx php cgi interpreter
In-Reply-To:
References:
Message-ID:

On Tue, Apr 21, 2015 at 11:59 PM, Thiago Farina wrote:

> Hi all,
>
> I'm just trying to configure nginx to use php, but it seemed too
> complicated.
>
> Why is it so complicated to tell nginx to use the php-cgi interpreter [1]?
> Compared to mongoose, where it is just a matter of setting the
> cgi_interpreter variable [2] to the path of php-cgi.

That guide seems obsolete. Use the php5-fpm package; https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-debian-7 looks like a more up to date guide (skip the mysql part if it isn't needed).

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tfransosi at gmail.com Wed Apr 22 02:33:46 2015
From: tfransosi at gmail.com (Thiago Farina)
Date: Tue, 21 Apr 2015 23:33:46 -0300
Subject: nginx php cgi interpreter
In-Reply-To:
References:
Message-ID:

On Tue, Apr 21, 2015 at 9:02 PM, Richard Stanway wrote:

>> That guide seems obsolete. Use the php5-fpm package,
> https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-debian-7
> looks like a more up to date guide (skip the mysql part if it isn't needed).

Hey! Thanks a lot! I was able to configure it following this tutorial. My nginx conf ended up being like the attached file.

The only thing I had to change from it was the unix:/var/run/php5-fpm.sock back to 127.0.0.1:9000.

From /usr/local/nginx/logs/error.log:

2015/04/21 23:25:47 [crit] 12184#0: *1 connect() to unix:/var/run/php5-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 192.168.0.101, server: tfarina.org, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "tfarina.org"

--
Thiago Farina

-------------- next part --------------
A non-text attachment was scrubbed...
Name: tfarina_org
Type: application/octet-stream
Size: 770 bytes
Desc: not available
URL:

From nginx-forum at nginx.us Wed Apr 22 05:42:12 2015
From: nginx-forum at nginx.us (GuiPoM)
Date: Wed, 22 Apr 2015 01:42:12 -0400
Subject: Connection timeout from work, working anywhere else
In-Reply-To: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org>
References: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Short update, as I am still struggling with this problem: the same issue occurs for HTTP if I route an external port to nginx on my raspberry. But if I route to another HTTP server, everything works fine from my work office. So this must have something to do with nginx, but I have absolutely no idea where to look or what to do.

Any help would be very appreciated!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258187,258289#msg-258289

From mihaiv at 4psa.com Wed Apr 22 12:41:34 2015
From: mihaiv at 4psa.com (Mihai Vintila)
Date: Wed, 22 Apr 2015 15:41:34 +0300
Subject: Connection timeout from work, working anywhere else
In-Reply-To:
References: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <553796FE.1000201@4psa.com>

Try it with:

sendfile on;
tcp_nodelay on;
tcp_nopush off;

Best regards,
Mihai Vintila

On 4/22/2015 8:42 AM, GuiPoM wrote:
> Short update, as I am still struggling with this problem: the same issue
> occurs for HTTP if I route an external port to nginx on my raspberry. But
> if I route to another HTTP server, everything works fine from my work
> office. So this must have something to do with nginx, but I have
> absolutely no idea where to look or what to do.
>
> Any help would be very appreciated!
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258187,258289#msg-258289
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Wed Apr 22 19:15:46 2015
From: nginx-forum at nginx.us (nicocolt)
Date: Wed, 22 Apr 2015 15:15:46 -0400
Subject: rewrite rules issue
In-Reply-To: <20150421204854.GA29618@daoine.org>
References: <20150421204854.GA29618@daoine.org>
Message-ID: <5323201b19ea71234ca574aeba0b4497.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

OK. Many thanks for your interest in my problem.

Best regards,
Nico

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258184,258308#msg-258308

From justinbeech at gmail.com Thu Apr 23 00:50:30 2015
From: justinbeech at gmail.com (jb)
Date: Thu, 23 Apr 2015 10:50:30 +1000
Subject: best place in code to setsockoptions from variables..
Message-ID: A pointer on source modifications would be great. In which function is the logical/best place to set custom socket options after accept, from variable values set in nginx.conf, eg per location? or, is it best to copy and modify an existing extension that uses that hook. If so, which extension? thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-fujita at tkx.co.jp Thu Apr 23 02:56:04 2015 From: k-fujita at tkx.co.jp (=?UTF-8?B?6Jek55SwIOa1qeS4gA==?=) Date: Thu, 23 Apr 2015 11:56:04 +0900 Subject: Couldn't display even 404 or 403 Error page when Nginx Updated to 1.8.0 Message-ID: <55385F44.9010008@tkx.co.jp> Hello, there. I'm Koichi Fujita. Last day, when I came to the office, my web brouser and iPhone returned a massage "Couldn't connect the web server, temporary unavailable the server or blah blah blah...", which it means that brousers can't display from inside and outside network, and our Web site didn't display anything even 404 or 403 Error massages, too. I hit the ping command, and it completely returned. At the later, I restarted the OS which is CentOS linux x86_64 architecture and Nginx software twice, but nothing had changed. I logged in the system with tarminal, then I could. Later, I viewed the log, I found the fact that the Nginx had been updated by crontab command. I think it is a cause of the malfunction, though... Please tell me how to set the parameter to display our site from inside and outside. Or should you have any an aware point, please tell me. Thanks in advance. Best regards. K.Fujita -- ???????????????????? ??????? ????????????? ??????? ??????????????? MAIL : k-fujita at tkx.co.jp TEL : 06-6768-0681 FAX : 06-6768-4000 ?543-0011????????????5-16 ???????????????????? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From umarzuki at gmail.com Thu Apr 23 03:20:39 2015 From: umarzuki at gmail.com (Umarzuki Mochlis) Date: Thu, 23 Apr 2015 11:20:39 +0800 Subject: Couldn't display even 404 or 403 Error page when Nginx Updated to 1.8.0 In-Reply-To: <55385F44.9010008@tkx.co.jp> References: <55385F44.9010008@tkx.co.jp> Message-ID: any log messages? 2015-04-23 10:56 GMT+08:00 ?? ?? : > Hello, there. > > I'm Koichi Fujita. > > Last day, when I came to the office, my web brouser and iPhone returned a > massage "Couldn't connect the web server, temporary unavailable the server > or blah blah blah...", which it means that brousers can't display from > inside and outside network, and our Web site didn't display anything even > 404 or 403 Error massages, too. > > I hit the ping command, and it completely returned. > At the later, I restarted the OS which is CentOS linux x86_64 architecture > and Nginx software twice, > but nothing had changed. > > I logged in the system with tarminal, then I could. > Later, I viewed the log, I found the fact that the Nginx had been updated by > crontab command. > I think it is a cause of the malfunction, though... > > Please tell me how to set the parameter to display our site from inside and > outside. > Or should you have any an aware point, please tell me. > > Thanks in advance. > > Best regards. > > K.Fujita > > -- > ???????????????????? > ??????? > ????????????? > ??????? > ??????????????? > > MAIL : k-fujita at tkx.co.jp > TEL : 06-6768-0681 FAX : 06-6768-4000 > ?543-0011????????????5-16 > ???????????????????? 
> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Apr 23 10:44:19 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 23 Apr 2015 06:44:19 -0400 Subject: open socket left in connection Message-ID: <39d111e9a1323c3a9424b71d51eb610e.NginxMailingListEnglish@forum.nginx.org> This issue which can be found as; http://trac.nginx.org/nginx/ticket/714 http://trac.nginx.org/nginx/ticket/626 http://trac.nginx.org/nginx/ticket/346 Still exists in 1.7.12, 1.8 and 1.9 We've isolated this between SPDY and (file)caching such as fastcgi_cache and proxy_cache, where exactly is not yet known, should you have this issue either disable file caching or spdy. You often need to run debug logging for days to get anything useful, should you have a worker with a de-attached port state then attempt to core-dump this worker. This issue is OS independent. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258312,258312#msg-258312 From nginx-forum at nginx.us Thu Apr 23 13:07:37 2015 From: nginx-forum at nginx.us (GuiPoM) Date: Thu, 23 Apr 2015 09:07:37 -0400 Subject: Connection timeout from work, working anywhere else In-Reply-To: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org> References: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org> Message-ID: <44dc0ba498c3681fdfd413969f896b3b.NginxMailingListEnglish@forum.nginx.org> It must have a positive impact, I am now able to access the server using HTTP, but not using HTTPS. nginx is running on my raspberry, port 80 for http, for 443 for https. External ports are routed to according internal ports. So HTTP is fine, HTTPS still returns an ERR_TUNNEL_CONNECTION_FAILED from my work computer. But from my phone, it works fine. 
Error log is: 2015/04/23 15:04:04 [error] 2151#0: *8964 upstream prematurely closed connection while reading response header from upstream, client: 109.99.99.99 , server: , request: "GET /socket.io/?EIO=3&transport=polling&t=1429794125290-15&sid=ZjXxkYkMXrbfNkTbAACm HTTP/1.1", upstream: "http://127.0.0.1:8070/socket.io/?EIO=3&transport=polling&t=1429794125290-15&sid=ZjXxkYkMXrbfNkTbAACm", host: "truc:4321", referrer: "https://truc:4321/jeedom/index.php?v=m&" Something that could help, maybe: I notice that referrer is a hostname, some for host, but I am accessing with an ip address. Is it expected ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258187,258320#msg-258320 From sandro.bordacchini at nems.it Thu Apr 23 16:52:00 2015 From: sandro.bordacchini at nems.it (Sandro Bordacchini) Date: Thu, 23 Apr 2015 18:52:00 +0200 Subject: POST request body manipulation Message-ID: <55392330.5090400@nems.it> Hello everyone, i have a problem in configuring Nginx. I have a location that serves as a proxy for a well-specified url "/login". This location can receive both GET and POST request. GET request have no body and should be proxied to a default and well-know host. POST request contains the host to be proxied to in their body (extractable by a regexp). To avoid use of "if", i was using a map: map $request_body $target_tenant_loginbody { ~*account=(.*)%40(?P.*)&password.* $body_tenant; default default.example.com; } location /login { echo_read_request_body; proxy_pass http://$target_tenant_loginbody:9000; # Debug proxy_set_header X-Debug-Routing-Value $target_tenant_loginbody; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } This is not working (works with the GETs but not with the POSTs), seems that the map returns always the default value even if the regexp works (tested on regex101.com). After a few tests, i understood that $request_body is empty or non-initialized. 
I tried also with $echo_request_body, that seems correctly initialized in location context but not in the map. I read about a lot of issues and people having problem with empty $request_body. Maybe is there another approach you could direct me to? Thanks in advance, Sandro. --- Questa e-mail ? stata controllata per individuare virus con Avast antivirus. http://www.avast.com From nginx-forum at nginx.us Fri Apr 24 05:27:43 2015 From: nginx-forum at nginx.us (sporkman) Date: Fri, 24 Apr 2015 01:27:43 -0400 Subject: Proxying to older apache fails Message-ID: <7577c3d1f0dc4a3da86e5db5a2789d91.NginxMailingListEnglish@forum.nginx.org> I'm trying to keep an old apache install limping along for a few more months by letting nginx handle the SSL connection between site visitors and apache. I have a pretty simple config on the nginx side for the proxy_pass config; location / { proxy_pass https://foo.i.example.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_send_timeout 360; proxy_read_timeout 360; } I see the request hit the apache side, and with some debugging enabled, I'm able to get some detail: [Fri Apr 24 01:21:48 2015] [info] Initial (No.1) HTTPS request received for child 6 (server signup.biglist.com:443) [Fri Apr 24 01:21:48 2015] [debug] ssl_engine_kernel.c(400): [client 10.99.88.59] Reconfigured cipher suite will force renegotiation [Fri Apr 24 01:21:48 2015] [info] [client 10.99.88.59] Requesting connection re-negotiation [Fri Apr 24 01:21:48 2015] [debug] ssl_engine_kernel.c(750): [client 10.99.88.59] Performing full renegotiation: complete handshake protocol (client does support secure renegotiation) [Fri Apr 24 01:21:48 2015] [info] [client 10.99.88.59] Awaiting re-negotiation handshake [Fri Apr 24 01:22:18 2015] [error] [client 10.99.88.59] Re-negotiation handshake failed: Not accepted by client!? 
This is nginx 1.6.2, OpenSSL 1.0.1m and Apache 2.2.25, OpenSSL 0.9.8y Relevant apache config: SSLEngine On SSLVerifyClient none (tried with and without this) SSLInsecureRenegotiation off (tried with and without this) SSLStrictSNIVHostCheck off (tried with and without this) SSLProtocol ALL -SSLv2 SSLCipherSuite ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM I've also tried forcing a TLSv1 and a single cipher on the nginx side, thinking that might somehow simplify things, but no difference. Any ideas? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258341,258341#msg-258341 From mdounin at mdounin.ru Fri Apr 24 11:45:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 Apr 2015 14:45:21 +0300 Subject: Proxying to older apache fails In-Reply-To: <7577c3d1f0dc4a3da86e5db5a2789d91.NginxMailingListEnglish@forum.nginx.org> References: <7577c3d1f0dc4a3da86e5db5a2789d91.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150424114521.GO32429@mdounin.ru> Hello! On Fri, Apr 24, 2015 at 01:27:43AM -0400, sporkman wrote: > I'm trying to keep an old apache install limping along for a few more months > by letting nginx handle the SSL connection between site visitors and > apache. 
> > I have a pretty simple config on the nginx side for the proxy_pass config; > > location / { > proxy_pass https://foo.i.example.com; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_send_timeout 360; > proxy_read_timeout 360; > } > > I see the request hit the apache side, and with some debugging enabled, I'm > able to get some detail: > > [Fri Apr 24 01:21:48 2015] [info] Initial (No.1) HTTPS request received for > child 6 (server signup.biglist.com:443) > [Fri Apr 24 01:21:48 2015] [debug] ssl_engine_kernel.c(400): [client > 10.99.88.59] Reconfigured cipher suite will force renegotiation > [Fri Apr 24 01:21:48 2015] [info] [client 10.99.88.59] Requesting connection > re-negotiation > [Fri Apr 24 01:21:48 2015] [debug] ssl_engine_kernel.c(750): [client > 10.99.88.59] Performing full renegotiation: complete handshake protocol > (client does support secure renegotiation) > [Fri Apr 24 01:21:48 2015] [info] [client 10.99.88.59] Awaiting > re-negotiation handshake > [Fri Apr 24 01:22:18 2015] [error] [client 10.99.88.59] Re-negotiation > handshake failed: Not accepted by client!? > > This is nginx 1.6.2, OpenSSL 1.0.1m and Apache 2.2.25, OpenSSL 0.9.8y > > Relevant apache config: > > SSLEngine On > SSLVerifyClient none (tried with and without this) > SSLInsecureRenegotiation off (tried with and without this) > SSLStrictSNIVHostCheck off (tried with and without this) > SSLProtocol ALL -SSLv2 > SSLCipherSuite ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM > > I've also tried forcing a TLSv1 and a single cipher on the nginx side, > thinking that might somehow simplify things, but no difference. > > Any ideas? You have to configure Apache in a way which won't force renegotiation. In particular, avoid configuring ciphers in virtual hosts - note "Reconfigured cipher suite will force renegotiation" in Apache logs. 
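A minimal sketch of the Apache 2.2 shape Maxim describes (the vhost name and certificate paths are placeholders; the protocol/cipher lines are the ones from the original report, moved out of the vhost to global scope so mod_ssl has no per-vhost cipher difference to renegotiate over):

```apache
# httpd.conf, global scope -- cipher suite defined once, server-wide,
# so mod_ssl does not force a renegotiation when a vhost is selected
SSLProtocol ALL -SSLv2
SSLCipherSuite ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM

<VirtualHost *:443>
    ServerName foo.i.example.com                  # placeholder
    SSLEngine On
    SSLCertificateFile    /etc/ssl/foo.crt        # placeholder
    SSLCertificateKeyFile /etc/ssl/foo.key        # placeholder
    # no SSLCipherSuite / SSLProtocol here: inheriting the global values
    # avoids the "Reconfigured cipher suite will force renegotiation" path
</VirtualHost>
```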
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Apr 24 17:09:00 2015 From: nginx-forum at nginx.us (sporkman) Date: Fri, 24 Apr 2015 13:09:00 -0400 Subject: Proxying to older apache fails In-Reply-To: <20150424114521.GO32429@mdounin.ru> References: <20150424114521.GO32429@mdounin.ru> Message-ID: <3faed5f598925992f06d399873c6d721.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Fri, Apr 24, 2015 at 01:27:43AM -0400, sporkman wrote: > > > I'm trying to keep an old apache install limping along for a few > more months > > by letting nginx handle the SSL connection between site visitors and > > apache. > > > > I have a pretty simple config on the nginx side for the proxy_pass > config; > > > > location / { > > proxy_pass https://foo.i.example.com; > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP > $remote_addr; > > proxy_send_timeout 360; > > proxy_read_timeout 360; > > } > > > > I see the request hit the apache side, and with some debugging > enabled, I'm > > able to get some detail: > > > > [Fri Apr 24 01:21:48 2015] [info] Initial (No.1) HTTPS request > received for > > child 6 (server signup.biglist.com:443) > > [Fri Apr 24 01:21:48 2015] [debug] ssl_engine_kernel.c(400): [client > > 10.99.88.59] Reconfigured cipher suite will force renegotiation > > [Fri Apr 24 01:21:48 2015] [info] [client 10.99.88.59] Requesting > connection > > re-negotiation > > [Fri Apr 24 01:21:48 2015] [debug] ssl_engine_kernel.c(750): [client > > 10.99.88.59] Performing full renegotiation: complete handshake > protocol > > (client does support secure renegotiation) > > [Fri Apr 24 01:21:48 2015] [info] [client 10.99.88.59] Awaiting > > re-negotiation handshake > > [Fri Apr 24 01:22:18 2015] [error] [client 10.99.88.59] > Re-negotiation > > handshake failed: Not accepted by client!? 
> > > > This is nginx 1.6.2, OpenSSL 1.0.1m and Apache 2.2.25, OpenSSL > 0.9.8y > > > > Relevant apache config: > > > > SSLEngine On > > SSLVerifyClient none (tried with and without this) > > SSLInsecureRenegotiation off (tried with and without this) > > SSLStrictSNIVHostCheck off (tried with and without this) > > SSLProtocol ALL -SSLv2 > > SSLCipherSuite > ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM > > > > I've also tried forcing a TLSv1 and a single cipher on the nginx > side, > > thinking that might somehow simplify things, but no difference. > > > > Any ideas? > > You have to configure Apache in a way which won't force > renegotiation. In particular, avoid configuring ciphers in > virtual hosts - note "Reconfigured cipher suite will force > renegotiation" in Apache logs. That was too simple. :) Thanks so much. I kept finding this thread and thinking a much more complicated issue was going on: http://forum.nginx.org/read.php?2,248982,248982 I removed all overrides and nginx and apache are happily talking ssl to each other. Thanks again, Charles > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258341,258365#msg-258365 From nginx-forum at nginx.us Sat Apr 25 12:25:20 2015 From: nginx-forum at nginx.us (blason) Date: Sat, 25 Apr 2015 08:25:20 -0400 Subject: issue with rewrite rule, need help Message-ID: <47f7dbcd61213e3c3846738a6c02de58.NginxMailingListEnglish@forum.nginx.org> Hi Folks, I am having apache server which is being prtoected by nginx reverse proxy server. Now issue I am facing here is while accessing certain page proxy is throwing me error 301 and instead of opening up pdf file in a separate window it shows 301 error and brings up a login screen. Here are the logs with reverse proxy. 
xx.xxx.xxx.xx - - [25/Apr/2015:15:54:04 +0530] "GET /Axxxx/cgi-bin/distributor_xxxx/printreceipt.pl?action=viewreceipt&sessionid=142995736904385&receiptno=R/201504/MH55/0010&print_receipt=Yes&reprint_rec=REPRINT HTTP/1.1" 200 523 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; MS-RTC LM 8)" xx.xxx.xxx.xx - - [25/Apr/2015:15:54:06 +0530] "GET /xxxxxxx/distributor_xxxxx/receipt_pdf/xxxxxx_201504_MH55_xxxx.pdf HTTP/1.1" 301 178 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; MS-RTC LM 8)" And strangely when I connect to the server directly the page opens up properly, here are the logs of when I connect to the server directly. 192.168.20.160 - - [25/Apr/2015:16:13:32 +0530] "GET /xxxxxx/cgi-bin/distributor_xxxxx/printreceipt.pl?action=viewreceipt&sessionid=142995852208801&receiptno=R/201504/MH55/0010&print_receipt=Yes&reprint_rec=REPRINT HTTP/1.1" 200 511 192.168.20.160 - - [25/Apr/2015:16:13:36 +0530] "GET /xxxxxx/distributor_xxxxxx/receipt_pdf/MH55R_201504_MH55_0010.pdf HTTP/1.1" 200 29270 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258370,258370#msg-258370 From nginx-forum at nginx.us Sat Apr 25 18:01:33 2015 From: nginx-forum at nginx.us (grigory) Date: Sat, 25 Apr 2015 14:01:33 -0400 Subject: Static files bad loading time Message-ID: <40317aade7d7a359f28d9aa4188a061a.NginxMailingListEnglish@forum.nginx.org> Hey guys, I have a dedicated server at iWeb.com (i3-540 + 8GB RAM). It has free bandwidth, average load is around 0.05-0.2 during the day and I/O wait ratio is very low now. However, sometimes my Nginx 1.4.2 loads in a browser 1MB image for like 10-30 seconds. Sometimes it takes 2 seconds like it should. My bandwidth is also free, of course. 
I made some tests like MTR, ping, traceroute -- everything is OK. Updated Nginx to 1.8.0 but nothing's changed. Here is my Nginx config: ============================================================== worker_processes 4; worker_rlimit_nofile 12400; events { worker_connections 32768; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; tcp_nopush on; tcp_nodelay on; gzip on; gzip_min_length 1400; gzip_proxied any; gzip_types text/plain text/xml application/xml application/x-javascript text/javascript text/css text/json; gzip_comp_level 6; map $http_host $root_dir { hostnames; } root $root_dir; server { listen 188.88.88.88:80; server_name domain.com www.domain.com; access_log /nginx/logs/domain.com-access.log main; location / { proxy_pass http://domain.com:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; proxy_temp_file_write_size 256k; root /home/www/domain.com; } } } ============================================================== I use server for a couple of websites (PHP+MySQL). One of websites is an image hosting. Usually it serves images less than 1Mb but I created a couple of tools so user can upload a bigger image now. But still 3-5Mb max. Does anybody know what causes this loading lag? Thanks in advance! 
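One way to narrow down where the time goes is to log nginx's per-request timing variables; a sketch using standard log variables (the format name and log path are placeholders):

```nginx
# Sketch: $request_time covers the whole request, including sending the
# response to the client, while $upstream_response_time covers only the
# proxied backend ("-" for files served directly), so comparing the two
# shows whether the lag is in nginx, the backend, or the client link.
log_format timing '$remote_addr "$request" $status $body_bytes_sent '
                  'rt=$request_time urt=$upstream_response_time';

access_log /nginx/logs/timing.log timing;   # path is a placeholder
```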
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258372,258372#msg-258372 From nginx-forum at nginx.us Sat Apr 25 18:05:09 2015 From: nginx-forum at nginx.us (grigory) Date: Sat, 25 Apr 2015 14:05:09 -0400 Subject: Static files bad loading time In-Reply-To: <40317aade7d7a359f28d9aa4188a061a.NginxMailingListEnglish@forum.nginx.org> References: <40317aade7d7a359f28d9aa4188a061a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0bb9e84dcc068b22892a4db3760e6b5c.NginxMailingListEnglish@forum.nginx.org> I use CentOS 6.6. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258372,258373#msg-258373 From francis at daoine.org Sat Apr 25 22:32:33 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 25 Apr 2015 23:32:33 +0100 Subject: issue with rewrite rule, need help In-Reply-To: <47f7dbcd61213e3c3846738a6c02de58.NginxMailingListEnglish@forum.nginx.org> References: <47f7dbcd61213e3c3846738a6c02de58.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150425223233.GB29618@daoine.org> On Sat, Apr 25, 2015 at 08:25:20AM -0400, blason wrote: Hi there, What does your configuration tell nginx to do with the request for /xxxxxxx/distributor_xxxxx/receipt_pdf/xxxxxx_201504_MH55_xxxx.pdf You haven't shown the rewrite rule mentioned in your Subject: line. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Apr 25 22:37:55 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 25 Apr 2015 23:37:55 +0100 Subject: Static files bad loading time In-Reply-To: <40317aade7d7a359f28d9aa4188a061a.NginxMailingListEnglish@forum.nginx.org> References: <40317aade7d7a359f28d9aa4188a061a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150425223755.GC29618@daoine.org> On Sat, Apr 25, 2015 at 02:01:33PM -0400, grigory wrote: Hi there, > However, sometimes my Nginx 1.4.2 loads in a browser 1MB image for like > 10-30 seconds. Sometimes it takes 2 seconds like it should. > Does anybody know what causes this loading lag? 
The configuration you show doesn't seem to tell nginx to serve any static files. Perhaps the port-8080 server can tell you more about what is happening? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Apr 26 10:11:32 2015 From: nginx-forum at nginx.us (grigory) Date: Sun, 26 Apr 2015 06:11:32 -0400 Subject: Static files bad loading time In-Reply-To: <20150425223755.GC29618@daoine.org> References: <20150425223755.GC29618@daoine.org> Message-ID: <3a8312f2e6b4a09ade6212913516cc4c.NginxMailingListEnglish@forum.nginx.org> Sorry, I forgot to add the following part of the config (from server's block): # Static files location location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|wav|bmp|rtf|js)$ { if ($args ~* "^download") { add_header Content-Disposition "attachment; filename=$1"; } expires 30d; root /home/www/domain.com; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258372,258382#msg-258382 From k-fujita at tkx.co.jp Mon Apr 27 06:11:27 2015 From: k-fujita at tkx.co.jp (=?UTF-8?B?6Jek55SwIOa1qeS4gA==?=) Date: Mon, 27 Apr 2015 15:11:27 +0900 Subject: nginx Digest, Vol 66, Issue 35 In-Reply-To: References: Message-ID: <553DD30F.7080203@tkx.co.jp> On 2015/04/23 21:00, nginx-request at nginx.org wrote: > Message: 5 > Date: Thu, 23 Apr 2015 11:20:39 +0800 > From: Umarzuki Mochlis > To:nginx at nginx.org > Subject: Re: Couldn't display even 404 or 403 Error page when Nginx > Updated to 1.8.0 > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > any log messages? To: Mr.Mochlis I checked log messages. But the log message is nothing. That means nginx was not working. There was only in the log, it was a message that NGINX has been updated. So I tried to downgrade NGINX, and anyway the problem was solved. The cause was that New Nginx didn't match our web server's kernel. Thanks again. K.Fujita -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Apr 27 08:53:21 2015 From: nginx-forum at nginx.us (atrus) Date: Mon, 27 Apr 2015 04:53:21 -0400 Subject: Nginx does not delete old cached files Message-ID: <62ed8c24f41cf5fa3c32fe96e343e5d7.NginxMailingListEnglish@forum.nginx.org> Hi, I have nginx serve as image cached, here is the main config : proxy_cache_path /etc/nginx/cache-media levels=1:2 keys_zone=media:1000m inactive=2y max_size=100g; proxy_temp_path /etc/nginx/cache-media/tmp; /dev/sdc1 is an intel SSD with ext4 mounted (-o noatime, nodiratime). It looks like that the nginx do not evict the old cached files : Disk usage on /dev/sdc1 : # df -h /dev/sdc1 Filesystem Size Used Avail Use% Mounted on /dev/sdc1 110G 102G 7.5G 94% /etc/nginx/cache-media the max_size=100g but the real size has been raise up to 102GB. Sometime it full up to 100% of the sdc1 disk and get errror : 2015/04/27 12:03:55 [crit] 7708#0: *18862126 pwrite() "/etc/nginx/cache-media/tmp/0004826203" failed (28: No space left on device) while reading upstream, and I need to manually remove by find && rm -rf My nginx -V : ginx version: nginx/1.7.10 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module 
--with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' Plz tell me what could be the problem ? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258396,258396#msg-258396 From nginx-forum at nginx.us Mon Apr 27 15:05:53 2015 From: nginx-forum at nginx.us (DigitalHermit) Date: Mon, 27 Apr 2015 11:05:53 -0400 Subject: [yet another] proxy question - rewrite URLs Message-ID: I'm brand new to nginx so apologies in advance if this is the incorrect place to ask but I've been struggling with this for a week without much headway. I'm trying to reverse proxy two hosts behind nginx. The twist is that one of the hosts provides resources that come from another host that's not accessible by the client: There are three servers in question: 1) ico-proxy 2) webhost1 2) webhost2 webhost1 has login pages at [https]//webhost1/ and [https]//webhost1:8443. These are not visible outside the secure environment. ico-proxy sits on the publicly accessible network and can access webhost1 and webhost2 over ports 443 and 8443. I can successfully redirect the following using 301 returns: [http]//ico-proxy/webhost1 -> [https]//ico-proxy [http]//ico-proxy/webhost2 -> [https]//ico-proxy:8443 e.g., location /webhost1/ { return 301 [https]//$host$request_uri; } I then do another redirect inside the 8443 listener: server { listen 8443; servername webhost1; location / { proxy_pass [https]//webhost1:8443; } } The above works so far. The problem occurs because there are some links that refer to webhost2 directly. I can fix some of these with proxy_set_header statements. However, webhost1 has multiple links to a page on webhost2. 
Is there a way to reverse proxy to webhost1 and somehow intercept all webhost2 requests and in turn proxy them through another port on ico-proxy? This is for an IBM Cloud Orchestrator (OpenStack based) installation. IBM doesn't have any guidance for this setup. Unfortunately, I can't modify the links from webhost2 as it's a canned app. Thanks in advance for any guidance on the best way to approach this. I've thought about adding DNS entries for webhost2 that point to ico-host but this breaks other functionality. KLL Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258400,258400#msg-258400 From nginx-forum at nginx.us Mon Apr 27 15:59:14 2015 From: nginx-forum at nginx.us (grigory) Date: Mon, 27 Apr 2015 11:59:14 -0400 Subject: Static files bad loading time In-Reply-To: <20150425223755.GC29618@daoine.org> References: <20150425223755.GC29618@daoine.org> Message-ID: So, Francis... Do you have any idea on my problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258372,258406#msg-258406 From nginx-forum at nginx.us Mon Apr 27 23:45:17 2015 From: nginx-forum at nginx.us (meteor8488) Date: Mon, 27 Apr 2015 19:45:17 -0400 Subject: how to separate robot access log and human access log Message-ID: <1c2f7d55b1d50c81d78247aa8b19f5cf.NginxMailingListEnglish@forum.nginx.org> Hi all, I'm trying to separate the robot access log and human access log, so I'm using below configuration: http { .... 
map $http_user_agent $ifbot { default 0; "~*rogerbot" 3; "~*ChinasoSpider" 3; "~*Yahoo" 1; "~*Bot" 1; "~*Spider" 1; "~*archive" 1; "~*search" 1; "~*Yahoo" 1; "~Mediapartners-Google" 1; "~*bingbot" 1; "~*YandexBot" 1; "~*Feedly" 2; "~*Superfeedr" 2; "~*QuiteRSS" 2; "~*g2reader" 2; "~*Digg" 2; "~*trendiction" 3; "~*AhrefsBot" 3; "~*curl" 3; "~*Ruby" 3; "~*Player" 3; "~*Go\ http\ package" 3; "~*Lynx" 3; "~*Sleuth" 3; "~*Python" 3; "~*Wget" 3; "~*perl" 3; "~*httrack" 3; "~*JikeSpider" 3; "~*PHP" 3; "~*WebIndex" 3; "~*magpie-crawler" 3; "~*JUC" 3; "~*Scrapy" 3; "~*libfetch" 3; "~*WinHTTrack" 3; "~*htmlparser" 3; "~*urllib" 3; "~*Zeus" 3; "~*scan" 3; "~*Indy\ Library" 3; "~*libwww-perl" 3; "~*GetRight" 3; "~*GetWeb!" 3; "~*Go!Zilla" 3; "~*Go-Ahead-Got-It" 3; "~*Download\ Demon" 3; "~*TurnitinBot" 3; "~*WebscanSpider" 3; "~*WebBench" 3; "~*YisouSpider" 3; "~*check_http" 3; "~*webmeup-crawler" 3; "~*omgili" 3; "~*blah" 3; "~*fountainfo" 3; "~*MicroMessenger" 3; "~*QQDownload" 3; "~*shoulu.jike.com" 3; "~*omgilibot" 3; "~*pyspider" 3; } .... } And in server part, I'm using: if ($ifbot = "1") { set $spiderbot 1; } if ($ifbot = "2") { set $rssbot 1; } if ($ifbot = "3") { return 403; access_log /web/log/badbot.log main; } access_log /web/log/location_access.log main; access_log /web/log/spider_access.log main if=$spiderbot; access_log /web/log/rssbot_access.log main if=$rssbot; But it seems that nginx still writes some robot logs in to both location_access.log and spider_access.log. How can I separate the logs for the robot? And another questions is that some robot logs are not written to spider_access.log but exist in location_access.log. It seems that my map is not working. Is anything wrong when I define "map"? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258417,258417#msg-258417 From nginx-forum at nginx.us Tue Apr 28 08:21:50 2015 From: nginx-forum at nginx.us (extremecarver) Date: Tue, 28 Apr 2015 04:21:50 -0400 Subject: rewrite rule with ? 
in url Message-ID: Hi - I've got a plugin which was misconfigured - and now I need to rewrite a string - however I cannot find out at all how to do this for urls with ? in them. So I need to have: www.velomap.org/de/?s2member_paypal_notify=1 rewritten to www.velomap.org/?s2member_paypal_notify=1 (note the /de missing). I tried: server { server_name www.velomap.org/de/?s2member_paypal_notify=1 return 301 $scheme://www.velomap.org/?s2member_paypal_notify=1$request_uri?; } but this is not accepted as valid nginx.conf... So I need this rewritten and it is important that whatever string is following this is not lost... (also - but less important) - how can I rewrite a specific url? So I want to rewrite www.velomap.org/de/ to www.velomap.org - but only if it is exactly this url. www.velomap.org/de as well as www.velomap.org/de/* is not allowed to be rewritten... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258420,258420#msg-258420 From nginx-forum at nginx.us Tue Apr 28 08:54:53 2015 From: nginx-forum at nginx.us (extremecarver) Date: Tue, 28 Apr 2015 04:54:53 -0400 Subject: rewrite rule with ? in url In-Reply-To: References: Message-ID: <4d38b5b6f974c8cf05a9003092863a6f.NginxMailingListEnglish@forum.nginx.org> Will this work correctly with all the arguments followed? location = /de/ { if ( $arg_s2member_paypal_notify ) { rewrite ^ / permanent; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258420,258422#msg-258422 From nginx-forum at nginx.us Tue Apr 28 09:05:14 2015 From: nginx-forum at nginx.us (extremecarver) Date: Tue, 28 Apr 2015 05:05:14 -0400 Subject: rewrite rule with ? 
in url In-Reply-To: <4d38b5b6f974c8cf05a9003092863a6f.NginxMailingListEnglish@forum.nginx.org> References: <4d38b5b6f974c8cf05a9003092863a6f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9e1a368e8e8ec42ced04a9e6c629f5f3.NginxMailingListEnglish@forum.nginx.org> okay - I noticed my homepage wouldn't load for www.velomap.org/de/ anymore with this rule - location = /de/ { if ( $arg_s2member_paypal_notify ) { rewrite ^ / permanent; } try_files $uri $uri/ /index.php; } Seems to work - well I hope this is passing on all information correctly still.. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258420,258424#msg-258424 From justinbeech at gmail.com Tue Apr 28 15:13:37 2015 From: justinbeech at gmail.com (jb) Date: Wed, 29 Apr 2015 01:13:37 +1000 Subject: jsfiddle demonstrating IE11 difference talking to nginx vs google Message-ID: See, or please try: http://jsfiddle.net/qe44nbwh/ If you see what I see in my IE11, when I press Start Test, and the target is nginx.org, the duration for each request goes slow/normal/slow/normal But when I change nginx.org to google.com, the timing is normal/normal/normal ... This only happens with IE11, other browsers behave normally (they all do normal/normal/normal timings) Why would IE11 be sensitive to an nginx server, and not to google ? What is nginx doing that google is not doing... -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 28 15:43:00 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Apr 2015 18:43:00 +0300 Subject: nginx-1.9.0 Message-ID: <20150428154300.GG32429@mdounin.ru> Changes with nginx 1.9.0 28 Apr 2015 *) Change: obsolete aio and rtsig event methods have been removed. *) Feature: the "zone" directive inside the "upstream" block. *) Feature: the stream module. *) Feature: byte ranges support in the ngx_http_memcached_module. Thanks to Martin Mlyn??. 
*) Feature: shared memory can now be used on Windows versions with address space layout randomization. Thanks to Sergey Brester. *) Feature: the "error_log" directive can now be used on mail and server levels in mail proxy. *) Bugfix: the "proxy_protocol" parameter of the "listen" directive did not work if not specified in the first "listen" directive for a listen socket. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Apr 28 19:09:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Apr 2015 22:09:21 +0300 Subject: jsfiddle demonstrating IE11 difference talking to nginx vs google In-Reply-To: References: Message-ID: <20150428190921.GM32429@mdounin.ru> Hello! On Wed, Apr 29, 2015 at 01:13:37AM +1000, jb wrote: > See, or please try: > http://jsfiddle.net/qe44nbwh/ > > If you see what I see in my IE11, when I press Start Test, and the target > is nginx.org, the duration for each request goes slow/normal/slow/normal > > But when I change nginx.org to google.com, the timing is > normal/normal/normal ... > > This only happens with IE11, other browsers behave normally > (they all do normal/normal/normal timings) > > Why would IE11 be sensitive to an nginx server, and not to google ? > What is nginx doing that google is not doing... It looks like a combination of the following factors: - for some reason IE11 closes connections after JS requests to nginx.org in your test; - when IE11 has to open a connection, it opens 2 of them. As a result, every second request uses a pre-cached connection established previously, and it is fast. Others have to wait while a connection is established. No idea why IE closes connections, but it doesn't happen in normal use. Maybe some bug, or heuristics, or a security measure (as the test triggers CORS errors).
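For anyone reproducing this, connection reuse can also be observed on the nginx side with the built-in $connection and $connection_requests variables; a minimal log format sketch (the format name and log path here are illustrative, not from the thread):

```nginx
# A repeated $connection value across requests means the client reused
# the connection; $connection_requests counts requests served on it.
log_format connreuse '$remote_addr conn=$connection n=$connection_requests '
                     '"$request" $status ${request_time}s';

access_log /var/log/nginx/connreuse.log connreuse;
```

With this in place, the slow/normal alternation described above should show up as every second request arriving on a fresh connection (a new $connection value with n=1).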
-- Maxim Dounin http://nginx.org/ From zxcvbn4038 at gmail.com Tue Apr 28 20:50:26 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 28 Apr 2015 16:50:26 -0400 Subject: Setting content-length if app doesn't know what it is Message-ID: Behind my web server is an application that doesn't include content-length headers because it doesn't know what it is. I'm pretty sure this is an application issue but I promised I'd come here and ask the question - is there a way to have nginx buffer an entire server response and insert a content-length header if none has been provided (i.e. by *-fpm, upstream proxy, etc)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 28 21:13:37 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 28 Apr 2015 17:13:37 -0400 Subject: 1.9 stream not working? 'directive is not allowed here' Message-ID: <1a612d04f257aeafe88defc6230b8a31.NginxMailingListEnglish@forum.nginx.org> Hmm, following: http://nginx.com/resources/admin-guide/tcp-load-balancing/ I get: nginx: [emerg] "stream" directive is not allowed here, even though it's within the http context like upstreams are. Anyone with a good example, or did I stumble on a context eval bug? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258463,258463#msg-258463 From nginx-forum at nginx.us Tue Apr 28 21:17:32 2015 From: nginx-forum at nginx.us (lieut_data) Date: Tue, 28 Apr 2015 17:17:32 -0400 Subject: proxy_ssl_certificate not exchanging client certificates Message-ID: I was excited to see proxy_ssl_certificate and friends land in Nginx 1.7.8, and decided to revisit Nginx as a candidate for proxy caching an upstream server requiring client authentication. I've included the debugging configuration I've been playing around with at the end of this post. This particular upstream server does not trigger client authentication for all endpoints.
For example, I can issue ----- http http://NGINX_PROXY_IP/test/path Host:UPSTREAM_SERVER ----- and get back the proxied response without error. However, for endpoints that require client authentication (triggered by the server after it examines the request path), nginx never gets a response. I've verified that the upstream server is working as expected using both wget: ----- wget --verbose --debug --save-headers -O - --ca-certificate=PATH_TO_SERVER_CERTIFICATE --certificate=PATH_TO_CLIENT_CERTIFICATE --private-key=PATH_TO_CLIENT_PRIVATE_KEY https://UPSTREAM_SERVER/path/requiring/client/authentication ----- and openssl's s_client: ----- echo -e "GET https://UPSTREAM_SERVER/path/requiring/client/authentication HTTP/1.0\r\n\r\n" | openssl s_client -ign_eof -connect UPSTREAM_SERVER:443 -state -debug -cert PATH_TO_CLIENT_CERTIFICATE -key PATH_TO_CLIENT_PRIVATE_KEY ----- In the latter case, it's apparent that the server triggers renegotiation after seeing the requested path, and openssl's s_client responds accordingly by sending the client certificate, completing the exchange moments later. However, when invoking the same via the Nginx proxy: ----- http http://NGINX_PROXY_IP/ Host:UPSTREAM_SERVER/path/requiring/client/authentication ----- the upstream connection eventually closes after sending no data -- the same behaviour it exhibits when no client certificates are provided via wget or openssl's s_client. I'm using nginx.debug and have verified that the client certificate files are read (throwing errors if the wrong path is specified). I've also used ltrace and verified that Nginx appears to inject the client certificate via SSL_CTX_use_certificate (taking a somewhat different path than wget's SSL_CTX_use_certificate_file). Have I failed to configure Nginx with the requisite client certificates? Is there any way to see the level of debugging from openssl's s_client but via Nginx?
Here's the configuration in question: ----- user nginx; worker_processes 1; daemon off; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; } http { proxy_cache_methods GET HEAD; proxy_cache_min_uses 1; server { listen 80; proxy_http_version 1.1; proxy_ssl_protocols 'TLSv1.2'; proxy_ssl_certificate PATH_TO_CLIENT_CERTIFICATE; proxy_ssl_certificate_key PATH_TO_CLIENT_PRIVATE_KEY; proxy_ssl_session_reuse off; proxy_ssl_trusted_certificate PATH_TO_SERVER_CERTIFICATE; proxy_ssl_verify on; location / { resolver 8.8.8.8; proxy_pass https://$host$request_uri; proxy_cache rest-cache; add_header rt-Fastcgi-Cache $upstream_cache_status; proxy_set_header Host $host; proxy_ignore_headers Set-Cookie; } } } ----- Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258464,258464#msg-258464 From maxim at nginx.com Tue Apr 28 21:33:49 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 29 Apr 2015 00:33:49 +0300 Subject: 1.9 stream not working? 'directive is not allowed here' In-Reply-To: <1a612d04f257aeafe88defc6230b8a31.NginxMailingListEnglish@forum.nginx.org> References: <1a612d04f257aeafe88defc6230b8a31.NginxMailingListEnglish@forum.nginx.org> Message-ID: <553FFCBD.9060308@nginx.com> On 4/29/15 12:13 AM, itpp2012 wrote: > Hmm, following: http://nginx.com/resources/admin-guide/tcp-load-balancing/ > I get: nginx: [emerg] "stream" directive is not allowed here, > even though it's within the http context like upstreams are. Anyone with a > good example, or did I stumble on a context eval bug? > Put it at the same level as http. E.g. ... http { foo } stream { bar } # EOF -- Maxim Konovalov http://nginx.com From pluknet at nginx.com Tue Apr 28 21:36:21 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 29 Apr 2015 00:36:21 +0300 Subject: 1.9 stream not working?
'directive is not allowed here' In-Reply-To: <1a612d04f257aeafe88defc6230b8a31.NginxMailingListEnglish@forum.nginx.org> References: <1a612d04f257aeafe88defc6230b8a31.NginxMailingListEnglish@forum.nginx.org> Message-ID: <553FFD55.1010206@nginx.com> On 29.04.2015 00:13, itpp2012 wrote: > Hmm, following: http://nginx.com/resources/admin-guide/tcp-load-balancing/ > I get: nginx: [emerg] "stream" directive is not allowed here, > even though it's within the http context You need to define it in the main context. See http://nginx.org/r/stream for details. From nginx-forum at nginx.us Tue Apr 28 21:49:08 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 28 Apr 2015 17:49:08 -0400 Subject: 1.9 stream not working? 'directive is not allowed here' In-Reply-To: <553FFD55.1010206@nginx.com> References: <553FFD55.1010206@nginx.com> Message-ID: <861bd6457ce87a0367d6b90194b69c54.NginxMailingListEnglish@forum.nginx.org> Minimal conf: worker_processes 4; events { worker_connections 8192; } http { include mime.types; default_type application/octet-stream; stream { upstream stream_backend { server 192.168.222.22:810 weight=5; server 192.168.222.17:810 weight=5; } } server { listen 12345; proxy_pass stream_backend; } } nginx -t nginx: [emerg] "stream" directive is not allowed here in conf\nginx.conf:11 nginx: configuration file conf\nginx.conf test failed What's incorrect here then? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258463,258467#msg-258467 From rpaprocki at fearnothingproductions.net Tue Apr 28 22:27:05 2015 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Tue, 28 Apr 2015 15:27:05 -0700 Subject: 1.9 stream not working?
'directive is not allowed here' In-Reply-To: <861bd6457ce87a0367d6b90194b69c54.NginxMailingListEnglish@forum.nginx.org> References: <553FFD55.1010206@nginx.com> <861bd6457ce87a0367d6b90194b69c54.NginxMailingListEnglish@forum.nginx.org> Message-ID: It needs to be at the -same- level as the http {} block, not -within- the http {} block. worker_processes 4; events { worker_connections 8192; } http { include mime.types; default_type application/octet-stream; } stream { upstream stream_backend { server 192.168.222.22:810 weight=5; server 192.168.222.17:810 weight=5; } server { listen 12345; proxy_pass stream_backend; } } On Tue, Apr 28, 2015 at 2:49 PM, itpp2012 wrote: > Minimal conf; > > worker_processes 4; > > events { > worker_connections 8192; > } > > http { > include mime.types; > default_type application/octet-stream; > > stream { > upstream stream_backend { > server 192.168.222.22:810 weight=5; > server 192.168.222.17:810 weight=5; > } > } > > server { > listen 12345; > proxy_pass stream_backend; > } > > } > > nginx -t > nginx: [emerg] "stream" directive is not allowed here in conf\nginx.conf:11 > nginx: configuration file conf\nginx.conf test failed > > Whats incorrect here then? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258463,258467#msg-258467 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 29 00:17:42 2015 From: nginx-forum at nginx.us (gariac) Date: Tue, 28 Apr 2015 20:17:42 -0400 Subject: Installed nginx with iredmail; how to add web content & test without DNS change Message-ID: <7d257d58af02ff92bee88c801674248e.NginxMailingListEnglish@forum.nginx.org> I have an existing website at a hosting service. 
I have contracted with a virtual server company and have installed iredmail, which in turn installs nginx, [Oddly, Apache2 as well, though probably not relevant.] Since I have an IP address for the server, I am able to test the email service. [Only email to accounts on the server since the MX record still goes to the old hosting company.] Iredmail has a web based mail manager, so it has associated html code. Iredmail puts its html in /var/www. I put a test page in /var/www2 and added a location line to point to it, but I'm confused on how to set this up since it is like hosting two websites at the same IP address. Obviously I need to test the server before changing the DNS. I'm thinking maybe do something in /etc/hosts. Or do I just try to merge the html from iredmail with my own html? That is, change the index.html from iredmail to mail.html? One additional complication may be that iredmail requires https while my html does not. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258470,258470#msg-258470 From tfransosi at gmail.com Wed Apr 29 01:47:31 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Tue, 28 Apr 2015 22:47:31 -0300 Subject: Installed nginx with iredmail; how to add web content & test without DNS change In-Reply-To: <7d257d58af02ff92bee88c801674248e.NginxMailingListEnglish@forum.nginx.org> References: <7d257d58af02ff92bee88c801674248e.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Apr 28, 2015 at 9:17 PM, gariac wrote: > I have an existing website at a hosting service. I have contracted with a > virtual server company and have installed iredmail, which in turn installs > nginx, [Oddly, Apache2 as well, though probably not relevant.] > > Since I have an IP address for the server, I am able to test the email > service. [Only email to accounts on the server since the MX record still > goes to the old hosting company.] Iredmail has a web based mail manager, so > it has associated html code.
> > Iredmail puts its html in /var/www. I put a test page in /var/www2 and > added a location line to point to it, but I'm confused on how to set this up > since it is like hosting two websites at the same IP address. Obviously I > need to test the server before changing the DNS. > How is nginx configured? Could you post it here? It might be easier to understand what is going on looking at it. Also, what difficulty are you having? With the nginx configuration or the iredmail? I don't know about the latter though. Best regards, -- Thiago Farina From nginx-forum at nginx.us Wed Apr 29 06:38:24 2015 From: nginx-forum at nginx.us (drookie) Date: Wed, 29 Apr 2015 02:38:24 -0400 Subject: ssl stapling, verification fails Message-ID: <169f7c74ad8ee6b5a58f85e35af43812.NginxMailingListEnglish@forum.nginx.org> Hi. I'm trying to get nginx 1.6.2 to authenticate users using their client certificates. I'm using this configuration (besides usual SSL settings, which are proven to work): ssl_stapling on; ssl_client_certificate /etc/nginx/certs/trusted.pem; ssl_verify_client optional_no_ca; trusted.pem contains 3 CA certificates: test CA and 2 production CA (main and intermediate). To pass verification data to the application I'm using fastcgi_param X-SSL-Verified $ssl_client_verify; fastcgi_param X-SSL-Certificate $ssl_client_cert; fastcgi_param X-SSL-IDN $ssl_client_i_dn; fastcgi_param X-SSL-SDN $ssl_client_s_dn; And here comes the issue: when using test CA and test certificate, I'm getting X-SSL-Verified: SUCCESS, but when using production ones, I'm getting X-SSL-Verified: FAILED. You can say that there's a problem in my certificate bundle, but I tried to verify if the production certificate is really issued by the CA that I think about: openssl verify -verbose -CAfile trusted.pem rt.cert rt.cert: OK Looks like it passes the verification. trusted.pem is the same that nginx uses. At the same time nginx thinks that certificate doesn't pass the test. Why can this happen?
I've also tried setting 'ssl_verify_client on;' - the only difference is that I get a 400 response, because the verification fails explicitly. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258480,258480#msg-258480 From nginx-forum at nginx.us Wed Apr 29 07:04:51 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 29 Apr 2015 03:04:51 -0400 Subject: 1.9 stream not working? 'directive is not allowed here' In-Reply-To: References: Message-ID: Robert Paprocki Wrote: ------------------------------------------------------- > It needs to be at the -same- level as the http {} block, not -within- > the http {} block. Ah! Makes sense, as a stream is not http. That passes -t Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258463,258481#msg-258481 From mdounin at mdounin.ru Wed Apr 29 11:34:56 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Apr 2015 14:34:56 +0300 Subject: ssl stapling, verification fails In-Reply-To: <169f7c74ad8ee6b5a58f85e35af43812.NginxMailingListEnglish@forum.nginx.org> References: <169f7c74ad8ee6b5a58f85e35af43812.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150429113456.GR32429@mdounin.ru> Hello! On Wed, Apr 29, 2015 at 02:38:24AM -0400, drookie wrote: > Hi. > > I'm trying to get nginx 1.6.2 to authenticate users using their client > certificates. > > I'm using this configuration (besides usual SSL settings, which are proven > to work): > > ssl_stapling on; > ssl_client_certificate /etc/nginx/certs/trusted.pem; > ssl_verify_client optional_no_ca; > > trusted.pem contains 3 CA certificates: test CA and 2 production CA (main > and intermediate).
> To pass verification data to the application I'm using > > fastcgi_param X-SSL-Verified $ssl_client_verify; > fastcgi_param X-SSL-Certificate $ssl_client_cert; > fastcgi_param X-SSL-IDN $ssl_client_i_dn; > fastcgi_param X-SSL-SDN $ssl_client_s_dn; > > And here comes the issue: when using test CA and test certificate, I'm > getting X-SSL-Verified: SUCCESS, but when using production ones, I'm getting > X-SSL-Verified: FAILED. You can say that there's a problem in my certificate > bundle, but I tried to verify if the production certificate is really issued > by the CA that I think about: > > openssl verify -verbose -CAfile trusted.pem rt.cert > rt.cert: OK > > Looks like it passes the verification. trusted.pem is the same that nginx > uses. At the same time nginx thinks that certificate doesn't pass the test. > Why can this happen? I've also tried setting 'ssl_verify_client on;' - the > only difference is that I get a 400 response, because the verification fails > explicitly. Try looking into the error log, it should have details at the info level. Most likely, the problem is that you are trying to use intermediate CAs with the default value of ssl_verify_depth, see http://nginx.org/r/ssl_verify_depth. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 29 11:57:57 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Apr 2015 14:57:57 +0300 Subject: Setting content-length if app doesn't know what it is In-Reply-To: References: Message-ID: <20150429115757.GT32429@mdounin.ru> Hello! On Tue, Apr 28, 2015 at 04:50:26PM -0400, CJ Ess wrote: > Behind my web server is an application that doesn't include content-length > headers because it doesn't know what it is. I'm pretty sure this is an > application issue but I promised I'd come here and ask the question - is > there a way to have nginx buffer an entire server response and insert a > content-length header if none has been provided (i.e. by *-fpm, upstream > proxy, etc)? No.
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 29 12:05:05 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Apr 2015 15:05:05 +0300 Subject: proxy_ssl_certificate not exchanging client certificates In-Reply-To: References: Message-ID: <20150429120504.GU32429@mdounin.ru> Hello! On Tue, Apr 28, 2015 at 05:17:32PM -0400, lieut_data wrote: > I was excited to see proxy_ssl_certificate and friends land in Nginx 1.7.8, > and decided to revisit Nginx as a candidate for proxy caching an upstream > server requiring client authentication. I've included the debugging > configuration I've been playing around with at the end of this post. > > This particular upstream server does not trigger client authentication for > all endpoints. For example, I can issue > > ----- > http http://NGINX_PROXY_IP/test/path Host:UPSTREAM_SERVER > ----- > > and get back the proxied response without error. However, for endpoints that > require client authentication (triggered by the server after it examines the > request path), nginx never gets a response. I've verified that the upstream > server is working as expected using both wget: What nginx doesn't support (or, rather, explicitly forbids) is renegotiation. On the other hand, renegotiation is required if one needs to ask for a client certificate only for some URIs, so it's likely used in your case. You should see something like "SSL renegotiation disabled" in logs at notice level. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Wed Apr 29 13:06:55 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 29 Apr 2015 09:06:55 -0400 Subject: nginx-1.9.0 In-Reply-To: <20150428154300.GG32429@mdounin.ru> References: <20150428154300.GG32429@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.0 for Windows http://goo.gl/eYkfMa (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. 
Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Apr 28, 2015 at 11:43 AM, Maxim Dounin wrote: > Changes with nginx 1.9.0 28 Apr > 2015 > > *) Change: obsolete aio and rtsig event methods have been removed. > > *) Feature: the "zone" directive inside the "upstream" block. > > *) Feature: the stream module. > > *) Feature: byte ranges support in the ngx_http_memcached_module. > Thanks to Martin Mlynář. > > *) Feature: shared memory can now be used on Windows versions with > address space layout randomization. > Thanks to Sergey Brester. > > *) Feature: the "error_log" directive can now be used on mail and > server > levels in mail proxy. > > *) Bugfix: the "proxy_protocol" parameter of the "listen" directive did > not work if not specified in the first "listen" directive for a > listen socket. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-users-list at whyaskwhy.org Wed Apr 29 13:40:30 2015 From: nginx-users-list at whyaskwhy.org (deoren) Date: Wed, 29 Apr 2015 08:40:30 -0500 Subject: How does nginx set End of Life for older versions? Message-ID: <5540DF4E.4050501@whyaskwhy.org> Hi, I turned to the official website and Google, but I couldn't find a definitive answer re how older branches go End of Life. For example, now that the 1.8 stable branch has been created, what are the plans for the 1.6 branch? Will future updates be made available to that branch or is it no longer maintained?
On a related note, if a branch is listed under "Legacy versions", does that mean it will no longer receive updates? Thanks. From vbart at nginx.com Wed Apr 29 13:58:59 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 29 Apr 2015 16:58:59 +0300 Subject: How does nginx set End of Life for older versions? In-Reply-To: <5540DF4E.4050501@whyaskwhy.org> References: <5540DF4E.4050501@whyaskwhy.org> Message-ID: <2374818.5pSLateSxU@vbart-workstation> On Wednesday 29 April 2015 08:40:30 deoren wrote: > Hi, > > I turned to the official website and Google, but I couldn't find a definitive answer re how older branches go End of Life. For example, now that the 1.8 stable branch has been created, what are the plans for the 1.6 branch? Will future updates be made available to that branch or is it no longer maintained? http://nginx.com/blog/nginx-1-8-and-1-9-released/ | We are no longer supporting 1.6 (the former stable branch) See also: http://nginx.com/blog/nginx-1-6-1-7-released/ > > On a related note, if a branch is listed under "Legacy versions", does that mean it will no longer receive updates? Yes. At any given time we have only one stable and one mainline branch, which receive updates. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Apr 29 14:00:04 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Apr 2015 17:00:04 +0300 Subject: How does nginx set End of Life for older versions? In-Reply-To: <5540DF4E.4050501@whyaskwhy.org> References: <5540DF4E.4050501@whyaskwhy.org> Message-ID: <20150429140004.GA32429@mdounin.ru> Hello! On Wed, Apr 29, 2015 at 08:40:30AM -0500, deoren wrote: > Hi, > > I turned to the official website and Google, but I couldn't find > a definitive answer re how older branches go End of Life. For > example, now that the 1.8 stable branch has been created, what > are the plans for the 1.6 branch? Will future updates be made > available to that branch or is it no longer maintained? It's no longer maintained.
No further commits are expected to happen on the stable-1.6 branch, and no releases are planned. You can use it as long as it works for you, but it's a good idea to schedule an upgrade. > On a related note, if a branch is listed under "Legacy > versions", does that mean it will no longer receive updates? Yes. -- Maxim Dounin http://nginx.org/ From nginx-users-list at whyaskwhy.org Wed Apr 29 15:57:13 2015 From: nginx-users-list at whyaskwhy.org (deoren) Date: Wed, 29 Apr 2015 10:57:13 -0500 Subject: How does nginx set End of Life for older versions? In-Reply-To: <2374818.5pSLateSxU@vbart-workstation> References: <5540DF4E.4050501@whyaskwhy.org> <2374818.5pSLateSxU@vbart-workstation> Message-ID: <5540FF59.8050202@whyaskwhy.org> On 4/29/2015 8:58 AM, Valentin V. Bartenev wrote: > On Wednesday 29 April 2015 08:40:30 deoren wrote: >> Hi, >> >> I turned to the official website and Google, but I couldn't find a definitive answer re how older branches go End of Life. For example, now that the 1.8 stable branch has been created, what are the plans for the 1.6 branch? Will future updates be made available to that branch or is it no longer maintained? > > http://nginx.com/blog/nginx-1-8-and-1-9-released/ > > | We are no longer supporting 1.6 (the former stable branch) > > > See also: http://nginx.com/blog/nginx-1-6-1-7-released/ > > >> >> On a related note, if a branch is listed under "Legacy versions", does that mean it will no longer receive updates? > > Yes. At any given time we have only one stable > and one mainline branch, which receive updates. > > wbr, Valentin V. Bartenev Thanks! I will subscribe to that blog so I'll see future notices. From nginx-users-list at whyaskwhy.org Wed Apr 29 15:57:38 2015 From: nginx-users-list at whyaskwhy.org (deoren) Date: Wed, 29 Apr 2015 10:57:38 -0500 Subject: How does nginx set End of Life for older versions?
In-Reply-To: <20150429140004.GA32429@mdounin.ru> References: <5540DF4E.4050501@whyaskwhy.org> <20150429140004.GA32429@mdounin.ru> Message-ID: <5540FF72.9030205@whyaskwhy.org> On 4/29/2015 9:00 AM, Maxim Dounin wrote: > On Wed, Apr 29, 2015 at 08:40:30AM -0500, deoren wrote: > >> Hi, >> >> I turned to the official website and Google, but I couldn't find >> a definitive answer re how older branches go End of Life. For >> example, now that the 1.8 stable branch has been created, what >> are the plans for the 1.6 branch? Will future updates be made >> available to that branch or is it no longer maintained? > > It's no longer maintained. > > No further commits are expected to happen on the stable-1.6 > branch, and no releases are planned. You can use it as long it > works for you, but it's a good idea to schedule an upgrade. > >> On a related note, if a branch is listed under "Legacy >> versions", does that mean it will no longer receive updates? > > Yes. > Thanks Maxim. I'll go ahead and move to the 1.8 stable series then. From aidan at aodhandigital.com Wed Apr 29 16:33:56 2015 From: aidan at aodhandigital.com (Aidan Scheller) Date: Wed, 29 Apr 2015 11:33:56 -0500 Subject: nginx boot issues with centos7 Message-ID: Greetings, It appears that nginx has difficulties starting automatically in CentOS 7 when it needs to resolve DNS names in the configuration. I'm attributing this to systemd. Nginx stable was installed from packages. [admin at nginx ~]$ nginx -v nginx version: nginx/1.8.0 [admin at nginx ~]$ cat /etc/nginx/conf.d/default.conf server { location / { proxy_pass http://web01.mycorp.lan:8080; } } When the system boots nginx doesn't start automatically and the logs indicate that it wasn't able to make a DNS query. 
[admin at nginx ~]$ sudo systemctl status nginx.service nginx.service - nginx - high performance web server Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled) Active: failed (Result: exit-code) since Wed 2015-04-29 11:11:42 CDT; 1min 30s ago Docs: http://nginx.org/en/docs/ Process: 1141 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf (code=exited, status=1/FAILURE) Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: [emerg] host not found in upstream "web01.mycorp.lan...f:38 Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: configuration file /etc/nginx/nginx.conf test failed Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: nginx.service: control process exited, code=exited status=1 Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Failed to start nginx - high performance web server. Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Unit nginx.service entered failed state. This is the default configuration file for nginx.service in systemd. [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service Description=nginx - high performance web server Documentation=http://nginx.org/en/docs/ After=network.target remote-fs.target nss-lookup.target I've determined that adding network-online.target resolves the problem and allows nginx to start properly upon boot. [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service Description=nginx - high performance web server Documentation=http://nginx.org/en/docs/ After=network.target network-online.target remote-fs.target nss-lookup.target Requires=network-online.target Is this a problem that can be addressed by the nginx team? Many thanks. Aidan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thresh at nginx.com Wed Apr 29 16:49:26 2015 From: thresh at nginx.com (Konstantin Pavlov) Date: Wed, 29 Apr 2015 19:49:26 +0300 Subject: nginx boot issues with centos7 In-Reply-To: References: Message-ID: <55410B96.5090006@nginx.com> Hello Aidan, On 29/04/2015 19:33, Aidan Scheller wrote: > Greetings, > > It appears that nginx has difficulties starting automatically in CentOS > 7 when it needs to resolve DNS names in the configuration. I'm > attributing this to systemd. Nginx stable was installed from packages. > > [admin at nginx ~]$ nginx -v > nginx version: nginx/1.8.0 > > [admin at nginx ~]$ cat /etc/nginx/conf.d/default.conf > server { > location / { > proxy_pass http://web01.mycorp.lan:8080; > } > } > > When the system boots nginx doesn't start automatically and the > logs indicate that it wasn't able to make a DNS query. > > [admin at nginx ~]$ sudo systemctl status nginx.service > nginx.service - nginx - high performance web server > Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled) > Active: failed (Result: exit-code) since Wed 2015-04-29 11:11:42 CDT; > 1min 30s ago > Docs: http://nginx.org/en/docs/ > Process: 1141 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf > (code=exited, status=1/FAILURE) > Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: [emerg] host not > found in upstream "web01.mycorp.lan...f:38 > Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: configuration file > /etc/nginx/nginx.conf test failed > Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: nginx.service: control > process exited, code=exited status=1 > Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Failed to start nginx - > high performance web server. > Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Unit nginx.service entered > failed state. > > This is the default configuration file for nginx.service in systemd. 
> > [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service > Description=nginx - high performance web server > Documentation=http://nginx.org/en/docs/ > After=network.target remote-fs.target nss-lookup.target > > I've determined that adding network-online.target resolves the problem > and allows nginx to start properly upon boot. > > [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service > Description=nginx - high performance web server > Documentation=http://nginx.org/en/docs/ > After=network.target network-online.target remote-fs.target > nss-lookup.target > Requires=network-online.target > > Is this a problem that can be addressed by the nginx team? I believe the proper way to fix that issue is: # systemctl enable systemd-networkd-wait-online.service I am reluctant to add this as a dependency for the default package because of the issues described in http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ (notably, a possible 90s delay). -- Konstantin Pavlov From thresh at nginx.com Wed Apr 29 16:51:44 2015 From: thresh at nginx.com (Konstantin Pavlov) Date: Wed, 29 Apr 2015 19:51:44 +0300 Subject: nginx boot issues with centos7 In-Reply-To: <55410B96.5090006@nginx.com> References: <55410B96.5090006@nginx.com> Message-ID: <55410C20.8070209@nginx.com> On 29/04/2015 19:49, Konstantin Pavlov wrote: > Hello Aidan, > > On 29/04/2015 19:33, Aidan Scheller wrote: >> Greetings, >> >> It appears that nginx has difficulties starting automatically in CentOS >> 7 when it needs to resolve DNS names in the configuration. I'm >> attributing this to systemd. Nginx stable was installed from packages. >> >> [admin at nginx ~]$ nginx -v >> nginx version: nginx/1.8.0 >> >> [admin at nginx ~]$ cat /etc/nginx/conf.d/default.conf >> server { >> location / { >> proxy_pass http://web01.mycorp.lan:8080; >> } >> } >> >> When the system boots nginx doesn't start automatically and the >> logs indicate that it wasn't able to make a DNS query. 
>> >> [admin at nginx ~]$ sudo systemctl status nginx.service >> nginx.service - nginx - high performance web server >> Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled) >> Active: failed (Result: exit-code) since Wed 2015-04-29 11:11:42 CDT; >> 1min 30s ago >> Docs: http://nginx.org/en/docs/ >> Process: 1141 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf >> (code=exited, status=1/FAILURE) >> Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: [emerg] host not >> found in upstream "web01.mycorp.lan...f:38 >> Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: configuration file >> /etc/nginx/nginx.conf test failed >> Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: nginx.service: control >> process exited, code=exited status=1 >> Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Failed to start nginx - >> high performance web server. >> Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Unit nginx.service entered >> failed state. >> >> This is the default configuration file for nginx.service in systemd. >> >> [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service >> Description=nginx - high performance web server >> Documentation=http://nginx.org/en/docs/ >> After=network.target remote-fs.target nss-lookup.target >> >> I've determined that adding network-online.target resolves the problem >> and allows nginx to start properly upon boot. >> >> [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service >> Description=nginx - high performance web server >> Documentation=http://nginx.org/en/docs/ >> After=network.target network-online.target remote-fs.target >> nss-lookup.target >> Requires=network-online.target >> >> Is this a problem that can be addressed by the nginx team? 
> > I believe the proper way to fix that issue is: > > # systemctl enable systemd-networkd-wait-online.service Or, if one uses NetworkManager: # systemctl enable NetworkManager-wait-online.service -- Konstantin Pavlov From eric.kom83 at gmail.com Wed Apr 29 17:35:29 2015 From: eric.kom83 at gmail.com (Eric C. Kom) Date: Wed, 29 Apr 2015 19:35:29 +0200 Subject: Header on the Index Of page Message-ID: Good day all, Please, how do I include a text/image as a header on the Index Of page? Like the one at: http://mirror.mephi.ru/debian-cd/current/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 29 21:09:26 2015 From: nginx-forum at nginx.us (lieut_data) Date: Wed, 29 Apr 2015 17:09:26 -0400 Subject: proxy_ssl_certificate not exchanging client certificates In-Reply-To: <20150429120504.GU32429@mdounin.ru> References: <20150429120504.GU32429@mdounin.ru> Message-ID: <934284b6217affa1c751e2ab463b0d1f.NginxMailingListEnglish@forum.nginx.org> Thanks for getting back to me so quickly! Maxim Dounin Wrote: ------------------------------------------------------- > What nginx doesn't support (or, rather, explicitly forbids) is > renegotiation. On the other hand, renegotiation is required if > one needs to ask for a client certificate only for some URIs, so > it's likely used in your case. You should see something like "SSL > renegotiation disabled" in logs at notice level. Yes, this is exactly the problem. With your hint, I commented out the relevant code in ngx_ssl_handshake and ngx_ssl_handle_recv -- and proxying worked flawlessly. (Interestingly, I never saw the log you identified because of SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS having been set on the openssl connection object.) I think I understand the gist of why nginx forbids client-initiated renegotiation (denial of service concerns? 
security concerns?), but I'm not well-versed in openssl enough to know if the same concerns apply to server-initiated renegotiation with nginx as the client, especially when it applies to cipher renegotiation as noted above. Would nginx be open to a patch that would make this use case feasible? Perhaps as a modification to only disable these renegotiations when nginx is the server in the SSL equation? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258464,258520#msg-258520 From aidan at aodhandigital.com Wed Apr 29 22:24:53 2015 From: aidan at aodhandigital.com (Aidan Scheller) Date: Wed, 29 Apr 2015 17:24:53 -0500 Subject: nginx boot issues with centos7 In-Reply-To: <55410C20.8070209@nginx.com> References: <55410B96.5090006@nginx.com> <55410C20.8070209@nginx.com> Message-ID: Thank you, Konstantin! I'll investigate those. Best regards, Aidan On Wed, Apr 29, 2015 at 11:51 AM, Konstantin Pavlov wrote: > On 29/04/2015 19:49, Konstantin Pavlov wrote: > > Hello Aidan, > > > > On 29/04/2015 19:33, Aidan Scheller wrote: > >> Greetings, > >> > >> It appears that nginx has difficulties starting automatically in CentOS > >> 7 when it needs to resolve DNS names in the configuration. I'm > >> attributing this to systemd. Nginx stable was installed from packages. > >> > >> [admin at nginx ~]$ nginx -v > >> nginx version: nginx/1.8.0 > >> > >> [admin at nginx ~]$ cat /etc/nginx/conf.d/default.conf > >> server { > >> location / { > >> proxy_pass http://web01.mycorp.lan:8080; > >> } > >> } > >> > >> When the system boots nginx doesn't start automatically and the > >> logs indicate that it wasn't able to make a DNS query. 
> >> > >> [admin at nginx ~]$ sudo systemctl status nginx.service > >> nginx.service - nginx - high performance web server > >> Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled) > >> Active: failed (Result: exit-code) since Wed 2015-04-29 11:11:42 CDT; > >> 1min 30s ago > >> Docs: http://nginx.org/en/docs/ > >> Process: 1141 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf > >> (code=exited, status=1/FAILURE) > >> Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: [emerg] host not > >> found in upstream "web01.mycorp.lan...f:38 > >> Apr 29 11:11:42 nginx.mycorp.lan nginx[1141]: nginx: configuration file > >> /etc/nginx/nginx.conf test failed > >> Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: nginx.service: control > >> process exited, code=exited status=1 > >> Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Failed to start nginx - > >> high performance web server. > >> Apr 29 11:11:42 nginx.mycorp.lan systemd[1]: Unit nginx.service entered > >> failed state. > >> > >> This is the default configuration file for nginx.service in systemd. > >> > >> [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service > >> Description=nginx - high performance web server > >> Documentation=http://nginx.org/en/docs/ > >> After=network.target remote-fs.target nss-lookup.target > >> > >> I've determined that adding network-online.target resolves the problem > >> and allows nginx to start properly upon boot. > >> > >> [admin at nginx ~]$ sudo cat /usr/lib/systemd/system/nginx.service > >> Description=nginx - high performance web server > >> Documentation=http://nginx.org/en/docs/ > >> After=network.target network-online.target remote-fs.target > >> nss-lookup.target > >> Requires=network-online.target > >> > >> Is this a problem that can be addressed by the nginx team? 
> > > > I believe the proper way to fix that issue is: > > > > # systemctl enable systemd-networkd-wait-online.service > > Or, if one uses NetworkManager: > > # systemctl enable NetworkManager-wait-online.service > > -- > Konstantin Pavlov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silsurf at gmail.com Wed Apr 29 23:56:18 2015 From: silsurf at gmail.com (silsurf Google) Date: Wed, 29 Apr 2015 16:56:18 -0700 Subject: Trying to "see" my NGINX downloads folder via https? Message-ID: I have NGINX installed on a VPN and I would like to access the "downloads" folder via https. I followed instructions given to me as follows: In order to download your files from your vpn, you will move the download folder of deluge into nginx www folder. For example: downloads folder (/usr/share/nginx/www/downloads). Your files will be accessible at http://ip-address/downloads. Change the configuration file is at /etc/nginx/sites-available/default to allow the downloads folder to be accessable via https. Edit that file and you will see a line like root /path/to/root/folder/here <-- change this to your download location. In that file I replaced root /usr/share/nginx/html; index index.html index.htm; with root /usr/share/nginx/www/downloads You also need to enable autoindex. By default, nginx will not list the files inside a folder (security reason). Add these lines to the default file: autoindex on; autoindex_exact_size off; autoindex_localtime on; Which I did. So far I cannot access the downloads folder and thought I would look for some help here. Thanks very much Henry -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tfransosi at gmail.com Thu Apr 30 03:12:38 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Thu, 30 Apr 2015 00:12:38 -0300 Subject: Trying to "see" my NGINX downloads folder via https? In-Reply-To: References: Message-ID: On Wed, Apr 29, 2015 at 8:56 PM, silsurf Google wrote: > I have NGINX installed on a VPN and I would like to access the "downloads" > folder via https. I followed instructions given to me as follows: > Is your server listening on port 443? -- Thiago Farina From aqqa11 at earthlink.net Thu Apr 30 06:43:14 2015 From: aqqa11 at earthlink.net (John) Date: Thu, 30 Apr 2015 02:43:14 -0400 (GMT-04:00) Subject: proxy_redirect not working with "refresh" Message-ID: <7705660.1430376194425.JavaMail.root@elwamui-polski.atl.sa.earthlink.net> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect says: "(proxy_redirect) Sets the text that should be changed in the ?Location? and ?Refresh? header fields of a proxied server response." All examples I found online only mentioned how it works with "Location", and that also works perfectly with me. But it just doesn't work with "Refresh" for me. My backend site http://192.168.1.9/test.html is: The nginx on my proxy 1.2.3.4 reads: location / { proxy_pass http://192.168.1.9; proxy_set_header Host $host; proxy_redirect default; proxy_redirect http://192.168.1.9/ /; proxy_redirect http://$proxy_host/ /; proxy_redirect ~.* /; proxy_redirect / /; } You can see I have exhausted all options on that nginx documentation. But after restarting nginx, "curl 1.2.3.4/test.html" still sees that "Refresh" line not translated to http://1.2.3.4/, and visiting http://1.2.3.4/test.html on browser will still redirect me to http://192.168.1.9/, which is unreachable. Did I miss anything? Actually I don't understand that line about "proxy_set_header Host $host", I just copied from web. Thank you! 
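A note on the question above: proxy_redirect only rewrites the Location and Refresh response header fields. If the backend address is embedded in the HTML body instead (for example in a meta refresh tag), a body filter such as ngx_http_sub_module would be needed. A sketch, assuming the sub module is compiled in:

```nginx
location / {
    proxy_pass http://192.168.1.9;
    proxy_set_header Host $host;
    proxy_redirect default;

    # sub_filter rewrites response bodies (text/html by default);
    # ask the backend for uncompressed output so it can be scanned.
    proxy_set_header Accept-Encoding "";
    sub_filter 'http://192.168.1.9/' '/';
    sub_filter_once off;
}
```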
From francis at daoine.org Thu Apr 30 08:11:19 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Apr 2015 09:11:19 +0100 Subject: proxy_redirect not working with "refresh" In-Reply-To: <7705660.1430376194425.JavaMail.root@elwamui-polski.atl.sa.earthlink.net> References: <7705660.1430376194425.JavaMail.root@elwamui-polski.atl.sa.earthlink.net> Message-ID: <20150430081119.GD29618@daoine.org> On Thu, Apr 30, 2015 at 02:43:14AM -0400, John wrote: Hi there, > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect says: > > "(proxy_redirect) Sets the text that should be changed in the ?Location? and ?Refresh? header fields of a proxied server response." > > All examples I found online only mentioned how it works with "Location", and that also works perfectly with me. > > But it just doesn't work with "Refresh" for me. My backend site http://192.168.1.9/test.html is: > > > > That's not a "Refresh" header field. That is something in the http response body. In general, nginx doesn't mess with the response body. (You can configure it to, but I tend to dislike doing that.) > Did I miss anything? Actually I don't understand that line about "proxy_set_header Host $host", I just copied from web. Why does your back-end include the string "http://192.168.1.9/" in its response body? Can you make it instead include a string based on the Host: header it receives? If so, that is what the "proxy_set_header Host $host" is for. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Apr 30 08:14:31 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Apr 2015 09:14:31 +0100 Subject: Trying to "see" my NGINX downloads folder via https? In-Reply-To: References: Message-ID: <20150430081431.GE29618@daoine.org> On Wed, Apr 29, 2015 at 04:56:18PM -0700, silsurf Google wrote: Hi there, > In order to download your files from your vpn, you will move the download folder of deluge into nginx www folder. 
For example: downloads folder (/usr/share/nginx/www/downloads). Your files will be accessible at http://ip-address/downloads. > In that file I replaced > > root /usr/share/nginx/html; > index index.html index.htm; > > with > > root /usr/share/nginx/www/downloads http://nginx.org/r/root What request do you make? What response do you get? What response do you want? If you want "the listing of the directory /usr/share/nginx/www/downloads, for the request /downloads/", then you want "root /usr/share/nginx/www" as your relevant configuration. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Apr 30 17:15:11 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Apr 2015 18:15:11 +0100 Subject: Static files bad loading time In-Reply-To: <3a8312f2e6b4a09ade6212913516cc4c.NginxMailingListEnglish@forum.nginx.org> References: <20150425223755.GC29618@daoine.org> <3a8312f2e6b4a09ade6212913516cc4c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150430171511.GF29618@daoine.org> On Sun, Apr 26, 2015 at 06:11:32AM -0400, grigory wrote: Hi there, > # Static files location > location ~* > ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|wav|bmp|rtf|js)$ So - you have your configuration; you make a request; sometimes you get the response quickly, but sometimes you get the response slowly. Can you tell from nginx logs whether the slowness is due to slow-read-from-disk, or slow-write-to-client, or something else? Can you find any pattern in the requests which respond more slowly than you want? Certain browsers, certain times of day, anything like that? If you make the request from the machine itself, so network issues should be minor, does it still show sometimes being slow? 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Apr 30 17:37:53 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Apr 2015 18:37:53 +0100 Subject: [yet another] proxy question - rewrite URLs In-Reply-To: References: Message-ID: <20150430173753.GG29618@daoine.org> On Mon, Apr 27, 2015 at 11:05:53AM -0400, DigitalHermit wrote: Hi there, > I can successfully redirect the following using 301 returns: > location /webhost1/ { > return 301 [https]//$host$request_uri; > } I'm not sure that does what you indicated that you want it to do; but you also say that it works fine, so I'll believe that part. > I then do another redirect inside the 8443 listener: Note: there is no redirect here. And your webhost1 and webhost2 seem to have become confused, unless I'm missing something. > The above works so far. The problem occurs because there are some links that > refer to webhost2 directly. I can fix some of these with proxy_set_header > statements. However, webhost1 has multiple links to a page on webhost2. What do you mean when you say "links"? HTTP response headers, or HTTP response body content? nginx can relatively easily modify response headers. nginx can less easily modify response body content. > Is there a way to reverse proxy to webhost1 and somehow intercept all > webhost2 requests and in turn proxy them through another port on ico-proxy? Can you describe what you want, in terms of one request / one response at a time? I think it is: client requests https://ico-proxy/webhost1/file nginx responds with the content from https://webhost1/webhost1/file That content includes in its body a html link to https://webhost2/file. So: client requests https://webhost2/file, but fails to get any response because it can't access webhost2. What part of that have I got wrong? (Please be specific. Ports and urls and webhost1 and webhost2 do matter here.) You probably will need to get nginx to reverse-proxy content on webhost2. 
It's not immediately clear to me whether you are better off making the client believe that ico-proxy *is* webhost2; or trying to edit the html link in the content returned from webhost1. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Apr 30 17:44:53 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Apr 2015 18:44:53 +0100 Subject: how to separate robot access log and human access log In-Reply-To: <1c2f7d55b1d50c81d78247aa8b19f5cf.NginxMailingListEnglish@forum.nginx.org> References: <1c2f7d55b1d50c81d78247aa8b19f5cf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150430174453.GH29618@daoine.org> On Mon, Apr 27, 2015 at 07:45:17PM -0400, meteor8488 wrote: Hi there, > I'm trying to separate the robot access log and human access log, so I'm > using below configuration: "if=" on the access_log line is what you want. > access_log /web/log/location_access.log main; "No 'if=' there" means "log all requests to this file". (Unless overridden later.) > access_log /web/log/spider_access.log main if=$spiderbot; > access_log /web/log/rssbot_access.log main if=$rssbot; > > But it seems that nginx still writes some robot logs in to both > location_access.log and spider_access.log. Do you mean "some" or "all"? > How can I separate the logs for the robot? If you want /web/log/location_access.log to only log some requests, add an "if=" to mark the requests that you want logged. > And another questions is that some robot logs are not written to > spider_access.log but exist in location_access.log. It seems that my map is > not working. Is anything wrong when I define "map"? Example? Anything written to /web/log/rssbot_access.log would match that description, but I guess that's not what you mean. 
f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Apr 30 17:50:16 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Apr 2015 18:50:16 +0100 Subject: Installed nginx with iredmail; how to add web content & test without DNS change In-Reply-To: <7d257d58af02ff92bee88c801674248e.NginxMailingListEnglish@forum.nginx.org> References: <7d257d58af02ff92bee88c801674248e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150430175016.GI29618@daoine.org> On Tue, Apr 28, 2015 at 08:17:42PM -0400, gariac wrote: Hi there, > Iredmail puts its html in /var/www. I put a test page in /var/www2 and > added a location line to point to it, but I'm confused on how to set this up > since it is like hosting two websites at the same IP address. Obviously I > need to test the server before changing the DNS. To test the server, probably all you need is that your client resolves the desired hostname to the new IP address. You can test with something like curl -i -H Host:host.domain http://ip.add.re.ss/ or by changing name resolution for your browser (for example, by using your local "hosts" file). There probably shouldn't be any nginx changes for this to happen. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Apr 30 19:42:23 2015 From: nginx-forum at nginx.us (sekhemre) Date: Thu, 30 Apr 2015 15:42:23 -0400 Subject: nginx-1.9.0 In-Reply-To: References: Message-ID: SRPMS please. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258444,258554#msg-258554 From silsurf at gmail.com Thu Apr 30 20:49:41 2015 From: silsurf at gmail.com (silsurf Google) Date: Thu, 30 Apr 2015 13:49:41 -0700 Subject: nginx Digest, Vol 66, Issue 45 In-Reply-To: References: Message-ID: > On Apr 30, 2015, at 5:00 AM, nginx-request at nginx.org wrote: > On Wed, Apr 29, 2015 at 8:56 PM, silsurf Google wrote: >> I have NGINX installed on a VPN and I would like to access the "downloads" >> folder via https. I followed instructions given to me as follows: >> > Is your server listening on port 443? Yes, all ports are open Henry