From francis at daoine.org Fri Nov 1 00:38:53 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 1 Nov 2013 00:38:53 +0000 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <4f1e450f6c1c3300600eb28fea87877a.NginxMailingListEnglish@forum.nginx.org> References: <4f1e450f6c1c3300600eb28fea87877a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131101003853.GB25969@craic.sysops.org> On Thu, Oct 31, 2013 at 07:55:15PM -0400, nehay2j wrote: Hi there, > I need to do proxy_pass to host name passed in url and rewrite url as well. > Since the host name is difference with each request, I cannot provide an > upstream for it. Below is the nginx configuration I am using but it doesnt > do proxy pass and returns 404 error. The hostname resembles ec2...com. proxy_pass to a dynamic hostname taken from the request url works for me. What is an example request that you make that does not do what you want? What is the proxy_pass line that that request sees, when you replace the $variables with their values? What do the nginx logs say happened? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Nov 1 01:39:54 2013 From: nginx-forum at nginx.us (nehay2j) Date: Thu, 31 Oct 2013 21:39:54 -0400 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <20131101003853.GB25969@craic.sysops.org> References: <20131101003853.GB25969@craic.sysops.org> Message-ID: <646b224378cad187ea692b91c78f7de7.NginxMailingListEnglish@forum.nginx.org> Thanks Francis. I am making a GET call through browser like- https://example.com/ec2..com Error Logs- 2013/11/01 01:33:49 [error] 13086#0: *1 no host in upstream "/ec2-xx-xxx-xxx-xxx..amazonaws.com:8080/test", client: 10.10.4.167, server: clarity-test.cloud.tibco.com, request: "GET /ec2-xx-xx-xxx-xx..amazonaws.com HTT P/1.1", host: "example.com" Regards, Neha Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244308,244310#msg-244310 From francis at daoine.org Fri Nov 1 05:47:10 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 1 Nov 2013 05:47:10 +0000 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <646b224378cad187ea692b91c78f7de7.NginxMailingListEnglish@forum.nginx.org> References: <20131101003853.GB25969@craic.sysops.org> <646b224378cad187ea692b91c78f7de7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131101054710.GC25969@craic.sysops.org> On Thu, Oct 31, 2013 at 09:39:54PM -0400, nehay2j wrote: Hi there, > I am making a GET call through browser like- > https://example.com/ec2..com So "$1" = "/ec2..com" and the proxy_pass argument is http:///ec2..com/test > Error Logs- > > 2013/11/01 01:33:49 [error] 13086#0: *1 no host in upstream > "/ec2-xx-xxx-xxx-xxx..amazonaws.com:8080/test", > client: 10.10.4.167, server: clarity-test.cloud.tibco.com, request: "GET > /ec2-xx-xx-xxx-xx..amazonaws.com HTT > P/1.1", host: "example.com" In "http:///ec2..com/test", the host is between the second and third /, which is blank, hence "no host in upstream". Change something so that the proxy_pass argument is http://ec2..com/test. Either use "proxy_pass http:/$1/test", or keep http://$1/test and remove the / from $1 by putting it outside the (). Then try again. And if the error message indicates "resolver", see http://nginx.org/r/resolver. 
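For reference, a minimal sketch of the configuration described above. The resolver address is illustrative and not from the thread; the :8080 port is the one from the error log above; the capture keeps the leading "/" outside the parentheses so $1 holds only the hostname:

    location ~ ^/(.+)$ {
        resolver 8.8.8.8;                 # needed: the upstream host is a variable, so it is resolved at request time
        proxy_pass http://$1:8080/test;   # $1 is only the hostname taken from the request path
    }

With a variable in proxy_pass, nginx looks the hostname up at request time, which is why a resolver must be configured.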
f -- Francis Daly francis at daoine.org From yaoweibin at gmail.com Fri Nov 1 05:58:52 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Fri, 1 Nov 2013 13:58:52 +0800 Subject: nginx http proxy support for backend server health checks / status monitoring url In-Reply-To: <20131031124120.GD90747@lo0.su> References: <20131031122641.GY2924@reaktio.net> <20131031124120.GD90747@lo0.su> Message-ID: HPPS health check is difficult for the check module, I have added an alternative feature for this request. The 'port' option can be specifed with different port from the server's original port. For example: server { server 192.168.1.1:443; check interval=3000 rise=1 fall=3 timeout=2000 type=http port=80; check_http_send "GET / HTTP/1.0\r\n\r\n"; check_http_expect_alive http_2xx http_3xx; } This feature has been exeisted in the develop branch for upstream check module: (https://github.com/yaoweibin/nginx_upstream_check_module/tree/development) Or tengine: http://tengine.taobao.org/document/http_upstream_check.html 2013/10/31 Ruslan Ermilov : > On Thu, Oct 31, 2013 at 02:26:41PM +0200, Pasi K?rkk?inen wrote: >> Hello, >> >> I'm using nginx as a http proxy / loadbalancer for an application which >> which has the following setup on the backend servers: >> >> - https/403 provides the application at: >> - https://hostname-of-backend/app/ >> >> - status monitoring url is available at: >> - http://hostname-of-backend/foo/server_status >> - https://hostname-of-backend/foo/server_status >> >> So the status url is available over both http and https, and the status url tells if the application is fully up and running or not. >> Actual application is only available over https. >> >> It's important to decide the backend server availability based on the status url contents/reply, >> otherwise you might push traffic to a backend that isn't fully up and running yet, >> causing false errors to end users. >> >> So.. I don't think nginx currently provides proper status monitoring url support for proxy backends ? >> >> I've found some plugins for this, but they seem to have limitations aswell: >> >> - http://wiki.nginx.org/HttpHealthcheckModule >> - https://github.com/cep21/healthcheck_nginx_upstreams >> - only http 1.0 support, no http 1.1 support >> - doesn't seem to be maintained anymore, latest version 2+ years old >> >> - https://github.com/yaoweibin/nginx_upstream_check_module >> - only supports http backends, so health checks must be over http aswell, not over https >> - if actual app is on 443/https, cannot configure separate port 80 for health checks over http >> - only "ssl" health check possible for https backends >> >> >> Any suggestions? Or should I start hacking and improving the existing plugins.. >> Thanks! 
> > This functionality is currently available in our commercial version: > http://nginx.com/products/ > > The documentation is here: > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#health_check > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao From nginx-forum at nginx.us Fri Nov 1 07:45:00 2013 From: nginx-forum at nginx.us (j0nes2k) Date: Fri, 01 Nov 2013 03:45:00 -0400 Subject: SSI working on Apache backend, but not on gunicorn backend In-Reply-To: <20131031163411.GA95765@mdounin.ru> References: <20131031163411.GA95765@mdounin.ru> Message-ID: <472935ee48dcf5d1cc7cede831219711.NginxMailingListEnglish@forum.nginx.org> Hello Maxim, thank you for your help! The hint that lead me to the solution was about Gzipped output from gunicorn. In the Django settings.py I activated the GZip Middleware. After removing this, the SSI includes work correctly. Best regards, Jonas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244299,244316#msg-244316 From nginx-forum at nginx.us Fri Nov 1 16:29:24 2013 From: nginx-forum at nginx.us (nehay2j) Date: Fri, 01 Nov 2013 12:29:24 -0400 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <20131101054710.GC25969@craic.sysops.org> References: <20131101054710.GC25969@craic.sysops.org> Message-ID: <46b96b778a931af7802a9c51587b4c9f.NginxMailingListEnglish@forum.nginx.org> Thanks Francis. I was able to get past this issue. Appreciate all the help. Now I am stuck at forwarding the POST parameters to this proxy server. proxy_pass http://$1:8080/clarity; proxy_set_header X-Forwarded-Proto $scheme; proxy_read_timeout 300; proxy_connect_timeout 300; proxy_redirect off; #proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; If I uncomment 'proxy_set_header Host' and 'proxy_set_header X-Forwarded-For' it ends into 404 error. Otherwise it rewrites and proxies fine but doesnt pass the POST parameter. Regards, Neha Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244308,244357#msg-244357 From francis at daoine.org Fri Nov 1 17:57:41 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 1 Nov 2013 17:57:41 +0000 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <46b96b778a931af7802a9c51587b4c9f.NginxMailingListEnglish@forum.nginx.org> References: <20131101054710.GC25969@craic.sysops.org> <46b96b778a931af7802a9c51587b4c9f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131101175741.GA16008@craic.sysops.org> On Fri, Nov 01, 2013 at 12:29:24PM -0400, nehay2j wrote: Hi there, > Thanks Francis. I was able to get past this issue. Appreciate all the help. > > Now I am stuck at forwarding the POST parameters to this proxy server. > > proxy_pass http://$1:8080/clarity; > If I uncomment 'proxy_set_header Host' and 'proxy_set_header > X-Forwarded-For' it ends into 404 error. I presume that the 404 comes from your upstream server. Use proxy_set_header to set whatever header values that upstream needs, to handle the request. I suspect that "proxy_set_header Host $1;" is what you want -- or to have no proxy_set_header directives at all so that the default applies. > Otherwise it rewrites and proxies > fine but doesnt pass the POST parameter. It all works fine for me. Can you provide evidence of where it fails to do what you expect? Perhaps tcpdump the traffic, and see what does nginx send to upstream? 
If it is the expected correct client request, then you should check what the upstream does with a similar request that you make using, say, curl. Provide a specific example of what exactly you do, ideally using "curl", and what response you get, and what response you expect. That's the best way of making it easier for someone else to help. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Nov 1 21:16:11 2013 From: nginx-forum at nginx.us (nehay2j) Date: Fri, 01 Nov 2013 17:16:11 -0400 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <20131101175741.GA16008@craic.sysops.org> References: <20131101175741.GA16008@craic.sysops.org> Message-ID: Thanks Francis. I could finally see the post parameters at server end. Setting proxy_set_header Host $1; changes the browser url which we donot want. Currently, the code looks like- location ~ /(?[0-9].*) { rewrite $(.*)$ https://$http_host/test last; proxy_pass http://$ec2instance:8080/test; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } If i set 'proxy_set_header Host $http_host;' to have $proxy_host instead of $http_host, all other things work fine other than cookies. Cookies does not get transferred. Having $http_host throws 404 error with the following entry in log- 2013/11/01 21:05:37 [error] 17515#0: *1 open() "/opt/nginx/html/test" failed (2: No such file or directory), client: 10.10.4.167, server: example.com, request: "GET /testHTTP/1.1", host: "example.com ", referrer: "http://example.com/test/test.html" I will try to get the tcpdump of traffic. Regards, Neha Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244308,244361#msg-244361 From glicerinu at gmail.com Fri Nov 1 21:29:06 2013 From: glicerinu at gmail.com (Marc Aymerich) Date: Fri, 1 Nov 2013 22:29:06 +0100 Subject: Honoring ETag of cached content In-Reply-To: References: Message-ID: Hi, I'm using nginx proxy pass to cache content of our dynamic web application. In order to save some bandwidth our client application uses conditional requests based on ETag. However nginx ignores the ETag of cached pages :( What would be required if I want to honor the ETag of cached pages? Is something that can be achieve by means of configuration? or I would need to write an nginx module in C? Thanks! -- Marc -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Nov 1 23:16:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 2 Nov 2013 03:16:54 +0400 Subject: Honoring ETag of cached content In-Reply-To: References: Message-ID: <20131101231654.GH95765@mdounin.ru> Hello! On Fri, Nov 01, 2013 at 10:29:06PM +0100, Marc Aymerich wrote: > Hi, > I'm using nginx proxy pass to cache content of our dynamic web application. > > In order to save some bandwidth our client application uses conditional > requests based on ETag. However nginx ignores the ETag of cached pages :( > > What would be required if I want to honor the ETag of cached pages? Is > something that can be achieve by means of configuration? or I would need to > write an nginx module in C? It is expected to just work in 1.3.3+. If you don't see it working, you may want to check the version you are running. 
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sat Nov 2 01:45:18 2013 From: nginx-forum at nginx.us (nehay2j) Date: Fri, 01 Nov 2013 21:45:18 -0400 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <4f1e450f6c1c3300600eb28fea87877a.NginxMailingListEnglish@forum.nginx.org> References: <4f1e450f6c1c3300600eb28fea87877a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, I did a curl on the url - curl -i https://example.com/23.23.234.234 HTTP/1.1 302 Found Date: Sat, 02 Nov 2013 01:37:47 GMT Location: https://marketplace.example.com/marketplace/marketplace/login Server: nginx/1.4.2 Content-Length: 0 Connection: keep-alive Which is correct. But when I submit post request through browser, I gives me a 404 with no error in logs. Should i be using rewrite command here or not? If I remove rewrite command, it give file not found error as it picks default root path always. 2013/11/01 20:35:08 [error] 17339#0: *11 open() "/opt/nginx/html/test/main.html" failed (2: No such file or directory ), client: 10.10.4.167, server:example.com, request: "GET /test/main.html HTTP/1.1", host: "example.com", referrer: http://example.com/test_console/test.html location ~ /(?[0-9].*) { # alias http://$ec2instance:8080/test; # try_files $uri; rewrite $(.*)$ https://$http_host:8080/test break; proxy_pass http://$ec2instance:8080/test; proxy_read_timeout 300; proxy_connect_timeout 300; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } Regards, Neha Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244308,244365#msg-244365 From glicerinu at gmail.com Sat Nov 2 01:55:09 2013 From: glicerinu at gmail.com (Marc Aymerich) Date: Sat, 2 Nov 2013 02:55:09 +0100 Subject: Honoring ETag of cached content In-Reply-To: <20131101231654.GH95765@mdounin.ru> References: <20131101231654.GH95765@mdounin.ru> Message-ID: On Sat, Nov 2, 2013 at 12:16 AM, Maxim Dounin wrote: > Hello! > > On Fri, Nov 01, 2013 at 10:29:06PM +0100, Marc Aymerich wrote: > > > Hi, > > I'm using nginx proxy pass to cache content of our dynamic web > application. > > > > In order to save some bandwidth our client application uses conditional > > requests based on ETag. However nginx ignores the ETag of cached pages :( > > > > What would be required if I want to honor the ETag of cached pages? Is > > something that can be achieve by means of configuration? or I would need > to > > write an nginx module in C? > > It is expected to just work in 1.3.3+. If you don't see it > working, you may want to check the version you are running. 
Hi maxim, yeap I'm running 1.4.1 but I never get a 304 :( This is what a response from cache looks like # wget --no-check https://[fdf5:5351:1dfd:0:0:0:0:2]/api/ -S --header 'If-None-Match: "83393a3900e4abce27212d7a27cae589"' -q HTTP/1.1 200 OK Server: nginx/1.4.1 Date: Sat, 02 Nov 2013 01:40:24 GMT Content-Type: application/json Transfer-Encoding: chunked Connection: keep-alive ETag: "83393a3900e4abce27212d7a27cae589" Allow: GET, HEAD, OPTIONS Vary: Accept, Cookie, Accept-Encoding Expires: Sat, 02 Nov 2013 01:41:24 GMT Cache-Control: max-age=60 This is my nginx config for this cached location location /api/ { proxy_pass http://127.0.0.1:8080; proxy_redirect off; proxy_ignore_headers Set-Cookie; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Protocol $scheme; proxy_cache cache; proxy_cache_key $host$uri$is_args$args$http_accept_encoding$http_accept; proxy_cache_valid 1m; expires 1m; set $skip_cache 0; if ($request_method != GET) { set $skip_cache 1; } if ($http_cookie) { set $skip_cache 1; } if ($http_authorization) { set $skip_cache 1; } proxy_cache_bypass $skip_cache; } Maybe is there some conflicting configuration? I can not see it :( and ETags work just fine if I request to my backend server # wget --no-check http://127.0.0.1:8080/api/ -S --header 'If-None-Match: "77e348fb6260a8dd90ca18c61f7cd472"' -q HTTP/1.1 304 NOT MODIFIED Server: nginx/1.4.1 Date: Sat, 02 Nov 2013 01:47:30 GMT Content-Length: 0 Connection: keep-alive Notice the etag value from cached content is different from the fresh one, even though the content is exactly the same, I presume this is nginx doing its job of updating the ETag for some differences because of caching, but still fails to reply a conditional request properly :( Thanks for your help!! -- Marc -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Nov 2 09:18:37 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Nov 2013 09:18:37 +0000 Subject: proxy_pass not passing to dynamic $host In-Reply-To: References: <4f1e450f6c1c3300600eb28fea87877a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131102091837.GD16008@craic.sysops.org> On Fri, Nov 01, 2013 at 09:45:18PM -0400, nehay2j wrote: Hi there, > I did a curl on the url - curl -i https://example.com/23.23.234.234 > HTTP/1.1 302 Found > Location: https://marketplace.example.com/marketplace/marketplace/login > Which is correct. I confess that I don't see how the configuration you show below can lead to that response. But I don't know what upstream is doing, so maybe it all just works. Do you get the same response from upstream if you avoid nginx? curl -i -H Host:example.com http://23.23.234.234:8080/test Also, are you aware that a HTTP 302 will cause the browser url to change? Once the browser gets the above response, the next request goes to marketplace.example.com and not to example.com. > But when I submit post request through browser, I gives me > a 404 with no error in logs. If there's nothing in the logs, that suggests that you weren't talking to nginx. Or that your nginx logging level is too low for what you are trying to see. What happens when you submit a post using curl, so you can see exactly what happens without the browser getting in the way? curl -i -d key=value https://example.com/23.23.234.234 > Should i be using rewrite command here or not? 
I would have thought "not", but I don't know what that rewrite directive is intended to do in the first place. > If I remove rewrite command, it give file not found error as it picks > default root path always. Here, you haven't said what request you made or what result you expected. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Sat Nov 2 12:10:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 2 Nov 2013 16:10:52 +0400 Subject: Honoring ETag of cached content In-Reply-To: References: <20131101231654.GH95765@mdounin.ru> Message-ID: <20131102121052.GK95765@mdounin.ru> Hello! On Sat, Nov 02, 2013 at 02:55:09AM +0100, Marc Aymerich wrote: > On Sat, Nov 2, 2013 at 12:16 AM, Maxim Dounin wrote: > > > Hello! > > > > On Fri, Nov 01, 2013 at 10:29:06PM +0100, Marc Aymerich wrote: > > > > > Hi, > > > I'm using nginx proxy pass to cache content of our dynamic web > > application. > > > > > > In order to save some bandwidth our client application uses conditional > > > requests based on ETag. However nginx ignores the ETag of cached pages :( > > > > > > What would be required if I want to honor the ETag of cached pages? Is > > > something that can be achieve by means of configuration? or I would need > > to > > > write an nginx module in C? > > > > It is expected to just work in 1.3.3+. If you don't see it > > working, you may want to check the version you are running. > > > Hi maxim, > yeap I'm running 1.4.1 but I never get a 304 :( > This is what a response from cache looks like > > # wget --no-check https://[fdf5:5351:1dfd:0:0:0:0:2]/api/ -S --header > 'If-None-Match: "83393a3900e4abce27212d7a27cae589"' -q > HTTP/1.1 200 OK > Server: nginx/1.4.1 > Date: Sat, 02 Nov 2013 01:40:24 GMT > Content-Type: application/json > Transfer-Encoding: chunked > Connection: keep-alive > ETag: "83393a3900e4abce27212d7a27cae589" > Allow: GET, HEAD, OPTIONS > Vary: Accept, Cookie, Accept-Encoding > Expires: Sat, 02 Nov 2013 01:41:24 GMT > Cache-Control: max-age=60 [...] > Notice the etag value from cached content is different from the fresh one, > even though the content is exactly the same, I presume this is nginx doing > its job of updating the ETag for some differences because of caching, but > still fails to reply a conditional request properly :( Ok, I see what goes on here. Condintional requests (even ones using ETag in If-None-Match) are only handled by nginx if there is Last-Modified response date set, and it's not set in your backend responses. This needs to be addressed, If-None-Match / If-Match handling from cache shouldn't require Last-Modified being present in a cached response. Meanwhile, you may add some Last-Modified to your backend responses as a workaround. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sat Nov 2 14:21:14 2013 From: nginx-forum at nginx.us (nehay2j) Date: Sat, 02 Nov 2013 10:21:14 -0400 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <20131102091837.GD16008@craic.sysops.org> References: <20131102091837.GD16008@craic.sysops.org> Message-ID: <7394a368224ed2eff51306c7b899a6ec.NginxMailingListEnglish@forum.nginx.org> Hi Francis, I added rewrite command so that the url doesn't show IP passed to the nginx. Curl gives a 302 because it doesnt have the sessionid with it. If there is a session id that is passed to the application running on http://23.23.234.234:8080/test, it will take us to app. I can see in application logs that jsessionid does not get there and hence it redirects to the login page. 
Curl from upstream gave me- [ec2-user at clarity-test conf]$ curl -i -H ec2-54-208-198-229.compute-1.amazonaws.com:example.com http://23.23.234.234:8080/test HTTP/1.1 302 Found Server: Apache-Coyote/1.1 Location: https://marketplace-staging.cloud.tibco.com/marketplace/marketplace/login Content-Length: 0 Date: Sat, 02 Nov 2013 14:13:05 GMT [ec2-user at clarity-test conf]$ curl -i -d key=value https://example.com/23.23.234.234 HTTP/1.1 302 Moved Temporarily Content-Type: text/html Date: Sat, 02 Nov 2013 14:11:53 GMT Location: https://clarity-test.cloud.tibco.com:8080/clarity Server: nginx/1.4.2 Content-Length: 160 Connection: keep-alive [response body: the standard nginx "302 Found" HTML page, reporting nginx/1.4.2]
If I remove the rewrite command and provides the html page in proxy_pass i.e. proxy_pass http://$ec2instance:8080/test/test.html, it gets to the test.html page of the application but does not load any .css .js files and says open failed "/opt/nginx/html/test/test.html. Regards, Neha Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244308,244369#msg-244369 From glicerinu at gmail.com Sat Nov 2 14:56:37 2013 From: glicerinu at gmail.com (Marc Aymerich) Date: Sat, 2 Nov 2013 15:56:37 +0100 Subject: Honoring ETag of cached content In-Reply-To: <20131102121052.GK95765@mdounin.ru> References: <20131101231654.GH95765@mdounin.ru> <20131102121052.GK95765@mdounin.ru> Message-ID: On Sat, Nov 2, 2013 at 1:10 PM, Maxim Dounin wrote: > Hello! > > On Sat, Nov 02, 2013 at 02:55:09AM +0100, Marc Aymerich wrote: > > > On Sat, Nov 2, 2013 at 12:16 AM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Fri, Nov 01, 2013 at 10:29:06PM +0100, Marc Aymerich wrote: > > > > > > > Hi, > > > > I'm using nginx proxy pass to cache content of our dynamic web > > > application. > > > > > > > > In order to save some bandwidth our client application uses > conditional > > > > requests based on ETag. However nginx ignores the ETag of cached > pages :( > > > > > > > > What would be required if I want to honor the ETag of cached pages? > Is > > > > something that can be achieve by means of configuration? or I would > need > > > to > > > > write an nginx module in C? > > > > > > It is expected to just work in 1.3.3+. If you don't see it > > > working, you may want to check the version you are running. > > > > > > Hi maxim, > > yeap I'm running 1.4.1 but I never get a 304 :( > > This is what a response from cache looks like > > > > # wget --no-check https://[fdf5:5351:1dfd:0:0:0:0:2]/api/ -S --header > > 'If-None-Match: "83393a3900e4abce27212d7a27cae589"' -q > > HTTP/1.1 200 OK > > Server: nginx/1.4.1 > > Date: Sat, 02 Nov 2013 01:40:24 GMT > > Content-Type: application/json > > Transfer-Encoding: chunked > > Connection: keep-alive > > ETag: "83393a3900e4abce27212d7a27cae589" > > Allow: GET, HEAD, OPTIONS > > Vary: Accept, Cookie, Accept-Encoding > > Expires: Sat, 02 Nov 2013 01:41:24 GMT > > Cache-Control: max-age=60 > > [...] > > > Notice the etag value from cached content is different from the fresh > one, > > even though the content is exactly the same, I presume this is nginx > doing > > its job of updating the ETag for some differences because of caching, but > > still fails to reply a conditional request properly :( > > Ok, I see what goes on here. Condintional requests (even ones > using ETag in If-None-Match) are only handled by nginx if there is > Last-Modified response date set, and it's not set in your backend > responses. > > This needs to be addressed, If-None-Match / If-Match handling from > cache shouldn't require Last-Modified being present in a cached > response. Meanwhile, you may add some Last-Modified to your > backend responses as a workaround. > Wowo that's it :) # wget --no-check https://[fdf5:5351:1dfd:0:0:0:0:2]/api/ -S --header 'If-None-Match: "83393a3900e4abce27212d7a27cae589"' -q HTTP/1.1 304 Not Modified Server: nginx/1.4.1 Date: Sat, 02 Nov 2013 14:53:29 GMT Connection: keep-alive Vary: Accept, Cookie, Accept-Encoding Last-Modified: Sat, 02 Nov 2013 14:53:29 GMT ETag: "83393a3900e4abce27212d7a27cae589" Allow: GET, HEAD, OPTIONS Expires: Sat, 02 Nov 2013 14:54:29 GMT Cache-Control: max-age=60 Thank you very much for your help maxim! Shall I report this to nginx bug tracker? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Nov 2 15:28:46 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 02 Nov 2013 11:28:46 -0400 Subject: -s reload does not update logfiles time, v1.5.6 Message-ID: <380be47f7ce43af426dfdca4ee8e9ef1.NginxMailingListEnglish@forum.nginx.org> A -s reload works as documented, however while the logfiles are being written to, the logfiles keep their last (before -s) time/date. Is this a known issue? After a simple restart the logfiles are time updated as normal. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244372,244372#msg-244372 From e1c1bac6253dc54a1e89ddc046585792 at posteo.net Sat Nov 2 15:50:33 2013 From: e1c1bac6253dc54a1e89ddc046585792 at posteo.net (e1c1bac6253dc54a1e89ddc046585792 at posteo.net) Date: Sat, 02 Nov 2013 16:50:33 +0100 Subject: -s reload does not update logfiles time, v1.5.6 In-Reply-To: <380be47f7ce43af426dfdca4ee8e9ef1.NginxMailingListEnglish@forum.nginx.org> References: <380be47f7ce43af426dfdca4ee8e9ef1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <85af4f9cc64179e484ce705ed4168c90@posteo.de> Am 02.11.2013 16:28 schrieb itpp2012: > A -s reload works as documented, however while the logfiles are being > written to, the logfiles keep their last (before -s) time/date. Is > this a > known issue? It's a documented "issue" .. http://nginx.org/en/docs/beginners_guide.html use -s reopen (or kill -USR1) to reopen the logfiles. From francis at daoine.org Sat Nov 2 16:22:42 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Nov 2013 16:22:42 +0000 Subject: proxy_pass not passing to dynamic $host In-Reply-To: <7394a368224ed2eff51306c7b899a6ec.NginxMailingListEnglish@forum.nginx.org> References: <20131102091837.GD16008@craic.sysops.org> <7394a368224ed2eff51306c7b899a6ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131102162242.GE16008@craic.sysops.org> On Sat, Nov 02, 2013 at 10:21:14AM -0400, nehay2j wrote: Hi there, I suspect you'll get better help from someone else. I'm unable to work out what it is that you want nginx to do. > I added rewrite command so that the url doesn't show IP passed to the nginx. What does that mean, in the problem-report format of "I use this configuration; I make request A; I get response B; but I expect response C"? > Curl gives a 302 because it doesnt have the sessionid with it. If there is a > session id that is passed to the application running on > http://23.23.234.234:8080/test, it will take us to app. What does "take us to app" mean? The problem report should be in the same format as above. What happens when you add the sessionid to the curl request? (The whole point of using curl is to make it easier to see what is happening. If it doesn't make it easier, don't use curl. Instead, find some other way of clearly describing the unwanted behaviour that you see.) > If I remove the rewrite command and provides the html page in proxy_pass > i.e. proxy_pass http://$ec2instance:8080/test/test.html, it gets to the > test.html page of the application but does not load any .css .js files and > says open failed "/opt/nginx/html/test/test.html. The same problem report format will probably help here too. Each css or js request is a separate http request, so for each one you can say "I make request A; I get response B; but I expect response C". 
Good luck with it, f -- Francis Daly francis at daoine.org From pasik at iki.fi Mon Nov 4 11:58:09 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Mon, 4 Nov 2013 13:58:09 +0200 Subject: nginx http proxy support for backend server health checks / status monitoring url In-Reply-To: References: <20131031122641.GY2924@reaktio.net> <20131031124120.GD90747@lo0.su> Message-ID: <20131104115809.GE2924@reaktio.net> On Fri, Nov 01, 2013 at 01:58:52PM +0800, Weibin Yao wrote: > HPPS health check is difficult for the check module, I have added an > alternative feature for this request. The 'port' option can be > specifed with different port from the server's original port. For > example: > > server { > server 192.168.1.1:443; > > check interval=3000 rise=1 fall=3 timeout=2000 type=http port=80; > check_http_send "GET / HTTP/1.0\r\n\r\n"; > check_http_expect_alive http_2xx http_3xx; > } > > This feature has been exeisted in the develop branch for upstream > check module: (https://github.com/yaoweibin/nginx_upstream_check_module/tree/development) > Or tengine: http://tengine.taobao.org/document/http_upstream_check.html > Great! Exactly what I need.. I'll test it soon. -- Pasi > 2013/10/31 Ruslan Ermilov : > > On Thu, Oct 31, 2013 at 02:26:41PM +0200, Pasi K?rkk?inen wrote: > >> Hello, > >> > >> I'm using nginx as a http proxy / loadbalancer for an application which > >> which has the following setup on the backend servers: > >> > >> - https/403 provides the application at: > >> - https://hostname-of-backend/app/ > >> > >> - status monitoring url is available at: > >> - http://hostname-of-backend/foo/server_status > >> - https://hostname-of-backend/foo/server_status > >> > >> So the status url is available over both http and https, and the status url tells if the application is fully up and running or not. > >> Actual application is only available over https. > >> > >> It's important to decide the backend server availability based on the status url contents/reply, > >> otherwise you might push traffic to a backend that isn't fully up and running yet, > >> causing false errors to end users. > >> > >> So.. I don't think nginx currently provides proper status monitoring url support for proxy backends ? > >> > >> I've found some plugins for this, but they seem to have limitations aswell: > >> > >> - http://wiki.nginx.org/HttpHealthcheckModule > >> - https://github.com/cep21/healthcheck_nginx_upstreams > >> - only http 1.0 support, no http 1.1 support > >> - doesn't seem to be maintained anymore, latest version 2+ years old > >> > >> - https://github.com/yaoweibin/nginx_upstream_check_module > >> - only supports http backends, so health checks must be over http aswell, not over https > >> - if actual app is on 443/https, cannot configure separate port 80 for health checks over http > >> - only "ssl" health check possible for https backends > >> > >> > >> Any suggestions? Or should I start hacking and improving the existing plugins.. > >> Thanks! 
> > > > This functionality is currently available in our commercial version: > > http://nginx.com/products/ > > > > The documentation is here: > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#health_check > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Nov 4 13:00:46 2013 From: nginx-forum at nginx.us (odesport) Date: Mon, 04 Nov 2013 08:00:46 -0500 Subject: Define a proxy for Nginx Message-ID: Hello, My Nginx servers are behind a proxy. Some PHP apps need to reach external web sites (for RSS feeds for example). I've tried this in nginx.conf : env http_proxy=http://myproxy:port but there is no effect. How can I define a proxy for nginx ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244407,244407#msg-244407 From contact at jpluscplusm.com Mon Nov 4 14:01:39 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 4 Nov 2013 14:01:39 +0000 Subject: Define a proxy for Nginx In-Reply-To: References: Message-ID: On 4 November 2013 13:00, odesport wrote: > Hello, > > My Nginx servers are behind a proxy. Some PHP apps need to reach external > web sites (for RSS feeds for example). I've tried this in nginx.conf : > > env http_proxy=http://myproxy:port > > but there is no effect. > > How can I define a proxy for nginx ? I don't know the answer to that specific question, but I believe you're asking the /wrong/ question :-) Nginx doesn't execute your PHP, it just passes the requests to another process. Your PHP apps will be executing in the context of this different process (such as php-fpm) and it is /that/ process which you need to inform about an outbound HTTP proxy. The specifics of how you do that will depend on which process you've chosen to contain your PHP, and the way in which your PHP makes outbound HTTP calls. HTH, Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Mon Nov 4 14:09:58 2013 From: nginx-forum at nginx.us (odesport) Date: Mon, 04 Nov 2013 09:09:58 -0500 Subject: Define a proxy for Nginx In-Reply-To: References: Message-ID: <7ec2d3bb41a8c060b44a50c1f0ec49cd.NginxMailingListEnglish@forum.nginx.org> I can't modify PHP code. I've managed to do this for Apache by adding the line export http_proxy="http://myproxy:port" in /etc/apache2/envvars Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244407,244410#msg-244410 From ian.hobson at ntlworld.com Mon Nov 4 17:37:15 2013 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Mon, 04 Nov 2013 17:37:15 +0000 Subject: Help needed with config Message-ID: <5277DB4B.9010906@ntlworld.com> Hi, I'm baffled. What I want to do is to serve static and php files from one root if they exist there, and from another if they don't, and give a 404 error if the file is in neither location. I have the following config file. 
server { server_name reseller.anake.hcs; listen 80; fastcgi_read_timeout 300; index index.php; set $resellerroot "/home/ian/websites/reseller/htdocs"; set $centralroot "/home/ian/websites/coachmaster3dev/htdocs"; root $resellerroot; # if / then redirect to index.php location = / { # serve /index.php rewrite ^ /index.php last; } # if local php file exits, serve with fcgi location ~ \.php$ { try_files $uri $uri/ @masterphp; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param CENTRAL_ROOT $centralroot; fastcgi_param RESELLER_ROOT $resellerroot; include /etc/nginx/fastcgi_params; } # serve php file from master root location @masterphp { root $centralroot; try_files $uri $uri/ =404; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $centralroot$fastcgi_script_name; fastcgi_param CENTRAL_ROOT $centralroot; fastcgi_param RESELLER_ROOT $resellerroot; include /etc/nginx/fastcgi_params; } # serve local static files if they exist try_files $uri @masterstatic; # switch to master set when they don't location @masterstatic { root $centralroot; try_files $uri =404; } # now to configure the long polling push_store_messages on; location /publish { push_publisher; set $push_channel_id $arg_id; push_message_timeout 30s; push_max_message_buffer_length 10; } # public long-polling endpoint location /activity { push_subscriber; push_subscriber_concurrency broadcast; set $push_channel_id $arg_id; default_type text/plain; } } It gives me "No input file specified. " for *all* inputs - and I mean all. Files in $centralroot, files in $resellerroot, files in neither, static files, and php files. Why? What am I doing silly??? I'm using nginx 1.2.6, compiled with the Comet module included. Thanks Ian From jeroen.ooms at stat.ucla.edu Mon Nov 4 18:42:00 2013 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Mon, 4 Nov 2013 10:42:00 -0800 Subject: Support for relative URL in Location header Message-ID: HTTP status codes such as 201, 301, 302, etc rely on the HTTP Location header. The current standard of HTTP specifies that this URL must be absolute. However, all popular browsers will accept a relative URL, and it is correct according to the upcoming revision of HTTP/1.1. See also [1]. I noticed that the version of nginx that I'm running (1.1.9, ubuntu precise) does not properly interpret a relative URL. The docs on "proxy_redirect" state that "The default replacement specified by the default parameter uses the parameters of the location and proxy_pass directives. " [2]. However it does not work when the Location path is relative. For example if: location /one/ { proxy_pass http://upstream:port/two/; proxy_redirect default; } Then a location header "http://upstream:port/two/foo.bar" gets rewritten to "/one/foo.bar". However a location header "/two/foo.bar" does not get rewritten to "/one/foo.bar", as it should if the relative URL were supported. Is this something that will, or already has been implemented in more recent versions of nginx? 
[1] http://en.wikipedia.org/wiki/HTTP_location#Relative_URL_example [2] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect From francis at daoine.org Mon Nov 4 19:01:37 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 4 Nov 2013 19:01:37 +0000 Subject: Help needed with config In-Reply-To: <5277DB4B.9010906@ntlworld.com> References: <5277DB4B.9010906@ntlworld.com> Message-ID: <20131104190137.GI16008@craic.sysops.org> On Mon, Nov 04, 2013 at 05:37:15PM +0000, Ian Hobson wrote: Hi there, > I'm baffled. What I want to do is to serve static and php files from one > root if they exist there, and > from another if they don't, and give a 404 error if the file is in > neither location. I have the following config file. > It gives me "No input file specified. " for *all* inputs - and I mean > all. Files in $centralroot, > files in $resellerroot, files in neither, static files, and php files. > > Why? What am I doing silly??? Not using that config file, that server block, or that nginx? > I'm using nginx 1.2.6, compiled with the Comet module included. With $ sbin/nginx -V nginx version: nginx/1.2.6 built by gcc 4.4.5 (Debian 4.4.5-8) configure arguments: --with-debug --add-module=../nginx_http_push_module-0.692/ it happily serves local files and php scripts for me. What does the debug log say is happening, when you use "curl" to access one specific url? f -- Francis Daly francis at daoine.org From ian.hobson at ntlworld.com Mon Nov 4 19:23:28 2013 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Mon, 04 Nov 2013 19:23:28 +0000 Subject: Help needed with config In-Reply-To: <20131104190137.GI16008@craic.sysops.org> References: <5277DB4B.9010906@ntlworld.com> <20131104190137.GI16008@craic.sysops.org> Message-ID: <5277F430.4030609@ntlworld.com> Hi Francis, Your guess was spot on. I had domain "resellerdev.anake.hcs" confused with "reseller.anake.hcs" , becasue of a switch about on my favourites menu :( Thanks. Ian On 04/11/2013 19:01, Francis Daly wrote: > On Mon, Nov 04, 2013 at 05:37:15PM +0000, Ian Hobson wrote: > > Hi there, > >> I'm baffled. What I want to do is to serve static and php files from one >> root if they exist there, and >> from another if they don't, and give a 404 error if the file is in >> neither location. I have the following config file. >> It gives me "No input file specified. " for *all* inputs - and I mean >> all. Files in $centralroot, >> files in $resellerroot, files in neither, static files, and php files. >> >> Why? What am I doing silly??? > Not using that config file, that server block, or that nginx? > >> I'm using nginx 1.2.6, compiled with the Comet module included. > With > > $ sbin/nginx -V > nginx version: nginx/1.2.6 > built by gcc 4.4.5 (Debian 4.4.5-8) > configure arguments: --with-debug --add-module=../nginx_http_push_module-0.692/ > > it happily serves local files and php scripts for me. > > What does the debug log say is happening, when you use "curl" to access > one specific url? > > f -- Ian Hobson 31 Sheerwater, Northampton NN3 5HU, Tel: 01604 513875 Preparing eBooks for Kindle and ePub formats to give the best reader experience. From mdounin at mdounin.ru Tue Nov 5 01:08:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 05:08:02 +0400 Subject: Support for relative URL in Location header In-Reply-To: References: Message-ID: <20131105010802.GR95765@mdounin.ru> Hello! 
On Mon, Nov 04, 2013 at 10:42:00AM -0800, Jeroen Ooms wrote: > HTTP status codes such as 201, 301, 302, etc rely on the HTTP Location > header. The current standard of HTTP specifies that this URL must be > absolute. However, all popular browsers will accept a relative URL, > and it is correct according to the upcoming revision of HTTP/1.1. See > also [1]. > > I noticed that the version of nginx that I'm running (1.1.9, ubuntu > precise) does not properly interpret a relative URL. The docs on > "proxy_redirect" state that "The default replacement specified by the > default parameter uses the parameters of the location and proxy_pass > directives. " [2]. However it does not work when the Location path is > relative. For example if: > > location /one/ { > proxy_pass http://upstream:port/two/; > proxy_redirect default; > } Docs explicity say that this is equivalent to location /one/ { proxy_pass http://upstream:port/two/; proxy_redirect http://upstream:port/two/ /one/; Note that it just illustrates what the sentence you've quoted means. > Then a location header "http://upstream:port/two/foo.bar" gets > rewritten to "/one/foo.bar". However a location header "/two/foo.bar" > does not get rewritten to "/one/foo.bar", as it should if the relative > URL were supported. > > Is this something that will, or already has been implemented in more > recent versions of nginx? The proxy_redirect directive does string replacement, not URI mapping. If you want it to replace "/two/" with "/one/", you can configure it to do so. It's just not something it does by default. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Nov 5 01:51:53 2013 From: nginx-forum at nginx.us (etrader) Date: Mon, 04 Nov 2013 20:51:53 -0500 Subject: How to use location before rewrite Message-ID: I have a set of rewrite rules as rewrite ^/(.*) /script.php?file=$1 last; location ~ \.php$ { php proxy } but I want to make a few exceptions as location = file1|file2|file3 { static delivery } but rewrite will change any static file to php file before the latter location. I cannot use rewrite inside the location, as it should generate php script before location (and I have other locations for subdirectories too). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244442,244442#msg-244442 From francis at daoine.org Tue Nov 5 07:34:37 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Nov 2013 07:34:37 +0000 Subject: How to use location before rewrite In-Reply-To: References: Message-ID: <20131105073437.GL16008@craic.sysops.org> On Mon, Nov 04, 2013 at 08:51:53PM -0500, etrader wrote: Hi there, the answer to the question in the Subject: line is "you don't". > I have a set of rewrite rules as > > rewrite ^/(.*) /script.php?file=$1 last; > > location ~ \.php$ { > php proxy > } > > but I want to make a few exceptions as > > location = file1|file2|file3 { > static delivery > } Depending on the full plan, perhaps either put your rewrites inside "location / {}"; or add earlier rewrites of the form rewrite ^(/file1)$ $1 last; f -- Francis Daly francis at daoine.org From contact at jpluscplusm.com Tue Nov 5 11:42:44 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 5 Nov 2013 11:42:44 +0000 Subject: Define a proxy for Nginx In-Reply-To: <7ec2d3bb41a8c060b44a50c1f0ec49cd.NginxMailingListEnglish@forum.nginx.org> References: <7ec2d3bb41a8c060b44a50c1f0ec49cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 4 November 2013 14:09, odesport wrote: > I can't modify PHP code. 
I've managed to do this for Apache by adding the > line > > export http_proxy="http://myproxy:port" > > in /etc/apache2/envvars Glad to hear you've solved your problem :-) J From richard at kearsley.me Tue Nov 5 13:30:38 2013 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 05 Nov 2013 13:30:38 +0000 Subject: multiple ssl certificates within single server {} block Message-ID: <5278F2FE.1090600@kearsley.me> Hi I was wondering if there's any way to have a configuration like this? server { listen 80; listen 443 ssl; ssl_certificate www.example.com.cer; ssl_certificate_key www.example.com.key; ssl_certificate www.test.com.cer; ssl_certificate_key www.test.com.key; ssl_certificate www.something.com.cer; ssl_certificate_key www.something.com.key; location / { # lots of config here # which I really don't want to duplicate } } I want to avoid duplicating server blocks since they will have exactly the same location configurations below them and I want to avoid using server_name since my server handles requests from lots of different domain names It would need to use SNI - only a single ip for all domains maybe having the server name as part of the "ssl_certificate" line would be quite elegant: ssl_certificate www.example.com.cer server=www.example.com; ssl_certificate_key www.example.com.key server=www.example.com; Thanks -- Richard From contact at jpluscplusm.com Tue Nov 5 13:50:48 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 5 Nov 2013 13:50:48 +0000 Subject: multiple ssl certificates within single server {} block In-Reply-To: <5278F2FE.1090600@kearsley.me> References: <5278F2FE.1090600@kearsley.me> Message-ID: On 5 November 2013 13:30, Richard Kearsley wrote: > Hi > > I was wondering if there's any way to have a configuration like this? > > server > { > listen 80; > listen 443 ssl; > > ssl_certificate www.example.com.cer; > ssl_certificate_key www.example.com.key; > ssl_certificate www.test.com.cer; > ssl_certificate_key www.test.com.key; > ssl_certificate www.something.com.cer; > ssl_certificate_key www.something.com.key; > > location / > { > # lots of config here > # which I really don't want to duplicate > } > } > I want to avoid duplicating server blocks since they will have exactly the > same location configurations below them > and I want to avoid using server_name since my server handles requests from > lots of different domain names > It would need to use SNI - only a single ip for all domains How are you intending to use SNI /without/ also providing multiple server_names (either split across several server{}s or all inside one server{})? Please show a duplicated (i.e. operationally inefficient) config that you wish to aggregate, as I don't understand the result you're aiming for. J From richard at kearsley.me Tue Nov 5 13:57:21 2013 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 05 Nov 2013 13:57:21 +0000 Subject: multiple ssl certificates within single server {} block In-Reply-To: References: <5278F2FE.1090600@kearsley.me> Message-ID: <5278F941.8060905@kearsley.me> On 05/11/13 13:50, Jonathan Matthews wrote: > Please show a duplicated (i.e. operationally inefficient) config that > you wish to aggregate, as I don't understand the result you're aiming > for. 
J something like this is the only way I see to do it currently: http { server { listen 80; listen 443 ssl; server_name www.example.com ssl_certificate www.example.com.cer; ssl_certificate_key www.example.com.key; location / { # lots of config here # which I really don't want to duplicate } # and about 10 other locations! } server { listen 80; listen 443 ssl; server_name www.test.com ssl_certificate www.test.com.cer; ssl_certificate_key www.test.com.key; location / { # lots of config here # which I really don't want to duplicate } # and about 10 other locations! } server { listen 80; listen 443 ssl; server_name www.something.com ssl_certificate www.something.com.cer; ssl_certificate_key www.something.com.key; location / { # lots of config here # which I really don't want to duplicate } # and about 10 other locations! } } this could go on for 100's of domains... Cheers -- Richard From appa at perusio.net Tue Nov 5 15:25:10 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Tue, 5 Nov 2013 16:25:10 +0100 Subject: Define a proxy for Nginx In-Reply-To: <7ec2d3bb41a8c060b44a50c1f0ec49cd.NginxMailingListEnglish@forum.nginx.org> References: <7ec2d3bb41a8c060b44a50c1f0ec49cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Assuming you're using php-fpm or php-cgi you can set a param to pass that as a server variable: fastcgi_param HTTP_PROXY 'http://proxy:myport'; Then you'll have a $_SERVER['HTTP_PROXY'] entry for the global $_SERVER. HTH, ----appa On Mon, Nov 4, 2013 at 3:09 PM, odesport wrote: > I can't modify PHP code. I've managed to do this for Apache by adding the > line > > export http_proxy="http://myproxy:port" > > in /etc/apache2/envvars > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,244407,244410#msg-244410 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 5 15:35:13 2013 From: nginx-forum at nginx.us (odesport) Date: Tue, 05 Nov 2013 10:35:13 -0500 Subject: Define a proxy for Nginx In-Reply-To: References: Message-ID: Thanks, but with fastcgi_param I have to modify PHP code. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244407,244462#msg-244462 From appa at perusio.net Tue Nov 5 15:50:37 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Tue, 5 Nov 2013 16:50:37 +0100 Subject: Define a proxy for Nginx In-Reply-To: References: Message-ID: No you don't. It's a server config. It will set the same global as the Apache env thing AFAIK. ----appa On Tue, Nov 5, 2013 at 4:35 PM, odesport wrote: > Thanks, but with fastcgi_param I have to modify PHP code. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,244407,244462#msg-244462 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From timwolla at bastelstu.be Tue Nov 5 16:27:47 2013 From: timwolla at bastelstu.be (=?ISO-8859-1?Q?Tim_D=FCsterhus?=) Date: Tue, 05 Nov 2013 17:27:47 +0100 Subject: multiple ssl certificates within single server {} block In-Reply-To: <5278F941.8060905@kearsley.me> References: <5278F2FE.1090600@kearsley.me> <5278F941.8060905@kearsley.me> Message-ID: <52791C83.1030508@bastelstu.be> On 05.11.2013 14:57, Richard Kearsley wrote: > this could go on for 100's of domains... This sounds like you want to use `include`, i use it myself for general settings, valid for any domain: server { listen 443 ssl; include /etc/nginx/ssl-common.conf; ssl_certificate /etc/nginx/ssl/com.example.crt; server_name example.com; include /etc/nginx/common.conf; } With the contents of /etc/nginx/common.conf being: location ~ /.ht { return 444; } add_header X-Frame-Options SAMEORIGIN; Tim From jeroen.ooms at stat.ucla.edu Tue Nov 5 16:30:42 2013 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Tue, 5 Nov 2013 08:30:42 -0800 Subject: Support for relative URL in Location header In-Reply-To: <20131105010802.GR95765@mdounin.ru> References: <20131105010802.GR95765@mdounin.ru> Message-ID: On Mon, Nov 4, 2013 at 5:08 PM, Maxim Dounin wrote: > The proxy_redirect directive does string replacement, not URI > mapping. If you want it to replace "/two/" with "/one/", you can > configure it to do so. It's just not something it does by > default. Exactly. I was trying to argue that it probably should do this by default, otherwise it leads to behavior that is incorrect in light of the revised interpretation of the Location header. From richard at kearsley.me Tue Nov 5 16:51:12 2013 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 05 Nov 2013 16:51:12 +0000 Subject: multiple ssl certificates within single server {} block In-Reply-To: <52791C83.1030508@bastelstu.be> References: <5278F2FE.1090600@kearsley.me> <5278F941.8060905@kearsley.me> <52791C83.1030508@bastelstu.be> Message-ID: <52792200.5080302@kearsley.me> On 05/11/13 16:27, Tim D?sterhus wrote: > This sounds like you want to use `include`, i use it myself for general > settings, valid for any domain: fair point would it work like this (an include in an include?) http { include www.example.com.conf; include www.test.com.conf; include www.something.com.conf; } www.example.com.conf: server { listen 80; listen 443 ssl; server_name www.example.com; ssl_certificate www.example.com.cer; ssl_certificate_key www.example.com.key; include locations.conf; } www.test.com.conf: server { listen 80; listen 443 ssl; server_name www.test.com; ssl_certificate www.test.com.cer; ssl_certificate_key www.test.com.key; include locations.conf; } www.something.com.conf: server { listen 80; listen 443 ssl; server_name www.something.com; ssl_certificate www.something.com.cer; ssl_certificate_key www.something.com.key; include locations.conf; } locations.conf: location / { # lots of config here # which I really don't want to duplicate } # and about 10 other locations! From mdounin at mdounin.ru Tue Nov 5 16:53:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 20:53:56 +0400 Subject: Support for relative URL in Location header In-Reply-To: References: <20131105010802.GR95765@mdounin.ru> Message-ID: <20131105165356.GZ95765@mdounin.ru> Hello! On Tue, Nov 05, 2013 at 08:30:42AM -0800, Jeroen Ooms wrote: > On Mon, Nov 4, 2013 at 5:08 PM, Maxim Dounin wrote: > > The proxy_redirect directive does string replacement, not URI > > mapping. 
If you want it to replace "/two/" with "/one/", you can > > configure it to do so. It's just not something it does by > > default. > > Exactly. I was trying to argue that it probably should do this by > default, otherwise it leads to behavior that is incorrect in light of > the revised interpretation of the Location header. It does exactly what's advertised, so I don't think it's incorrect in any light. It might be more convenient to have it to replace "/two/" with "/one/" by default, but given the number of various relative URI forms (e.g., consider "//upstream:port/two/"), I don't think it's feasible without changing proxy_redirect to actually do URI mapping instead of string replacement. -- Maxim Dounin http://nginx.org/en/donation.html From timwolla at bastelstu.be Tue Nov 5 16:59:26 2013 From: timwolla at bastelstu.be (=?ISO-8859-1?Q?Tim_D=FCsterhus?=) Date: Tue, 05 Nov 2013 17:59:26 +0100 Subject: multiple ssl certificates within single server {} block In-Reply-To: <52792200.5080302@kearsley.me> References: <5278F2FE.1090600@kearsley.me> <5278F941.8060905@kearsley.me> <52791C83.1030508@bastelstu.be> <52792200.5080302@kearsley.me> Message-ID: <527923EE.2080803@bastelstu.be> On 05.11.2013 17:51, Richard Kearsley wrote: > would it work like this (an include in an include?) Did you try it? ;) Yes it does work. Debian by default uses a folder /etc/nginx/sites-enabled for all vHosts / domains. You can easily include any file in there via: include /etc/nginx/sites-enabled/*; An excerpt of my /etc/nginx looks like this: /etc/nginx/ +-- common.conf +-- nginx.conf +-- passwd | +-- munin.example.com +-- sites-available | +-- _ | +-- example.com | +-- localhost | +-- munin.example.com +-- sites-enabled | +-- _ -> /etc/nginx/sites-available/_ | +-- example.com -> /etc/nginx/sites-available/example.com | +-- localhost -> /etc/nginx/sites-available/localhost | +-- munin.example.com -> /etc/nginx/sites-available/munin.example.com +-- ssl | +-- _ | +-- com.example.crt | +-- com.example.munin.crt +-- ssl-common.conf nginx.conf includes all the sites-enabled via the line above. The sites-enabled include the respective common.conf / ssl-common.conf like explained in my last mail. Tim
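Putting Richard's per-domain files and Tim's layout together, each file under sites-enabled reduces to a short stub like the one below. This is a sketch only: the private key path is an assumption, since Tim's tree does not show where keys live, and the shared location blocks are assumed to sit in /etc/nginx/common.conf as in his earlier message:

    server {
        listen 80;
        listen 443 ssl;
        server_name www.example.com;
        include /etc/nginx/ssl-common.conf;                    # shared SSL settings
        ssl_certificate     /etc/nginx/ssl/com.example.crt;
        ssl_certificate_key /etc/nginx/ssl/com.example.key;    # assumed path, not shown in the tree
        include /etc/nginx/common.conf;                        # the ~10 shared locations live here, once
    }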
From nginx-forum at nginx.us Wed Nov 6 08:49:59 2013
From: nginx-forum at nginx.us (ans34)
Date: Wed, 06 Nov 2013 03:49:59 -0500
Subject: Nginx 1.2.7 + Apache2 2.2.24 on Freebsd 9.1 speedtest mini hosting upload speed problem
Message-ID: <7f135f1b7afa9c5d435f231d12a093db.NginxMailingListEnglish@forum.nginx.org>

I have Nginx 1.2.7 + Apache2 2.2.24. The Nginx config is default, no buffer
sizes changed etc. I moved my speedtest mini host there and got a problem
with upload speed. The speedtest folder is on a RAMdisk, so there are no
problems with access speed, and download speed is 300+ Mbit/s. But upload
was only 40-50 Mbit/s.

After setting "sendfile off;" in nginx.conf, upload speed increased to
80-120 Mbit/s.

During an upload test, top in io mode shows nginx at 100%:

  VCSW  IVCSW  READ  WRITE  FAULT  TOTAL  PERCENT  COMMAND
  4839      0     0     16      0     16  100.00%  nginx

and gstat shows write operations on the hdd. Setting client_body_temp_path
to a ramdisk folder and changing the client_body* variables to non-default
values makes upload speed slower.

Is there anything else to change to make upload speed much faster?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244483,244483#msg-244483

From contact at jpluscplusm.com Wed Nov 6 09:24:07 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Wed, 6 Nov 2013 09:24:07 +0000
Subject: Define a proxy for Nginx
In-Reply-To:
References: <7ec2d3bb41a8c060b44a50c1f0ec49cd.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On 5 November 2013 15:25, António P. P. Almeida wrote:
> Assuming you're using php-fpm or php-cgi you can set a param to pass that as
> a server variable:
>
> fastcgi_param HTTP_PROXY 'http://proxy:myport';
>
> Then you'll have a $_SERVER['HTTP_PROXY'] entry for the global $_SERVER.

I don't think this is right, for a couple of reasons.

Firstly, some reading has suggested that there isn't a way to force
the stock PHP HTTP request libraries to use a proxy just by setting an
envvar. Witness, for instance, the code-level changes that are (/were?)
required to get a relatively mainstream piece of s/w like WP to work with
an outbound proxy: http://wpengineer.com/1227/wordpress-proxysupport/

Secondly, the specific string mentioned would (unless I'm missing
something, which is very possible!) open a security hole: $_SERVER
contains all user-specified HTTP request headers with added "HTTP_"
prefixes. The method suggested, if it worked, would mean that, as a
user, I could simply provide a "Proxy: my.proxy.server.ip" header and
get all outbound HTTP traffic (for my request) proxied via *my* external
server. Thereby exposing internal information such as 3rd party API
passwords, internal HTTP API call details, etc etc.

Again, I may be missing something with either of these points but,
obviously, I don't see what it might be! :-)

Regards,
Jonathan

From appa at perusio.net Wed Nov 6 09:39:55 2013
From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=)
Date: Wed, 6 Nov 2013 10:39:55 +0100
Subject: Define a proxy for Nginx
In-Reply-To:
References: <7ec2d3bb41a8c060b44a50c1f0ec49cd.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

As for the first point, of course that variable needs to be used on the
application side.
The OP suggested that is cased since he described basically that with the Apache env var directive. As for the second, I was not considering security issues, but: 1. You need to be able to edit the php-fpm configuration. 2. You need to do a reload for the config to take effect. In a properly setup Nginx both of these require *root* access. Yes it's hardly the best way to do things. But then it works and it isn't either the worst. Note that it's not a header at all, but a parameter passed through the FCGI daemon on each request. ----appa On Wed, Nov 6, 2013 at 10:24 AM, Jonathan Matthews wrote: > On 5 November 2013 15:25, Ant?nio P. P. Almeida wrote: > > Assuming you're using php-fpm or php-cgi you can set a param to pass > that as > > a server variable: > > > > fastcgi_param HTTP_PROXY 'http://proxy:myport'; > > > > Then you'll have a $_SERVER['HTTP_PROXY'] entry for the global $_SERVER. > > I don't think this is right, for a couple of reasons. > > Firstly, some reading has suggested that there isn't a way to force > the stock PHP HTTP request libraries to use a proxy just by setting an > envvar. Witness, for instance, the code-level changes that are > (/were?) required to get a relatively mainstream piece of s/w like WP > to work with an outbound proxy: > http://wpengineer.com/1227/wordpress-proxysupport/ > > Secondly, the specific string mentioned would (unless I'm missing > something, which is very possible!) open a security hole: $_SERVER > contains all user-specified HTTP request headers with added "HTTP_" > prefixes. The method suggested, if it worked, would mean that, as a > user, I could simply provide a "Proxy: my.proxy.server.ip" header and > get all outbound HTTP traffic (for my request) proxied via *my* > external server. Thereby exposing internal information such as 3rd > party API passwords, internal HTTP API call details, etc etc. > > Again, I may be missing something with either of these points but, > obviously, I don't see what it might be! :-) > > Regards, > Jonathan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Nov 6 23:15:23 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 6 Nov 2013 15:15:23 -0800 Subject: [ANN] ngx_openresty mainline version 1.4.3.3 released Message-ID: Hello guys! I am happy to announce that the new mainline version of ngx_openresty, 1.4.3.3, is now released: http://openresty.org/#Download Special thanks go to all the contributors for making this happen! Below is the complete change log for this release, as compared to the last (mainline) release, 1.4.3.1: * upgraded LuaNginxModule to 0.9.2. * feature: added new API function ngx.re.find(), which is similar to ngx.re.match, but only returns the beginning index and end index (1-based) of the whole match, which is 30% ~ 40% faster than "ngx.re.match" for simplest regexes. * feature: added new API function ngx.config.prefix() to return the Nginx server "prefix" path. * bugfix: reading ngx.header.HEADER could result in Lua string storage corruptions. thanks Dane Knecht for the report. * bugfix: ngx.re.match: the "ctx" parameter table's "pos" field should start from 1 instead of 0. * bugfix: fixed compilation errors with Nginx older than 1.0.0. * bugfix: localizing the coroutine.* API functions in init_by_lua* for future use in contexts like content_by_lua* might hang the request. 
thanks James Hurst for the report. * upgraded SrcacheNginxModule to 0.24. * bugfix: fixed compilation errors with Nginx older than 0.9.2. * bugfix: applied the cache_manager_exit patch to the Nginx core to fix an issue when the cache manager process is shutting down. The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004003 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From nginx-forum at nginx.us Thu Nov 7 03:35:45 2013 From: nginx-forum at nginx.us (njvack) Date: Wed, 06 Nov 2013 22:35:45 -0500 Subject: 499s and repeated requests Message-ID: <860a31dcce7844bbfd47b5f038902c29.NginxMailingListEnglish@forum.nginx.org> So, I'm running a rather standard nginx -> unicorn -> rails application, all over HTTPS. Recently, I started looking into some strange load bursts (which have been going on for a while but are getting more important), and saw repeated requests (both GET and POST) in my access logs, all getting a 499 status code and reporting 0 bytes sent. I know the browser isn't seeing these 499s, but that they're closing the connection before getting data. These aren't long-running requests; I can find them in my Rails logs and they're returning in 50-100ms. I've seen in other threads that "the user clicked save twice really fast" is the most likely cause -- but we had one client make 6500 requests over the course of a few minutes the other day, so, um, it's probably not that. I'm rather sure this isn't some kind of denial-of-service attack, either; they appear to be legit requests repeated by... the browser...? insanely quickly. I can't, alas, find any way to replicate this locally. I know there are application-level things we can do to mitigate the effects of something like this (duplicate records being created and such), but I want to check at this end: is there some kind of configuration error I could have made that would cause this? In case it matters, we appear to still be running nginx 1.1.19. Thanks! -Nate Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244509,244509#msg-244509 From timwolla at bastelstu.be Thu Nov 7 10:27:02 2013 From: timwolla at bastelstu.be (=?ISO-8859-1?Q?Tim_D=FCsterhus?=) Date: Thu, 07 Nov 2013 11:27:02 +0100 Subject: 499s and repeated requests In-Reply-To: <860a31dcce7844bbfd47b5f038902c29.NginxMailingListEnglish@forum.nginx.org> References: <860a31dcce7844bbfd47b5f038902c29.NginxMailingListEnglish@forum.nginx.org> Message-ID: <527B6AF6.7090603@bastelstu.be> On 07.11.2013 04:35, njvack wrote: > they appear to be legit requests repeated by... the browser...? insanely quickly. I experienced that kind of issue in either Firefox or Chrome (i think it was Firefox) myself. The only difference is that the status code was 444. After the closing of the connection by nginx Firefox would try to repeat the request, instead of showing an error message, until I closed the browser window or nginx served a valid page. The nginx I was running at that time was an nginx 1.4.3 built by dotdeb. 
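For tracking down this kind of repeat, a rough sketch of a timing-oriented
log format can help separate client-side retries from slow upstream
responses; the log path and the upstream name below are placeholders. The
proxy_ignore_client_abort directive additionally lets the upstream finish
even when the client disconnects, so duplicates show up as completed
requests rather than 499s:

    log_format timing '$remote_addr [$time_local] "$request" $status '
                      'req=$request_time up=$upstream_response_time '
                      '"$http_user_agent"';
    access_log /var/log/nginx/timing.log timing;

    location / {
        # keep proxying to completion even if the client closes early
        proxy_ignore_client_abort on;
        proxy_pass http://unicorn_upstream;   # placeholder upstream name
    }

This does not stop duplicate submissions by itself, but the timing fields
usually make it obvious whether the browser is retrying or the backend is
simply slow to answer.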
Tim From thilo at ginkel.com Thu Nov 7 10:37:22 2013 From: thilo at ginkel.com (Thilo-Alexander Ginkel) Date: Thu, 7 Nov 2013 11:37:22 +0100 Subject: PHP via FastCGI in / and sub-directory location w/ different roots Message-ID: Hi there, I am currently somewhat stuck getting the following setup up and running: I have one PHP web application residing in /usr/share/a that I'd like to have available at /. This works as expected. I have a second PHP web app residing in /var/www/b/public, that I'd like to have available at /b. My current ngnix (1.2.1) configuration looks like this: -- 8< -- server { listen *:443 ssl; listen [::]:443 ; [...] root /usr/share/a; index index.php; location ~ ^/b/.*\.php$ { rewrite ^/b(/.*) $1 break; root /var/www/b/public; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } location ~ ^/b/ { root /var/www/b/public; index index.php; } } -- 8< -- /etc/nginx/fastcgi_params: -- 8< -- fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $uri?$args; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; -- 8< -- I included the rewrite in location ~ ^/b/.*\.php$ because otherwise nginx/php-fpm will look for the script /b/index.php in /var/www/b/public/b/index.php, which has an extra "b/" in the path. With the rewrite enabled, however, the PHP application guesses its own path incorrectly issuing redirects to locations that leave out "/b'. I would like to avoid symlinking /var/www/b into /usr/share/a. Any ideas? Thanks, Thilo -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Nov 7 12:43:12 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 7 Nov 2013 07:43:12 -0500 Subject: PHP via FastCGI in / and sub-directory location w/ different roots In-Reply-To: References: Message-ID: IMHO, using a rewrite in nginx to modify the way PHP processes the request looks strange to me. Maybe am I wrong. Have you had a look in the content of the params you send to PHP through CGI? Overloading some of the variables by removing the leading '/b' there might help PHP resolving correctly the filename while searching it on disk. I would personally look in that direction. I would also use another PHP pool (thus another socket) to isolate both applications. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
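The pool isolation B.R. suggests could look roughly like the sketch below.
The second socket path is an assumption (the pool itself has to be declared
on the php-fpm side), and this only separates the two applications; it does
not by itself settle the SCRIPT_FILENAME question discussed above:

    location ~ ^/b/.*\.php$ {
        root /var/www/b/public;
        rewrite ^/b(/.*) $1 break;
        include /etc/nginx/fastcgi_params;
        # hypothetical second pool, defined separately in php-fpm's pool config
        fastcgi_pass unix:/var/run/php5-fpm-b.sock;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;   # original pool for the app at /
    }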
From David.Legg at smithelectric.com Fri Nov 8 11:51:36 2013
From: David.Legg at smithelectric.com (David Legg)
Date: Fri, 8 Nov 2013 05:51:36 -0600
Subject: File Checking Rewriting and Handing off to Back End
Message-ID:

Hi All,

We're starting to make more and more heavy use of Nginx having shifted
from Apache, mostly in handing off to backends running Puma and php-fpm.
However, wherever possible we naturally want Nginx to serve what it can
with only the bare minimum hitting the back end. However, this generally
means writing more and more complex rules.

I have a requirement for Nginx to serve a file if it exists or hand off
to a Rails controller. The URI is data?x=X&y=Y&z=Z which at the moment
goes direct to a Rails controller, sends the file back directly and saves
a static png file in the process.

What I need to do in Nginx is take the parameters of the URI and check
whether the file /data/X_Y_Z.png exists. If not, hand off to the Rails
controller. This means that Nginx serves files and only hands off to
Rails when needed. It also means that I need to formulate the file that
I want to check from the get parameters ($args_parameter?) in the Nginx
config.

I've been trying various things out through try_files which I think is
the right area to be looking at. Something like:

try_files /tiles/$args_z_$args_x_$args_y.png proxy_hand_off

Any ideas how I could do this? Anything would be appreciated including
an easier way I haven't thought of. I naturally want to keep 'if'
statements down to a minimum, if any at all.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francis at daoine.org Fri Nov 8 22:24:03 2013
From: francis at daoine.org (Francis Daly)
Date: Fri, 8 Nov 2013 22:24:03 +0000
Subject: PHP via FastCGI in / and sub-directory location w/ different roots
In-Reply-To:
References:
Message-ID: <20131108222403.GD28353@craic.sysops.org>

On Thu, Nov 07, 2013 at 11:37:22AM +0100, Thilo-Alexander Ginkel wrote:

Hi there,

> I am currently somewhat stuck getting the following setup up and running:

I've tried to replicate what you're doing, and it turns out I'm unable
to work out what exactly you're doing.

> I have one PHP web application residing in /usr/share/a that I'd like to
> have available at /. This works as expected.

OK.

> I have a second PHP web app residing in /var/www/b/public, that I'd like to
> have available at /b.

OK. So, when you ask for the url /b/file.txt, which file on the filesystem
do you want nginx to return? (And: which file does nginx return?)

And when you ask for the url /b/env.php, which file on the filesystem do
you want your fastcgi server to process?

(I would have expected the answers to these questions to be similar, but
using the config you provided, they look different. I suspect I'm doing
something wrong.)

> location ~ ^/b/.*\.php$ {
> rewrite ^/b(/.*) $1 break;
> root /var/www/b/public;

The rewrite and root here look a bit odd to me; but if it works, it works.

> fastcgi_param REQUEST_URI $uri?$args;

Is there a reason that isn't just "$request_uri" at the end? Maybe it's
a default value from a distribution of nginx?

> I included the rewrite in location ~ ^/b/.*\.php$ because otherwise
> nginx/php-fpm will look for the script /b/index.php in
> /var/www/b/public/b/index.php, which has an extra "b/" in the path.

That kind-of answers one of the "which file?" questions, but leaves the
other one open.
> With the rewrite enabled, however, the PHP application guesses its own path > incorrectly issuing redirects to locations that leave out "/b'. Do you know which variable the application uses to guess its own path? It could be REQUEST_URI or DOCUMENT_URI or maybe something else. (If it is REQUEST_URI, then the change above may work for you.) > I would like to avoid symlinking /var/www/b into /usr/share/a. > > Any ideas? It should be possible; but I'm unclear on what you're trying to do, so I can't suggest how to do it. Maybe the REQUEST_URI change above is useful? Depending on what else is in your config file, maybe replacing the two /b/-related location blocks with the single nested === location ^~ /b/ { alias /var/www/b/public/; index index.php; location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } } === would do what you want? (I've tested this with 1.2.6, not the 1.2.1 that you are using.) f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Nov 8 22:43:48 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 8 Nov 2013 22:43:48 +0000 Subject: File Checking Rewriting and Handing off to Back End In-Reply-To: References: Message-ID: <20131108224348.GE28353@craic.sysops.org> On Fri, Nov 08, 2013 at 05:51:36AM -0600, David Legg wrote: > I've been trying various things out through try_files which I think is the right area to be looking at. Something like: > > try_files /tiles/$args_z_$args_x_$args_y.png proxy_hand_off > > Any ideas how I could do this? The arguments to try_files are prefixed with the current $document_root before being sought as files (with the final argument being magic). So if you put your to-be-served file in the correct place, and use the variables correctly, then it should Just Work. try_files /tiles/${arg_z}_${arg_x}_${arg_y}.png @fallback; You might want to wrap that in a "location = /data {}" block. f -- Francis Daly francis at daoine.org From jombik at platon.org Sat Nov 9 20:44:52 2013 From: jombik at platon.org (Ondrej Jombik) Date: Sat, 9 Nov 2013 21:44:52 +0100 (CET) Subject: Filtering out long (invalid) hostnames Message-ID: Recently we have seen some kind of hacker attempt on our hosting servers, passing very long hostnames in the HTTP Host: header. That means length(hostname) was higher than 2000, for few requests even more than 10000. This was processed well by nginx, passed further to our upstreams, what caused only little trouble there: logs were filled with a lot of garbage. After bit of investigation, I have found that according to RFC, the longest domain name should not be more than 253 characters. Also, splitting domain into labels (labels are strings between dots), each label should not exceed 63 characters. For more info: http://en.wikipedia.org/wiki/Domain_Name_System (search for "Domain name syntax" part) That raises question how nginx handles this kind of long hostnames, and why it still pasess those invalid hostnames to backends (upstreams). However it still passes it, and we want to filter that out. Because the performance matters us much, we want to do that the best possible way. CASE #1: if ($host ~* "^.{254,}$") { return 403; } CASE #2: (this is probably more efficient) server { server_name "~^.{254,}$"; listen 80; return 403; } Case #2 is probably more efficient, but in both cases are regular expressions used. Would it matter if we put that server {} block at the end of our server list? Also would it make any sense to check for a dot (\.) 
in a server_name or $host, and when no dot is present, return 403 as well?

Thanks for sharing your thoughts
Ondrej

--
Ondrej JOMBIK
Platon Technologies s.r.o., Hlavna 3, Sala SK-92701
+421 903 PLATON - info at platon.org - http://platon.org
My current location: Phoenix, Arizona
My current timezone: -0700 UTC (MST) (updated automatically)

From David.Legg at smithelectric.com Sun Nov 10 23:08:32 2013
From: David.Legg at smithelectric.com (David Legg)
Date: Sun, 10 Nov 2013 17:08:32 -0600
Subject: File Checking Rewriting and Handing off to Back End
In-Reply-To: <20131108224348.GE28353@craic.sysops.org>
Message-ID:

On 08/11/2013 22:43, "Francis Daly" wrote:
>The arguments to try_files are prefixed with the current $document_root
>before being sought as files (with the final argument being magic).
>
>So if you put your to-be-served file in the correct place, and use the
>variables correctly, then it should Just Work.
>
> try_files /tiles/${arg_z}_${arg_x}_${arg_y}.png @fallback;
>
>You might want to wrap that in a "location = /data {}" block.

Yep, that's exactly what I'm looking for and it's working fine. I don't
think I had the syntax in the arguments quite correct. It's not just a
question of playing about with the location as to where things will be.

From mdounin at mdounin.ru Mon Nov 11 13:09:45 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 11 Nov 2013 17:09:45 +0400
Subject: Filtering out long (invalid) hostnames
In-Reply-To:
References:
Message-ID: <20131111130945.GZ95765@mdounin.ru>

Hello!

On Sat, Nov 09, 2013 at 09:44:52PM +0100, Ondrej Jombik wrote:

> Recently we have seen some kind of hacker attempt on our hosting
> servers, passing very long hostnames in the HTTP Host: header. That
> means length(hostname) was higher than 2000, for few requests even more
> than 10000.
>
> This was processed well by nginx, passed further to our upstreams, what
> caused only little trouble there: logs were filled with a lot of
> garbage.
>
> After bit of investigation, I have found that according to RFC, the
> longest domain name should not be more than 253 characters. Also,
> splitting domain into labels (labels are strings between dots), each
> label should not exceed 63 characters.
>
> For more info: http://en.wikipedia.org/wiki/Domain_Name_System
> (search for "Domain name syntax" part)
>
> That raises question how nginx handles this kind of long hostnames, and
> why it still pasess those invalid hostnames to backends (upstreams).

While DNS names are indeed limited to no more than 255 octets, it's
not a case for HTTP, which can be used with non-DNS names as well.

> However it still passes it, and we want to filter that out. Because the
> performance matters us much, we want to do that the best possible way.
> > CASE #1: > > if ($host ~* "^.{254,}$") { > return 403; > } > > CASE #2: (this is probably more efficient) > > server { > server_name "~^.{254,}$"; > listen 80; > return 403; > } > > Case #2 is probably more efficient, but in both cases are regular > expressions used. Recommended use is to list valid names in a server_name directives, and filter out anything else by using a default server which returns an appropriate error. Between the above two cases I would recommend case #2, mostly because it's easier to support. > Would it matter if we put that server {} block at the > end of our server list? As long as there are no other regexp server names in your configuration - position doesn't matter, see http://nginx.org/en/docs/http/server_names.html. > Also would it make any sense to check for a dot (\.) in a server_name or > $host, and when not dot is present, return 403 as well? You may do so if your configuration assumes access via fully qualified domain names only, and you are not hosting TLDs. Otherwise it may cause problems. See here for details: http://en.wikipedia.org/wiki/Fully_qualified_domain_name -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Nov 11 14:51:01 2013 From: nginx-forum at nginx.us (mte03) Date: Mon, 11 Nov 2013 09:51:01 -0500 Subject: partial urlEncoding when using if and $request_uri Message-ID: Hi, I have observed strange behavior with nginx rewrites. What happens: I get request going to myserver.com/appID/path1/path2/uglyID and I need to proxy this to appID.backend.internal/path1/path2/uglyID. The uglyID is URL encoded because it contains characters like commas and forward slashes. When I do location matching in nginx, nginx will url decode $uri parameter on which it does matches. That's not a problem, when the location matches, I can extract the appID and then extract the rest from $request_uri. And here the issue happens. If I use: if ($request_uri ~ ^/[^\/]+(/.*)$ ) { set $path $1; }, the uglyID (but only uglyID) gets URL encoded again. The $path simply is /path1/path2/urlEncoded(uglyID). However, when I do if ($uri ~ ^/[^\/]+(/.*)$ ) { set $path $1; }, then $path will be the urlDecoded(/path1/path2/uglyID) - as expected. I have tested this on ubuntu 12.04 (Linux ip-10-50-20-57 3.2.0-36-virtual #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux) and nginx versions nginx/1.1.19 and nginx/1.5.6 - and the behavior is consistently same in both versions. The nginx config is identical in all cases, only difference is the name of the variable in the if statement, where using $request_uri simply implies uglyID (but only uglyID - not the whole matched string) will be url encoded again. Meanwhile I have found possible workaround using maps, where matching on $request_uri works correctly and doesn't modify matched data anyhow. So I wonder, has anyone else experienced this? Is this expected? Why is the uglyID (but only uglyID) url encoded again? I would expect either urlEncoding of the whole match, or none at all - as it doesn't happen when I try to match e.g. $uri... Could this indicate a possible bug? 
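The map-based workaround mentioned above might look roughly like the
following; the resolver address is a placeholder and the backend naming
scheme is taken from the description in the post. The map blocks go at
http{} level, and because they match on $request_uri, the captured path
keeps its original encoding:

    map $request_uri $appid {
        default                     "";
        "~^/(?P<a>[^/]+)/"          $a;
    }

    map $request_uri $backend_path {
        default                     "";
        "~^/[^/]+(?P<rest>/.*)$"    $rest;
    }

    server {
        listen 80;
        resolver 10.0.0.2;   # placeholder resolver address

        location / {
            # requests that do not match the maps leave both variables empty
            # and would need to be rejected before this point
            proxy_pass http://${appid}.backend.internal${backend_path};
        }
    }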
Thanks, Michael Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244573,244573#msg-244573 From nginx-forum at nginx.us Mon Nov 11 16:01:36 2013 From: nginx-forum at nginx.us (mte03) Date: Mon, 11 Nov 2013 11:01:36 -0500 Subject: partial urlEncoding when using if and $request_uri In-Reply-To: References: Message-ID: <3aa75f2b69346fb8cc3d32ee3fed3be8.NginxMailingListEnglish@forum.nginx.org> Update: I have turned on rewrite log. The rewrite log shows uglyID correctly matched and sent to proxy unencoded. However in TCP dump, I can see that it was re-encoded again. Also on the target server, I can see the incoming request having uglyID re-encoded again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244573,244575#msg-244575 From mdounin at mdounin.ru Mon Nov 11 16:03:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Nov 2013 20:03:35 +0400 Subject: partial urlEncoding when using if and $request_uri In-Reply-To: References: Message-ID: <20131111160335.GE95765@mdounin.ru> Hello! On Mon, Nov 11, 2013 at 09:51:01AM -0500, mte03 wrote: > Hi, > > I have observed strange behavior with nginx rewrites. What happens: I get > request going to myserver.com/appID/path1/path2/uglyID and I need to proxy > this to appID.backend.internal/path1/path2/uglyID. The uglyID is URL encoded > because it contains characters like commas and forward slashes. When I do > location matching in nginx, nginx will url decode $uri parameter on which it > does matches. That's not a problem, when the location matches, I can extract > the appID and then extract the rest from $request_uri. And here the issue > happens. If I use: > > if ($request_uri ~ ^/[^\/]+(/.*)$ ) { set $path $1; }, the uglyID (but only > uglyID) gets URL encoded again. The $path simply is > /path1/path2/urlEncoded(uglyID). However, when I do > > if ($uri ~ ^/[^\/]+(/.*)$ ) { set $path $1; }, then $path will be the > urlDecoded(/path1/path2/uglyID) - as expected. > > I have tested this on ubuntu 12.04 (Linux ip-10-50-20-57 3.2.0-36-virtual > #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux) > and nginx versions nginx/1.1.19 and nginx/1.5.6 - and the behavior is > consistently same in both versions. The nginx config is identical in all > cases, only difference is the name of the variable in the if statement, > where using $request_uri simply implies uglyID (but only uglyID - not the > whole matched string) will be url encoded again. Meanwhile I have found > possible workaround using maps, where matching on $request_uri works > correctly and doesn't modify matched data anyhow. > > So I wonder, has anyone else experienced this? Is this expected? Why is the > uglyID (but only uglyID) url encoded again? I would expect either > urlEncoding of the whole match, or none at all - as it doesn't happen when I > try to match e.g. $uri... Could this indicate a possible bug? Yes, this looks like a bug. 
Here is a ticket we already have for this: http://trac.nginx.org/nginx/ticket/348 -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Nov 11 16:35:34 2013 From: nginx-forum at nginx.us (mte03) Date: Mon, 11 Nov 2013 11:35:34 -0500 Subject: partial urlEncoding when using if and $request_uri In-Reply-To: <20131111160335.GE95765@mdounin.ru> References: <20131111160335.GE95765@mdounin.ru> Message-ID: <7224223b989d7742c555adb357fc08e3.NginxMailingListEnglish@forum.nginx.org> Hi, I can confirm that using named variables solves the issue (as stated in the ticket - maybe you can add my findings (rewrite log) to the ticket comments, as I have no rights to do so). Both if ($request_uri ~ ^/[^\/]+(?/.*)$ ) { set $patch $match; } and if ($request_uri ~ ^/[^\/]+(?/.*)$ ) { set $match abc; } do work correctly. However, from the performance standpoint, is this solution with IF faster (or more recommended) than doing st. like map $request_uri $path { ~^/[^\/]+(?/.*)$ $param; } Is there any way to further improve performance on this kind of matching? Thanks, Michael Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Mon, Nov 11, 2013 at 09:51:01AM -0500, mte03 wrote: > > > Hi, > > > > I have observed strange behavior with nginx rewrites. What happens: > I get > > request going to myserver.com/appID/path1/path2/uglyID and I need to > proxy > > this to appID.backend.internal/path1/path2/uglyID. The uglyID is URL > encoded > > because it contains characters like commas and forward slashes. When > I do > > location matching in nginx, nginx will url decode $uri parameter on > which it > > does matches. That's not a problem, when the location matches, I can > extract > > the appID and then extract the rest from $request_uri. And here the > issue > > happens. If I use: > > > > if ($request_uri ~ ^/[^\/]+(/.*)$ ) { set $path $1; }, the uglyID > (but only > > uglyID) gets URL encoded again. The $path simply is > > /path1/path2/urlEncoded(uglyID). However, when I do > > > > if ($uri ~ ^/[^\/]+(/.*)$ ) { set $path $1; }, then $path will be > the > > urlDecoded(/path1/path2/uglyID) - as expected. > > > > I have tested this on ubuntu 12.04 (Linux ip-10-50-20-57 > 3.2.0-36-virtual > > #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013 x86_64 x86_64 x86_64 > GNU/Linux) > > and nginx versions nginx/1.1.19 and nginx/1.5.6 - and the behavior > is > > consistently same in both versions. The nginx config is identical in > all > > cases, only difference is the name of the variable in the if > statement, > > where using $request_uri simply implies uglyID (but only uglyID - > not the > > whole matched string) will be url encoded again. Meanwhile I have > found > > possible workaround using maps, where matching on $request_uri works > > correctly and doesn't modify matched data anyhow. > > > > So I wonder, has anyone else experienced this? Is this expected? Why > is the > > uglyID (but only uglyID) url encoded again? I would expect either > > urlEncoding of the whole match, or none at all - as it doesn't > happen when I > > try to match e.g. $uri... Could this indicate a possible bug? > > Yes, this looks like a bug. 
Here is a ticket we already have for > this: > > http://trac.nginx.org/nginx/ticket/348 > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244573,244577#msg-244577 From mdounin at mdounin.ru Mon Nov 11 17:33:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Nov 2013 21:33:41 +0400 Subject: partial urlEncoding when using if and $request_uri In-Reply-To: <7224223b989d7742c555adb357fc08e3.NginxMailingListEnglish@forum.nginx.org> References: <20131111160335.GE95765@mdounin.ru> <7224223b989d7742c555adb357fc08e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131111173341.GG95765@mdounin.ru> Hello! On Mon, Nov 11, 2013 at 11:35:34AM -0500, mte03 wrote: > Hi, > > I can confirm that using named variables solves the issue (as stated in the > ticket - maybe you can add my findings (rewrite log) to the ticket comments, > as I have no rights to do so). Both You actually have rights to do so (though some login is required). But as the ticket already shows how to reproduce the problem, I don't think linking a rewrite log will be beneficial. > if ($request_uri ~ ^/[^\/]+(?/.*)$ ) { set $patch $match; } > and > if ($request_uri ~ ^/[^\/]+(?/.*)$ ) { set $match abc; } > > do work correctly. However, from the performance standpoint, is this > solution with IF faster (or more recommended) than doing st. like > > map $request_uri $path { > ~^/[^\/]+(?/.*)$ $param; > } > > Is there any way to further improve performance on this kind of matching? I don't think there is a measurable performance difference. Use of map{} might be a bit better as it doesn't involve if(), see http://wiki.nginx.org/IfIsEvil. -- Maxim Dounin http://nginx.org/en/donation.html From jan.algermissen at nordsc.com Tue Nov 12 14:28:48 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Tue, 12 Nov 2013 15:28:48 +0100 Subject: Handler invokation after upstream server being picked Message-ID: <07668B10-8652-4AAD-9ED0-5F799AD3145D@nordsc.com> Maxim, a while ago you replied to my question below. Since yesterday I am trying to get hold of the proxy_host variable set by the proxy module but without success, maybe you can help a little with the code. From the source I understood that the proxy module sets the proxy_host var during NGINX startup. I guess, that happens on a per-location basis, depending on encountering a proxy_pass directive. Correct? I should then be able to access the proxy_host variable during the authentication phase (this is where my module sits). Correct? To get hold of it, I try to find the index of the proxy_host var in my location config merge handler: var.data = (unsigned char*)"proxy_host"; var.len = 10; child->proxy_host_var_index = ngx_http_get_variable_index(cf, &var); if (child->proxy_host_var_index == NGX_ERROR) { /* NOT FOUND, TRY TO INHERIT */ child->proxy_host_var_index = parent->proxy_host_var_index; return NGX_CONF_OK; } /* FOUND - This loca config has its own proxy_pass configuration */ In my handler, I then do this: ngx_http_variable_value_t *value; ... value = ngx_http_get_indexed_variable(r, conf->proxy_host_var_index); Does that make sense? I does not work for me, unfortunately. (The log module does see the proxy_host variable and logs the expected value for it. Jan > Hello! 
> > On Fri, Oct 25, 2013 at 12:30:34PM +0200, Jan Algermissen wrote: > >> Hi, >> >> I am writing a module that needs to add/change the HTTP >> Authorization server for an upstream request. >> >> Since I am using a signature based authentication scheme where >> the signature base string includes the request host and port the >> header can only be added *after* the upstream module has >> determined which server to send the request to (e.g. after >> applying round-robin). >> >> Is it possible to hook my module into that 'phase' and if so - >> what is the preferred way to do that? >> >> I saw that I at least can access the target host and port set by >> the proxy module by reading the proxy module variables. However, >> that (of course) does only give the server group name to be used >> by the upstream module in the next step. > > A request to an upstream is created once, before a particular > server is known, and the same request is used for requests to all > upstream servers. That is, what you are trying to do isn't > something currently possible. > > -- > Maxim Dounin From mdounin at mdounin.ru Tue Nov 12 15:37:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Nov 2013 19:37:25 +0400 Subject: Handler invokation after upstream server being picked In-Reply-To: <07668B10-8652-4AAD-9ED0-5F799AD3145D@nordsc.com> References: <07668B10-8652-4AAD-9ED0-5F799AD3145D@nordsc.com> Message-ID: <20131112153725.GN95765@mdounin.ru> Hello! On Tue, Nov 12, 2013 at 03:28:48PM +0100, Jan Algermissen wrote: > Maxim, > > a while ago you replied to my question below. > > Since yesterday I am trying to get hold of the proxy_host > variable set by the proxy module but without success, maybe you > can help a little with the code. > > From the source I understood that the proxy module sets the > proxy_host var during NGINX startup. I guess, that happens on a > per-location basis, depending on encountering a proxy_pass > directive. > > Correct? Not exactly. Variables only exists during requests processing. That is, the $proxy_host variable isn't set during nginx startup, but instead its value becomes known when proxy module starts working with a request. > I should then be able to access the proxy_host variable during > the authentication phase (this is where my module sits). > > Correct? No, see above. The $proxy_host variable value is not known till proxy started to work (and, if variables are used in proxy_pass, evaluated its parameter). > To get hold of it, I try to find the index of the proxy_host var > in my location config merge handler: > > var.data = (unsigned char*)"proxy_host"; > var.len = 10; > child->proxy_host_var_index = ngx_http_get_variable_index(cf, &var); > > if (child->proxy_host_var_index == NGX_ERROR) { /* NOT FOUND, TRY TO INHERIT */ > child->proxy_host_var_index = parent->proxy_host_var_index; > return NGX_CONF_OK; > } > > /* FOUND - This loca config has its own proxy_pass configuration */ > > In my handler, I then do this: > > ngx_http_variable_value_t *value; > ... > value = ngx_http_get_indexed_variable(r, conf->proxy_host_var_index); > > Does that make sense? I does not work for me, unfortunately. > > (The log module does see the proxy_host variable and logs the expected value for it. See above for explanation why it doesn't work for you. Looking into src/http/modules/ngx_http_proxy_module.c might be helpful, too. 
-- Maxim Dounin http://nginx.org/en/donation.html From lagern at lafayette.edu Tue Nov 12 17:07:08 2013 From: lagern at lafayette.edu (Nathan) Date: Tue, 12 Nov 2013 12:07:08 -0500 Subject: SSL Handshake problems, nginx reverse web proxy. Message-ID: <5282603C.3080901@lafayette.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I am working on setting up an http reverse proxy in front of a pre-packaged jetty server. The jetty server is a pre-configured application, and not very flexible. Here's the quick and dirty. I have nginx configured to listen on 443, using its own SSL cert. Then behind nginx, i have anohter server running this jetty application, with its own cert, on port 9192. My nginx config looks like this: server { listen 139.147.165.99:443; server_name papercut.dev.lafayette.edu papercut.dev; access_log /var/log/nginx/papercut.dev.lafayette.edu_access; error_log /var/log/nginx/papercut.dev.lafayette.edu_error debug; ssl on; ssl_certificate /etc/nginx/ssl.crt/papercut.dev.lafayette.edu.crt; ssl_certificate_key /etc/nginx/ssl.key/papercut.dev.lafayette.edu.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:!SSLv2:+EXP; ssl_prefer_server_ciphers on; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; location / { proxy_pass https://printman.dev.lafayette.edu:9192; } } If i hit my vhost on https, i get a 502, bad gateway. The error log reports: 2013/11/12 12:02:10 [error] 28416#0: *230 SSL_do_handshake() failed (SSL: error:140773F2:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert unexpected message) while SSL handshaking to upstream, client: 10.100.0.12, server: papercut.dev.lafayette.edu, request: "GET / HTTP/1.1", upstream: "https://139.147.165.80:9192/", host: "papercut.dev.lafayette.edu" - From what I can tell, this is saying that the ssl connection from my proxy, to my jetty host is failing negotiation. If i browse directly to the target, on https and port 9192, it works perfectly. openssl s_connect from the proxy to the target seems to work ONLY if i force sslv3, If i use TSLv1, or sslv2 it fails. If i use TLSv2 and use -no_ticket, it works. I'm wondering if one of these would solve the proxy problem? But how can i force nginx to use sslv3, or no ticket, when connecting to its target? Thanks! -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.15 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEYEARECAAYFAlKCYDwACgkQsZqG4IN3suly1QCfbUmLesdBHsrm/diS/Sg0+n8O XN8An3XkdTp3m8P2dzEeoZAKMzp5qjX9 =4UkA -----END PGP SIGNATURE----- From mdounin at mdounin.ru Tue Nov 12 17:14:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Nov 2013 21:14:16 +0400 Subject: SSL Handshake problems, nginx reverse web proxy. In-Reply-To: <5282603C.3080901@lafayette.edu> References: <5282603C.3080901@lafayette.edu> Message-ID: <20131112171416.GP95765@mdounin.ru> Hello! On Tue, Nov 12, 2013 at 12:07:08PM -0500, Nathan wrote: > I am working on setting up an http reverse proxy in front of a > pre-packaged jetty server. The jetty server is a pre-configured > application, and not very flexible. > > Here's the quick and dirty. I have nginx configured to listen on 443, > using its own SSL cert. Then behind nginx, i have anohter server > running this jetty application, with its own cert, on port 9192. [...] 
> The error log reports: > 2013/11/12 12:02:10 [error] 28416#0: *230 SSL_do_handshake() failed > (SSL: error:140773F2:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert > unexpected message) while SSL handshaking to upstream, client: > 10.100.0.12, server: papercut.dev.lafayette.edu, request: "GET / > HTTP/1.1", upstream: "https://139.147.165.80:9192/", host: > "papercut.dev.lafayette.edu" > > - From what I can tell, this is saying that the ssl connection from my > proxy, to my jetty host is failing negotiation. > > If i browse directly to the target, on https and port 9192, it works > perfectly. > > openssl s_connect from the proxy to the target seems to work ONLY if i > force sslv3, If i use TSLv1, or sslv2 it fails. If i use TLSv2 and > use -no_ticket, it works. > > I'm wondering if one of these would solve the proxy problem? But how > can i force nginx to use sslv3, or no ticket, when connecting to its > target? As of nginx 1.5.6+, there is the proxy_ssl_protocols directive exacly for this kind of problems. Restricting proxy_ssl_ciphers to a smaller set may help too (again, in 1.5.6+). See here for more details: http://nginx.org/r/proxy_ssl_protocols http://nginx.org/r/proxy_ssl_ciphers -- Maxim Dounin http://nginx.org/en/donation.html From lagern at lafayette.edu Tue Nov 12 17:22:24 2013 From: lagern at lafayette.edu (Nathan) Date: Tue, 12 Nov 2013 12:22:24 -0500 Subject: SSL Handshake problems, nginx reverse web proxy. In-Reply-To: <20131112171416.GP95765@mdounin.ru> References: <5282603C.3080901@lafayette.edu> <20131112171416.GP95765@mdounin.ru> Message-ID: <528263D0.9040208@lafayette.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 11/12/2013 12:14 PM, Maxim Dounin wrote: > Hello! Hi! > > As of nginx 1.5.6+, there is the proxy_ssl_protocols directive > exacly for this kind of problems. Restricting proxy_ssl_ciphers to > a smaller set may help too (again, in 1.5.6+). > Good, so now all i have to do is convince Epel to carry a newer version of nginx. # rpm -qa | grep nginx nginx-1.0.15-5.el6.x86_64 I could to and get an rpm elsewhere i'm sure, that breaks our standards though. Any other suggestions? > See here for more details: > > http://nginx.org/r/proxy_ssl_protocols > http://nginx.org/r/proxy_ssl_ciphers > - -- - -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Nathan Lager, RHCSA, RHCE, RHCVA (#110-011-426) System Administrator 11 Pardee Hall Lafayette College, Easton, PA 18042 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.15 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEYEARECAAYFAlKCY9AACgkQsZqG4IN3sulH2ACcD6rCaefiWyNC11WeHm29jXdq nuEAn0JLJiK6ugUmmQY9csA0JAH9ietm =eSmS -----END PGP SIGNATURE----- From jan.algermissen at nordsc.com Tue Nov 12 18:55:28 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Tue, 12 Nov 2013 19:55:28 +0100 Subject: Handler invokation after upstream server being picked Message-ID: <63D00361-C9E5-44CF-8221-57D57884663D@nordsc.com> > Hello! > > On Tue, Nov 12, 2013 at 03:28:48PM +0100, Jan Algermissen wrote: > > > Maxim, > > > > a while ago you replied to my question below. > > > > Since yesterday I am trying to get hold of the proxy_host > > variable set by the proxy module but without success, maybe you > > can help a little with the code. > > > > From the source I understood that the proxy module sets the > > proxy_host var during NGINX startup. I guess, that happens on a > > per-location basis, depending on encountering a proxy_pass > > directive. > > > > Correct? > > Not exactly. 
Variables only exists during requests processing. > That is, the $proxy_host variable isn't set during nginx startup, > but instead its value becomes known when proxy module starts > working with a request. Ok. Sorry - I saw in the proxy code that ngx_http_proxy_set_vars() is called inside the proxy_pass directive handler. So I assumed they get set on startup. > > > I should then be able to access the proxy_host variable during > > the authentication phase (this is where my module sits). > > > > Correct? > > No, see above. The $proxy_host variable value is not known till > proxy started to work (and, if variables are used in proxy_pass, > evaluated its parameter). But the value of the proxy_host will never change after startup, is that right? I mean, it will always be the value of the proxy_host directve, or? At least when I log the var, it always is. Anyhow, do you have any suggestion how I best go about adding a request header that is based on the proxy_host variable value. (Background: I need to change the Authorization header, so that request signature actually uses the right target host - see my original question) Jan > > > To get hold of it, I try to find the index of the proxy_host var > > in my location config merge handler: > > > > var.data = (unsigned char*)"proxy_host"; > > var.len = 10; > > child->proxy_host_var_index = ngx_http_get_variable_index(cf, &var); > > > > if (child->proxy_host_var_index == NGX_ERROR) { /* NOT FOUND, TRY TO INHERIT */ > > child->proxy_host_var_index = parent->proxy_host_var_index; > > return NGX_CONF_OK; > > } > > > > /* FOUND - This loca config has its own proxy_pass configuration */ > > > > In my handler, I then do this: > > > > ngx_http_variable_value_t *value; > > ... > > value = ngx_http_get_indexed_variable(r, conf->proxy_host_var_index); > > > > Does that make sense? I does not work for me, unfortunately. > > > > (The log module does see the proxy_host variable and logs the expected value for it. > > See above for explanation why it doesn't work for you. Looking > into src/http/modules/ngx_http_proxy_module.c might be helpful, > too. From mdounin at mdounin.ru Tue Nov 12 21:18:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Nov 2013 01:18:33 +0400 Subject: SSL Handshake problems, nginx reverse web proxy. In-Reply-To: <528263D0.9040208@lafayette.edu> References: <5282603C.3080901@lafayette.edu> <20131112171416.GP95765@mdounin.ru> <528263D0.9040208@lafayette.edu> Message-ID: <20131112211833.GS95765@mdounin.ru> Hello! On Tue, Nov 12, 2013 at 12:22:24PM -0500, Nathan wrote: > On 11/12/2013 12:14 PM, Maxim Dounin wrote: > > > As of nginx 1.5.6+, there is the proxy_ssl_protocols directive > > exacly for this kind of problems. Restricting proxy_ssl_ciphers to > > a smaller set may help too (again, in 1.5.6+). > > > Good, so now all i have to do is convince Epel to carry a newer > version of nginx. > > # rpm -qa | grep nginx > nginx-1.0.15-5.el6.x86_64 > > I could to and get an rpm elsewhere i'm sure, that breaks our > standards though. > > > Any other suggestions? Source code can be downloaded here: http://nginx.org/en/download.html It's more or less trivial to compile. And we've even added precompiled mainline packages for various Linux'es, see links on the same page. If it doesn't work for you, you have another obvious option: fixing a backend will do the trick, too. 
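Applied to the configuration quoted earlier in this thread, that might look
roughly like the sketch below (nginx 1.5.6 or newer; the cipher list is only
illustrative):

    location / {
        proxy_pass https://printman.dev.lafayette.edu:9192;
        # the backend reportedly only completes an SSLv3 handshake
        proxy_ssl_protocols SSLv3;
        proxy_ssl_ciphers   HIGH:!aNULL:!MD5;   # illustrative cipher list
    }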
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Nov 12 21:36:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Nov 2013 01:36:30 +0400 Subject: Handler invokation after upstream server being picked In-Reply-To: <63D00361-C9E5-44CF-8221-57D57884663D@nordsc.com> References: <63D00361-C9E5-44CF-8221-57D57884663D@nordsc.com> Message-ID: <20131112213630.GT95765@mdounin.ru> Hello! On Tue, Nov 12, 2013 at 07:55:28PM +0100, Jan Algermissen wrote: > > Hello! > > > > On Tue, Nov 12, 2013 at 03:28:48PM +0100, Jan Algermissen wrote: > > > > > Maxim, > > > > > > a while ago you replied to my question below. > > > > > > Since yesterday I am trying to get hold of the proxy_host > > > variable set by the proxy module but without success, maybe you > > > can help a little with the code. > > > > > > From the source I understood that the proxy module sets the > > > proxy_host var during NGINX startup. I guess, that happens on a > > > per-location basis, depending on encountering a proxy_pass > > > directive. > > > > > > Correct? > > > > Not exactly. Variables only exists during requests processing. > > That is, the $proxy_host variable isn't set during nginx startup, > > but instead its value becomes known when proxy module starts > > working with a request. > > Ok. Sorry - I saw in the proxy code that > ngx_http_proxy_set_vars() is called inside the proxy_pass > directive handler. So I assumed they get set on startup. It's called to cache appropriate values in location a configuration (plcf->vars) if there are no variables in proxy_pass. These values are later used to initialize run-time data in ctx->vars once proxy starts handling a request (again, if there are no variables in proxy_pass). > > > I should then be able to access the proxy_host variable during > > > the authentication phase (this is where my module sits). > > > > > > Correct? > > > > No, see above. The $proxy_host variable value is not known till > > proxy started to work (and, if variables are used in proxy_pass, > > evaluated its parameter). > > But the value of the proxy_host will never change after startup, > is that right? I mean, it will always be the value of the > proxy_host directve, or? At least when I log the var, it always > is. In a simple configuration with proxy_pass without variables - it's always the same in a given location, yes. It can be anything in a configuration like proxy_pass $backend; though. > Anyhow, do you have any suggestion how I best go about adding a > request header that is based on the proxy_host variable value. > > (Background: I need to change the Authorization header, so that > request signature actually uses the right target host - see my > original question) Write a configuration like this: proxy_pass http://backend.example.com; proxy_set_header Authorization $your_module_variable; In your variable's get handler obtain $proxy_host variable, do needed calculations, and return a value you want to use in a request to an upstream server. -- Maxim Dounin http://nginx.org/en/donation.html From jdeltener at realtruck.com Wed Nov 13 03:24:57 2013 From: jdeltener at realtruck.com (Justin Deltener) Date: Tue, 12 Nov 2013 21:24:57 -0600 Subject: limit_req_zone limit by location/proxy Message-ID: For the life of me I can't seem to get my configuration correct to limit requests. I'm running nginx 1.5.1 and have it serving up static content and pushing all non-existent requests to the apache2 proxy backend for serving up. 
I don't want to limit any requests to static content but do want to limit requests to the proxy. It seems no matter what I put in my configuration I continue to see entries in the error log for ip addresses which are not breaking the rate limit. 2013/11/12 20:55:28 [warn] 10568#0: *1640292 delaying request, excess: 0.412, by zone "proxyzone" client ABCD I've tried using a map in the top level like so limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s; map $request_filename $limit_proxy_hits { default ""; ~/$ $binary_remote_addr; (only limit filename requests ending in slash as we may have something.php which should not be limited) } yet when i look at the logs, ip ABCD has been delayed for a url ending in slash BUT when i look at all proxy requests for the IP, it is clearly not going over the limit. It really seems that no matter what, the limit_req_zone still counts static content against the limit or something else equally as confusing. I've also attempted limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s; and then use $limit_proxy_hits inside the server/location server { set $limit_proxy_hits ""; location / { set $limit_proxy_hits $binary_remote_addr; } } and while the syntax doesn't bomb, it seems to exhibit the exact same behavior as above as well. ASSERT: a) When i clearly drop 40 requests from an ip, it clearly lays the smack down on a ton of requests as it should b) I do a kill -HUP on the primary nginx process after each test c) I keep getting warnings on requests from ip's which are clearly not going over the proxy limit d) I have read the leaky-bucket algorithm and unless i'm totally missing something a max of 4r/s should always allow traffic until we start to go OVER 4r/s which isn't the case. The documentation doesn't have any real deep insight into how this works and I could really use a helping hand. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Nov 13 06:12:40 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 12 Nov 2013 22:12:40 -0800 Subject: [ANN] ngx_openresty stable version 1.4.3.4 released Message-ID: Hello folks! I am happy to announce that the new stable version of ngx_openresty, 1.4.3.4, is now released: http://openresty.org/#Download Special thanks go to my employer, CloudFlare, for supporting the development of OpenResty (and also LuaJIT!). The following components are bundled in this version: * LuaJIT-2.0.2 * array-var-nginx-module-0.03rc1 * auth-request-nginx-module-0.2 * drizzle-nginx-module-0.1.6 * echo-nginx-module-0.49 * encrypted-session-nginx-module-0.03 * form-input-nginx-module-0.07 * headers-more-nginx-module-0.23 * iconv-nginx-module-0.10 * lua-5.1.5 * lua-cjson-1.0.3 * lua-rds-parser-0.05 * lua-redis-parser-0.10 * lua-resty-dns-0.10 * lua-resty-lock-0.01 * lua-resty-memcached-0.12 * lua-resty-mysql-0.14 * lua-resty-redis-0.17 * lua-resty-string-0.08 * lua-resty-upload-0.09 * lua-resty-websocket-0.02 * memc-nginx-module-0.13 * nginx-1.4.3 * ngx_coolkit-0.2rc1 * ngx_devel_kit-0.2.19 * ngx_lua-0.9.2 * ngx_postgres-1.0rc3 * rds-csv-nginx-module-0.05 * rds-json-nginx-module-0.12 * redis-nginx-module-0.3.6 * redis2-nginx-module-0.10 * set-misc-nginx-module-0.22 * srcache-nginx-module-0.24 * xss-nginx-module-0.04 Just a quick heads-up: I've been working on the LuaJIT 2.1 support and the new FFI-based API for ngx_lua, which is in the form of the lua-resty-core library: https://github.com/agentzh/lua-resty-core . 
I'm going to upgrade the LuaJIT engine bundled in OpenResty to the latest v2.1 version and include lua-resty-core in the next mainline release if everything goes well. Then we'd expect a significant speedup in many nontrivial Lua apps running on OpenResty. I've already observed about 70% ~ 80% overall speedup for our real-world Lua WAF system with this new setup when loading the server by simplest GET requests. And there's still plenty of room for future speedup by JITting even more hot Lua code paths! ;) OpenResty is a web app server powered by Nginx and Lua. Both the standard Lua 5.1 interpreter and LuaJIT 2.0 are supported. See http://openresty.org for more details. Enjoy! -agentzh From nginx-forum at nginx.us Wed Nov 13 06:25:44 2013 From: nginx-forum at nginx.us (miguel_hamster) Date: Wed, 13 Nov 2013 01:25:44 -0500 Subject: Inconsistency in ability to use variables in nginx config Message-ID: <3c633c41796487feb2341e3d6694de25.NginxMailingListEnglish@forum.nginx.org> It seems like it's possible to use variables in some configuration directives but not in others. This works: root /usr/www/$sitename/httpdocs; works, while these: access_log /var/log/www/$sitename/access.log; proxy_cache $sitename; do not, instead generating errors when I try to restart nginx. Is there any documentation that explains where I can and cannot use a variable? Or is there some syntax I should be using to indicate variable interpolation? It would be immensely helpful if they worked in all these contexts, because then we wouldn't have to repeat so much across different site configuration files, leading to the requirement for generating them with clunky scripts. As it is, the variables are of very limited utility. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244605,244605#msg-244605 From ianevans at digitalhit.com Wed Nov 13 06:55:26 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 13 Nov 2013 01:55:26 -0500 Subject: Getting things humming on a 4 gig server Message-ID: <5283225E.2040700@digitalhit.com> Will soon be migrating from a 2 gig CentOS server to a 4 Gig Ubuntu. We'll be running nginx (of course), php-fpm, the fastcgi cache, MariaDB and Zend Opcache. Though I realize that 4 gigs of RAM is still modest, it is twice what we've previously had. I'm just curious for some general suggestions (or pointers to articles) as how to allocate 4 gigs between php-fpm, the opcache, the fastcgi cache and DB caches. Would appreciate any tips or pointers. From vbart at nginx.com Wed Nov 13 10:15:59 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 13 Nov 2013 14:15:59 +0400 Subject: Inconsistency in ability to use variables in nginx config In-Reply-To: <3c633c41796487feb2341e3d6694de25.NginxMailingListEnglish@forum.nginx.org> References: <3c633c41796487feb2341e3d6694de25.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201311131415.59785.vbart@nginx.com> On Wednesday 13 November 2013 10:25:44 miguel_hamster wrote: > It seems like it's possible to use variables in some configuration > directives but not in others. This works: > > root /usr/www/$sitename/httpdocs; > > works, while these: > > access_log /var/log/www/$sitename/access.log; This should work too. See documentation: http://nginx.org/r/access_log > proxy_cache $sitename; > > do not, instead generating errors when I try to restart nginx. > > Is there any documentation that explains where I can and cannot use a > variable? 
See official docs: http://nginx.org/en/docs/ It's usually explicitly stated if directive supports variables > Or is there some syntax I should be using to indicate variable > interpolation? It would be immensely helpful if they worked in all these > contexts, because then we wouldn't have to repeat so much across different > site configuration files, leading to the requirement for generating them > with clunky scripts. As it is, the variables are of very limited utility. Yes, it is. Variables in nginx are not supposed to replace template engines or special configuration-generation tools. See FAQ: http://nginx.org/en/docs/faq/variables_in_config.html Also, some duplication in config isn't something horrible. Nginx configuration is not a programming language, it utilizes another paradigm and trying to be declarative. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Nov 13 11:27:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Nov 2013 15:27:13 +0400 Subject: limit_req_zone limit by location/proxy In-Reply-To: References: Message-ID: <20131113112713.GW95765@mdounin.ru> Hello! On Tue, Nov 12, 2013 at 09:24:57PM -0600, Justin Deltener wrote: > For the life of me I can't seem to get my configuration correct to limit > requests. I'm running nginx 1.5.1 and have it serving up static content and > pushing all non-existent requests to the apache2 proxy backend for serving > up. I don't want to limit any requests to static content but do want to > limit requests to the proxy. It seems no matter what I put in my > configuration I continue to see entries in the error log for ip addresses > which are not breaking the rate limit. > > 2013/11/12 20:55:28 [warn] 10568#0: *1640292 delaying request, excess: > 0.412, by zone "proxyzone" client ABCD > > I've tried using a map in the top level like so > > limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s; > > map $request_filename $limit_proxy_hits > { > default ""; > ~/$ $binary_remote_addr; (only limit filename requests ending in > slash as we may have something.php which should not be limited) > } > > yet when i look at the logs, ip ABCD has been delayed for a url ending in > slash BUT when i look at all proxy requests for the IP, it is clearly not > going over the limit. It really seems that no matter what, the > limit_req_zone still counts static content against the limit or something > else equally as confusing. > > I've also attempted > > limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s; > > and then use $limit_proxy_hits inside the server/location > > server > { > set $limit_proxy_hits ""; > > location / > { > set $limit_proxy_hits $binary_remote_addr; > } > } > > and while the syntax doesn't bomb, it seems to exhibit the exact same > behavior as above as well. > > ASSERT: > > a) When i clearly drop 40 requests from an ip, it clearly lays the smack > down on a ton of requests as it should > b) I do a kill -HUP on the primary nginx process after each test > c) I keep getting warnings on requests from ip's which are clearly not > going over the proxy limit > d) I have read the leaky-bucket algorithm and unless i'm totally missing > something a max of 4r/s should always allow traffic until we start to go > OVER 4r/s which isn't the case. > > The documentation doesn't have any real deep insight into how this works > and I could really use a helping hand. Thanks! Just some arbitrary facts: 1. The config you've provided doesn't configure any limits, as it doesn't contatin limit_req directive. 
See http://nginx.org/r/limit_req for documentation. 2. The "delaying request" message means exactly this - nginx is delaying requests since average speed of requests exceeds configured request rate. It basically means that the "bucket" isn't empty and a request have to wait some time till it will be allowed to continue. This message shouldn't be confused with "limiting requests" message, which is logged when requests are rejected due to burst limit reached. As long as rate is set to 4r/s, it's enough to do two requests with less than 250ms between them to trigger "delaying request" message, which can easily happen as a pageview usually results in multiple requests (one request to load the page itself, and several other requests to load various include resources like css, images and so on). It might be a good idea to use "limit_req ... nodelay" to instruct nginx to don't do anything unless configured burst limit is reached. 3. Doing a "kill -HUP" doesn't clear limit_req stats and mostly useless between tests. 4. To differentiate between various resources, there is a directive called "location", see http://nginx.org/r/location. If you want to limit requests to some resources, but not others, it's good idea to do so by using two distinct locations, e.g.: location / { limit_req zone burst=10 nodelay; proxy_pass http://... } location /static/ { # static files, no limit_req here } 5. The documentation is here: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html -- Maxim Dounin http://nginx.org/en/donation.html From jdeltener at realtruck.com Wed Nov 13 13:17:36 2013 From: jdeltener at realtruck.com (Justin Deltener) Date: Wed, 13 Nov 2013 07:17:36 -0600 Subject: limit_req_zone limit by location/proxy In-Reply-To: <20131113112713.GW95765@mdounin.ru> References: <20131113112713.GW95765@mdounin.ru> Message-ID: I thought I did a good job detailing my issue and setup and clearly didn't do that well. I apologize. 1) I am using limits, which is why i mentioned it is delaying requests. Specifically i'm using limit_req zone=proxyzone burst=6; 2) I understand the difference between delay request and one responded to with a 503. What I think i'm getting hung up on is what to expect under a given scenario. If i setup the zone with a rate of 4r/s I would expect no matter what, pivoting on the IP a person should be able to perform 4 requests every second without any delay or 503's. (assuming we're able to count ONLY proxy hits and not take static content into account for the current requests..which is what i'm attempting to do) Using a burst of 6, i would expect a request of 8 in one second would have 4 at full speed, 2 delayed and 2 dropped but it seems that's where i'm horribly wrong. You said "As long as rate is set to 4r/s, it's enough to do two requests with less than 250ms between them to trigger "delaying request message". I'm confused, why would 4r/s not allow 4 requests per second at full speed?? Isn't that the entire point. I do realize a given page with have numerous static hits that would normally count against a person's request rate, but i'm literally attempting to take all other static requests out of the equation so the rate per second as well as the burst/503's are only applied to a given url pattern. I really shouldn't/will not throttle static requests but I do know for any proxy hits, an actual person browsing the site should never exceed 4 requests per second and for a bit of a fudge factor, allow them a burst of up to 6. After 6 I expect the site to start spitting out 503's. 
3) I was pointing out i am refreshing the config. I"m not worried about hit counters as in reality if my counting of ONLY proxy hits was working properly, this wouldn't be any real issue. 4) Yup i do have numerous location directives and I'm only placing the limit_req directive under a single proxy location directive. 5) Thanks for the link, but I have read that document a hundred times and there is still a ton that it doesn't cover. I appreciate your response Maxim! On Wed, Nov 13, 2013 at 5:27 AM, Maxim Dounin wrote: > Hello! > > On Tue, Nov 12, 2013 at 09:24:57PM -0600, Justin Deltener wrote: > > > For the life of me I can't seem to get my configuration correct to limit > > requests. I'm running nginx 1.5.1 and have it serving up static content > and > > pushing all non-existent requests to the apache2 proxy backend for > serving > > up. I don't want to limit any requests to static content but do want to > > limit requests to the proxy. It seems no matter what I put in my > > configuration I continue to see entries in the error log for ip addresses > > which are not breaking the rate limit. > > > > 2013/11/12 20:55:28 [warn] 10568#0: *1640292 delaying request, excess: > > 0.412, by zone "proxyzone" client ABCD > > > > I've tried using a map in the top level like so > > > > limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s; > > > > map $request_filename $limit_proxy_hits > > { > > default ""; > > ~/$ $binary_remote_addr; (only limit filename requests ending in > > slash as we may have something.php which should not be limited) > > } > > > > yet when i look at the logs, ip ABCD has been delayed for a url ending in > > slash BUT when i look at all proxy requests for the IP, it is clearly not > > going over the limit. It really seems that no matter what, the > > limit_req_zone still counts static content against the limit or something > > else equally as confusing. > > > > I've also attempted > > > > limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s; > > > > and then use $limit_proxy_hits inside the server/location > > > > server > > { > > set $limit_proxy_hits ""; > > > > location / > > { > > set $limit_proxy_hits $binary_remote_addr; > > } > > } > > > > and while the syntax doesn't bomb, it seems to exhibit the exact same > > behavior as above as well. > > > > ASSERT: > > > > a) When i clearly drop 40 requests from an ip, it clearly lays the smack > > down on a ton of requests as it should > > b) I do a kill -HUP on the primary nginx process after each test > > c) I keep getting warnings on requests from ip's which are clearly not > > going over the proxy limit > > d) I have read the leaky-bucket algorithm and unless i'm totally missing > > something a max of 4r/s should always allow traffic until we start to go > > OVER 4r/s which isn't the case. > > > > The documentation doesn't have any real deep insight into how this works > > and I could really use a helping hand. Thanks! > > Just some arbitrary facts: > > 1. The config you've provided doesn't configure any limits, as it > doesn't contatin limit_req directive. See > http://nginx.org/r/limit_req for documentation. > > 2. The "delaying request" message means exactly this - nginx is > delaying requests since average speed of requests exceeds > configured request rate. It basically means that the "bucket" > isn't empty and a request have to wait some time till it will be > allowed to continue. 
This message shouldn't be confused with > "limiting requests" message, which is logged when requests are > rejected due to burst limit reached. > > As long as rate is set to 4r/s, it's enough to do two requests > with less than 250ms between them to trigger "delaying request" > message, which can easily happen as a pageview usually results in > multiple requests (one request to load the page itself, and > several other requests to load various include resources like css, > images and so on). > > It might be a good idea to use "limit_req ... nodelay" to > instruct nginx to don't do anything unless configured burst limit > is reached. > > 3. Doing a "kill -HUP" doesn't clear limit_req stats and mostly > useless between tests. > > 4. To differentiate between various resources, there is a > directive called "location", see http://nginx.org/r/location. > If you want to limit requests to some resources, but not others, it's > good idea to do so by using two distinct locations, e.g.: > > location / { > limit_req zone burst=10 nodelay; > proxy_pass http://... > } > > location /static/ { > # static files, no limit_req here > } > > 5. The documentation is here: > > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Justin Deltener Nerd Curator | Alpha Omega Battle Squadron Toll Free: 1-877-216-5446 x3921 Local: 701-253-5906 x3921 RealTruck.com Guiding Principle #3: Improve -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Nov 13 13:40:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Nov 2013 17:40:01 +0400 Subject: limit_req_zone limit by location/proxy In-Reply-To: References: <20131113112713.GW95765@mdounin.ru> Message-ID: <20131113134001.GZ95765@mdounin.ru> Hello! On Wed, Nov 13, 2013 at 07:17:36AM -0600, Justin Deltener wrote: [...] > current requests..which is what i'm attempting to do) Using a burst of 6, i > would expect a request of 8 in one second would have 4 at full speed, 2 > delayed and 2 dropped but it seems that's where i'm horribly wrong. You > said "As long as rate is set to 4r/s, it's enough to do two requests with > less than 250ms between them to trigger "delaying request message". I'm > confused, why would 4r/s not allow 4 requests per second at full speed?? > Isn't that the entire point. Two request with 100ms between them means that requests are coming at a 10 requests per second rate. That is, second request have to be delayed. Note that specifying rate of 4 r/s doesn't imply 1-second measurement granularity. Much like 60 km/h speed limit doesn't imply that you have to drive for an hour before you'll reach a limit. -- Maxim Dounin http://nginx.org/en/donation.html From lagern at lafayette.edu Wed Nov 13 13:49:33 2013 From: lagern at lafayette.edu (Nathan) Date: Wed, 13 Nov 2013 08:49:33 -0500 Subject: SSL Handshake problems, nginx reverse web proxy. In-Reply-To: <20131112211833.GS95765@mdounin.ru> References: <5282603C.3080901@lafayette.edu> <20131112171416.GP95765@mdounin.ru> <528263D0.9040208@lafayette.edu> <20131112211833.GS95765@mdounin.ru> Message-ID: <5283836D.5050207@lafayette.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 11/12/2013 04:18 PM, Maxim Dounin wrote: > If it doesn't work for you, you have another obvious option: fixing > a backend will do the trick, too. 
Yes, i think this is the optimal solution, but the back end is a blackbox controlled by a vendor. It's jetty, so its likely that it _could_ be fixed, but working with them is like pulling teeth. Thanks for the help. I'll start digging on both options (upgrading, or getting the backend fixed). At least now I know that its possible tofix at the nginx end if we're willing to update to latest. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.15 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEUEARECAAYFAlKDg20ACgkQsZqG4IN3sumVTQCYqc7U0biS0DuNGifoUd8BIrid 9QCeMipoeU9sqmXgCPlAvFcc4U3RL0k= =aKa2 -----END PGP SIGNATURE----- From jdeltener at realtruck.com Wed Nov 13 15:09:55 2013 From: jdeltener at realtruck.com (Justin Deltener) Date: Wed, 13 Nov 2013 09:09:55 -0600 Subject: limit_req_zone limit by location/proxy In-Reply-To: <20131113134001.GZ95765@mdounin.ru> References: <20131113112713.GW95765@mdounin.ru> <20131113134001.GZ95765@mdounin.ru> Message-ID: Aha, that is the lightbulb moment. So if we're talking actual rate..which makes sense how would you setup a scenario with the following requirements. You can have whatever rate you want as long as you don't exceed 5 proxy requests in the same second. I don't care if 5 come within 5ms of each other.. Hitting 6 total proxy requests in 1 second would kill the request. It seems we can't really specify that without increasing the rate which in turn could allow a sustained session with high rates to still have a ton of requests come in to kill the server. We're attempting to account for 301 redirects which spawn requests much faster than normal human requests. I realize we could add a get param to the url to excuse it from the limit, but that seems a bit out there.. I also don't quite understand how long a burst rate can be sustained. It seems one could set the default rate to 1/m and set the burst to whatever you like.. Does that make sense? On Wed, Nov 13, 2013 at 7:40 AM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 13, 2013 at 07:17:36AM -0600, Justin Deltener wrote: > > [...] > > > current requests..which is what i'm attempting to do) Using a burst of > 6, i > > would expect a request of 8 in one second would have 4 at full speed, 2 > > delayed and 2 dropped but it seems that's where i'm horribly wrong. You > > said "As long as rate is set to 4r/s, it's enough to do two requests with > > less than 250ms between them to trigger "delaying request message". I'm > > confused, why would 4r/s not allow 4 requests per second at full speed?? > > Isn't that the entire point. > > Two request with 100ms between them means that requests are coming > at a 10 requests per second rate. That is, second request have to > be delayed. > > Note that specifying rate of 4 r/s doesn't imply 1-second > measurement granularity. Much like 60 km/h speed limit doesn't > imply that you have to drive for an hour before you'll reach a > limit. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Justin Deltener Nerd Curator | Alpha Omega Battle Squadron Toll Free: 1-877-216-5446 x3921 Local: 701-253-5906 x3921 RealTruck.com Guiding Principle #3: Improve -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nmilas at noa.gr Wed Nov 13 15:10:25 2013 From: nmilas at noa.gr (Nikolaos Milas) Date: Wed, 13 Nov 2013 17:10:25 +0200 Subject: Nagios check for nginx with separate metrics In-Reply-To: <526D64FE.7030004@noa.gr> References: <526D64FE.7030004@noa.gr> Message-ID: <52839661.4000406@noa.gr> On 27/10/2013 9:09 ??, Nikolaos Milas wrote: > I am trying to run a Nagios check for nginx (in Opsview Core) but I > have a problem: All of the available (to my knowledge) nginx Nagios > checks > (http://exchange.nagios.org/directory/Plugins/Web-Servers/nginx/) > produce comprehensive output which includes all "metrics" together, > while I would want one that can output a selected metric at a time, by > using a parameter, like "-m ," in the following example: > > ./check_nginx.sh -H localhost -P 80 -p /var/run -n nginx.pid -s > nginx_status -o /tmp -m current_requests > - or - > ./check_nginx.sh -H localhost -P 80 -p /var/run -n nginx.pid -s > nginx_status -o /tmp -m requests_per_second > - or - > ./check_nginx.sh -H localhost -P 80 -p /var/run -n nginx.pid -s > nginx_status -o /tmp -m accesses > etc. > > Does anyone know whether such an nginx Nagios check exists (and where) > or we would have to modify source code of one of these plugins to > achieve the required behavior? (Frankly, I would be surprised if such > a check does not exist yet, but I couldn't find one on the Net, > despite my searches.) > > Output with a single metric at a time is important for use in > server/network monitoring systems. > Anyone? No NGINX Nagios users around? Regards, Nick From mdounin at mdounin.ru Wed Nov 13 16:01:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Nov 2013 20:01:49 +0400 Subject: limit_req_zone limit by location/proxy In-Reply-To: References: <20131113112713.GW95765@mdounin.ru> <20131113134001.GZ95765@mdounin.ru> Message-ID: <20131113160149.GA95765@mdounin.ru> Hello! On Wed, Nov 13, 2013 at 09:09:55AM -0600, Justin Deltener wrote: > Aha, that is the lightbulb moment. > > So if we're talking actual rate..which makes sense how would you setup a > scenario with the following requirements. > > You can have whatever rate you want as long as you don't exceed 5 proxy > requests in the same second. I don't care if 5 come within 5ms of each > other.. Hitting 6 total proxy requests in 1 second would kill the request. > It seems we can't really specify that without increasing the rate which in > turn could allow a sustained session with high rates to still have a ton of > requests come in to kill the server. What you are asking about is close to something like this: limit_req_zone ... rate=5r/s; limit_req ... burst=5 nodelay; That is, up to 5 requests (note "burst=5") are allowed at any rate without any delays. If there are more requests and the rate remains above 5r/s, they are rejected. > We're attempting to account for 301 redirects which spawn requests much > faster than normal human requests. I realize we could add a get param to > the url to excuse it from the limit, but that seems a bit out there.. > > I also don't quite understand how long a burst rate can be sustained. It > seems one could set the default rate to 1/m and set the burst to whatever > you like.. > > Does that make sense? The burst parameter configures maximum burst size, in requests (in terms of "leaky bucket" - it's the bucket size). 
In most cases, it's a reasonable aproach to set a relatively low rate, switch off delay, and configure a reasonable burst size to account for various things like redirects, opening multiple pages to read them later, and so on. -- Maxim Dounin http://nginx.org/en/donation.html From jdeltener at realtruck.com Wed Nov 13 16:20:31 2013 From: jdeltener at realtruck.com (Justin Deltener) Date: Wed, 13 Nov 2013 10:20:31 -0600 Subject: limit_req_zone limit by location/proxy In-Reply-To: <20131113160149.GA95765@mdounin.ru> References: <20131113112713.GW95765@mdounin.ru> <20131113134001.GZ95765@mdounin.ru> <20131113160149.GA95765@mdounin.ru> Message-ID: I'll give that a try. I really appreciate your help Maxim! On Wed, Nov 13, 2013 at 10:01 AM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 13, 2013 at 09:09:55AM -0600, Justin Deltener wrote: > > > Aha, that is the lightbulb moment. > > > > So if we're talking actual rate..which makes sense how would you setup a > > scenario with the following requirements. > > > > You can have whatever rate you want as long as you don't exceed 5 proxy > > requests in the same second. I don't care if 5 come within 5ms of each > > other.. Hitting 6 total proxy requests in 1 second would kill the > request. > > It seems we can't really specify that without increasing the rate which > in > > turn could allow a sustained session with high rates to still have a ton > of > > requests come in to kill the server. > > What you are asking about is close to something like this: > > limit_req_zone ... rate=5r/s; > limit_req ... burst=5 nodelay; > > That is, up to 5 requests (note "burst=5") are allowed at any rate > without any delays. If there are more requests and the rate > remains above 5r/s, they are rejected. > > > We're attempting to account for 301 redirects which spawn requests much > > faster than normal human requests. I realize we could add a get param to > > the url to excuse it from the limit, but that seems a bit out there.. > > > > I also don't quite understand how long a burst rate can be sustained. It > > seems one could set the default rate to 1/m and set the burst to whatever > > you like.. > > > > Does that make sense? > > The burst parameter configures maximum burst size, in requests (in > terms of "leaky bucket" - it's the bucket size). In most cases, > it's a reasonable aproach to set a relatively low rate, switch off > delay, and configure a reasonable burst size to account for > various things like redirects, opening multiple pages to read them > later, and so on. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Justin Deltener Nerd Curator | Alpha Omega Battle Squadron Toll Free: 1-877-216-5446 x3921 Local: 701-253-5906 x3921 RealTruck.com Guiding Principle #3: Improve -------------- next part -------------- An HTML attachment was scrubbed... URL: From georg at riseup.net Wed Nov 13 21:35:48 2013 From: georg at riseup.net (georg at riseup.net) Date: Wed, 13 Nov 2013 22:35:48 +0100 Subject: sidebar menu + directory listing Message-ID: <20131113213548.GA22028@localhost> Hi all, I'd like to achieve the following, and found nothing so far trough research. Maybe someone could give me a pointer whether this is possible or not. I'd like to use directory listing for some folders, which are made accessible trough a location directive. 
At the same time, under sub.domain.com I would like to create a sidebar menu (through css or static html or whatever) to let users navigate through a) some static html files and b) these specific folders, using directory listing. I hope this is understandable somehow... Thanks, Georg From contact at jpluscplusm.com Wed Nov 13 21:45:46 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 13 Nov 2013 21:45:46 +0000 Subject: sidebar menu + directory listing In-Reply-To: <20131113213548.GA22028@localhost> References: <20131113213548.GA22028@localhost> Message-ID: On 13 November 2013 21:35, georg at riseup.net wrote: > Hi all, > > I'd like to achieve the following, and found nothing so far trough > research. Maybe someone could give me a pointer whether this is possible > or not. > > I'd like to use directory listing for some folders, which are made > accessible trough a location directive. At the same time, it should be > sub.domain.com, I would like to create a sidebar menu (trough css or > static html or whatever) to let users navigate trough a) some static > html files and b) these specific folders, using directory listing. I > hope this is understandable somehow... Looks to me like you want to use frames and write yourself some pretty basic HTML. I don't know of anything that's that application-a-like that comes /inside/ nginx itself, however. Jonathan From georg at riseup.net Wed Nov 13 21:56:28 2013 From: georg at riseup.net (georg at riseup.net) Date: Wed, 13 Nov 2013 22:56:28 +0100 Subject: sidebar menu + directory listing In-Reply-To: References: <20131113213548.GA22028@localhost> Message-ID: <20131113215628.GA29284@localhost> Hi, On 13-11-13 21:45:46, Jonathan Matthews wrote: > On 13 November 2013 21:35, georg at riseup.net wrote: > Looks to me like you want to use frames and write yourself some pretty > basic HTML. I don't know of anything that's that application-a-like > that comes /inside/ nginx itself, however. Yeah, I thought so as well. My question was more like "how do I combine html and directory listing at the same time..."? Thanks, Georg From francis at daoine.org Wed Nov 13 22:00:40 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 13 Nov 2013 22:00:40 +0000 Subject: Nagios check for nginx with separate metrics In-Reply-To: <52839661.4000406@noa.gr> References: <526D64FE.7030004@noa.gr> <52839661.4000406@noa.gr> Message-ID: <20131113220040.GA22339@craic.sysops.org> On Wed, Nov 13, 2013 at 05:10:25PM +0200, Nikolaos Milas wrote: > On 27/10/2013 9:09, Nikolaos Milas wrote: Hi there, > >All of the available (to my knowledge) nginx Nagios checks > >produce comprehensive output which includes all "metrics" together, > >Output with a single metric at a time is important for use in > >server/network monitoring systems. If *everyone* who writes a checker provides multiple metrics at once, that kind of suggests that a single metric at a time isn't all that important. > >Does anyone know whether such an nginx Nagios check exists (and where) > >or we would have to modify source code of one of these plugins to > >achieve the required behavior? I suspect you're going to have to write your own -- maybe from scratch, maybe a wrapper around one you like. Presumably they all ask for a url where nginx serves "stub_status", and process the numbers returned. In your version, just ignore the numbers you don't care about. (Or cache them and serve slightly-stale information, to avoid making the same request three times in succession.)
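For completeness, the nginx side that all of those plugins poll is just a stub_status location. A minimal sketch (the URL and the allow list are only examples, and it assumes nginx was built with --with-http_stub_status_module):

location = /nginx_status {
    stub_status on;
    access_log  off;
    allow       127.0.0.1;   # the monitoring host
    deny        all;
}

Whatever wrapper you end up writing only has to fetch that one page and print the single metric you need.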
Good luck with it, f -- Francis Daly francis at daoine.org From appa at perusio.net Wed Nov 13 22:08:50 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 13 Nov 2013 23:08:50 +0100 Subject: sidebar menu + directory listing In-Reply-To: <20131113215628.GA29284@localhost> References: <20131113213548.GA22028@localhost> <20131113215628.GA29284@localhost> Message-ID: Quite simply make all locations that are to listed use the autoindex on; directive. If I understood correctly you want all vhosts of the form sub.domain.tld be listed. So just make the "catch all" location / use the autoindex directive. Le 13 nov. 2013 22:57, "georg at riseup.net" a ?crit : > Hi, > > On 13-11-13 21:45:46, Jonathan Matthews wrote: > > On 13 November 2013 21:35, georg at riseup.net wrote: > > Looks to me like you want to use frames and write yourself some pretty > > basic HTML. I don't know of anything that's that application-a-like > > that comes /inside/ nginx itself, however. > > Yeah, I tought so aswell. My question was more like "how do I combine > html and directory listing at the same time..."? > > Thanks, > Georg > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Nov 13 22:09:40 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 13 Nov 2013 22:09:40 +0000 Subject: sidebar menu + directory listing In-Reply-To: <20131113215628.GA29284@localhost> References: <20131113213548.GA22028@localhost> <20131113215628.GA29284@localhost> Message-ID: <20131113220940.GB22339@craic.sysops.org> On Wed, Nov 13, 2013 at 10:56:28PM +0100, georg at riseup.net wrote: > On 13-11-13 21:45:46, Jonathan Matthews wrote: > > On 13 November 2013 21:35, georg at riseup.net wrote: Hi there, > > Looks to me like you want to use frames and write yourself some pretty > > basic HTML. I don't know of anything that's that application-a-like > > that comes /inside/ nginx itself, however. > > Yeah, I tought so aswell. My question was more like "how do I combine > html and directory listing at the same time..."? Can you build a small directory tree, and manually create the files with the content that you would like to have returned? For directory listings, do a manual "ls" (or whatever) once to hard-code the html. That exercise might make clear to you what content you want nginx to return in response to different requests -- particularly, which parts are static and which parts are dynamic. And that in turn might help you decide whether you want an nginx module, or the plain directory handler, or something like an index.php that you can drop in each directory. (I you use frames, you will be making more than one http request, so nginx will be able to return more than one piece of content.) Good luck with it, f -- Francis Daly francis at daoine.org From georg at riseup.net Wed Nov 13 22:52:05 2013 From: georg at riseup.net (georg at riseup.net) Date: Wed, 13 Nov 2013 23:52:05 +0100 Subject: sidebar menu + directory listing In-Reply-To: <20131113220940.GB22339@craic.sysops.org> References: <20131113213548.GA22028@localhost> <20131113215628.GA29284@localhost> <20131113220940.GB22339@craic.sysops.org> Message-ID: <20131113225205.GA20049@localhost> Hi, Sorry, maybe I'm dumb, I'm not sure if I get it... 
On 13-11-13 22:09:40, Francis Daly wrote: > On Wed, Nov 13, 2013 at 10:56:28PM +0100, georg at riseup.net wrote: > > On 13-11-13 21:45:46, Jonathan Matthews wrote: > > > On 13 November 2013 21:35, georg at riseup.net wrote: > > > Looks to me like you want to use frames and write yourself some pretty > > > basic HTML. I don't know of anything that's that application-a-like > > > that comes /inside/ nginx itself, however. > > > > Yeah, I tought so aswell. My question was more like "how do I combine > > html and directory listing at the same time..."? > Can you build a small directory tree, and manually create the files with > the content that you would like to have returned? For the directory tree, you mean something like: root |- index.html (should be displayed as html) |- n.html (should be displayed as html) |- dir1 (should be displayed via directory listing) |- dir2 (should be displayed via directory listing) sidebar should look like: Index (should point to index.html) n (should point to n.html) dir1 (should point to dir1) dir2 (should point to dir2) > For directory listings, do a manual "ls" (or whatever) once to hard-code > the html. Doing "ls" where? Inside the html? > That exercise might make clear to you what content you want nginx to > return in response to different requests -- particularly, which parts > are static and which parts are dynamic. And that in turn might help you > decide whether you want an nginx module, or the plain directory handler, > or something like an index.php that you can drop in each directory. Still unclear to me how I display html and directory listing at the same tome on the "same page". > (I you use frames, you will be making more than one http request, so > nginx will be able to return more than one piece of content.) If possible, I would like to avoid frames. > Good luck with it, Thanks, Georg From francis at daoine.org Wed Nov 13 23:31:54 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 13 Nov 2013 23:31:54 +0000 Subject: sidebar menu + directory listing In-Reply-To: <20131113225205.GA20049@localhost> References: <20131113213548.GA22028@localhost> <20131113215628.GA29284@localhost> <20131113220940.GB22339@craic.sysops.org> <20131113225205.GA20049@localhost> Message-ID: <20131113233154.GA23462@craic.sysops.org> On Wed, Nov 13, 2013 at 11:52:05PM +0100, georg at riseup.net wrote: > On 13-11-13 22:09:40, Francis Daly wrote: > > On Wed, Nov 13, 2013 at 10:56:28PM +0100, georg at riseup.net wrote: Hi there, > Sorry, maybe I'm dumb, I'm not sure if I get it... I'm trying to suggest that you do the background not-nginx-related preparation work for your question outside of nginx, so that you have a very clear idea of what you want nginx to do, and can then describe clearly what that is. > > Can you build a small directory tree, and manually create the files with > > the content that you would like to have returned? > > For the directory tree, you mean something like: > > root > |- index.html (should be displayed as html) > |- n.html (should be displayed as html) > |- dir1 (should be displayed via directory listing) > |- dir2 (should be displayed via directory listing) So, on the file system you have two files and two directories. Presumably your web browser is going to make http requests for things like "/" and "/index.html" and "/dir1/"; and you want nginx to return specific content for each request. 
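As a rough sketch of the nginx side only (assuming the two directories sit under one document root; every name and path here is a placeholder):

server {
    listen      80;
    server_name sub.domain.com;
    root        /var/www/sub;
    index       index.html;

    # index.html and n.html are served as plain static files
    location / { }

    # these two get the automatic listing
    location /dir1/ { autoindex on; }
    location /dir2/ { autoindex on; }
}

The sidebar itself still has to come from your own html/css; nginx only produces the raw listing for the directory locations.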
> sidebar should look like: > > Index (should point to index.html) > n (should point to n.html) > dir1 (should point to dir1) > dir2 (should point to dir2) This part, I don't understand. I suggest you do whatever it takes to manually create the html or javascript or whatever you want, and put it in a file, so that when your browser asks for that file, you see exactly what you want to see on-screen. Do this manually, for just this one example directory. Don't worry about any part of it being dynamically generated. That comes later. > > For directory listings, do a manual "ls" (or whatever) once to hard-code > > the html. > > Doing "ls" where? Inside the html? When you are manually creating the one-off static page that shows exactly what you want, it will include mention of all nearby files and directories. Do whatever it takes -- possibly including "ls" -- to hard-code the html-or-javascript that you want to see. > Still unclear to me how I display html and directory listing at the same > tome on the "same page". Do a static one-off test first. After that, you will have a much better idea of which parts you want to be static and which parts you want to be dynamic. And then your question might become "how do I change the 'autoindex' output to be *this* instead of *that*?"; or it might become "how do I get an 'index' file to generate output like *this*?"; or it might become something else specific. Right now, all I can understand of your question is "can nginx do what I want?". Maybe your question is already clear enough to other people, in which case maybe they can offer suggestions. f -- Francis Daly francis at daoine.org From georg at riseup.net Wed Nov 13 23:39:35 2013 From: georg at riseup.net (georg at riseup.net) Date: Thu, 14 Nov 2013 00:39:35 +0100 Subject: sidebar menu + directory listing In-Reply-To: <20131113225205.GA20049@localhost> References: <20131113213548.GA22028@localhost> <20131113215628.GA29284@localhost> <20131113220940.GB22339@craic.sysops.org> <20131113225205.GA20049@localhost> Message-ID: <20131113233935.GC20049@localhost> On 13-11-13 23:52:05, georg at riseup.net wrote: > On 13-11-13 22:09:40, Francis Daly wrote: > > On Wed, Nov 13, 2013 at 10:56:28PM +0100, georg at riseup.net wrote: > > > On 13-11-13 21:45:46, Jonathan Matthews wrote: > > For directory listings, do a manual "ls" (or whatever) once to hard-code > > the html. > > Doing "ls" where? Inside the html? ...maybe I'll just write a small script using tree -H to output a directory listing into static html, and serve this just as html. Clever doing it like this? Cheers, Georg From jdeltener at realtruck.com Thu Nov 14 02:06:08 2013 From: jdeltener at realtruck.com (Justin Deltener) Date: Wed, 13 Nov 2013 20:06:08 -0600 Subject: limit_req_zone limit by location/proxy In-Reply-To: References: <20131113112713.GW95765@mdounin.ru> <20131113134001.GZ95765@mdounin.ru> <20131113160149.GA95765@mdounin.ru> Message-ID: Rolled into production and after tens of thousands of page requests only 3 were smacked down and all were bogus security scanners or "bad dudes" MISSION ACCOMPLISHED! Thanks a ton Maxim! On Wed, Nov 13, 2013 at 10:20 AM, Justin Deltener wrote: > I'll give that a try. I really appreciate your help Maxim! > > > On Wed, Nov 13, 2013 at 10:01 AM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Nov 13, 2013 at 09:09:55AM -0600, Justin Deltener wrote: >> >> > Aha, that is the lightbulb moment. 
>> > >> > So if we're talking actual rate..which makes sense how would you setup a >> > scenario with the following requirements. >> > >> > You can have whatever rate you want as long as you don't exceed 5 proxy >> > requests in the same second. I don't care if 5 come within 5ms of each >> > other.. Hitting 6 total proxy requests in 1 second would kill the >> request. >> > It seems we can't really specify that without increasing the rate which >> in >> > turn could allow a sustained session with high rates to still have a >> ton of >> > requests come in to kill the server. >> >> What you are asking about is close to something like this: >> >> limit_req_zone ... rate=5r/s; >> limit_req ... burst=5 nodelay; >> >> That is, up to 5 requests (note "burst=5") are allowed at any rate >> without any delays. If there are more requests and the rate >> remains above 5r/s, they are rejected. >> >> > We're attempting to account for 301 redirects which spawn requests much >> > faster than normal human requests. I realize we could add a get param to >> > the url to excuse it from the limit, but that seems a bit out there.. >> > >> > I also don't quite understand how long a burst rate can be sustained. It >> > seems one could set the default rate to 1/m and set the burst to >> whatever >> > you like.. >> > >> > Does that make sense? >> >> The burst parameter configures maximum burst size, in requests (in >> terms of "leaky bucket" - it's the bucket size). In most cases, >> it's a reasonable aproach to set a relatively low rate, switch off >> delay, and configure a reasonable burst size to account for >> various things like redirects, opening multiple pages to read them >> later, and so on. >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > > Justin Deltener > > Nerd Curator | Alpha Omega Battle Squadron > > Toll Free: 1-877-216-5446 x3921 > > Local: 701-253-5906 x3921 > > RealTruck.com > > Guiding Principle #3: > Improve > -- Justin Deltener Nerd Curator | Alpha Omega Battle Squadron Toll Free: 1-877-216-5446 x3921 Local: 701-253-5906 x3921 RealTruck.com Guiding Principle #3: Improve -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Nov 14 08:09:17 2013 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 14 Nov 2013 03:09:17 -0500 Subject: sidebar menu + directory listing In-Reply-To: <20131113233935.GC20049@localhost> References: <20131113233935.GC20049@localhost> Message-ID: <0437e7aaf06834dfc64d8580feee263f.NginxMailingListEnglish@forum.nginx.org> georg at riseup.net Wrote: ------------------------------------------------------- > ....maybe I'll just write a small script using tree -H to output a > directory listing into static html, and serve this just as html. > Clever doing it like this? 
http://wiki.nginx.org/NgxFancyIndex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244638,244653#msg-244653 From nmilas at noa.gr Thu Nov 14 11:03:42 2013 From: nmilas at noa.gr (Nikolaos Milas) Date: Thu, 14 Nov 2013 13:03:42 +0200 Subject: Nagios check for nginx with separate metrics In-Reply-To: <20131113220040.GA22339@craic.sysops.org> References: <526D64FE.7030004@noa.gr> <52839661.4000406@noa.gr> <20131113220040.GA22339@craic.sysops.org> Message-ID: <5284AE0E.9050907@noa.gr> On 14/11/2013 12:00 ??, Francis Daly wrote: > If*everyone* who writes a checker provides multiple metrics at once, that > kind of suggests that a single metric at a time isn't all that important. I won't disagree, you need to correlate metrics, but you cannot graph them as a function of time if you don't log them separately. > Presumably they all ask for a url where nginx serves "stub_status", and > process the numbers returned. Right. That's how they operate. > In your version, just ignore the numbers you don't care about. (Or cache them and serve slightly-stale information, > to avoid making the same request three times in succession.) > > Good luck with it, I am still surprised, however, that I couldn't find such a thing around, esp. since it's standard in Apache Nagios checks! I can (most probably) do it, but life is just not enough to find time to do everything by yourself! :-( Thanks, Nick From nginx-forum at nginx.us Thu Nov 14 17:41:27 2013 From: nginx-forum at nginx.us (JStl) Date: Thu, 14 Nov 2013 12:41:27 -0500 Subject: SPDY + proxy cache static content failures In-Reply-To: <928247ac3646300bd8845120ca3aae9b.NginxMailingListEnglish@forum.nginx.org> References: <32F6D688-5016-451D-A43A-72FC22C5DBBB@gwynne.id.au> <928247ac3646300bd8845120ca3aae9b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, I have the same problem (nginx 1.4.3) when spdy is enabled and proxy cache too (with apache2 behind). Any news about this bug ? I would really like to be able to activate SPDY... Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,244667#msg-244667 From nginx-forum at nginx.us Thu Nov 14 18:13:08 2013 From: nginx-forum at nginx.us (justin) Date: Thu, 14 Nov 2013 13:13:08 -0500 Subject: Random 502 bad gateway with php-fpm, why? Message-ID: <024849234119bf5d8f3020d26eed3dd0.NginxMailingListEnglish@forum.nginx.org> My PHP application went down for a few hours with 502 bad gateway. In the nginx error log all I see is: 2013/11/14 10:02:16 [error] 1466#0: *57964 recv() failed (104: Connection reset by peer) while reading response header from upstream I fixed it by restarting php-fpm. However, what caused this? I don't see anything interesting logged in the php-fpm or php logs. No indication of why happened, and why simply restarting php-fpm fixed the issue. I thought that php-fpm auto-restarted processes if they fail? Any insights and place to determine the root cause? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244669,244669#msg-244669 From alex at zeitgeist.se Fri Nov 15 00:32:09 2013 From: alex at zeitgeist.se (Alex) Date: Fri, 15 Nov 2013 01:32:09 +0100 Subject: Any rough ETA on SPDY/3 & push? 
In-Reply-To: <3916ee6f3ad2df3c71f7d89bb60151bc.NginxMailingListEnglish@forum.nginx.org> References: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org> <3916ee6f3ad2df3c71f7d89bb60151bc.NginxMailingListEnglish@forum.nginx.org> Message-ID: > spdy/2 support has been removed from the Firefox code base ( > https://bugzilla.mozilla.org/show_bug.cgi?id=912550 ) and >= Firefox 27 will > only support >= spdy/3. Firefox 27 will be released in January 2014 ( > https://wiki.mozilla.org/RapidRelease/Calendar ) so there is some urgency in > getting spdy/3(.1) support into nginx. Just the heads up: It seems the latest Chrome beta (32.0.1700.14) has also removed support for spdy/2. Spdy is no longer enabled in Chrome on nginx-based sites, and all sites that do work run spdy/3 or later (according to chrome://net-internals/#spdy). From atynefield at gmail.com Fri Nov 15 04:37:03 2013 From: atynefield at gmail.com (Andrew Tynefield) Date: Thu, 14 Nov 2013 22:37:03 -0600 Subject: Proxy buffering Message-ID: Hello all, I've configured nginx as a load balancing proxy for my backend servers. My backend is expecting multi-part uploads for large files in small chunks (5-15mb). The issue I'm encountering, is that I would like for nginx to just pass the chunked data along to the backend servers and not buffer the requests. Current configuration: upstream riak-cs { server 192.168.1.19:8080; server 192.168.1.22:8080; #least_conn; } server { listen 80; server_name cs.domain.com *.cs.domain.com; location / { proxy_pass http://riak-cs; proxy_set_header Host $host; proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffering off; proxy_pass_header Server; add_header Backend $proxy_host:$proxy_port; add_header Upstream-Response-Time $upstream_response_time; } } user nginx; worker_processes 4; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 4096; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_requests 100; client_max_body_size 1000M; keepalive_timeout 3; reset_timedout_connection on; underscores_in_headers on; include /etc/nginx/conf.d/*.conf; } I have tried disabling buffers as shown above, however, when I capture the packets on the backend servers, I see that the stream of data doesn't occur until after the full body of the upload has completed. [ jedi ] ~ # nginx -v nginx version: nginx/1.4.3 If I enabled info error logging, I see: 2013/11/14 22:34:47 [warn] 2698#0: *1 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000001, client: 192.168.1.1, server: cs.domain.com, request: "PUT /huge/Windows7Ultimate.iso?partNumber=1&uploadId=1RjFvAcQTsWmpnIYD7nL7Q== HTTP/1.1", host: "big.cs.domain.com" How can I prevent this all together? Thanks, Andrew -- [Andrew Tynefield] -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Fri Nov 15 04:59:04 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Thu, 14 Nov 2013 20:59:04 -0800 Subject: Any rough ETA on SPDY/3 & push? 
In-Reply-To: References: <1bfb059c2635f2a293865024f7fc1ee8.NginxMailingListEnglish@forum.nginx.org> <3916ee6f3ad2df3c71f7d89bb60151bc.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey, >> spdy/2 support has been removed from the Firefox code base ( >> https://bugzilla.mozilla.org/show_bug.cgi?id=912550 ) and >= Firefox 27 will >> only support >= spdy/3. Firefox 27 will be released in January 2014 ( >> https://wiki.mozilla.org/RapidRelease/Calendar ) so there is some urgency in >> getting spdy/3(.1) support into nginx. > > Just the heads up: It seems the latest Chrome beta (32.0.1700.14) has > also removed support for spdy/2. Spdy is no longer enabled in Chrome on > nginx-based sites, and all sites that do work run spdy/3 or later > (according to chrome://net-internals/#spdy). Actually, after some convincing [0], both Firefox [1] & Chrome [2] guys were nice enough to give us a bit more time and revert those changes, so SPDY/2 will stay around for one more release cycle (and be retired in late February). [0] https://groups.google.com/d/msg/spdy-dev/XDudMZSq3e4/oqklqhoouAsJ [1] https://bugzilla.mozilla.org/show_bug.cgi?id=912550 [2] https://code.google.com/p/chromium/issues/detail?id=318651 Best regards, Piotr Sikora From mdounin at mdounin.ru Fri Nov 15 10:51:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Nov 2013 14:51:15 +0400 Subject: Proxy buffering In-Reply-To: References: Message-ID: <20131115105115.GH95765@mdounin.ru> Hello! On Thu, Nov 14, 2013 at 10:37:03PM -0600, Andrew Tynefield wrote: > I've configured nginx as a load balancing proxy for my backend servers. My > backend is expecting multi-part uploads for large files in small chunks > (5-15mb). The issue I'm encountering, is that I would like for nginx to > just pass the chunked data along to the backend servers and not buffer the > requests. [...] > proxy_buffering off; [...] > I have tried disabling buffers as shown above, however, when I capture the > packets on the backend servers, I see that the stream of data doesn't occur > until after the full body of the upload has completed. The proxy_buffering directive disables response buffering, not request buffering. As of now, there is no way to prevent request body buffering in nginx. It's always fully read by nginx before a request is passed to an upstream server. It's basically a part of nginx being a web accelerator - it handles slow communication with clients by itself and only asks a backend to process a request when everything is ready. Implementing unbuffered uploads is in plans, no ETA though. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Nov 15 10:51:16 2013 From: nginx-forum at nginx.us (fatine,al) Date: Fri, 15 Nov 2013 05:51:16 -0500 Subject: NGINX 500 http error Message-ID: <3e95b5cc4e4f20c1dd436f4d5e8c7925.NginxMailingListEnglish@forum.nginx.org> Hi, I have a lot of 500 http error in nginx access.log. I think it's due to the number of requests that nginx receives. Because when I was testing the configuration with one ip adress I don't have this error but when I redirect all client traffic (sometimes more than 2000 simultaneous connexions) to nginx I'm getting 500 http error. I don't have any error log in error.log. I tried to modify this parameters : worker_connections, worker_rlimit_nofile, keepalive_timeout, proxy_connect_timeout ... 
And then I start to get this logs in error.log : worker process XXXXX exited on signal 11 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/2/00/0000000002 failed (104: Connection reset by peer) while sending response to client Here is my nginx.log conf : user www-data; worker_processes 4; worker_rlimit_nofile 50000; pid /var/run/nginx.pid; events { worker_connections 50000; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 20; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 50M; proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 128k; proxy_buffers 4 254k; proxy_busy_buffers_size 256k; include /etc/nginx/sites-enabled/*; } When I execute top command, cpu and RAM usage are not hignt (max cpu 7% and ram 4%). Any help would be appreciated. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,244693#msg-244693 From mdounin at mdounin.ru Fri Nov 15 11:02:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Nov 2013 15:02:08 +0400 Subject: NGINX 500 http error In-Reply-To: <3e95b5cc4e4f20c1dd436f4d5e8c7925.NginxMailingListEnglish@forum.nginx.org> References: <3e95b5cc4e4f20c1dd436f4d5e8c7925.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131115110208.GJ95765@mdounin.ru> Hello! On Fri, Nov 15, 2013 at 05:51:16AM -0500, fatine,al wrote: > Hi, > > I have a lot of 500 http error in nginx access.log. > I think it's due to the number of requests that nginx receives. Because when > I was testing the configuration with one ip adress I don't have this error > but when I redirect all client traffic (sometimes more than 2000 > simultaneous connexions) to nginx I'm getting 500 http error. > I don't have any error log in error.log. The 500 errors usually indicate either configuration problem (e.g, an infinite rewrite loop or something like this) or some resource shortage (memory, file descriptors, space for temporary files, and so on). > I tried to modify this parameters : worker_connections, > worker_rlimit_nofile, keepalive_timeout, proxy_connect_timeout ... > > And then I start to get this logs in error.log : > worker process XXXXX exited on signal 11 Here nginx exits due to memory access violation. This is a serious problem and usually indicate a bug somewhere (well, it might be also hardware problem). First of all, it might be a good idea to show "nginx -V" output (and basically make sure you are using latest version and no 3rd party modules), and obtain core dump and show a backtrace. See here for basic instructions and more details: http://wiki.nginx.org/Debugging > an upstream response is buffered to a temporary file > /var/lib/nginx/proxy/2/00/0000000002 > failed (104: Connection reset by peer) while sending response to > client These are not problems per se. > Here is my nginx.log conf : > > user www-data; > worker_processes 4; > worker_rlimit_nofile 50000; > pid /var/run/nginx.pid; [...] > include /etc/nginx/sites-enabled/*; Note that configuration you've provided isn't nearly a full one. It lacks an unsepcified number of included files. 
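For reference, a minimal sketch of the core-dump settings discussed above (the size limit and directory are illustrative assumptions, not values from this thread):

    worker_rlimit_core  500M;                     # let worker processes write core files
    working_directory   /var/tmp/nginx-cores/;    # must exist and be writable by the worker user

No extra module is required for the backtrace itself: it is extracted from the resulting core file with a debugger such as gdb, as described on the wiki page linked above.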
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Nov 15 12:00:02 2013 From: nginx-forum at nginx.us (fatine,al) Date: Fri, 15 Nov 2013 07:00:02 -0500 Subject: NGINX 500 http error In-Reply-To: <20131115110208.GJ95765@mdounin.ru> References: <20131115110208.GJ95765@mdounin.ru> Message-ID: <9676bac32c4f12cf30800f4ca9ed8077.NginxMailingListEnglish@forum.nginx.org> Hi, Thank you for your quick response. Here is the output of nginx -V : nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-http_ssl_module --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --without-http_uwsgi_module --without-http_scgi_module --with-ipv6 --with-http_stub_status_module --add-module=/build/buildd-nginx_1.2.1-2.2~bpo60+2-i386-EyCkdD/nginx-1.2.1/debian/modules/nginx-upstream-fair --add-module=/build/buildd-nginx_1.2.1-2.2~bpo60+2-i386-EyCkdD/nginx-1.2.1/debian/modules/nginx-cache-purge --add-module=/build/buildd-nginx_1.2.1-2.2~bpo60+2-i386-EyCkdD/nginx-1.2.1/debian/modules/naxsi/naxsi_src I added this two lines for core dump in nginx.conf : worker_rlimit_core 500M; working_directory /path/to/cores/; But I think I have to compile nginx with backtrace module to dump backtrace. What other includes do I need in my nginx.conf ? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,244700#msg-244700 From mdounin at mdounin.ru Fri Nov 15 13:21:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Nov 2013 17:21:51 +0400 Subject: NGINX 500 http error In-Reply-To: <9676bac32c4f12cf30800f4ca9ed8077.NginxMailingListEnglish@forum.nginx.org> References: <20131115110208.GJ95765@mdounin.ru> <9676bac32c4f12cf30800f4ca9ed8077.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131115132151.GQ95765@mdounin.ru> Hello! On Fri, Nov 15, 2013 at 07:00:02AM -0500, fatine,al wrote: > Hi, > > Thank you for your quick response. > > Here is the output of nginx -V : > > nginx version: nginx/1.2.1 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-log-path=/var/log/nginx/access.log > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock > --pid-path=/var/run/nginx.pid --with-pcre-jit --with-http_ssl_module > --without-mail_pop3_module --without-mail_smtp_module > --without-mail_imap_module --without-http_uwsgi_module > --without-http_scgi_module --with-ipv6 --with-http_stub_status_module > --add-module=/build/buildd-nginx_1.2.1-2.2~bpo60+2-i386-EyCkdD/nginx-1.2.1/debian/modules/nginx-upstream-fair > --add-module=/build/buildd-nginx_1.2.1-2.2~bpo60+2-i386-EyCkdD/nginx-1.2.1/debian/modules/nginx-cache-purge > --add-module=/build/buildd-nginx_1.2.1-2.2~bpo60+2-i386-EyCkdD/nginx-1.2.1/debian/modules/naxsi/naxsi_src The 1.2.1 is rather old and not supported. 
Even in 1.2.x branch there are at least several fixes of various segmentation faults. It doesn't really make any sense to try to debug anything further, even traditional "please recompile without 3rd party modules to see if it helps" doesn't really apply here. Upgrade to at least 1.4.3 (1.5.6 preferred) and come back if you'll see any problems there. See here for recent versions of nginx available for download, including precompiled linux packages: http://nginx.org/en/download.html -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Nov 15 14:16:18 2013 From: nginx-forum at nginx.us (fatine,al) Date: Fri, 15 Nov 2013 09:16:18 -0500 Subject: NGINX 500 http error In-Reply-To: <20131115132151.GQ95765@mdounin.ru> References: <20131115132151.GQ95765@mdounin.ru> Message-ID: <3d57ede6d4d086be412204da3a43654e.NginxMailingListEnglish@forum.nginx.org> OK. I will upgrade nginx and tell you if the problem is solved. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,244709#msg-244709 From martinloy.uy at gmail.com Fri Nov 15 17:13:44 2013 From: martinloy.uy at gmail.com (Martin Loy) Date: Fri, 15 Nov 2013 15:13:44 -0200 Subject: ip_hash detailed behaviour Message-ID: Hello I would like to know more about the behaviour of ip_hash when NGINX is reloaded and in the scenario of removing an upstream node. Is ip_hash somehow clever/sticky and would reasign the IPs to the same node after a reload or restart? and what would happen if an upstream node is marked/flagged as down. I'm not sure if this is the correct list or i should write to nginx-dev. Regards M -- *Nunca hubo un amigo que hiciese un favor a un enano, ni un enemigo que le hiciese un mal, que no se viese recompensado por entero.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Nov 15 17:34:42 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Nov 2013 21:34:42 +0400 Subject: ip_hash detailed behaviour In-Reply-To: References: Message-ID: <20131115173442.GT95765@mdounin.ru> Hello! On Fri, Nov 15, 2013 at 03:13:44PM -0200, Martin Loy wrote: > Hello > > I would like to know more about the behaviour of ip_hash when NGINX is > reloaded and in the scenario of removing an upstream node. > > Is ip_hash somehow clever/sticky and would reasign the IPs to the same node > after a reload or restart? The ip_hash just uses a hash function to distribute clients among configured upstream servers. If a number of upstream servers changes, the distribution changes as well. > and what would happen if an upstream node is > marked/flagged as down. In this case clients which are normally passed to the server marked down will be distributed between other servers. See http://nginx.org/r/ip_hash for more details. -- Maxim Dounin http://nginx.org/en/donation.html From anmajumd at cisco.com Fri Nov 15 20:33:06 2013 From: anmajumd at cisco.com (Anamitra Dutta Majumdar (anmajumd)) Date: Fri, 15 Nov 2013 20:33:06 +0000 Subject: Queries in NGINX proxy features Message-ID: We are designing a deployment were NGINX front ends all incoming https connection and then forwards it to multiple web containers like Tomcat and Node.js which listen on internal ports on 127.0.0.1. I have some questions here 1. Is it possible to route Outbound connection through NGINX as well. I.e for requests outbound from Tomcat/Node.js, can the requests be forwarded to an internal nginx port first over HTTP and then Nginx will proxy them to the destination over HTTPS? 2. 
Are there any high to medium severity known threats for having an HTTP connection between nginx and the other web containers listening on local ports on the same machine instead of using HTTPS.Is is there any other alternative? 3. What is the best way to allow access from a list of know IP addresses at the NGINX layer. That is a White list of Ips. Would it be by using mod_security or the ngx_http_access_module. Is the one better over the other? Thanks, Anamitra -------------- next part -------------- An HTML attachment was scrubbed... URL: From savages at mozapps.com Fri Nov 15 21:07:11 2013 From: savages at mozapps.com (sv) Date: Fri, 15 Nov 2013 13:07:11 -0800 Subject: Queries in NGINX proxy features In-Reply-To: References: Message-ID: <52868CFF.4030103@mozapps.com> One of the things that I did was to use unix sockets to the backend. On 11/15/2013 12:33 PM, Anamitra Dutta Majumdar (anmajumd) wrote: > We are designing a deployment were NGINX front ends all incoming https > connection and then forwards it to multiple web containers like > Tomcat and Node.js which listen on internal ports on 127.0.0.1. > > I have some questions here > > 1. Is it possible to route Outbound connection through NGINX as well. > I.e for requests outbound from Tomcat/Node.js, can the requests > be forwarded to an internal nginx port first over HTTP and then > Nginx will proxy them to the destination over HTTPS? > 2. Are there any high to medium severity known threats for having an > HTTP connection between nginx and the other web containers > listening on local ports on the same machine instead of using > HTTPS.Is is there any other alternative? > 3. What is the best way to allow access from a list of know IP > addresses at the NGINX layer. That is a White list of Ips. Would > it be by using mod_security or the ngx_http_access_module. Is the > one better over the other? > > Thanks, > Anamitra > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From support-nginx at oeko.net Fri Nov 15 22:24:43 2013 From: support-nginx at oeko.net (Toni Mueller) Date: Fri, 15 Nov 2013 23:24:43 +0100 Subject: using uninitialized "pat" variable while logging request Message-ID: <20131115222443.GA9186@spruce.wiehl.oeko.net> Hi, to debug my locations, I have a variable in my configuration that I reference during logging. The log format, included from nginx.conf: log_format mylogformat '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" domain: $host branch: $pat'; Unfortunately, I cannot set the variable $pat already in nginx.conf, but in my virtual server configuration, I set it like this: server { listen 1.2.3.4:80 default_server; server_name www.example.com; charset utf-8; set $pat "-"; # rest of configuration here, eg.: location = / { set $pat "homepage"; # do something special } # more stuff... } So far, my understanding is that the variable $pat should be _always_ defined, right? Well... I find this in the error log: [warn] 11092#0: *15719589 using uninitialized "pat" variable while logging request, client: 4.3.2.1, server: www.example.com For a lot of requests, the value of the variable does appear in the log file, but for a good proportion, it doesn't. I would like to have this variable always set properly. Can it be a timing question? I get between ~5 and below 100 requests per second. 
The machine doesn't look anywhere near loaded, though (CPU utulization is under 5%). My software is nginx-full 1.2.1-2.2+wheezy1, on an amd64 VM. TIA! Kind regards, --Toni++ From support-nginx at oeko.net Fri Nov 15 22:55:06 2013 From: support-nginx at oeko.net (Toni Mueller) Date: Fri, 15 Nov 2013 23:55:06 +0100 Subject: Random 502 bad gateway with php-fpm, why? In-Reply-To: <024849234119bf5d8f3020d26eed3dd0.NginxMailingListEnglish@forum.nginx.org> References: <024849234119bf5d8f3020d26eed3dd0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131115225505.GA11935@spruce.wiehl.oeko.net> On Thu, Nov 14, 2013 at 01:13:08PM -0500, justin wrote: > My PHP application went down for a few hours with 502 bad gateway. In the > nginx error log all I see is: > > 2013/11/14 10:02:16 [error] 1466#0: *57964 recv() failed (104: Connection > reset by peer) while reading response header from upstream I recently hit the same problem and was able to ameliorate it by upping the OS limits (open files etc.). There are some discussions around this problem on stackoverflow and friends which suggest that one has to experiment. Kind regards, --Toni++ From nginx-forum at nginx.us Fri Nov 15 22:58:41 2013 From: nginx-forum at nginx.us (justin) Date: Fri, 15 Nov 2013 17:58:41 -0500 Subject: Random 502 bad gateway with php-fpm, why? In-Reply-To: <20131115225505.GA11935@spruce.wiehl.oeko.net> References: <20131115225505.GA11935@spruce.wiehl.oeko.net> Message-ID: <89a3d2e771eff927bd2c57a7b2b2c1fd.NginxMailingListEnglish@forum.nginx.org> Hey Tony. Can you link to the stackoverflow posts? I wish php-fpm told me what happened. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244669,244724#msg-244724 From support-nginx at oeko.net Fri Nov 15 23:03:11 2013 From: support-nginx at oeko.net (Toni Mueller) Date: Sat, 16 Nov 2013 00:03:11 +0100 Subject: Random 502 bad gateway with php-fpm, why? In-Reply-To: <89a3d2e771eff927bd2c57a7b2b2c1fd.NginxMailingListEnglish@forum.nginx.org> References: <20131115225505.GA11935@spruce.wiehl.oeko.net> <89a3d2e771eff927bd2c57a7b2b2c1fd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131115230311.GB11935@spruce.wiehl.oeko.net> On Fri, Nov 15, 2013 at 05:58:41PM -0500, justin wrote: > Can you link to the stackoverflow posts? I wish php-fpm told me what > happened. See, my memory. =8-(( It was this link that helped me most: http://forum.nginx.org/read.php?11,215606,235395 Cheers, --Toni++ From vbart at nginx.com Sat Nov 16 00:07:19 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 16 Nov 2013 04:07:19 +0400 Subject: using uninitialized "pat" variable while logging request In-Reply-To: <20131115222443.GA9186@spruce.wiehl.oeko.net> References: <20131115222443.GA9186@spruce.wiehl.oeko.net> Message-ID: <201311160407.19219.vbart@nginx.com> On Saturday 16 November 2013 02:24:43 Toni Mueller wrote: > Hi, > > to debug my locations, I have a variable in my configuration that I > reference during logging. 
The log format, included from nginx.conf: > > log_format mylogformat '$remote_addr - $remote_user [$time_local] $request > ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" > domain: $host branch: $pat'; > > > Unfortunately, I cannot set the variable $pat already in nginx.conf, > but in my virtual server configuration, I set it like this: > > server { > listen 1.2.3.4:80 default_server; > server_name www.example.com; > > charset utf-8; > set $pat "-"; > > # rest of configuration here, eg.: > > location = / { > set $pat "homepage"; > # do something special > } > > # more stuff... > } > > > So far, my understanding is that the variable $pat should be _always_ > defined, right? Well... I find this in the error log: [..] The "set" directive isn't something essential, and actually it is just a directive from the rewrite module. See here how it works: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html It is evaluated on the rewrite phase of request processing. Thus, if the request is finalized before this phase, then your variable is left uninitialized. To debug your locations and for better understanding what is going on, you can use nginx debug log: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From support-nginx at oeko.net Sat Nov 16 01:02:31 2013 From: support-nginx at oeko.net (Toni Mueller) Date: Sat, 16 Nov 2013 02:02:31 +0100 Subject: using uninitialized "pat" variable while logging request In-Reply-To: <201311160407.19219.vbart@nginx.com> References: <20131115222443.GA9186@spruce.wiehl.oeko.net> <201311160407.19219.vbart@nginx.com> Message-ID: <20131116010231.GA22204@spruce.wiehl.oeko.net> Hi, On Sat, Nov 16, 2013 at 04:07:19AM +0400, Valentin V. Bartenev wrote: > The "set" directive isn't something essential, and actually it is just a > directive from the rewrite module. > See here how it works: > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html > It is evaluated on the rewrite phase of request processing. Thus, if > the request is finalized before this phase, then your variable is left > uninitialized. Thanks for the explanation - I did not get that from this page. :/ > To debug your locations and for better understanding what is going on, > you can use nginx debug log: http://nginx.org/en/docs/debugging_log.html Unfortunately, there is nothing between the info (too little) and debug log levels (too much). But it was fruitful, as I found the problem. :) Cheers, --Toni++ From contact at jpluscplusm.com Sat Nov 16 08:12:52 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 16 Nov 2013 08:12:52 +0000 Subject: using uninitialized "pat" variable while logging request In-Reply-To: <20131116010231.GA22204@spruce.wiehl.oeko.net> References: <20131115222443.GA9186@spruce.wiehl.oeko.net> <201311160407.19219.vbart@nginx.com> <20131116010231.GA22204@spruce.wiehl.oeko.net> Message-ID: On 16 Nov 2013 01:02, "Toni Mueller" wrote: > Unfortunately, there is nothing between the info (too little) and > debug log levels (too much). But it was fruitful, as I found the > problem. :) Why don't you let the list know how you fixed it, so the next person with the same problem can find the answer in the list archives? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Sat Nov 16 12:03:19 2013 From: nginx-forum at nginx.us (spdyg) Date: Sat, 16 Nov 2013 07:03:19 -0500 Subject: SPDY + proxy cache static content failures In-Reply-To: References: <32F6D688-5016-451D-A43A-72FC22C5DBBB@gwynne.id.au> <928247ac3646300bd8845120ca3aae9b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5233c5af2a8644288d22d953aeb63e11.NginxMailingListEnglish@forum.nginx.org> No unfortunately, but I have since filed a bug in trac: http://trac.nginx.org/nginx/ticket/428 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,244737#msg-244737 From nginx-forum at nginx.us Sat Nov 16 15:56:00 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 16 Nov 2013 10:56:00 -0500 Subject: [ANN] Windows nginx 1.5.7.1 Caterpillar Message-ID: <6aae2a6be3fd77f62e727f888059473a.NginxMailingListEnglish@forum.nginx.org> === Transforming nginx for Windows === 12:22 16-11-2013: nginx 1.5.7.1 Caterpillar The nginx 'Caterpillar' is a "you are no longer in Kansas Alice" *MONSTER* release bringing to Windows full scalability with multiple workers! This native build runs on Windows XP SP3 and higher, both 32 and 64 bit. Based on nginx 1.5.7 (9-11-2013 + spdy hang fix) with; + A solution for the multiple worker(shm_) issue, commercially sponsored solution by ITPP with a HUGE thanks to Vittorio Francesco Digilio from Italy for his relentless debugging, analysis and solution ! + Naxsi WAF (Web Application Firewall) v0.53 (https://github.com/nbs-system/naxsi) see https://github.com/nbs-system/naxsi/wiki how to use it and also see the conf/ folder + lua-nginx-module v0.9.2 (upgraded) + Streaming with nginx-rtmp-module, v1.0.6 (http://nginx-rtmp.blogspot.nl/) (upgraded) * Additional specifications are like 12:38 2-10-2013: nginx 1.5.6.4 Butterfly Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244739,244739#msg-244739 From ianevans at digitalhit.com Sun Nov 17 20:16:23 2013 From: ianevans at digitalhit.com (Ian M. Evans) Date: Sun, 17 Nov 2013 15:16:23 -0500 Subject: will request_uri get passed from cookieless server to 403 page on main server? Message-ID: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> Migrating to a new server and thought I'd take the time to set up a cookieless subdomain on it for static files. In my current setup, 403 errors are sent to a php file which grabs the $_SERVER["REQUEST_URI"], locates the page of the photo on the site, and redirects the person to that page so they see it in our context. So now I'm going to set up static.example.com. If I set the 403 error page to go to error_page 403 http://www.example.com/dhe403.shtml; in order to run the PHP on the 403 error, does the REQUEST_URI get passed between servers or do I have to do some rewrite magic that I currently don't do? Thanks. From ianevans at digitalhit.com Mon Nov 18 00:21:02 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Sun, 17 Nov 2013 19:21:02 -0500 Subject: will request_uri get passed from cookieless server to 403 page on main server? In-Reply-To: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> References: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> Message-ID: <52895D6E.1040608@digitalhit.com> On 17/11/2013 3:16 PM, Ian M. Evans wrote: > Migrating to a new server and thought I'd take the time to set up a > cookieless subdomain on it for static files. 
> > In my current setup, 403 errors are sent to a php file which grabs the > $_SERVER["REQUEST_URI"], locates the page of the photo on the site, and > redirects the person to that page so they see it in our context. > > So now I'm going to set up static.example.com. > > If I set the 403 error page to go to > error_page 403 http://www.example.com/dhe403.shtml; > > in order to run the PHP on the 403 error, does the REQUEST_URI get passed > between servers or do I have to do some rewrite magic that I currently > don't do? > > Thanks. > Just trying to think this through on my own...is it possible to pass the request_uri as a variable to the error page on the new server? e.g. error_page 403 http://www.example.com/dhe403.shtml?req=$request_uri; From owata at club.kyutech.ac.jp Mon Nov 18 02:49:14 2013 From: owata at club.kyutech.ac.jp (Yuta MASUMOTO) Date: Mon, 18 Nov 2013 11:49:14 +0900 Subject: I translated document from English to Japanese Message-ID: <5289802A.9000509@club.kyutech.ac.jp> Hi there, I intersted in nginx, but Japanese documents is out of date, so I will be translate documents from English to Japanese, but I cannnot find information to translation. Right now, I created git repository what documention page. https://github.com/owatan/nginx-doc-jp http://owatan.github.io/nginx-doc-jp/libxslt/ja/docs/ Could you merge translated this documents ? -- Yuta MASUMOTO Mail: owata at club.kyutech.ac.jp From nginx-forum at nginx.us Mon Nov 18 08:40:16 2013 From: nginx-forum at nginx.us (engenex) Date: Mon, 18 Nov 2013 03:40:16 -0500 Subject: Pass parameters though auth_request directive? Message-ID: <559d1090bd5a370fd1f545848b604bfd.NginxMailingListEnglish@forum.nginx.org> Hi, I am try to implement a simple auth system using nginx, nginx-auth-request-module and php-fpm. I want to do the following: User requests http://myserver.com/content/file1.zip?key=12345 location /content { auth_request /auth_http.php; # do some logic in auth_http.php then # depending on response from auth_http.php drop the connection or allow downloading { I can't get the parameters from the request URI to my auth_http.php using the auth_request directive. My simple auth_http.php to check for the key param: This shows error in php log: PHP Notice: Undefined index: key in /usr/local/nginx/html/auth_http.php I also checked the sizeof $_GET and it is 0. Can someone help me please? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244760,244760#msg-244760 From mdounin at mdounin.ru Mon Nov 18 12:30:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Nov 2013 16:30:29 +0400 Subject: Pass parameters though auth_request directive? In-Reply-To: <559d1090bd5a370fd1f545848b604bfd.NginxMailingListEnglish@forum.nginx.org> References: <559d1090bd5a370fd1f545848b604bfd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131118123028.GE41579@mdounin.ru> Hello! On Mon, Nov 18, 2013 at 03:40:16AM -0500, engenex wrote: > Hi, > > I am try to implement a simple auth system using nginx, > nginx-auth-request-module and php-fpm. > > I want to do the following: > User requests http://myserver.com/content/file1.zip?key=12345 > > location /content { > auth_request /auth_http.php; > # do some logic in auth_http.php then > # depending on response from auth_http.php drop the connection or allow > downloading > { > > I can't get the parameters from the request URI to my auth_http.php using > the auth_request directive. 
> My simple auth_http.php to check for the key param: > > $key = $_GET['key']; > error_log("key is $key \n"); ?> > > This shows error in php log: > PHP Notice: Undefined index: key in /usr/local/nginx/html/auth_http.php > > I also checked the sizeof $_GET and it is 0. Can someone help me please? This is expected - as auth request is to "/auth_http.php", and there is no query string in the subrequest. If you want to check original request arguments, you have to check them explicitly. Assuming default fastcgi_param configuration and php, original request URI (with original query string aka request arguments) should be available via $_SERVER['REQUEST_URI']. -- Maxim Dounin http://nginx.org/en/donation.html From vl at nginx.com Mon Nov 18 12:46:52 2013 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 18 Nov 2013 16:46:52 +0400 Subject: I translated document from English to Japanese In-Reply-To: <5289802A.9000509@club.kyutech.ac.jp> References: <5289802A.9000509@club.kyutech.ac.jp> Message-ID: <20131118124651.GA25263@vlpc.i.nginx.com> On Mon, Nov 18, 2013 at 11:49:14AM +0900, Yuta MASUMOTO wrote: > Hi there, > > I intersted in nginx, but Japanese documents is out of date, > so I will be translate documents from English to Japanese, > but I cannnot find information to translation. > > Right now, I created git repository what documention page. > https://github.com/owatan/nginx-doc-jp > http://owatan.github.io/nginx-doc-jp/libxslt/ja/docs/ > > Could you merge translated this documents ? > > -- > Yuta MASUMOTO > Mail: owata at club.kyutech.ac.jp > Hi Yuta! I've made a quick look into you repo and found that you have removed existing documentation from index (your index has only beginners guide and install documents, while current documentation has other articles) - any reasons to do so? As I understand, your contribution is to provide translation for beginners guide and install documents and some other minor changes. If you want your work to be merged, please provide a changeset against our nginx.org repository (http://hg.nginx.org/nginx.org/), following recommendations here: http://nginx.org/en/docs/contributing_changes.html so that we can see actual changes (edits and new articles separate) with commit logs that makes sense. Thank you! From nginx-forum at nginx.us Mon Nov 18 15:34:42 2013 From: nginx-forum at nginx.us (engenex) Date: Mon, 18 Nov 2013 10:34:42 -0500 Subject: Pass parameters though auth_request directive? In-Reply-To: <20131118123028.GE41579@mdounin.ru> References: <20131118123028.GE41579@mdounin.ru> Message-ID: <80b54bb7a1863e1b63e6ece834a741c1.NginxMailingListEnglish@forum.nginx.org> Hi Maxim! Your solution worked. Thanks for the fast response. I was able to access the original params through the $_SERVER['REQUEST_URI'] php array as you said. I didn't even have to change my conf. God bless you. I spent the better part of the last 24 hours trying to figure this one out. I should have paid closer attention to my nginx error.log... 
the answer was right there in front of me ;) [debug] 15544#0: *15 fastcgi param: "REQUEST_URI: /content/file3.zip?psk=abracadabra" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244760,244778#msg-244778 From nginx-forum at nginx.us Mon Nov 18 19:49:27 2013 From: nginx-forum at nginx.us (Nam) Date: Mon, 18 Nov 2013 14:49:27 -0500 Subject: limit_req and limit_conn in rewrite modules if statement Message-ID: <3f316b9bfe4988440a6ec68083007c7d.NginxMailingListEnglish@forum.nginx.org> Hello, I would like to see if it's possible to get limit_conn and limit_req working with the rewrite modules if statement. I have seen some discussion about this in the mailing list already saying to use stuff like throwing a 410 code and having that 410 code handled by a @named location that handles requests that should not be limited, such as ... location / { error_page 410 = @nolimit; if ($http_user_agent ~ Googlebot) { return 410; } limit_req zone=one burst=4; ... } location @nolimit { ... } I also know about how if statements are considered evil and should be avoided where possible, but I am working with dynamically generating config files which support multiple upstreams, with different upstream options per location directive with various features involved. I would like to be able to set a variable that I can than use in an if statement to determine if limit_con or limit_req should be used. For example... set $Whitelisted "No"; if ($http_user_agent ~ (googlebot|bingbot)) { set $Whitelisted "Yes"; } if ($Whitelisted ~* "No") { limit_conn conlimit-one 5; limit_req zone=limit-half burst=9; } Is there any possibility of this functionality being unlocked in the nginx code? I really need this functionality, and I hope it's not a big deal to do. Is there are particular reason why this module does not work with the rewrite if statement? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244781,244781#msg-244781 From nginx-forum at nginx.us Mon Nov 18 20:55:08 2013 From: nginx-forum at nginx.us (marianopeck) Date: Mon, 18 Nov 2013 15:55:08 -0500 Subject: rewrite and proxy_pass at the same time in nginx Message-ID: HI guys. I am trying to make something to work in nginx but I have no luck. I posted in StackOverflow, and no answer. I wish any of you can help me. I paste the SO question here as well: I am using nginx and I need a proxy to redirect some service. From my application, I should be able to do a POST, for example, to this URL: http://localhost:8776/specialService/aserverthatlistens.com/serviceToCall. And I need to rewrite it like this: https://aserverthatlistens.com/serviceToCall (notice that this URL is HTTPS, not HTTP). I know I can use rewrite but I don't know how to use proxy_pass also, because the url of the proxy_pass should be what I rewrote... Notice that I cannot know in advance what is the final url (aserverthatlistens.com in this example), so I always need to get it form the URL. I could send it as parameter if that help instead of being part of the URL. 
So far my server configuration looks like this: server { listen 8776; server_name localhost; access_log /var/log/nginx/tunnel.log; error_log /var/log/nginx/error.log info; location ~ ^/someService(/.*)$ { proxy_pass https://$1; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244782,244782#msg-244782 From nginx-forum at nginx.us Mon Nov 18 20:57:19 2013 From: nginx-forum at nginx.us (Nam) Date: Mon, 18 Nov 2013 15:57:19 -0500 Subject: limit_req and limit_conn in rewrite modules if statement In-Reply-To: <3f316b9bfe4988440a6ec68083007c7d.NginxMailingListEnglish@forum.nginx.org> References: <3f316b9bfe4988440a6ec68083007c7d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8b1de7dbd4a382d36d9d0d5ec74f03b8.NginxMailingListEnglish@forum.nginx.org> To elaborate a bit more, in a single location I may end up with something like this... set $Whitelisted "No"; if ($GeoList1 = allow) { set $Whitelisted "Yes"; } if ($GeoList5 = allow) { set $Whitelisted "Yes"; } if ($http_user_agent ~ (googlebot|bingbot)) { set $Whitelisted "Yes"; } if ($Whitelisted ~* "No") { limit_conn conlimit-one 5; limit_req zone=limit-half burst=9; } So that I can allow both IPs and user agents to avoid getting limited. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244781,244783#msg-244783 From nginx-forum at nginx.us Mon Nov 18 21:03:47 2013 From: nginx-forum at nginx.us (jimhowell) Date: Mon, 18 Nov 2013 16:03:47 -0500 Subject: nginx with cavium SSL In-Reply-To: <1354577945.77655.YahooMailNeo@web39305.mail.mud.yahoo.com> References: <1354577945.77655.YahooMailNeo@web39305.mail.mud.yahoo.com> Message-ID: Greetings Jacob, We are looking at a similar situation -- did you learn anything to update this post? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233535,244784#msg-244784 From contact at jpluscplusm.com Mon Nov 18 21:05:36 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 18 Nov 2013 21:05:36 +0000 Subject: limit_req and limit_conn in rewrite modules if statement In-Reply-To: <8b1de7dbd4a382d36d9d0d5ec74f03b8.NginxMailingListEnglish@forum.nginx.org> References: <3f316b9bfe4988440a6ec68083007c7d.NginxMailingListEnglish@forum.nginx.org> <8b1de7dbd4a382d36d9d0d5ec74f03b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 18 November 2013 20:57, Nam wrote: > To elaborate a bit more, in a single location I may end up with something > like this... > > set $Whitelisted "No"; > if ($GeoList1 = allow) { > set $Whitelisted "Yes"; > } > if ($GeoList5 = allow) { > set $Whitelisted "Yes"; > } > if ($http_user_agent ~ (googlebot|bingbot)) { > set $Whitelisted "Yes"; > } > if ($Whitelisted ~* "No") { > limit_conn conlimit-one 5; > limit_req zone=limit-half burst=9; > } Have you looked at using map{}s as the limit_* settings' arguments instead? They're much nicer than sequential if()s, which may not even achieve what you want to. J From francis at daoine.org Mon Nov 18 22:55:26 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Nov 2013 22:55:26 +0000 Subject: rewrite and proxy_pass at the same time in nginx In-Reply-To: References: Message-ID: <20131118225526.GA31289@craic.sysops.org> On Mon, Nov 18, 2013 at 03:55:08PM -0500, marianopeck wrote: Hi there, > server > { > location ~ ^/someService(/.*)$ { > proxy_pass https://$1; > } > } What does the error log say? What does the error log say when you change proxy_pass to have 2 slashes and not 3? 
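For reference, a minimal sketch of a location that takes the backend host from the request path and proxies to it over HTTPS (the resolver address and the capture layout are illustrative assumptions, not taken from this thread):

    location ~ ^/someService/([^/]+)(/.*)$ {
        resolver 8.8.8.8;              # a variable hostname in proxy_pass is resolved at run time
        proxy_pass https://$1$2;       # $1 = backend host (no leading slash), $2 = remaining path
    }

The resolver directive matters here because, with variables in proxy_pass, nginx cannot resolve the upstream name at configuration time.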
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Nov 18 23:23:46 2013 From: nginx-forum at nginx.us (Nam) Date: Mon, 18 Nov 2013 18:23:46 -0500 Subject: limit_req and limit_conn in rewrite modules if statement In-Reply-To: References: Message-ID: Can you give an example of how I would accomplish the desired if statement I posted? I do not see how i could get map to do that myself. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244781,244789#msg-244789 From francis at daoine.org Mon Nov 18 23:26:59 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Nov 2013 23:26:59 +0000 Subject: will request_uri get passed from cookieless server to 403 page on main server? In-Reply-To: <52895D6E.1040608@digitalhit.com> References: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> <52895D6E.1040608@digitalhit.com> Message-ID: <20131118232659.GB31289@craic.sysops.org> On Sun, Nov 17, 2013 at 07:21:02PM -0500, Ian Evans wrote: > On 17/11/2013 3:16 PM, Ian M. Evans wrote: Hi there, > >In my current setup, 403 errors are sent to a php file which grabs the > >$_SERVER["REQUEST_URI"], locates the page of the photo on the site, and > >redirects the person to that page so they see it in our context. So when a 403 would be generated, you intercept it, do some processing, and return an appropriate 302 instead. > >So now I'm going to set up static.example.com. > > > >If I set the 403 error page to go to > >error_page 403 http://www.example.com/dhe403.shtml; > > > >in order to run the PHP on the 403 error, does the REQUEST_URI get passed > >between servers or do I have to do some rewrite magic that I currently > >don't do? No, it doesn't. That directive says "instead of sending a 403, send a 302 for this other url". What the browser does with that, is up to the browser. > Just trying to think this through on my own...is it possible to pass the > request_uri as a variable to the error page on the new server? > error_page 403 http://www.example.com/dhe403.shtml?req=$request_uri; You can try. You may have to take care about url escaping, though. (You can possibly avoid that if you use something like .../dhe403.shtml/$request_uri and handle it appropriately on the far side.) It would seem easier to either proxy_pass or fastcgi_pass to the old server, in an error_page 403 location on the new server, if you're happy to have that level of not-just-static involved (and not happy to run the full php suite on the new server). That way, the browser would get one redirect to the intended destination instead of one to an intermediate location first. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Nov 19 00:35:19 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Nov 2013 00:35:19 +0000 Subject: limit_req and limit_conn in rewrite modules if statement In-Reply-To: <3f316b9bfe4988440a6ec68083007c7d.NginxMailingListEnglish@forum.nginx.org> References: <3f316b9bfe4988440a6ec68083007c7d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131119003519.GC31289@craic.sysops.org> On Mon, Nov 18, 2013 at 02:49:27PM -0500, Nam wrote: Hi there, > I would like to see if it's possible to get limit_conn and limit_req > working with the rewrite modules if statement. Not according to the current documentation. I suspect that patches will be welcome, if they fit the usual criteria. 
> I have seen some discussion > about this in the mailing list already saying to use stuff like throwing a > 410 code and having that 410 code handled by a @named location that handles > requests that should not be limited You don't seem to say why that setup doesn't work for your situation. > I also know about how if statements are considered evil Yes. > and should be avoided where possible No. "if" inside "location" can be a problem. Otherwise, it should be fine. > but I am working with dynamically generating config > files which support multiple upstreams, with different upstream options per > location directive with various features involved. You may want to rethink the design, based on the nginx features available. > I would like to be able > to set a variable that I can than use in an if statement to determine if > limit_con or limit_req should be used. As above, if you're doing that, you're not using current nginx. > For example... > > set $Whitelisted "No"; > if ($http_user_agent ~ (googlebot|bingbot)) { > set $Whitelisted "Yes"; > } > if ($Whitelisted ~* "No") { > limit_conn conlimit-one 5; > limit_req zone=limit-half burst=9; Somewhere, you have defined limit_conn_zone to match that. Can you redefine that to use a new variable that takes the value you want, if $Whitelisted is "No", and is empty if $Whitelisted is "Yes" -- in this case, if $http_user_agent matches those patterns? That should be doable without patching nginx. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Nov 19 02:44:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 06:44:31 +0400 Subject: limit_req and limit_conn in rewrite modules if statement In-Reply-To: <20131119003519.GC31289@craic.sysops.org> References: <3f316b9bfe4988440a6ec68083007c7d.NginxMailingListEnglish@forum.nginx.org> <20131119003519.GC31289@craic.sysops.org> Message-ID: <20131119024431.GS41579@mdounin.ru> Hello! On Tue, Nov 19, 2013 at 12:35:19AM +0000, Francis Daly wrote: > On Mon, Nov 18, 2013 at 02:49:27PM -0500, Nam wrote: > > Hi there, > > > I would like to see if it's possible to get limit_conn and limit_req > > working with the rewrite modules if statement. > > Not according to the current documentation. > > I suspect that patches will be welcome, if they fit the usual criteria. Not really. Pathes to enable directives in if-in-location context are not considered. Instead, directives should be able to work with variables. In case of limit_req / limit_conn, variables support is already here. Whitelisting can be easily done with something like this: map $whitelist $limit { default $binary_remote_addr; 1 ""; } limit_req_zone $limit zone=one:10m rate=1r/s; server { ... set $whitelist ""; if (...) { set $whitelist 1; } limit_req one; ... } [...] -- Maxim Dounin http://nginx.org/en/donation.html From nmilas at noa.gr Tue Nov 19 07:39:44 2013 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 19 Nov 2013 09:39:44 +0200 Subject: alert: ... pread() read only Message-ID: <528B15C0.7020406@noa.gr> Hello, We are running a Joomla website loading a google map in an iframe (under NGINX) in the main (home) web page. This is the page mostly visited as it contains almost real-time data to be viewed by clients. The system info: Linux myserver.example.com 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 17:19:38 UTC 2013 x86_64 Database Version 5.5.34 Database Collation utf8_general_ci PHP Version 5.3.3 Web Server nginx/1.4.2 WebServer to PHP Interface fpm-fcgi Joomla! Version Joomla! 
2.5.8 Stable [ Ember ] 8-November-2012 14:00 GMT Joomla! Platform Version Joomla Platform 11.4.0 Stable [ Brian Kernighan ] 03-Jan-2012 00:00 GMT The problem is that there is a repeating error of the form (I have changed real host name and web root path, as well as client IP address): 2013/11/17 12:39:14 [alert] 20709#0: *9059 pread() read only 38605 of 39107 from "/path/to/web/root/HTML/gmap/gmapv3_auto_el.html" while sending response to client, client: ::ffff:xxx.xxx.241.42, server: www.example.com, request: "GET /HTML/gmap/gmapv3_auto_el.html HTTP/1.1", host: "www.example.com" I found here: http://translate.google.com/translate?sl=ru&tl=en&js=n&prev=_t&hl=pt-PT&ie=UTF-8&u=http%3A%2F%2Fforum.nginx.org%2Fread.php%3F21%2C9856&act=url ...that this is probably related to the "open_file_cache" directive, and in fact I do use (based on advice found on the Internet): open_file_cache max=5000 inactive=30s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; Is there anything I could/should do to optimize system operation to avoid errors? Should I disable open_file_cache or not? (I do not entirely understand its implications.) How should we determine the directive benefits? Note: The aim is to be able to serve a few thousand requests per sec at peak, while normal traffic is < 20 reqs/sec. I appreciate your suggestions. Regards, Nick From nmilas at noa.gr Tue Nov 19 07:48:37 2013 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 19 Nov 2013 09:48:37 +0200 Subject: alert: ... pread() read only In-Reply-To: <528B15C0.7020406@noa.gr> References: <528B15C0.7020406@noa.gr> Message-ID: <528B17D5.7020800@noa.gr> On 19/11/2013 9:39 ??, Nikolaos Milas wrote: > The system info: > > Linux myserver.example.com 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug > 28 17:19:38 UTC 2013 x86_64 I forgot to mention that this is a VPS running CentOS 6.4 as a VM under KVM. The file system at the VM is ext4 over LVM but I don't know the actual storage topology of the VM host cluster. I thought I should add this info as it may be pertinent to the situation. Regards, Nick From ianevans at digitalhit.com Tue Nov 19 09:13:30 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Tue, 19 Nov 2013 04:13:30 -0500 Subject: will request_uri get passed from cookieless server to 403 page on main server? In-Reply-To: <20131118232659.GB31289@craic.sysops.org> References: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> <52895D6E.1040608@digitalhit.com> <20131118232659.GB31289@craic.sysops.org> Message-ID: <528B2BBA.2090900@digitalhit.com> On 18/11/2013 6:26 PM, Francis Daly wrote: Thanks for your response. > (You can possibly avoid that if you use something like > .../dhe403.shtml/$request_uri and handle it appropriately on the far > side.) Ok. > > It would seem easier to either proxy_pass or fastcgi_pass to the old > server, in an error_page 403 location on the new server, if you're happy > to have that level of not-just-static involved (and not happy to run > the full php suite on the new server). > > That way, the browser would get one redirect to the intended destination > instead of one to an intermediate location first. Just to clarify, the two "servers" are on same physical box with the same document root, just migrating to a better host. So php is installed. I'm just creating two server blocks so that the images will be served on a cookieless sub-domain for web performance reasons, i.e.: server { server_name www.example.com; ... } server { server_name static.example.com; ... 
} On the www block, I currently have this 403 block: error_page 403 = /dhe403.shtml; location = /dhe403.shtml { fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_cache off; } So I could add that to my static block as well? Also, I'd have this "rewrite anything not static" location in the static server: if ($request_uri !~* "\.(gif|js|jpg|jpeg|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mov)$") { rewrite ^(.*) http://www.example.com$1 permanent; break; } If I put the 403 location before it, will it still redirect any non-static files EXCEPT for the dhe403.shtml? Thanks! From francis at daoine.org Tue Nov 19 09:19:19 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Nov 2013 09:19:19 +0000 Subject: alert: ... pread() read only In-Reply-To: <528B15C0.7020406@noa.gr> References: <528B15C0.7020406@noa.gr> Message-ID: <20131119091919.GD31289@craic.sysops.org> On Tue, Nov 19, 2013 at 09:39:44AM +0200, Nikolaos Milas wrote: Hi there, > 2013/11/17 12:39:14 [alert] 20709#0: *9059 pread() read only 38605 of > 39107 from "/path/to/web/root/HTML/gmap/gmapv3_auto_el.html" > ...that this is probably related to the "open_file_cache" directive, and > in fact I do use (based on advice found on the Internet): > > open_file_cache max=5000 inactive=30s; > Is there anything I could/should do to optimize system operation to > avoid errors? Don't change files that nginx has in open_file_cache. > Should I disable open_file_cache or not? (I do not entirely understand > its implications.) open_file_cache says "I am happy to occasionally return stale data or errors, in exchange for the speed-up in normal use". The errors will happen when a file size is changed in-place -- so "mv a.html b.html && echo new > a.html" should continue to serve the old content of a.html; but just "echo new > a.html" will show something odd if the old length of a.html was not exactly 4. > How should we determine the directive benefits? Measure. Only you can do a test against your hardware. Get some clients (on remote machines) and make the requests and examine the responses. Note that you later say that you are using a virtual machine on a shared system. That suggests that the actual hardware available to nginx will vary over time, so any tests will not be valid for any other time. You also describe how the file system is set up. I can't tell whether that will necessarily break open_file_cache, even if the files themselves are not changing underneath nginx. If you know that the files are not changing but the pread() errors persist, then disable open_file_cache to see if it stops the errors, and then it my be worth investigating whether that filesystem set up is as stable as nginx expects. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Nov 19 09:21:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 13:21:12 +0400 Subject: alert: ... pread() read only In-Reply-To: <528B15C0.7020406@noa.gr> References: <528B15C0.7020406@noa.gr> Message-ID: <20131119092112.GU41579@mdounin.ru> Hello! On Tue, Nov 19, 2013 at 09:39:44AM +0200, Nikolaos Milas wrote: [...] 
> The problem is that there is a repeating error of the form (I have > changed real host name and web root path, as well as client IP > address): > > 2013/11/17 12:39:14 [alert] 20709#0: *9059 pread() read only 38605 > of 39107 from "/path/to/web/root/HTML/gmap/gmapv3_auto_el.html" > while sending response to client, client: ::ffff:xxx.xxx.241.42, > server: www.example.com, request: "GET > /HTML/gmap/gmapv3_auto_el.html HTTP/1.1", host: "www.example.com" > > I found here: > > http://translate.google.com/translate?sl=ru&tl=en&js=n&prev=_t&hl=pt-PT&ie=UTF-8&u=http%3A%2F%2Fforum.nginx.org%2Fread.php%3F21%2C9856&act=url > > ...that this is probably related to the "open_file_cache" directive, > and in fact I do use (based on advice found on the Internet): It's not "related" to the "open_file_cache" directive, but the "open_file_cache" directive makes the underlying problem more visible. The root cause of the messages in question is non-atomic file update. Somebody on your system edited the file in question in-place, instead of re-creating it with a temporary name and then using "mv" to atomically update the file. Non-atomic updates create a race: a file which is opened by nginx (and stat()'ed for nginx to know it's length) suddenly changes. This can happen in the middle of a response, and results in a corrupted response - first part of a response is from original file, and second is from updated one. If nginx is able to detect the problem due to file size mismatch - it logs the message in question. The only correct solution is to update files atomically, i.e., create a new file and then rename to a desired name. However, the "open_file_cache" directive makes the race window bigger by keeping files open for a long time. Switching it off is a good idea if you can't eliminate non-atomic updates for some reason. [...] > Should I disable open_file_cache or not? (I do not entirely > understand its implications.) > > How should we determine the directive benefits? > > Note: The aim is to be able to serve a few thousand requests per sec > at peak, while normal traffic is < 20 reqs/sec. I don't think that open_file_cache results in a measurable difference in your case. I would recommend disabling it unless you have good reasons to enable it, just to simplify maintenance. -- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Tue Nov 19 09:31:14 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Nov 2013 09:31:14 +0000 Subject: will request_uri get passed from cookieless server to 403 page on main server? In-Reply-To: <528B2BBA.2090900@digitalhit.com> References: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> <52895D6E.1040608@digitalhit.com> <20131118232659.GB31289@craic.sysops.org> <528B2BBA.2090900@digitalhit.com> Message-ID: <20131119093114.GE31289@craic.sysops.org> On Tue, Nov 19, 2013 at 04:13:30AM -0500, Ian Evans wrote: > On 18/11/2013 6:26 PM, Francis Daly wrote: Hi there, > Just to clarify, the two "servers" are on same physical box with the > same document root, just migrating to a better host. So php is > installed. > On the www block, I currently have this 403 block: > > error_page 403 = /dhe403.shtml; > location = /dhe403.shtml { > fastcgi_pass unix:/var/run/php-fpm.sock; > fastcgi_cache off; > } > > So I could add that to my static block as well? Yes. *If* that php is set to always return cookies, then you might want to run a separate php-fpm that does not return cookies for the static site. But that's a side issue. 
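As an aside, if a second php-fpm pool feels like overkill, one possible alternative (a sketch, not something from this thread) is to strip cookies at the nginx level on the static vhost:

    location = /dhe403.shtml {
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_cache off;
        fastcgi_hide_header Set-Cookie;   # keep the static host cookie-free even if PHP starts a session
    }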
> Also, I'd have this "rewrite anything not static" location in the static > server: > > if ($request_uri !~* > "\.(gif|js|jpg|jpeg|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mov)$") > { > rewrite ^(.*) http://www.example.com$1 permanent; > break; > } That's not a location. That's an if() block. > If I put the 403 location before it, will it still redirect any > non-static files EXCEPT for the dhe403.shtml? I believe that if() at server level gets tested before all locations -- but it shouldn't be too hard to check. Compare that output of "curl -i url" with what you want to see. If it doesn't work first time, perhaps putting the if() block inside a "location /" and adjusting accordingly would do what you want. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Nov 19 09:37:20 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Nov 2013 09:37:20 +0000 Subject: alert: ... pread() read only In-Reply-To: <20131119091919.GD31289@craic.sysops.org> References: <528B15C0.7020406@noa.gr> <20131119091919.GD31289@craic.sysops.org> Message-ID: <20131119093720.GF31289@craic.sysops.org> On Tue, Nov 19, 2013 at 09:19:19AM +0000, Francis Daly wrote: > On Tue, Nov 19, 2013 at 09:39:44AM +0200, Nikolaos Milas wrote: Hi there, Oh, and following Maxim's mail... > The errors will happen when a file size is changed in-place -- so "mv > a.html b.html && echo new > a.html" should continue to serve the old > content of a.html; but just "echo new > a.html" will show something odd > if the old length of a.html was not exactly 4. ...for later requests, that started after the file was changed. For a request ongoing while the file is changed, odd things can be shown even if the file size stays the same. f -- Francis Daly francis at daoine.org From ianevans at digitalhit.com Tue Nov 19 09:47:03 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Tue, 19 Nov 2013 04:47:03 -0500 Subject: will request_uri get passed from cookieless server to 403 page on main server? In-Reply-To: <20131119093114.GE31289@craic.sysops.org> References: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> <52895D6E.1040608@digitalhit.com> <20131118232659.GB31289@craic.sysops.org> <528B2BBA.2090900@digitalhit.com> <20131119093114.GE31289@craic.sysops.org> Message-ID: <528B3397.4050806@digitalhit.com> On 19/11/2013 4:31 AM, Francis Daly wrote: > *If* that php is set to always return cookies, then you might want to > run a separate php-fpm that does not return cookies for the static > site. But that's a side issue. Will look into that. Thanks, hadn't thought of it. Did a cursory glance at Google and can't see how just yet how to disable them. > > That's not a location. That's an if() block. I meant that...but it's 4am here and I haven't slept yet. :-) From owata at club.kyutech.ac.jp Tue Nov 19 11:07:39 2013 From: owata at club.kyutech.ac.jp (Yuta MASUMOTO) Date: Tue, 19 Nov 2013 20:07:39 +0900 Subject: I translated "install" and "beginners guide" articles into japanese Message-ID: <528B467B.6090004@club.kyutech.ac.jp> Hi there, I translated "install" and "beginners guide" articles into japanese. -- Yuta MASUMOTO Mail: owata at club.kyutech.ac.jp -------------- next part -------------- # HG changeset patch # User Yuta MASUMOTO # Date 1384857571 -32400 # Tue Nov 19 19:39:31 2013 +0900 # Node ID 6a6c13a1cbfbd414fa54361748fc5dac289c432a # Parent 649420cb8021fbb19d75c2abfbec7572b5dc1839 Translated "install" and "beginners guide" articles into japanese. 
diff -r 649420cb8021 -r 6a6c13a1cbfb xml/ja/GNUmakefile --- a/xml/ja/GNUmakefile Mon Nov 11 11:56:10 2013 +0400 +++ b/xml/ja/GNUmakefile Tue Nov 19 19:39:31 2013 +0900 @@ -1,4 +1,6 @@ DOCS = \ + install \ + beginners_guide \ faq \ http/request_processing \ http/server_names \ diff -r 649420cb8021 -r 6a6c13a1cbfb xml/ja/docs/beginners_guide.xml --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/xml/ja/docs/beginners_guide.xml Tue Nov 19 19:39:31 2013 +0900 @@ -0,0 +1,111 @@ + + +
+ +
+ +?????? nginx ???????????????????????????????? +??????????? nginx ???????????????????????? +??????? ???????? +???? nginx ???????????????????????? +????????????? nginx ???????????????????? +?? nginx ??????????????????FastCGI ??????????????????????? + + + +nginx ?? 1??????????????????????? +????????????????? (????????????????????????????)? +??????????????????????????? +?????????????????????????? +nginx ???????????????????????????????????????????????????? +?????????????????????????????? CPU ???????????????????????? +(??? ????????? ????????) + + + +nginx ??????????????????????????? +??????????????? nginx.conf ??????? +/usr/local/nginx/conf ? /etc/nginx? +?? /usr/local/etc/nginx ???????? + +
+ + +
+ +nginx ??????????????????????????? +nginx ??????? -s ??????????? nginx ????????????? ???????????? + +nginx -s signal + +signal ????????????????? + + + + +stop—???? + + + +quit—????? + + + +reload—??? + + + +reopen—???????????? + + + +???????????????????????????????????? nginx ??????? + + + nginx -s quit + + +??????? nginx ????????????????????????? + + + +????????????? nginx ???????????????????????? +????????????????????????? + + +nginx -s reload + + + + +?????? reload ???????????????????????????????????? ???????????????? +??????????????????????????????????????????????????????? +??????????????????????????????????????????????????? +???????????????????????????????????????????????? +??????????????????????????????????????????? + + + +nginx ????????? Unix ? kill ??????????????????? +??????????????? ID ?????????????????? +nginx ??????????? ID ?????????? nginx.pid ??????? +/usr/local/bginx/logs ? /var/run ???????????? +???????????? 1628 ?? QUIT???????????????????? + + + kill -s QUIT 1628 + + +nginx ???????????????? ps ??????????????????????????? + + +ps -ax | grep nginx + + +??????????????????????????? nginx ??????? ???????? + +
+ +
diff -r 649420cb8021 -r 6a6c13a1cbfb xml/ja/docs/index.xml --- a/xml/ja/docs/index.xml Mon Nov 11 11:56:10 2013 +0400 +++ b/xml/ja/docs/index.xml Tue Nov 19 19:39:31 2013 +0900 @@ -5,13 +5,20 @@ lang="ja" toc="no"> -
+ + + + + + + + @@ -32,7 +39,6 @@
-
diff -r 649420cb8021 -r 6a6c13a1cbfb xml/ja/docs/install.xml --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/xml/ja/docs/install.xml Tue Nov 19 19:39:31 2013 +0900 @@ -0,0 +1,39 @@ + + +
+ +
+ +nginx ???? OS ????????????????? + +
+ +
+ +Linux ???????????? +nginx.org ????????? ??????????????? + +
+ +
+ +FreeBSD ? nginx ???????????? +packages ? +ports ? 2??????????? +ports ????????????????????????????????????? +port ???????????????????????????????????????????? + +
+ +
+ +???????? packages ? ports ???????????????????????????? +???????????????????????????????????????????? +???????? nginx ???????????????????? + +
+ +
From vl at nginx.com Tue Nov 19 11:46:29 2013 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 19 Nov 2013 15:46:29 +0400 Subject: I translated "install" and "beginners guide" articles into japanese In-Reply-To: <528B467B.6090004@club.kyutech.ac.jp> References: <528B467B.6090004@club.kyutech.ac.jp> Message-ID: <20131119114628.GA11002@vlpc.i.nginx.com> On Tue, Nov 19, 2013 at 08:07:39PM +0900, Yuta MASUMOTO wrote: > Hi there, > I translated "install" and "beginners guide" articles into japanese. > > -- > Yuta MASUMOTO > Mail: owata at club.kyutech.ac.jp > # HG changeset patch > # User Yuta MASUMOTO > # Date 1384857571 -32400 > # Tue Nov 19 19:39:31 2013 +0900 > # Node ID 6a6c13a1cbfbd414fa54361748fc5dac289c432a > # Parent 649420cb8021fbb19d75c2abfbec7572b5dc1839 > Translated "install" and "beginners guide" articles into japanese. Hi, Yuto! >From what I see, the "beginners guide" translation is incomplete and lacks a lot of things comparing to the original english article. Do you have plans to complete the translation or something was lost in the patch? The "install" article looks ok (except minor formatting issues - just make diff against english version and see that empty lines gone). I suggest that you provide a separate changeset for the "install" article and another one for the "beginners guide" when it's complete. Also note that there is a "rev" attribute in the "article" element. One of its goals is to help with tracking of translation status. If original and translated document revisions differs, one can note that translation needs to be updated, as original document was changed since then. So, please indicate the revision of english article you translated in the japanese version. From nginx-list at puzzled.xs4all.nl Tue Nov 19 13:26:32 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Tue, 19 Nov 2013 14:26:32 +0100 Subject: alert: ... pread() read only In-Reply-To: <528B15C0.7020406@noa.gr> References: <528B15C0.7020406@noa.gr> Message-ID: <528B6708.7090207@puzzled.xs4all.nl> On 11/19/2013 08:39 AM, Nikolaos Milas wrote: > Hello, > > We are running a Joomla website loading a google map in an iframe (under > NGINX) in the main (home) web page. This is the page mostly visited as > it contains almost real-time data to be viewed by clients. > > The system info: > > Linux myserver.example.com 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 > 17:19:38 UTC 2013 x86_64 > Database Version 5.5.34 > Database Collation utf8_general_ci > PHP Version 5.3.3 > Web Server nginx/1.4.2 > WebServer to PHP Interface fpm-fcgi > Joomla! Version Joomla! 2.5.8 Stable [ Ember ] 8-November-2012 14:00 > GMT Looking at your Joomla version, if this is an Internet facing server, it will be 0wned once they have probed your box. You might want to update your Joomla install asap to the latest version (currently 2.5.16) to plug all the root exploits. Regards, Patrick From nmilas at noa.gr Tue Nov 19 14:37:16 2013 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 19 Nov 2013 16:37:16 +0200 Subject: alert: ... pread() read only In-Reply-To: <20131119092112.GU41579@mdounin.ru> References: <528B15C0.7020406@noa.gr> <20131119092112.GU41579@mdounin.ru> Message-ID: <528B779C.5050105@noa.gr> On 19/11/2013 11:21 ??, Maxim Dounin wrote: > I don't think that open_file_cache results in a measurable > difference in your case. I would recommend disabling it unless > you have good reasons to enable it, just to simplify maintenance. Thank you all for your suggestions. 
It seems that disabling open_file_cache, stops these errors. Thanks again, Nick From nmilas at noa.gr Tue Nov 19 14:42:01 2013 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 19 Nov 2013 16:42:01 +0200 Subject: Suppress "no index file" errors when autoindex on Message-ID: <528B78B9.9030801@noa.gr> Hello, Can we suppress errors of the form: 2013/11/19 10:14:58 [error] 21848#0: *49602 "/path/to/web/root/DATA/2013/320/index.php" is not found (2: No such file or directory), client: ::ffff:xxx.xxx.154.69, server: www.example.com, request: "GET /location/path/2013/320/ HTTP/1.1", host: "www.example.com" ...when Autoindex is on? In these cases, we want clients to browse the directory for data and there is simply no index page available. Please advise. Thanks, Nick From mdounin at mdounin.ru Tue Nov 19 14:49:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 18:49:15 +0400 Subject: Suppress "no index file" errors when autoindex on In-Reply-To: <528B78B9.9030801@noa.gr> References: <528B78B9.9030801@noa.gr> Message-ID: <20131119144915.GB41579@mdounin.ru> Hello! On Tue, Nov 19, 2013 at 04:42:01PM +0200, Nikolaos Milas wrote: > Hello, > > Can we suppress errors of the form: > > 2013/11/19 10:14:58 [error] 21848#0: *49602 > "/path/to/web/root/DATA/2013/320/index.php" is not found (2: No such > file or directory), client: ::ffff:xxx.xxx.154.69, server: > www.example.com, request: "GET /location/path/2013/320/ HTTP/1.1", > host: "www.example.com" > > ...when Autoindex is on? In these cases, we want clients to browse > the directory for data and there is simply no index page available. Normal use of index files (as per the "index" directive, see http://nginx.org/r/index) doesn't cause such error messages to appear. Instead, index module is silently skipped, and autoindex starts working. Most likely you are using some rewrite instead, or something like this. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Nov 19 15:00:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 19:00:31 +0400 Subject: nginx-1.5.7 Message-ID: <20131119150031.GD41579@mdounin.ru> Changes with nginx 1.5.7 19 Nov 2013 *) Security: a character following an unescaped space in a request line was handled incorrectly (CVE-2013-4547); the bug had appeared in 0.8.41. Thanks to Ivan Fratric of the Google Security Team. *) Change: a logging level of auth_basic errors about no user/password provided has been lowered from "error" to "info". *) Feature: the "proxy_cache_revalidate", "fastcgi_cache_revalidate", "scgi_cache_revalidate", and "uwsgi_cache_revalidate" directives. *) Feature: the "ssl_session_ticket_key" directive. Thanks to Piotr Sikora. *) Bugfix: the directive "add_header Cache-Control ''" added a "Cache-Control" response header line with an empty value. *) Bugfix: the "satisfy any" directive might return 403 error instead of 401 if auth_request and auth_basic directives were used. Thanks to Jan Marc Hoffmann. *) Bugfix: the "accept_filter" and "deferred" parameters of the "listen" directive were ignored for listen sockets created during binary upgrade. Thanks to Piotr Sikora. *) Bugfix: some data received from a backend with unbufferred proxy might not be sent to a client immediately if "gzip" or "gunzip" directives were used. Thanks to Yichun Zhang. *) Bugfix: in error handling in ngx_http_gunzip_filter_module. *) Bugfix: responses might hang if the ngx_http_spdy_module was used with the "auth_request" directive. 
*) Bugfix: memory leak in nginx/Windows. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Nov 19 15:01:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 19:01:01 +0400 Subject: nginx-1.4.4 Message-ID: <20131119150101.GH41579@mdounin.ru> Changes with nginx 1.4.4 19 Nov 2013 *) Security: a character following an unescaped space in a request line was handled incorrectly (CVE-2013-4547); the bug had appeared in 0.8.41. Thanks to Ivan Fratric of the Google Security Team. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Nov 19 15:02:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 19:02:21 +0400 Subject: nginx security advisory (CVE-2013-4547) Message-ID: <20131119150221.GL41579@mdounin.ru> Hello! Ivan Fratric of the Google Security Team discovered a bug in nginx, which might allow an attacker to bypass security restrictions in certain configurations by using a specially crafted request, or might have potential other impact (CVE-2013-4547). Some checks on a request URI were not executed on a character following an unescaped space character (which is invalid per HTTP protocol, but allowed for compatibility reasons since nginx 0.8.41). One of the results is that it was possible to bypass security restrictions like location /protected/ { deny all; } by requesting a file as "/foo /../protected/file" (in case of static files, only if there is a "foo " directory with a trailing space), or to trigger processing of a file with a trailing space in a configuration like location ~ \.php$ { fastcgi_pass ... } by requesting a file as "/file \0.php". The problem affects nginx 0.8.41 - 1.5.6. The problem is fixed in nginx 1.5.7, 1.4.4. Patch for the problem can be found here: http://nginx.org/download/patch.2013.space.txt As a temporary workaround the following configuration can be used in each server{} block: if ($request_uri ~ " ") { return 444; } -- Maxim Dounin http://nginx.org/en/donation.html From nmilas at noa.gr Tue Nov 19 15:15:47 2013 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 19 Nov 2013 17:15:47 +0200 Subject: Suppress "no index file" errors when autoindex on In-Reply-To: <20131119144915.GB41579@mdounin.ru> References: <528B78B9.9030801@noa.gr> <20131119144915.GB41579@mdounin.ru> Message-ID: <528B80A3.8050000@noa.gr> On 19/11/2013 4:49 ??, Maxim Dounin wrote: > Most likely you are using some rewrite instead, or something like > this. Thank you, In the end, it seems someone was actually trying to enter a wrong URL. Thanks for the clarifications and sorry for the noise... Nick From ben at indietorrent.org Tue Nov 19 15:36:53 2013 From: ben at indietorrent.org (Ben Johnson) Date: Tue, 19 Nov 2013 10:36:53 -0500 Subject: Clean-URL rewrite rule with nested "location" and alias directive Message-ID: <528B8595.6050603@indietorrent.org> Hi! I am attempting to serve a staging website from a directory that is outside of the web-server's document root, while at the same time making the site accessible at a URL that "appears" to be a subdirectory of the top-level domain. (I have to do this so that the SSL certificate for the TLD can be used for the staging site.) This works as expected, with one major exception: "clean-URL" rewriting does not work. In other words, the homepage (/stage/) loads correctly, but all sub-pages return a 404. As soon as I attempt to add "rewrite" directives for clean-URLs, the server either returns a 500 (redirect cycle) or "Primary script unknown". 
I have tried adding "rewrite_log on;" in the "server" block, just before this configuration snippet (and reloading the nginx config), but the directive seems to have no effect (no rewrite information is logged to error.log). So, I am unable to determine what is happening here. Here's the configuration snippet: location ^~ /stage/ { alias /var/www/example.com/private/stage/web/; if ($scheme = http) { return 301 https://$server_name$request_uri; } index index.php index.html index.htm; location ~ ^/stage/(.+\.php)$ { alias /var/www/example.com/private/stage/web/$1; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param HTTPS on; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; # Trying to implement clean-URLs with this line, # but it causes "primary script unknown" when using # the "break" keyword, and redirect cycle when using # the "last" keyword. rewrite '^(.*)$' /stage/index.php?q=$1 break; } } Any assistance with making this to work is hugely appreciated. Thanks in advance! -Ben From francis at daoine.org Tue Nov 19 17:38:09 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Nov 2013 17:38:09 +0000 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <528B8595.6050603@indietorrent.org> References: <528B8595.6050603@indietorrent.org> Message-ID: <20131119173809.GG31289@craic.sysops.org> On Tue, Nov 19, 2013 at 10:36:53AM -0500, Ben Johnson wrote: Hi there, > This works as expected, with one major exception: "clean-URL" rewriting > does not work. In other words, the homepage (/stage/) loads correctly, > but all sub-pages return a 404. As soon as I attempt to add "rewrite" > directives for clean-URLs, the server either returns a 500 (redirect > cycle) or "Primary script unknown". Can you give an example of a "clean-URL" that matches the location that you use, and that does not do what you want it to do? Note that your rewrite only happens for urls that end ".php". f -- Francis Daly francis at daoine.org From ben at indietorrent.org Tue Nov 19 18:45:15 2013 From: ben at indietorrent.org (Ben Johnson) Date: Tue, 19 Nov 2013 13:45:15 -0500 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <20131119173809.GG31289@craic.sysops.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> Message-ID: <528BB1BB.1010208@indietorrent.org> On 11/19/2013 12:38 PM, Francis Daly wrote: > On Tue, Nov 19, 2013 at 10:36:53AM -0500, Ben Johnson wrote: > > Hi there, > >> This works as expected, with one major exception: "clean-URL" rewriting >> does not work. In other words, the homepage (/stage/) loads correctly, >> but all sub-pages return a 404. As soon as I attempt to add "rewrite" >> directives for clean-URLs, the server either returns a 500 (redirect >> cycle) or "Primary script unknown". > > Can you give an example of a "clean-URL" that matches the location that > you use, and that does not do what you want it to do? > > Note that your rewrite only happens for urls that end ".php". > > f > Thanks for your help, Francis! Certainly: an example URL is /stage/my-account/ . Ultimately, I would like for this URL to be rewritten to /stage/index.php?q=/stage/my-account/ (the file "index.php" exists on the filesystem at /var/www/example.com/private/stage/web/index.php). 
Regarding the rewrite happening only for URLs that end in .php, that had occurred to me, but if I try to move the rewrite "up one level", into the block "location ^~ /stage/ {", nginx complains upon requesting the URL /stage/ that '"alias" cannot be used in location "/stage/" where URI was rewritten'. Fundamentally, is the problem here that I need to be modifying the "alias" directive dynamically (with regular expressions and capture groups), instead of trying to achieve the same with the "rewrite" directive? Thanks again for your time, -Ben From francis at daoine.org Tue Nov 19 20:39:16 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Nov 2013 20:39:16 +0000 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <528BB1BB.1010208@indietorrent.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> Message-ID: <20131119203916.GH31289@craic.sysops.org> On Tue, Nov 19, 2013 at 01:45:15PM -0500, Ben Johnson wrote: > On 11/19/2013 12:38 PM, Francis Daly wrote: > > On Tue, Nov 19, 2013 at 10:36:53AM -0500, Ben Johnson wrote: Hi there, > >> This works as expected, with one major exception: "clean-URL" rewriting > >> does not work. In other words, the homepage (/stage/) loads correctly, > >> but all sub-pages return a 404. Which nginx version do you use, where you get that output with the config snippet that you provided? I've tried 1.4.3 and 1.5.7, and I don't get /stage/ to load correctly. I suspect I'm just doing something silly. > Certainly: an example URL is /stage/my-account/ . Ultimately, I would > like for this URL to be rewritten to > /stage/index.php?q=/stage/my-account/ (the file "index.php" exists on > the filesystem at /var/www/example.com/private/stage/web/index.php). I think that the problem may be due to the interaction between "alias" and "$document_root" being not-always-obvious. And $document_root is used in try_files, which is probably what you want for "clean URLs" -- serve the file if present, else hand it to a controller. Does the following do what you want for all cases you care about? === location ^~ /stage/ { alias /var/www/example.com/private/stage/web/; index index.php index.html index.htm; try_files $uri $uri/ /stage//stage/index.php?q=$uri; location ~ ^/stage/(.+\.php)$ { alias /var/www/example.com/private/stage/web/$1; try_files "" / /stage/index.php?q=$uri; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param HTTPS on; fastcgi_param SCRIPT_FILENAME $request_filename; include /etc/nginx/fastcgi_params; } } === You may want to use something other than $uri in the last argument to try_files, depending on how you want /stage/my-account/?key=value to be processed. And the curious-looking try_files lines are deliberate. 
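As a concrete (untested) sketch of that: if you also want the original query string preserved -- so that /stage/my-account/?key=value reaches PHP with both q= and key= -- one common pattern is to append $args to the fallback, bearing in mind that $uri is copied into the query string unescaped:

===
# original outer fallback:
#   try_files $uri $uri/ /stage//stage/index.php?q=$uri;
# variant that also carries the incoming query string:
try_files $uri $uri/ /stage//stage/index.php?q=$uri&$args;
===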
f -- Francis Daly francis at daoine.org From ben at indietorrent.org Tue Nov 19 22:48:59 2013 From: ben at indietorrent.org (Ben Johnson) Date: Tue, 19 Nov 2013 17:48:59 -0500 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <20131119203916.GH31289@craic.sysops.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> Message-ID: <528BEADB.2070608@indietorrent.org> On 11/19/2013 3:39 PM, Francis Daly wrote: > On Tue, Nov 19, 2013 at 01:45:15PM -0500, Ben Johnson wrote: >> On 11/19/2013 12:38 PM, Francis Daly wrote: >>> On Tue, Nov 19, 2013 at 10:36:53AM -0500, Ben Johnson wrote: > > Hi there, > >>>> This works as expected, with one major exception: "clean-URL" rewriting >>>> does not work. In other words, the homepage (/stage/) loads correctly, >>>> but all sub-pages return a 404. > > Which nginx version do you use, where you get that output with the config > snippet that you provided? > > I've tried 1.4.3 and 1.5.7, and I don't get /stage/ to load correctly. > > I suspect I'm just doing something silly. > Given your level of expertise, I doubt that you're doing anything wrong. :) I'm using nginx-1.1.19, because that's what's bundled with Ubuntu 12.04 LTS. It makes sense that without the rewrite rules in-place, only "index.php" (via /stage/) returns a 200 response and not a 404. >> Certainly: an example URL is /stage/my-account/ . Ultimately, I would >> like for this URL to be rewritten to >> /stage/index.php?q=/stage/my-account/ (the file "index.php" exists on >> the filesystem at /var/www/example.com/private/stage/web/index.php). > > I think that the problem may be due to the interaction between "alias" > and "$document_root" being not-always-obvious. > > And $document_root is used in try_files, which is probably what you > want for "clean URLs" -- serve the file if present, else hand it to > a controller. > I think that you're exactly right. I had tried try_files first, but was unable to get it to work given that this site a) must be accessed via a "subdirectory" relative to the domain-root URL, and b) is comprised of files that live in a "private" directory that is outside of the server-root on the filesystem. > Does the following do what you want for all cases you care about? > > === > location ^~ /stage/ { > alias /var/www/example.com/private/stage/web/; > index index.php index.html index.htm; > try_files $uri $uri/ /stage//stage/index.php?q=$uri; > > location ~ ^/stage/(.+\.php)$ { > alias /var/www/example.com/private/stage/web/$1; > try_files "" / /stage/index.php?q=$uri; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_param HTTPS on; > fastcgi_param SCRIPT_FILENAME $request_filename; > include /etc/nginx/fastcgi_params; > } > } > === > Wow, that actually works! While I don't fully understand the try_files magic, I didn't think try_files would ever serve the purpose because I had read at http://wiki.nginx.org/HttpCoreModule#alias : "Note that there is a longstanding bug that alias and try_files don't work together" (with link to http://trac.nginx.org/nginx/ticket/97 ). While I do realize that the above-cited Wiki is obsolete and the current reference is at http://nginx.org/en/docs/http/ngx_http_core_module.html#alias , the referenced bug does not appear to have been closed. Is the implication here that "alias" does indeed work with "try_files" (even in my stale nginx-1.1.19 version)? At least in this particular use-case? 
There is one last trick to pull-off, which is to add very similar clean-URL functionality for two other files (in addition to index.php), but I am hoping that I will be able to adapt your working sample myself. > You may want to use something other than $uri in the last argument to > try_files, depending on how you want /stage/my-account/?key=value to > be processed. > Understood; that makes sense. > And the curious-looking try_files lines are deliberate. > > f > You are a true guru, Francis. I can't thank you enough; I've been struggling with this for days. This is the second time that you've come-up huge for me. Respectfully, -Ben From lists at ruby-forum.com Wed Nov 20 06:17:30 2013 From: lists at ruby-forum.com (Keith W.) Date: Wed, 20 Nov 2013 07:17:30 +0100 Subject: How to deal with Windows 7 password recovery In-Reply-To: References: Message-ID: There are two tools to deal with Windows 7 password recovery: 1. Reset disk 2. Anmosoft Windows Password Reset: http://www.resetwindowspassword.com -- Posted via http://www.ruby-forum.com/. From john at disqus.com Wed Nov 20 08:50:00 2013 From: john at disqus.com (John Watson) Date: Wed, 20 Nov 2013 00:50:00 -0800 Subject: [PATCH] Mark HTTP chunk terminator to be flushed Message-ID: This really only affects SSL connections, where if the buffer with chunk terminator is copied into the SSL buffer by itself. Since the new SSL buffer has not reached 16k, it's not flushed thus causing a delay in the client seeing it. -------------- next part -------------- A non-text attachment was scrubbed... Name: flush_chunk_terminator.patch Type: application/octet-stream Size: 555 bytes Desc: not available URL: From francis at daoine.org Wed Nov 20 09:10:38 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Nov 2013 09:10:38 +0000 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <528BEADB.2070608@indietorrent.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> Message-ID: <20131120091038.GI31289@craic.sysops.org> On Tue, Nov 19, 2013 at 05:48:59PM -0500, Ben Johnson wrote: > On 11/19/2013 3:39 PM, Francis Daly wrote: Hi there, > :) I'm using nginx-1.1.19, because that's what's bundled with Ubuntu It's good to know that this configuration works on older versions too. > > I think that the problem may be due to the interaction between "alias" > > and "$document_root" being not-always-obvious. > > > > And $document_root is used in try_files, which is probably what you > > want for "clean URLs" -- serve the file if present, else hand it to > > a controller. > > I think that you're exactly right. I had tried try_files first, but was > unable to get it to work given that this site a) must be accessed via a > "subdirectory" relative to the domain-root URL, and b) is comprised of > files that live in a "private" directory that is outside of the > server-root on the filesystem. Generally it shouldn't be a problem -- file space and url space are different, and one of the main points of web server configuration is to map between them. If you do have free choice in the matter, some things work more easily within nginx if you can use "root" and not "alias" -- so if you want files to be accessible below the url /stage/, having them directly in a directory called /stage/ is convenient. 
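To make that concrete, here is a rough sketch of the same setup using "root", assuming (hypothetically) that the contents of web/ were moved up one level so the site lives directly in /var/www/example.com/private/stage/ -- one of the two rearrangements mentioned just below. None of the alias workarounds are needed in that case:

===
location ^~ /stage/ {
    root /var/www/example.com/private;   # /stage/... now maps to .../private/stage/...
    index index.php index.html index.htm;
    try_files $uri $uri/ /stage/index.php?q=$uri;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include /etc/nginx/fastcgi_params;
    }
}
===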
(So if you can either get rid of the web/ directory and move all contents up one, or add a stage/ below web/ and move all contents down one, then set "root" appropriately and the remaining configuration may become simpler.) > > === > > location ^~ /stage/ { > > alias /var/www/example.com/private/stage/web/; > > index index.php index.html index.htm; > > try_files $uri $uri/ /stage//stage/index.php?q=$uri; > > > > location ~ ^/stage/(.+\.php)$ { > > alias /var/www/example.com/private/stage/web/$1; > > try_files "" / /stage/index.php?q=$uri; > > fastcgi_pass unix:/var/run/php5-fpm.sock; > > fastcgi_param HTTPS on; > > fastcgi_param SCRIPT_FILENAME $request_filename; > > include /etc/nginx/fastcgi_params; > > } > > } > > === > had read at http://wiki.nginx.org/HttpCoreModule#alias : "Note that > there is a longstanding bug that alias and try_files don't work > together" (with link to http://trac.nginx.org/nginx/ticket/97 ). > Is the implication here that "alias" does indeed work with "try_files" > (even in my stale nginx-1.1.19 version)? At least in this particular > use-case? I'd say rather that this configuration works with the current implementation of the defect. So if the defect is fixed, this configuration will start to fail, but a more obvious working configuration will be available; but if the defect is changed or partly fixed, this configuration may start to fail without an equivalent workaround. The ticket-#97 page currently (last modified date 2013-08-23) lists three aspects in the description. The second aspect is, as I understand it, that in a prefix location with alias, if the fallback contains a $variable and begins with the location prefix, then the location prefix is stripped before the internal rewrite is done. In this configuration, that's why the extra /stage/ is in the first try_files. The third aspect is, as I understand it, that in a regex location with alias, $document_root is set to the alias, which in this case is equivalent to the wanted filename. That's why an argument of "" finds the file, if it is present, in the second try_files. I strongly suspect that the second argument there, /, can safely be dropped -- it can only apply if there is a directory called something.php; and in that case, I see no difference with the / there or not (in 1.5.7). > There is one last trick to pull-off, which is to add very similar > clean-URL functionality for two other files (in addition to index.php), > but I am hoping that I will be able to adapt your working sample myself. I don't fully follow what you mean there; but once you can isolate your requests in a location, then you should be able to set the fallback to whatever you want. It may get to the point that it is clearer to use a @named location as the fallback, and just hardcode SCRIPT_FILENAME in them. You'll see when you do it, no doubt. > > You may want to use something other than $uri in the last argument to > > try_files, depending on how you want /stage/my-account/?key=value to > > be processed. The usual caveats apply here regarding url escaping -- you've copied an unescaped string directly into query string, so if there were any characters like % or + or & involved, they may cause confusion. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Nov 20 10:40:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Nov 2013 14:40:48 +0400 Subject: [PATCH] Mark HTTP chunk terminator to be flushed In-Reply-To: References: Message-ID: <20131120104048.GV41579@mdounin.ru> Hello! 
On Wed, Nov 20, 2013 at 12:50:00AM -0800, John Watson wrote: > This really only affects SSL connections, where if the buffer with > chunk terminator is copied into the SSL buffer by itself. Since the > new SSL buffer has not reached 16k, it's not flushed thus causing a > delay in the client seeing it. I don't think that unconditionally setting flush flag is a good idea, it will cause unneeded work if a flush isn't needed. It may make sense to set it (or move?) if previous buffer has it set though. -- Maxim Dounin http://nginx.org/en/donation.html From markus.jelsma at openindex.io Wed Nov 20 14:20:57 2013 From: markus.jelsma at openindex.io (=?utf-8?Q?Markus_Jelsma?=) Date: Wed, 20 Nov 2013 14:20:57 +0000 Subject: UDPLog, multiple locations Message-ID: Hi, We're about to use the UDP log module from multiple locations to send log entries to Flume. But we stumbled upon the issue [1] that UDP log is having trouble working when multiple locations are involved. There has been no action on that issue for quite some time. Anyone here that fixed the issue? Or know about any possible workaround or any other solution? [1]: https://github.com/vkholodkov/nginx-udplog-module/issues/3 Many thanks, Markus From kworthington at gmail.com Wed Nov 20 16:39:32 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 20 Nov 2013 11:39:32 -0500 Subject: nginx-1.5.7 In-Reply-To: <20131119150031.GD41579@mdounin.ru> References: <20131119150031.GD41579@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.7 for Windows http://goo.gl/sZYAx9 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream (http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On 11/19/13, Maxim Dounin wrote: > Changes with nginx 1.5.7 19 Nov > 2013 > > *) Security: a character following an unescaped space in a request line > was handled incorrectly (CVE-2013-4547); the bug had appeared in > 0.8.41. > Thanks to Ivan Fratric of the Google Security Team. > > *) Change: a logging level of auth_basic errors about no user/password > provided has been lowered from "error" to "info". > > *) Feature: the "proxy_cache_revalidate", "fastcgi_cache_revalidate", > "scgi_cache_revalidate", and "uwsgi_cache_revalidate" directives. > > *) Feature: the "ssl_session_ticket_key" directive. > Thanks to Piotr Sikora. > > *) Bugfix: the directive "add_header Cache-Control ''" added a > "Cache-Control" response header line with an empty value. > > *) Bugfix: the "satisfy any" directive might return 403 error instead > of > 401 if auth_request and auth_basic directives were used. > Thanks to Jan Marc Hoffmann. > > *) Bugfix: the "accept_filter" and "deferred" parameters of the > "listen" > directive were ignored for listen sockets created during binary > upgrade. > Thanks to Piotr Sikora. > > *) Bugfix: some data received from a backend with unbufferred proxy > might not be sent to a client immediately if "gzip" or "gunzip" > directives were used. > Thanks to Yichun Zhang. > > *) Bugfix: in error handling in ngx_http_gunzip_filter_module. > > *) Bugfix: responses might hang if the ngx_http_spdy_module was used > with the "auth_request" directive. > > *) Bugfix: memory leak in nginx/Windows. 
> > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ (516) 647-1992 http://twitter.com/kworthington From kworthington at gmail.com Wed Nov 20 16:52:50 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 20 Nov 2013 11:52:50 -0500 Subject: [nginx-announce] nginx-1.4.4 In-Reply-To: <20131119150106.GI41579@mdounin.ru> References: <20131119150106.GI41579@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.4.4 for Windows http://goo.gl/XQLTvR (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream (http://twitter.com/kworthington), if you would like to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On 11/19/13, Maxim Dounin wrote: > Changes with nginx 1.4.4 19 Nov > 2013 > > *) Security: a character following an unescaped space in a request line > was handled incorrectly (CVE-2013-4547); the bug had appeared in > 0.8.41. > Thanks to Ivan Fratric of the Google Security Team. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > From aldernetwork at gmail.com Wed Nov 20 23:17:11 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 20 Nov 2013 15:17:11 -0800 Subject: Nginx Websocket proxy with Microsoft IE 10 client Message-ID: Is that a known issue? Any remedy available? Thanks, - Alder -------------- next part -------------- An HTML attachment was scrubbed... URL: From willpugh at gmail.com Thu Nov 21 02:03:11 2013 From: willpugh at gmail.com (Will Pugh) Date: Wed, 20 Nov 2013 18:03:11 -0800 Subject: Intermittant SSL Problems Message-ID: Hi folks, We are using Nginx for SSL termination, and then it proxies to an ATS or Haproxy server depending on our environment. We're running into a problem where every now and then, Nginx closes a connection due to a timeout. When investigating, it looks like the connections that are being timed-out are not being forwarded to the backend service. The scenario when we were able to best reproduce this is one where one of our Java client was running about 100 REST requests that were fairly similar. I've attached files that contain both the tcpdump from the client side as well as the debug log on the nginx side. I tried comparing a successful and unsuccessful request next to each other. From the client side, it looks like the messages back and forth look very consistent. On the nginx side, the first difference seems to happen when reading in the Http Request. 
The requests that fail, all seem to do a partial read: ==SNIPPET FROM FAILING REQUEST== 2013/11/20 01:58:05 [debug] 52611#0: *621 http wait request handler 2013/11/20 01:58:05 [debug] 52611#0: *621 malloc: 0000000000F1BE20:1024 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: 335 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: 1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: -1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_get_error: 2 2013/11/20 01:58:05 [debug] 52611#0: *621 reusable connection: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 posix_memalign: 0000000000EFB600:4096 @16 2013/11/20 01:58:05 [debug] 52611#0: *621 http process request line ==SNIPPET FROM SUCCEEDING REQUEST== 2013/11/20 01:58:04 [debug] 52611#0: *619 http wait request handler 2013/11/20 01:58:04 [debug] 52611#0: *619 malloc: 0000000000F1BE20:1024 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 335 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 167 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 2 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: -1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_get_error: 2 2013/11/20 01:58:04 [debug] 52611#0: *619 reusable connection: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 posix_memalign: 0000000000EFB600:4096 @16 2013/11/20 01:58:04 [debug] 52611#0: *619 http process request line This difference seems to be consistent with errors. Later on, the request that ends up failing attempts to load the request body, but only gets about 16 bytes, and looks like it keeps waiting for the rest of the data. However, looking at a tcpdump from the client, it looks like all the data is sent up and ack-ed. Then the client sees nothing for a minute until the connection is closed by the server. ==TCP DUMP FROM FAILING REQUEST== ... 
17:58:05.259451 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 2996, win 8023, options [nop,nop,TS val 145724966 ecr 155421032], length 0 17:58:05.261592 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 221:296, ack 2996, win 8192, options [nop,nop,TS val 145724968 ecr 155421032], length 75 17:58:05.262989 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 296:302, ack 2996, win 8192, options [nop,nop,TS val 145724969 ecr 155421032], length 6 17:58:05.263196 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 302:355, ack 2996, win 8192, options [nop,nop,TS val 145724969 ecr 155421032], length 53 17:58:05.295347 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], ack 355, win 122, options [nop,nop,TS val 155421041 ecr 145724968], length 0 17:58:05.295387 IP y.y.y.209.https > x.x.x.145.53167: Flags [P.], seq 2996:3055, ack 355, win 122, options [nop,nop,TS val 155421042 ecr 145724968], length 59 17:58:05.295481 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 3055, win 8188, options [nop,nop,TS val 145725000 ecr 155421042], length 0 17:58:05.300058 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 355:728, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 373 17:58:05.300253 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 728:765, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300254 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 765:962, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 197 17:58:05.300282 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 962:999, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300307 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 999:1036, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300342 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 1036:1073, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300365 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 1073:1110, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.332040 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], ack 765, win 130, options [nop,nop,TS val 155421051 ecr 145725004], length 0 17:58:05.332247 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], ack 1110, win 139, options [nop,nop,TS val 155421051 ecr 145725004], length 0 *17:59:05.338429 IP y.y.y.209.https > x.x.x.145.53167: Flags [F.], seq 3055, ack 1110, win 139, options [nop,nop,TS val 155436119 ecr 145725004], length 0* 17:59:05.338581 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 3056, win 8192, options [nop,nop,TS val 145784751 ecr 155436119], length 0 17:59:05.338932 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 1110:1147, ack 3056, win 8192, options [nop,nop,TS val 145784751 ecr 155436119], length 37 17:59:05.338933 IP x.x.x.145.53167 > y.y.y.209.https: Flags [F.], seq 1147, ack 3056, win 8192, options [nop,nop,TS val 145784751 ecr 155436119], length 0 17:59:05.370400 IP y.y.y.209.https > x.x.x.145.53167: Flags [R], seq 325468762, win 0, length 0 ==TCP DUMP FROM SUCCESSFUL REQUEST== ... 
17:58:04.481722 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 2996, win 8023, options [nop,nop,TS val 145724196 ecr 155420837], length 0 17:58:04.484108 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 221:296, ack 2996, win 8192, options [nop,nop,TS val 145724198 ecr 155420837], length 75 17:58:04.485569 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 296:302, ack 2996, win 8192, options [nop,nop,TS val 145724199 ecr 155420837], length 6 17:58:04.485767 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 302:355, ack 2996, win 8192, options [nop,nop,TS val 145724199 ecr 155420837], length 53 17:58:04.531685 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 355, win 122, options [nop,nop,TS val 155420850 ecr 145724198], length 0 17:58:04.531689 IP y.y.y.209.https > x.x.x.145.53166: Flags [P.], seq 2996:3055, ack 355, win 122, options [nop,nop,TS val 155420850 ecr 145724198], length 59 17:58:04.531827 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 3055, win 8188, options [nop,nop,TS val 145724244 ecr 155420850], length 0 17:58:04.532709 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 355:728, ack 3055, win 8192, options [nop,nop,TS val 145724244 ecr 155420850], length 373 17:58:04.532906 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 728:765, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.532954 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 765:962, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 197 17:58:04.532983 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 962:999, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.533012 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 999:1036, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.533045 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 1036:1073, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.533143 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 1073:1110, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.565176 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 962, win 139, options [nop,nop,TS val 155420858 ecr 145724244], length 0 17:58:04.565184 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 1110, win 139, options [nop,nop,TS val 155420858 ecr 145724245], length 0 *17:58:05.184331 IP y.y.y.209.https > x.x.x.145.53166: Flags [P.], seq 3055:3540, ack 1110, win 139, options [nop,nop,TS val 155421011 ecr 145724245], length 485* 17:58:05.184442 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 3540, win 8161, options [nop,nop,TS val 145724894 ecr 155421011], length 0 17:58:05.184592 IP y.y.y.209.https > x.x.x.145.53166: Flags [F.], seq 3540, ack 1110, win 139, options [nop,nop,TS val 155421011 ecr 145724245], length 0 17:58:05.184658 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 3541, win 8192, options [nop,nop,TS val 145724894 ecr 155421011], length 0 17:58:05.184863 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 1110:1147, ack 3541, win 8192, options [nop,nop,TS val 145724894 ecr 155421011], length 37 17:58:05.184863 IP x.x.x.145.53166 > y.y.y.209.https: Flags [F.], seq 1147, ack 3541, win 8192, options [nop,nop,TS val 145724894 ecr 155421011], length 0 When I look at the TCP dumps, it looks like the client sends up all the data, and it looks like the server receives and acks it. At this point, I'm sorta stuck. 
Does anyone have any insight here? Is there a know bug that has been fixed that we may be missing? I've attached files that have more complete dumps of the requests. We are running Ubuntu 12.04.3 LTS, with: nginx version: nginx/1.4.1 openssl version: OpenSSL 1.0.1 14 Mar 2012 Thanks, --Will -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL handshake handler: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_do_handshake: 1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL: TLSv1, cipher: "ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1" 2013/11/20 01:58:05 [debug] 52611#0: *621 reusable connection: 1 2013/11/20 01:58:05 [debug] 52611#0: *621 http wait request handler 2013/11/20 01:58:05 [debug] 52611#0: *621 malloc: 0000000000F1BE20:1024 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: -1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_get_error: 2 2013/11/20 01:58:05 [debug] 52611#0: *621 free: 0000000000F1BE20 2013/11/20 01:58:05 [debug] 52611#0: *621 post event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *621 delete posted event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *621 http wait request handler 2013/11/20 01:58:05 [debug] 52611#0: *621 malloc: 0000000000F1BE20:1024 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: 335 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: 1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: -1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_get_error: 2 2013/11/20 01:58:05 [debug] 52611#0: *621 reusable connection: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 posix_memalign: 0000000000EFB600:4096 @16 2013/11/20 01:58:05 [debug] 52611#0: *621 http process request line 2013/11/20 01:58:05 [debug] 52611#0: *621 http request line: "POST /api/views/uv4d-it6e/columns?$$version=2.0 HTTP/1.1" 2013/11/20 01:58:05 [debug] 52611#0: *621 http uri: "/api/views/uv4d-it6e/columns" 2013/11/20 01:58:05 [debug] 52611#0: *621 http args: "$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *621 http exten: "" 2013/11/20 01:58:05 [debug] 52611#0: *621 http process request header line 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Accept: application/json" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Content-Type: application/json" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "X-App-Token: Ya8CU9HKxeh0ytjHJttm2FhaW" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Authorization: Basic d2lsbC5wdWdoQHNvY3JhdGEuY29tOlNzbjEyMzQ1Ng==" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "User-Agent: Java/1.7.0_12-ea" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Host: opendata.test-socrata.com" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Connection: close" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Transfer-Encoding: chunked" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header done 2013/11/20 01:58:05 [debug] 52611#0: *621 event timer del: 11: 1384912745204 2013/11/20 01:58:05 [debug] 52611#0: *621 generic phase: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 rewrite phase: 1 2013/11/20 01:58:05 [debug] 52611#0: *621 test location: "/" 2013/11/20 01:58:05 [debug] 52611#0: *621 test location: "socket.io" 2013/11/20 01:58:05 [debug] 52611#0: *621 using configuration "/" 2013/11/20 01:58:05 [debug] 52611#0: *621 http cl:-1 max:25769803776 2013/11/20 01:58:05 [debug] 52611#0: *621 rewrite phase: 3 2013/11/20 01:58:05 [debug] 52611#0: *621 posix_memalign: 
0000000000EFB600:4096 @16 2013/11/20 01:58:05 [debug] 52611#0: *621 http process request line 2013/11/20 01:58:05 [debug] 52611#0: *621 http request line: "POST /api/views/uv4d-it6e/columns?$$version=2.0 HTTP/1.1" 2013/11/20 01:58:05 [debug] 52611#0: *621 http uri: "/api/views/uv4d-it6e/columns" 2013/11/20 01:58:05 [debug] 52611#0: *621 http args: "$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *621 http exten: "" 2013/11/20 01:58:05 [debug] 52611#0: *621 http process request header line 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Accept: application/json" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Content-Type: application/json" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "X-App-Token: Ya8CU9HKxeh0ytjHJttm2FhaW" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Authorization: Basic d2lsbC5wdWdoQHNvY3JhdGEuY29tOlNzbjEyMzQ1Ng==" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "User-Agent: Java/1.7.0_12-ea" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Host: opendata.test-socrata.com" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Connection: close" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header: "Transfer-Encoding: chunked" 2013/11/20 01:58:05 [debug] 52611#0: *621 http header done 2013/11/20 01:58:05 [debug] 52611#0: *621 event timer del: 11: 1384912745204 2013/11/20 01:58:05 [debug] 52611#0: *621 generic phase: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 rewrite phase: 1 2013/11/20 01:58:05 [debug] 52611#0: *621 test location: "/" 2013/11/20 01:58:05 [debug] 52611#0: *621 test location: "socket.io" 2013/11/20 01:58:05 [debug] 52611#0: *621 using configuration "/" 2013/11/20 01:58:05 [debug] 52611#0: *621 http cl:-1 max:25769803776 2013/11/20 01:58:05 [debug] 52611#0: *621 rewrite phase: 3 2013/11/20 01:58:05 [debug] 52611#0: *621 posix_memalign: 0000000000F18E70:4096 @16 2013/11/20 01:58:05 [debug] 52611#0: *621 http script complex value 2013/11/20 01:58:05 [debug] 52611#0: *621 http script var: "/var/www/nginx-default" 2013/11/20 01:58:05 [debug] 52611#0: *621 http script copy: "/maintenance.html^@" 2013/11/20 01:58:05 [debug] 52611#0: *621 http script file op 0000000000000000 "/var/www/nginx-default/maintenance.html" 2013/11/20 01:58:05 [debug] 52611#0: *621 http script file op false 2013/11/20 01:58:05 [debug] 52611#0: *621 http script if 2013/11/20 01:58:05 [debug] 52611#0: *621 http script if: false 2013/11/20 01:58:05 [debug] 52611#0: *621 post rewrite phase: 4 2013/11/20 01:58:05 [debug] 52611#0: *621 generic phase: 5 2013/11/20 01:58:05 [debug] 52611#0: *621 generic phase: 6 2013/11/20 01:58:05 [debug] 52611#0: *621 generic phase: 7 2013/11/20 01:58:05 [debug] 52611#0: *621 access phase: 8 2013/11/20 01:58:05 [debug] 52611#0: *621 access phase: 9 2013/11/20 01:58:05 [debug] 52611#0: *621 post access phase: 10 2013/11/20 01:58:05 [debug] 52611#0: *621 http client request body preread 1 2013/11/20 01:58:05 [debug] 52611#0: *621 http request body chunked filter 2013/11/20 01:58:05 [debug] 52611#0: *621 http body chunked buf t:1 f:0 0000000000F1BE20, pos 0000000000F1BF6F, size: 1 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 http chunked byte: 61 s:0 2013/11/20 01:58:05 [debug] 52611#0: *621 malloc: 0000000000F0EAE0:8192 2013/11/20 01:58:05 [debug] 52611#0: *621 http read client request body 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_read: 16 2013/11/20 01:58:05 [debug] 52611#0: *621 http client request body recv 16 2013/11/20 01:58:05 [debug] 52611#0: *621 http body chunked buf t:1 f:0 
0000000000F0EAE0, pos 0000000000F0EAE0, size: 16 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 http chunked byte: 30 s:1 2013/11/20 01:58:05 [debug] 52611#0: *621 http chunked byte: 0D s:1 2013/11/20 01:58:05 [debug] 52611#0: *621 http chunked byte: 0A s:3 2013/11/20 01:58:05 [debug] 52611#0: *621 http chunked byte: 7B s:4 2013/11/20 01:58:05 [debug] 52611#0: *621 http body chunked buf t:1 f:0 0000000000F0EAE0, pos 0000000000F0EAF0, size: 0 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 http body new buf t:1 f:0 0000000000F0EAE3, pos 0000000000F0EAE3, size: 13 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 http client request body rest 151 2013/11/20 01:58:05 [debug] 52611#0: *621 event timer add: 11: 60000:1384912745281 2013/11/20 01:58:05 [debug] 52611#0: *621 http finalize request: -4, "/api/views/uv4d-it6e/columns?$$version=2.0" a:1, c:2 2013/11/20 01:58:05 [debug] 52611#0: *621 http request count:2 blk:0 2013/11/20 01:59:05 [debug] 52611#0: *621 event timer del: 11: 1384912745281 2013/11/20 01:59:05 [debug] 52611#0: *621 http run request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:59:05 [debug] 52611#0: *621 http finalize request: 408, "/api/views/uv4d-it6e/columns?$$version=2.0" a:1, c:1 2013/11/20 01:59:05 [debug] 52611#0: *621 http terminate request count:1 2013/11/20 01:59:05 [debug] 52611#0: *621 http terminate cleanup count:1 blk:0 2013/11/20 01:59:05 [debug] 52611#0: *621 http posted request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:59:05 [debug] 52611#0: *621 http terminate handler count:1 2013/11/20 01:59:05 [debug] 52611#0: *621 http request count:1 blk:0 2013/11/20 01:59:05 [debug] 52611#0: *621 http close request 2013/11/20 01:59:05 [debug] 52611#0: *621 http log handler 2013/11/20 01:59:05 [debug] 52611#0: *621 free: 0000000000F0EAE0 2013/11/20 01:59:05 [debug] 52611#0: *621 free: 0000000000EFB600, unused: 0 2013/11/20 01:59:05 [debug] 52611#0: *621 free: 0000000000F18E70, unused: 2006 2013/11/20 01:59:05 [debug] 52611#0: *621 close http connection: 11 2013/11/20 01:59:05 [debug] 52611#0: *621 SSL_shutdown: 1 2013/11/20 01:59:05 [debug] 52611#0: *621 reusable connection: 0 2013/11/20 01:59:05 [debug] 52611#0: *621 free: 0000000000F1BE20 2013/11/20 01:59:05 [debug] 52611#0: *621 free: 0000000000E1F1A0, unused: 0 2013/11/20 01:59:05 [debug] 52611#0: *621 free: 0000000000F0C650, unused: 40 -------------- next part -------------- 17:58:05.186256 IP x.x.x.145.53167 > y.y.y.209.https: Flags [S], seq 1022305023, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 145724895 ecr 0,sackOK,eol], length 0 17:58:05.216148 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 1148, win 139, options [nop,nop,TS val 155421022 ecr 145724894], length 0 17:58:05.218560 IP y.y.y.209.https > x.x.x.145.53167: Flags [S.], seq 325465706, ack 1022305024, win 14480, options [mss 1440,sackOK,TS val 155421022 ecr 145724895,nop,wscale 7], length 0 17:58:05.218705 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 1, win 8211, options [nop,nop,TS val 145724927 ecr 155421022], length 0 17:58:05.220137 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 1:221, ack 1, win 8211, options [nop,nop,TS val 145724928 ecr 155421022], length 220 17:58:05.254098 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], ack 221, win 122, options [nop,nop,TS val 155421031 ecr 145724928], length 0 17:58:05.259022 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], seq 1:1449, ack 221, win 122, options [nop,nop,TS val 155421032 ecr 145724928], length 
1448 17:58:05.259263 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], seq 1449:2897, ack 221, win 122, options [nop,nop,TS val 155421032 ecr 145724928], length 1448 17:58:05.259335 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 2897, win 8030, options [nop,nop,TS val 145724966 ecr 155421032], length 0 17:58:05.259425 IP y.y.y.209.https > x.x.x.145.53167: Flags [P.], seq 2897:2996, ack 221, win 122, options [nop,nop,TS val 155421032 ecr 145724928], length 99 17:58:05.259451 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 2996, win 8023, options [nop,nop,TS val 145724966 ecr 155421032], length 0 17:58:05.261592 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 221:296, ack 2996, win 8192, options [nop,nop,TS val 145724968 ecr 155421032], length 75 17:58:05.262989 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 296:302, ack 2996, win 8192, options [nop,nop,TS val 145724969 ecr 155421032], length 6 17:58:05.263196 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 302:355, ack 2996, win 8192, options [nop,nop,TS val 145724969 ecr 155421032], length 53 17:58:05.295347 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], ack 355, win 122, options [nop,nop,TS val 155421041 ecr 145724968], length 0 17:58:05.295387 IP y.y.y.209.https > x.x.x.145.53167: Flags [P.], seq 2996:3055, ack 355, win 122, options [nop,nop,TS val 155421042 ecr 145724968], length 59 17:58:05.295481 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 3055, win 8188, options [nop,nop,TS val 145725000 ecr 155421042], length 0 17:58:05.300058 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 355:728, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 373 17:58:05.300253 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 728:765, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300254 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 765:962, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 197 17:58:05.300282 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 962:999, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300307 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 999:1036, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300342 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 1036:1073, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.300365 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 1073:1110, ack 3055, win 8192, options [nop,nop,TS val 145725004 ecr 155421042], length 37 17:58:05.332040 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], ack 765, win 130, options [nop,nop,TS val 155421051 ecr 145725004], length 0 17:58:05.332247 IP y.y.y.209.https > x.x.x.145.53167: Flags [.], ack 1110, win 139, options [nop,nop,TS val 155421051 ecr 145725004], length 0 17:59:05.338429 IP y.y.y.209.https > x.x.x.145.53167: Flags [F.], seq 3055, ack 1110, win 139, options [nop,nop,TS val 155436119 ecr 145725004], length 0 17:59:05.338581 IP x.x.x.145.53167 > y.y.y.209.https: Flags [.], ack 3056, win 8192, options [nop,nop,TS val 145784751 ecr 155436119], length 0 17:59:05.338932 IP x.x.x.145.53167 > y.y.y.209.https: Flags [P.], seq 1110:1147, ack 3056, win 8192, options [nop,nop,TS val 145784751 ecr 155436119], length 37 17:59:05.338933 IP x.x.x.145.53167 > y.y.y.209.https: Flags [F.], seq 1147, ack 3056, win 8192, options [nop,nop,TS val 145784751 ecr 155436119], length 0 
17:59:05.370400 IP y.y.y.209.https > x.x.x.145.53167: Flags [R], seq 325468762, win 0, length 0 -------------- next part -------------- 2013/11/20 01:58:04 [debug] 52611#0: *619 http check ssl handshake 2013/11/20 01:58:04 [debug] 52611#0: *619 http recv(): 1 2013/11/20 01:58:04 [debug] 52611#0: *619 https ssl handshake: 0x16 2013/11/20 01:58:04 [debug] 52611#0: *619 posix_memalign: 0000000000F0C650:256 @16 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL server name: "opendata.test-socrata.com" 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_do_handshake: -1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_get_error: 2 2013/11/20 01:58:04 [debug] 52611#0: *619 reusable connection: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 post event 0000000000E8F5A8 2013/11/20 01:58:04 [debug] 52611#0: *619 delete posted event 0000000000E8F5A8 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL handshake handler: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_do_handshake: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL: TLSv1, cipher: "ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1" 2013/11/20 01:58:04 [debug] 52611#0: *619 reusable connection: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 http wait request handler 2013/11/20 01:58:04 [debug] 52611#0: *619 malloc: 0000000000F1BE20:1024 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: -1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_get_error: 2 2013/11/20 01:58:04 [debug] 52611#0: *619 free: 0000000000F1BE20 2013/11/20 01:58:04 [debug] 52611#0: *619 post event 0000000000E8F5A8 2013/11/20 01:58:04 [debug] 52611#0: *619 delete posted event 0000000000E8F5A8 2013/11/20 01:58:04 [debug] 52611#0: *619 http wait request handler 2013/11/20 01:58:04 [debug] 52611#0: *619 malloc: 0000000000F1BE20:1024 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 335 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 167 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 2 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_read: -1 2013/11/20 01:58:04 [debug] 52611#0: *619 SSL_get_error: 2 2013/11/20 01:58:04 [debug] 52611#0: *619 reusable connection: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 posix_memalign: 0000000000EFB600:4096 @16 2013/11/20 01:58:04 [debug] 52611#0: *619 http process request line 2013/11/20 01:58:04 [debug] 52611#0: *619 http request line: "POST /api/views/uv4d-it6e/columns?$$version=2.0 HTTP/1.1" 2013/11/20 01:58:04 [debug] 52611#0: *619 http uri: "/api/views/uv4d-it6e/columns" 2013/11/20 01:58:04 [debug] 52611#0: *619 http args: "$$version=2.0" 2013/11/20 01:58:04 [debug] 52611#0: *619 http exten: "" 2013/11/20 01:58:04 [debug] 52611#0: *619 http process request header line 2013/11/20 01:58:04 [debug] 52611#0: *619 http header: "Accept: application/json" 2013/11/20 01:58:04 [debug] 52611#0: *619 http header: "Content-Type: application/json" 2013/11/20 01:58:04 [debug] 52611#0: *619 http header: "X-App-Token: Ya8CU9HKxeh0ytjHJttm2FhaW" 2013/11/20 01:58:04 [debug] 52611#0: *619 http header: "Authorization: Basic d2lsbC5wdWdoQHNvY3JhdGEuY29tOlNzbjEyMzQ1Ng==" 2013/11/20 01:58:04 [debug] 52611#0: *619 http header: "User-Agent: Java/1.7.0_12-ea" 2013/11/20 01:58:04 [debug] 52611#0: *619 http header: "Host: opendata.test-socrata.com" 2013/11/20 01:58:04 [debug] 52611#0: *619 http header: "Connection: close" 2013/11/20 01:58:04 [debug] 
52611#0: *619 http header: "Transfer-Encoding: chunked" 2013/11/20 01:58:04 [debug] 52611#0: *619 http header done 2013/11/20 01:58:04 [debug] 52611#0: *619 event timer del: 11: 1384912744424 2013/11/20 01:58:04 [debug] 52611#0: *619 generic phase: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 rewrite phase: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 test location: "/" 2013/11/20 01:58:04 [debug] 52611#0: *619 test location: "socket.io" 2013/11/20 01:58:04 [debug] 52611#0: *619 using configuration "/" 2013/11/20 01:58:04 [debug] 52611#0: *619 http cl:-1 max:25769803776 2013/11/20 01:58:04 [debug] 52611#0: *619 rewrite phase: 3 2013/11/20 01:58:04 [debug] 52611#0: *619 posix_memalign: 0000000000F18E70:4096 @16 2013/11/20 01:58:04 [debug] 52611#0: *619 http script complex value 2013/11/20 01:58:04 [debug] 52611#0: *619 http script var: "/var/www/nginx-default" 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "/maintenance.html^@" 2013/11/20 01:58:04 [debug] 52611#0: *619 http script file op 0000000000000000 "/var/www/nginx-default/maintenance.html" 2013/11/20 01:58:04 [debug] 52611#0: *619 http script file op false 2013/11/20 01:58:04 [debug] 52611#0: *619 http script if 2013/11/20 01:58:04 [debug] 52611#0: *619 http script if: false 2013/11/20 01:58:04 [debug] 52611#0: *619 post rewrite phase: 4 2013/11/20 01:58:04 [debug] 52611#0: *619 generic phase: 5 2013/11/20 01:58:04 [debug] 52611#0: *619 generic phase: 6 2013/11/20 01:58:04 [debug] 52611#0: *619 generic phase: 7 2013/11/20 01:58:04 [debug] 52611#0: *619 access phase: 8 2013/11/20 01:58:04 [debug] 52611#0: *619 access phase: 9 2013/11/20 01:58:04 [debug] 52611#0: *619 post access phase: 10 2013/11/20 01:58:04 [debug] 52611#0: *619 http client request body preread 173 2013/11/20 01:58:04 [debug] 52611#0: *619 http request body chunked filter 2013/11/20 01:58:04 [debug] 52611#0: *619 http body chunked buf t:1 f:0 0000000000F1BE20, pos 0000000000F1BF6F, size: 173 file: 0, size: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 61 s:0 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 32 s:1 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0D s:1 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0A s:3 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 7B s:4 2013/11/20 01:58:04 [debug] 52611#0: *619 http body chunked buf t:1 f:0 0000000000F1BE20, pos 0000000000F1C015, size: 7 file: 0, size: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0D s:5 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0A s:6 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 30 s:0 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0D s:1 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0A s:8 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0D s:9 2013/11/20 01:58:04 [debug] 52611#0: *619 http chunked byte: 0A s:10 2013/11/20 01:58:04 [debug] 52611#0: *619 http body new buf t:1 f:0 0000000000F1BF73, pos 0000000000F1BF73, size: 162 file: 0, size: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 http body new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 http init upstream, client timer: 0 2013/11/20 01:58:04 [debug] 52611#0: *619 epoll add event: fd:11 op:3 ev:80000005 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "X-Forwarded-For: " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script var: "66.171.190.186" 2013/11/20 01:58:04 [debug] 52611#0: 
*619 http script copy: "^M " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "X-Forwarded-Proto: " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script var: "https" 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "^M " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "Host: " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script var: "opendata.test-socrata.com" 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "^M " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "Connection: close^M " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "Content-Length: " 2013/11/20 01:58:04 [debug] 52611#0: *619 http script var: "162" 2013/11/20 01:58:04 [debug] 52611#0: *619 http script copy: "^M " 2013/11/20 01:58:04 [debug] 52611#0: *619 http proxy header: "Accept: application/json" 2013/11/20 01:58:04 [debug] 52611#0: *619 http proxy header: "Content-Type: application/json" 2013/11/20 01:58:04 [debug] 52611#0: *619 http proxy header: "X-App-Token: Ya8CU9HKxeh0ytjHJttm2FhaW" 2013/11/20 01:58:04 [debug] 52611#0: *619 http proxy header: "Authorization: Basic d2lsbC5wdWdoQHNvY3JhdGEuY29tOlNzbjEyMzQ1Ng==" 2013/11/20 01:58:04 [debug] 52611#0: *619 http proxy header: "User-Agent: Java/1.7.0_12-ea" 2013/11/20 01:58:04 [debug] 52611#0: *619 http proxy header: "POST /api/views/uv4d-it6e/columns?$$version=2.0 HTTP/1.0^M X-Forwarded-For: 66.171.190.186^M X-Forwarded-Proto: https^M Host: opendata.test-socrata.com^M Connection: close^M Content-Length: 162^M Accept: application/json^M Content-Type: application/json^M X-App-Token: Ya8CU9HKxeh0ytjHJttm2FhaW^M Authorization: Basic d2lsbC5wdWdoQHNvY3JhdGEuY29tOlNzbjEyMzQ1Ng==^M User-Agent: Java/1.7.0_12-ea^M ^M " 2013/11/20 01:58:04 [debug] 52611#0: *619 http cleanup add: 0000000000F198D0 2013/11/20 01:58:04 [debug] 52611#0: *619 get rr peer, try: 1 2013/11/20 01:58:04 [debug] 52611#0: *619 socket 13 2013/11/20 01:58:04 [debug] 52611#0: *619 epoll add connection: fd:13 ev:80000005 2013/11/20 01:58:04 [debug] 52611#0: *619 connect to 0.0.0.0:8080, fd:13 #620 2013/11/20 01:58:04 [debug] 52611#0: *619 http upstream connect: -2 2013/11/20 01:58:04 [debug] 52611#0: *619 posix_memalign: 0000000000F09100:128 @16 2013/11/20 01:58:04 [debug] 52611#0: *619 event timer add: 13: 60000:1384912744514 2013/11/20 01:58:04 [debug] 52611#0: *619 http finalize request: -4, "/api/views/uv4d-it6e/columns?$$version=2.0" a:1, c:2 2013/11/20 01:58:04 [debug] 52611#0: *619 http request count:2 blk:0 2013/11/20 01:58:04 [debug] 52611#0: *619 post event 0000000000EC35B8 2013/11/20 01:58:04 [debug] 52611#0: *619 post event 0000000000EC3620 2013/11/20 01:58:04 [debug] 52611#0: *619 delete posted event 0000000000EC3620 2013/11/20 01:58:04 [debug] 52611#0: *619 http upstream request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:04 [debug] 52611#0: *619 http upstream send request handler 2013/11/20 01:58:04 [debug] 52611#0: *619 http upstream send request 2013/11/20 01:58:04 [debug] 52611#0: *619 chain writer buf fl:0 s:387 2013/11/20 01:58:04 [debug] 52611#0: *619 chain writer buf fl:0 s:162 2013/11/20 01:58:04 [debug] 52611#0: *619 chain writer buf fl:1 s:0 2013/11/20 01:58:04 [debug] 52611#0: *619 chain writer in: 0000000000F19938 2013/11/20 01:58:04 [debug] 52611#0: *619 writev: 549 2013/11/20 01:58:04 [debug] 52611#0: *619 chain writer out: 0000000000000000 2013/11/20 01:58:04 [debug] 52611#0: *619 event timer del: 13: 1384912744514 2013/11/20 01:58:04 [debug] 52611#0: *619 event timer add: 13: 
10800000:1384923484515 2013/11/20 01:58:04 [debug] 52611#0: *619 delete posted event 0000000000EC35B8 2013/11/20 01:58:04 [debug] 52611#0: *619 http run request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:04 [debug] 52611#0: *619 http upstream check client, write event:1, "/api/views/uv4d-it6e/columns" 2013/11/20 01:58:04 [debug] 52611#0: *619 http upstream recv(): -1 (11: Resource temporarily unavailable) 2013/11/20 01:58:05 [debug] 52611#0: *619 post event 0000000000E8F610 2013/11/20 01:58:05 [debug] 52611#0: *619 post event 0000000000EC3620 2013/11/20 01:58:05 [debug] 52611#0: *619 delete posted event 0000000000EC3620 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream dummy handler 2013/11/20 01:58:05 [debug] 52611#0: *619 delete posted event 0000000000E8F610 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream process header 2013/11/20 01:58:05 [debug] 52611#0: *619 malloc: 0000000000F0EAE0:8192 2013/11/20 01:58:05 [debug] 52611#0: *619 recv: fd:13 397 of 8192 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy status 200 "200 OK" 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy header: "Date: Wed, 20 Nov 2013 01:58:04 GMT" 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy header: "Access-Control-Allow-Origin: *" 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy header: "Content-Type: application/json; charset=utf-8" 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy header: "Server: ATS/4.0.2" 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy header: "Age: 2" 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy header done 2013/11/20 01:58:05 [debug] 52611#0: *619 HTTP/1.1 200 OK^M Server: nginx^M Date: Wed, 20 Nov 2013 01:58:05 GMT^M Content-Type: application/json; charset=utf-8^M Transfer-Encoding: chunked^M Connection: close^M Access-Control-Allow-Origin: *^M Age: 2^M 2013/11/20 01:58:05 [debug] 52611#0: *619 write new buf t:1 f:0 0000000000F19C28, pos 0000000000F19C28, size: 205 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 http write filter: l:0 f:0 s:205 2013/11/20 01:58:05 [debug] 52611#0: *619 http cacheable: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 http proxy filter init s:200 h:0 c:0 l:-1 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream process upstream 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe read upstream: 1 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe preread: 235 2013/11/20 01:58:05 [debug] 52611#0: *619 readv: 1:7795 2013/11/20 01:58:05 [debug] 52611#0: *619 readv() not ready (11: Resource temporarily unavailable) 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe recv chain: -2 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe buf free s:0 t:1 f:0 0000000000F0EAE0, pos 0000000000F0EB82, size: 235 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe length: -1 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe write downstream: 1 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe write busy: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe write: out:0000000000000000, f:0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe read upstream: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe buf free s:0 t:1 f:0 0000000000F0EAE0, pos 0000000000F0EB82, size: 235 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe length: -1 2013/11/20 01:58:05 [debug] 52611#0: *619 event timer del: 13: 
1384923484515 2013/11/20 01:58:05 [debug] 52611#0: *619 event timer add: 13: 10800000:1384923485118 2013/11/20 01:58:05 [debug] 52611#0: *619 post event 0000000000E8F610 2013/11/20 01:58:05 [debug] 52611#0: *619 post event 0000000000EC3620 2013/11/20 01:58:05 [debug] 52611#0: *619 delete posted event 0000000000EC3620 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream dummy handler 2013/11/20 01:58:05 [debug] 52611#0: *619 delete posted event 0000000000E8F610 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream request: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream process upstream 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe read upstream: 1 2013/11/20 01:58:05 [debug] 52611#0: *619 readv: 1:7795 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe recv chain: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe buf free s:0 t:1 f:0 0000000000F0EAE0, pos 0000000000F0EB82, size: 235 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe length: -1 2013/11/20 01:58:05 [debug] 52611#0: *619 input buf #0 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe write downstream: 1 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe write downstream flush in 2013/11/20 01:58:05 [debug] 52611#0: *619 http output filter "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 http copy filter: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 posix_memalign: 0000000000F19FD0:4096 @16 2013/11/20 01:58:05 [debug] 52611#0: *619 http postpone filter "/api/views/uv4d-it6e/columns?$$version=2.0" 0000000000F19908 2013/11/20 01:58:05 [debug] 52611#0: *619 http chunk: 235 2013/11/20 01:58:05 [debug] 52611#0: *619 write old buf t:1 f:0 0000000000F19C28, pos 0000000000F19C28, size: 205 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write new buf t:1 f:0 0000000000F1A0A8, pos 0000000000F1A0A8, size: 4 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write new buf t:1 f:0 0000000000F0EAE0, pos 0000000000F0EB82, size: 235 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write new buf t:0 f:0 0000000000000000, pos 0000000000493E1D, size: 2 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 http write filter: l:0 f:0 s:446 2013/11/20 01:58:05 [debug] 52611#0: *619 http copy filter: 0 "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 pipe write downstream done 2013/11/20 01:58:05 [debug] 52611#0: *619 event timer: 13, old: 1384923485118, new: 1384923485121 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream exit: 0000000000000000 2013/11/20 01:58:05 [debug] 52611#0: *619 finalize http upstream request: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 finalize http proxy request 2013/11/20 01:58:05 [debug] 52611#0: *619 free rr peer 1 0 2013/11/20 01:58:05 [debug] 52611#0: *619 close http upstream connection: 13 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000F09100, unused: 48 2013/11/20 01:58:05 [debug] 52611#0: *619 event timer del: 13: 1384923485118 2013/11/20 01:58:05 [debug] 52611#0: *619 reusable connection: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 http upstream temp fd: -1 2013/11/20 01:58:05 [debug] 52611#0: *619 http output filter "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 http copy filter: "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 
01:58:05 [debug] 52611#0: *619 http postpone filter "/api/views/uv4d-it6e/columns?$$version=2.0" 00007FFF96EB3030 2013/11/20 01:58:05 [debug] 52611#0: *619 http chunk: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write old buf t:1 f:0 0000000000F19C28, pos 0000000000F19C28, size: 205 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write old buf t:1 f:0 0000000000F1A0A8, pos 0000000000F1A0A8, size: 4 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write old buf t:1 f:0 0000000000F0EAE0, pos 0000000000F0EB82, size: 235 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write old buf t:0 f:0 0000000000000000, pos 0000000000493E1D, size: 2 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 write new buf t:0 f:0 0000000000000000, pos 0000000000493E1A, size: 5 file: 0, size: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 http write filter: l:1 f:0 s:451 2013/11/20 01:58:05 [debug] 52611#0: *619 http write filter limit 0 2013/11/20 01:58:05 [debug] 52611#0: *619 posix_memalign: 0000000000F099A0:256 @16 2013/11/20 01:58:05 [debug] 52611#0: *619 malloc: 0000000000EF7490:16384 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL buf copy: 205 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL buf copy: 4 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL buf copy: 235 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL buf copy: 2 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL buf copy: 5 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL to write: 451 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL_write: 451 2013/11/20 01:58:05 [debug] 52611#0: *619 http write filter 0000000000000000 2013/11/20 01:58:05 [debug] 52611#0: *619 http copy filter: 0 "/api/views/uv4d-it6e/columns?$$version=2.0" 2013/11/20 01:58:05 [debug] 52611#0: *619 http finalize request: 0, "/api/views/uv4d-it6e/columns?$$version=2.0" a:1, c:1 2013/11/20 01:58:05 [debug] 52611#0: *619 event timer add: 11: 5000:1384912690121 2013/11/20 01:58:05 [debug] 52611#0: *619 post event 0000000000EC35B8 2013/11/20 01:58:05 [debug] 52611#0: *619 delete posted event 0000000000EC35B8 2013/11/20 01:58:05 [debug] 52611#0: *619 http empty handler 2013/11/20 01:58:05 [debug] 52611#0: *619 post event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *619 post event 0000000000EC35B8 2013/11/20 01:58:05 [debug] 52611#0: *619 delete posted event 0000000000EC35B8 2013/11/20 01:58:05 [debug] 52611#0: *619 http empty handler 2013/11/20 01:58:05 [debug] 52611#0: *619 delete posted event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *619 http lingering close handler 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL_read: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL_get_error: 6 2013/11/20 01:58:05 [debug] 52611#0: *619 peer shutdown SSL cleanly 2013/11/20 01:58:05 [debug] 52611#0: *619 lingering read: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 http request count:1 blk:0 2013/11/20 01:58:05 [debug] 52611#0: *619 http close request 2013/11/20 01:58:05 [debug] 52611#0: *619 http log handler 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000F0EAE0 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000EFB600, unused: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000F18E70, unused: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000F19FD0, unused: 3310 2013/11/20 01:58:05 [debug] 52611#0: *619 close http connection: 11 2013/11/20 01:58:05 [debug] 52611#0: *619 SSL_shutdown: 1 2013/11/20 01:58:05 [debug] 52611#0: *619 event timer del: 11: 1384912690121 2013/11/20 01:58:05 [debug] 52611#0: *619 reusable connection: 0 
2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000EF7490 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000F1BE20 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000E1F1A0, unused: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000F0C650, unused: 0 2013/11/20 01:58:05 [debug] 52611#0: *619 free: 0000000000F099A0, unused: 144 2013/11/20 01:58:05 [debug] 52611#0: *621 accept: 66.171.190.186 fd:11 2013/11/20 01:58:05 [debug] 52611#0: *621 event timer add: 11: 60000:1384912745204 2013/11/20 01:58:05 [debug] 52611#0: *621 reusable connection: 1 2013/11/20 01:58:05 [debug] 52611#0: *621 epoll add event: fd:11 op:1 ev:80000001 2013/11/20 01:58:05 [debug] 52611#0: *621 post event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *621 delete posted event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *621 http check ssl handshake 2013/11/20 01:58:05 [debug] 52611#0: *621 http recv(): 1 2013/11/20 01:58:05 [debug] 52611#0: *621 https ssl handshake: 0x16 2013/11/20 01:58:05 [debug] 52611#0: *621 posix_memalign: 0000000000F0C650:256 @16 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL server name: "opendata.test-socrata.com" 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_do_handshake: -1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_get_error: 2 2013/11/20 01:58:05 [debug] 52611#0: *621 reusable connection: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 post event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *621 delete posted event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL handshake handler: 0 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_do_handshake: -1 2013/11/20 01:58:05 [debug] 52611#0: *621 SSL_get_error: 2 2013/11/20 01:58:05 [debug] 52611#0: *621 post event 0000000000E8F5A8 2013/11/20 01:58:05 [debug] 52611#0: *621 delete posted event 0000000000E8F5A8 -------------- next part -------------- 17:58:04.412927 IP x.x.x.145.53166 > y.y.y.209.https: Flags [S], seq 3988579492, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 145724129 ecr 0,sackOK,eol], length 0 17:58:04.444282 IP y.y.y.209.https > x.x.x.145.53166: Flags [S.], seq 3152825397, ack 3988579493, win 14480, options [mss 1440,sackOK,TS val 155420828 ecr 145724129,nop,wscale 7], length 0 17:58:04.444324 IP y.y.y.209.https > x.x.x.145.53165: Flags [.], ack 1148, win 139, options [nop,nop,TS val 155420828 ecr 145724128], length 0 17:58:04.444344 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 1, win 8211, options [nop,nop,TS val 145724160 ecr 155420828], length 0 17:58:04.445103 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 1:221, ack 1, win 8211, options [nop,nop,TS val 145724160 ecr 155420828], length 220 17:58:04.476946 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 221, win 122, options [nop,nop,TS val 155420836 ecr 145724160], length 0 17:58:04.481414 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], seq 1:1449, ack 221, win 122, options [nop,nop,TS val 155420837 ecr 145724160], length 1448 17:58:04.481502 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], seq 1449:2897, ack 221, win 122, options [nop,nop,TS val 155420837 ecr 145724160], length 1448 17:58:04.481582 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 2897, win 8030, options [nop,nop,TS val 145724196 ecr 155420837], length 0 17:58:04.481657 IP y.y.y.209.https > x.x.x.145.53166: Flags [P.], seq 2897:2996, ack 221, win 122, options [nop,nop,TS val 155420837 ecr 145724160], length 99 17:58:04.481722 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 2996, win 8023, options 
[nop,nop,TS val 145724196 ecr 155420837], length 0 17:58:04.484108 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 221:296, ack 2996, win 8192, options [nop,nop,TS val 145724198 ecr 155420837], length 75 17:58:04.485569 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 296:302, ack 2996, win 8192, options [nop,nop,TS val 145724199 ecr 155420837], length 6 17:58:04.485767 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 302:355, ack 2996, win 8192, options [nop,nop,TS val 145724199 ecr 155420837], length 53 17:58:04.531685 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 355, win 122, options [nop,nop,TS val 155420850 ecr 145724198], length 0 17:58:04.531689 IP y.y.y.209.https > x.x.x.145.53166: Flags [P.], seq 2996:3055, ack 355, win 122, options [nop,nop,TS val 155420850 ecr 145724198], length 59 17:58:04.531827 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 3055, win 8188, options [nop,nop,TS val 145724244 ecr 155420850], length 0 17:58:04.532709 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 355:728, ack 3055, win 8192, options [nop,nop,TS val 145724244 ecr 155420850], length 373 17:58:04.532906 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 728:765, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.532954 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 765:962, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 197 17:58:04.532983 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 962:999, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.533012 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 999:1036, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.533045 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 1036:1073, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.533143 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 1073:1110, ack 3055, win 8192, options [nop,nop,TS val 145724245 ecr 155420850], length 37 17:58:04.565176 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 962, win 139, options [nop,nop,TS val 155420858 ecr 145724244], length 0 17:58:04.565184 IP y.y.y.209.https > x.x.x.145.53166: Flags [.], ack 1110, win 139, options [nop,nop,TS val 155420858 ecr 145724245], length 0 17:58:05.184331 IP y.y.y.209.https > x.x.x.145.53166: Flags [P.], seq 3055:3540, ack 1110, win 139, options [nop,nop,TS val 155421011 ecr 145724245], length 485 17:58:05.184442 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 3540, win 8161, options [nop,nop,TS val 145724894 ecr 155421011], length 0 17:58:05.184592 IP y.y.y.209.https > x.x.x.145.53166: Flags [F.], seq 3540, ack 1110, win 139, options [nop,nop,TS val 155421011 ecr 145724245], length 0 17:58:05.184658 IP x.x.x.145.53166 > y.y.y.209.https: Flags [.], ack 3541, win 8192, options [nop,nop,TS val 145724894 ecr 155421011], length 0 17:58:05.184863 IP x.x.x.145.53166 > y.y.y.209.https: Flags [P.], seq 1110:1147, ack 3541, win 8192, options [nop,nop,TS val 145724894 ecr 155421011], length 37 17:58:05.184863 IP x.x.x.145.53166 > y.y.y.209.https: Flags [F.], seq 1147, ack 3541, win 8192, options [nop,nop,TS val 145724894 ecr 155421011], length 0 From ben at indietorrent.org Thu Nov 21 03:31:07 2013 From: ben at indietorrent.org (Ben Johnson) Date: Wed, 20 Nov 2013 22:31:07 -0500 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: 
<20131120091038.GI31289@craic.sysops.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> Message-ID: <528D7E7B.60207@indietorrent.org> On 11/20/2013 4:10 AM, Francis Daly wrote: >> I think that you're exactly right. I had tried try_files first, but was >> unable to get it to work given that this site a) must be accessed via a >> "subdirectory" relative to the domain-root URL, and b) is comprised of >> files that live in a "private" directory that is outside of the >> server-root on the filesystem. > > Generally it shouldn't be a problem -- file space and url space are > different, and one of the main points of web server configuration is to > map between them. > > If you do have free choice in the matter, some things work more easily > within nginx if you can use "root" and not "alias" -- so if you want > files to be accessible below the url /stage/, having them directly in > a directory called /stage/ is convenient. > > (So if you can either get rid of the web/ directory and move all contents > up one, or add a stage/ below web/ and move all contents down one, > then set "root" appropriately and the remaining configuration may become > simpler.) > Ultimately, I have control of the configuration, so I could go either route. But in the first case, my hesitation is two-fold: a.) I prefer to maintain identical directory structures within my staging and production environments. This allows my build scripts and relative paths to remain identical (provided that I set a "root path" variable or similar in each shell script), and it helps me to remember to where I must "cd" in order to execute a build script. b.) I really prefer to keep PHP and PHP-template files out of the document root (on the filesystem). I don't want user-agents to be able to request .tpl files directly, for example, by requesting /templates/home.tpl. The same applies to configuration files, e.g., /config.inc.php. Most folks have seen what happens when a misconfiguration allows a "sensitive configuration file" to be downloaded as a plaintext file. For these reasons, and as a matter of course, I build my applications such that every request is routed through a single point that has complete authority over how the request is handled. In the second case, this seems risky; if I were to put the "stage" directory within the production site's document root, it's conceivable that doing so could have unintended consequences, such as the staging site being deleted inadvertently during the build process (think rsync with the --delete flag, as an off-the-cuff-though-not-something-I-would-actually-do-myself example of a crude "clean build process"). Issue b.) from above applies here, too. 
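A belt-and-braces idea -- just a sketch with example extensions, not part of my actual setup -- would be to refuse such requests outright in case those files ever do end up under a served directory:

    # Sketch only: refuse direct requests for template/include files
    # if they ever land under the document root. The extensions are
    # examples; adjust them to whatever the application actually uses.
    location ~* \.(tpl|inc)$ {
        deny all;
    }

Routing every request through the single front controller remains the real safeguard; a block like this just fails closed if the directory layout ever changes.
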
>>> === >>> location ^~ /stage/ { >>> alias /var/www/example.com/private/stage/web/; >>> index index.php index.html index.htm; >>> try_files $uri $uri/ /stage//stage/index.php?q=$uri; >>> >>> location ~ ^/stage/(.+\.php)$ { >>> alias /var/www/example.com/private/stage/web/$1; >>> try_files "" / /stage/index.php?q=$uri; >>> fastcgi_pass unix:/var/run/php5-fpm.sock; >>> fastcgi_param HTTPS on; >>> fastcgi_param SCRIPT_FILENAME $request_filename; >>> include /etc/nginx/fastcgi_params; >>> } >>> } >>> === > >> had read at http://wiki.nginx.org/HttpCoreModule#alias : "Note that >> there is a longstanding bug that alias and try_files don't work >> together" (with link to http://trac.nginx.org/nginx/ticket/97 ). > >> Is the implication here that "alias" does indeed work with "try_files" >> (even in my stale nginx-1.1.19 version)? At least in this particular >> use-case? > > I'd say rather that this configuration works with the current > implementation of the defect. > > So if the defect is fixed, this configuration will start to fail, but a > more obvious working configuration will be available; but if the defect > is changed or partly fixed, this configuration may start to fail without > an equivalent workaround. > > The ticket-#97 page currently (last modified date 2013-08-23) lists > three aspects in the description. > > The second aspect is, as I understand it, that in a prefix location with > alias, if the fallback contains a $variable and begins with the location > prefix, then the location prefix is stripped before the internal rewrite > is done. In this configuration, that's why the extra /stage/ is in the > first try_files. > > The third aspect is, as I understand it, that in a regex location > with alias, $document_root is set to the alias, which in this case is > equivalent to the wanted filename. That's why an argument of "" finds > the file, if it is present, in the second try_files. > > I strongly suspect that the second argument there, /, can safely be > dropped -- it can only apply if there is a directory called something.php; > and in that case, I see no difference with the / there or not (in 1.5.7). > Your ability to digest exactly what's happening here is inspiring. And humbling. :) I'm glad that your explanation is now on-record (for my own reference, if no one else's). >> There is one last trick to pull-off, which is to add very similar >> clean-URL functionality for two other files (in addition to index.php), >> but I am hoping that I will be able to adapt your working sample myself. > > I don't fully follow what you mean there; but once you can isolate your > requests in a location, then you should be able to set the fallback to > whatever you want. > Oops, I misspoke! I meant to say that there are certain cases in which I want index.php to dispatch the request to dedicated CSS and JS controllers. So, rather than request /css.php?file=home.css in the HTML, for example, I would request /css2/home.css (the /css2/ is to distinguish "virtual" requests from "real" requests for un-processed files in the standard /css/ directory). In these cases, index.php looks for /css2/ and dispatches the request to the CSS controller, which in-turn obtains the requested file from the URL (everything after /css2/) and returns the output when the template file exists and is valid. Maybe it would be helpful for me to demonstrate how I achieve this when the "site" is not accessed in a subdirectory (with respect to the URL) and the files actually exist in the virtual host's document root. 
(This is counter to the scenario that we've thus far addressed in this thread.) You actually helped me arrive at this solution a few months back. ############################################################### location / { try_files $uri $uri/ @virtual; } location @virtual { include fastcgi_params; fastcgi_pass unix:/var/lib/php5-fpm/web1.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; if ($uri ~ '/download/.*/$') { rewrite ^/(.*)/$ /$1 permanent; } if ($uri !~ '(/|\.[a-zA-Z0-9]{1,12}|/download/.*)$') { return 301 $uri/$is_args$args; } if ($uri ~ '(/|/download/.*|/css2/.*|/js2/.*)$') { rewrite ^(.*)$ /index.php?q=$1 last; } } ############################################################### This is all I'm trying to do. I just want to replicate the above with the "less-friendly" directory structure that we've been discussing. Regarding the rewrite statements (in order of appearance): 1.) If the URI is requesting a download and a trailing slash was included, remove the trailing slash. 2.) If the URI doesn't end with a trailing slash, a file extension, or contain "/download/...", redirect and add a trailing slash. 3.) If the URI ends with a trailing slash or contains "/download/...", pass the request to PHP controller. Note that URLs ending in a file extension with a trailing slash will be passed to the PHP controller, too. If there is a better, more concise, or more secure way to accomplish the above, then I am eager to learn. > It may get to the point that it is clearer to use a @named location as > the fallback, and just hardcode SCRIPT_FILENAME in them. You'll see when > you do it, no doubt. > Hmm, I think I see what you're getting at here. But is there no means by which to achieve the desired configuration with my particular directory structure? I'm a bit surprised at how difficult it is to decouple the request location from the filesystem directory from which it is served, while maintaining this clean-URL setup. >>> You may want to use something other than $uri in the last argument to >>> try_files, depending on how you want /stage/my-account/?key=value to >>> be processed. > > The usual caveats apply here regarding url escaping -- you've copied > an unescaped string directly into query string, so if there were any > characters like % or + or & involved, they may cause confusion. > Understood. The snippet that I pasted above does everything that I want/need with respect to escaping (or not), so if that can be made to work with minimal "tweaking", it would be ideal. > Cheers, > > f > Thanks again for your time, dedication, and assistance. I'm very grateful! -Ben From agentzh at gmail.com Thu Nov 21 05:17:36 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 20 Nov 2013 21:17:36 -0800 Subject: [ANN] ngx_openresty stable version 1.4.3.6 released Message-ID: Hello! I just released a new stable version for ngx_openresty, 1.4.3.6, which includes the latest official fix for the security issue CVE-2013-4547 in the Nginx core: http://openresty.org/#Download Best regards, -agentzh From mdounin at mdounin.ru Thu Nov 21 08:48:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Nov 2013 12:48:41 +0400 Subject: Nginx Websocket proxy with Microsoft IE 10 client In-Reply-To: References: Message-ID: <20131121084841.GG41579@mdounin.ru> Hello! On Wed, Nov 20, 2013 at 03:17:11PM -0800, Alder Network wrote: > Is that a known issue? Any remedy available? What's the issue? 
-- Maxim Dounin http://nginx.org/en/donation.html From yzprofiles at gmail.com Thu Nov 21 09:15:58 2013 From: yzprofiles at gmail.com (yzprofile) Date: Thu, 21 Nov 2013 17:15:58 +0800 Subject: =?UTF-8?B?5Zue5aSN77yaIG5naW54IHNlY3VyaXR5IGFkdmlzb3J5IChDVkUtMjAxMy00NTQ3?= =?UTF-8?B?KQ==?= In-Reply-To: <20131119150221.GL41579@mdounin.ru> References: <20131119150221.GL41579@mdounin.ru> Message-ID: <2FC997BAA28A4DE7B2DB850728DD23C7@gmail.com> Hi, I have a question about this POC: > location /protected/ { > deny all; > } > > location ~ \.php$ { > fastcgi_pass ... > } These locations have different priorities, http://nginx.org/en/docs/http/ngx_http_core_module.html#location I think every request like "/protected/hello.php" can bypass a security restriction like "location /protected {deny all;}" anyway. Is there something wrong with this POC description, or is there something I misunderstand? Thanks. Regards. yzprofile > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 21 11:34:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Nov 2013 15:34:59 +0400 Subject: Intermittant SSL Problems In-Reply-To: References: Message-ID: <20131121113459.GI41579@mdounin.ru> Hello! On Wed, Nov 20, 2013 at 06:03:11PM -0800, Will Pugh wrote: > Hi folks, > > We are using Nginx for SSL termination, and then it proxies to an ATS or > Haproxy server depending on our environment. > > We're running into a problem where every now and then, Nginx closes a > connection due to a timeout. When investigating, it looks like the > connections that are being timed-out are not being forwarded to the backend > service. The scenario when we were able to best reproduce this is one > where one of our Java client was running about 100 REST requests that were > fairly similar. I've attached files that contain both the tcpdump from the > client side as well as the debug log on the nginx side. > > I tried comparing a successful and unsuccessful request next to each > other. From the client side, it looks like the messages back and forth > look very consistent. On the nginx side, the first difference seems to > happen when reading in the Http Request. The requests that fail, all seem > to do a partial read: [...] I think I see what happens here: - Due to a partial read the c->read->ready flag is reset. - While processing request headers rest of the body becomes available in a socket buffer. - The ngx_http_do_read_client_request_body() function calls c->recv(), ngx_ssl_recv() in case of SSL, and rest of the data is read from the kernel to OpenSSL buffers. The c->recv() is called with a limited buffer space though (in the debug log provided, only 16 bytes - as this happens during reading http chunk header), and only part of the data becomes available to nginx. - As c->read->ready isn't set, ngx_http_do_read_client_request_body() arms a read event and returns to the event loop. - The socket buffer is empty, so no further events are reported by the kernel till timeout.
Please try the following patch: --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1025,6 +1025,7 @@ ngx_ssl_recv(ngx_connection_t *c, u_char size -= n; if (size == 0) { + c->read->ready = 1; return bytes; } @@ -1034,6 +1035,10 @@ ngx_ssl_recv(ngx_connection_t *c, u_char } if (bytes) { + if (c->ssl->last != NGX_AGAIN) { + c->read->ready = 1; + } + return bytes; } -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Nov 21 11:58:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Nov 2013 15:58:56 +0400 Subject: =?UTF-8?B?UmU6IOWbnuWkje+8miBuZ2lueCBzZWN1cml0eSBhZHZpc29yeSAoQ1ZFLTIwMTMt?= =?UTF-8?B?NDU0Nyk=?= In-Reply-To: <2FC997BAA28A4DE7B2DB850728DD23C7@gmail.com> References: <20131119150221.GL41579@mdounin.ru> <2FC997BAA28A4DE7B2DB850728DD23C7@gmail.com> Message-ID: <20131121115856.GJ41579@mdounin.ru> Hello! On Thu, Nov 21, 2013 at 05:15:58PM +0800, yzprofile wrote: > Hi, > > I have a question with this POC: > > > location /protected/ { > > deny all; > > } > > > > location ~ \.php$ { > > fastcgi_pass ... > > } > > > These locations own different priorities, http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > I think every request like ?/protected/hello.php? can bypass this security restriction like ?location /protected {deny all;}?. > > Is there something wrong with this POC description or something I misunderstand? Thanks. These are distinct examples of affected configurations. Obviously if you have both locations in your configuration exactly as written, access to "/protected/hello.php" is not restricted (and there is nothing to bypass). This is actually a common configuration mistake to write a configuration like this and assume that access to php files under "/protected/" is restricted. Correct solution would be to use "^~" modifier to prevent checking of regexp locations: location ^~ /protected/ { deny all; } location ~ \.php$ { ... } or using nested locations to isolate regexp locations: location / { # public location ~ \.php$ { ... } } location /protected/ { auth_basic ... location ~ \.php$ { ... } } -- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Thu Nov 21 20:45:02 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 21 Nov 2013 20:45:02 +0000 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <528D7E7B.60207@indietorrent.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> <528D7E7B.60207@indietorrent.org> Message-ID: <20131121204502.GB943@craic.sysops.org> On Wed, Nov 20, 2013 at 10:31:07PM -0500, Ben Johnson wrote: > On 11/20/2013 4:10 AM, Francis Daly wrote: Hi there, > > If you do have free choice in the matter, some things work more easily > > within nginx if you can use "root" and not "alias" -- so if you want > > files to be accessible below the url /stage/, having them directly in > > a directory called /stage/ is convenient. > a.) I prefer to maintain identical directory structures within my > staging and production environments. This allows my build scripts and > relative paths to remain identical (provided that I set a "root path" > variable or similar in each shell script), and it helps me to remember > to where I must "cd" in order to execute a build script. That's perfectly sensible. 
And the web server configurations for production and staging would also be very similar, if they were served at similar points in the hierarchy of the web domain or domains. But because you serve one at "/" and one at "/stable/", you end up having different configurations. And because of nginx's implementation of "alias", and your use of a filesystem hierarchy which means you need "alias", you end up having very different configurations. > b.) I really prefer to keep PHP and PHP-template files out of the > document root (on the filesystem). That seems mostly unrelated, unless I'm missing something. Put your php files in "php/" which is parallel to "web/" if you like, nginx won't care. In fact, if you don't use try_files, nginx won't even look. All it needs to know is what filename to tell the fastcgi server to read. > I don't want user-agents to be able > to request .tpl files directly, for example, by requesting > /templates/home.tpl. I confess that I thought that was standard -- the php file itself should have written in it "read your associated files from this directory location", which should not be web-accessible (unless you specifically want it to be). > course, I build my applications such that every request is routed > through a single point that has complete authority over how the request > is handled. If you really mean "every", then the only nginx config you need is location ^~ /myapp/ { include fastcgi_params; fastcgi_param SCRIPT_FILENAME /var/myapp.php; fastcgi_pass unix:php.sock; } Anything else is the web server getting in the way (and implementing things that it can probably do more efficiently than your application can). So the time spent arranging the web server configuration you want can be counted as time saved not implementing the extra features in your application. > In the second case, this seems risky; if I were to put the "stage" > directory within the production site's document root, I may have been unclear, or I may be misunderstanding you now. What I intended to suggest was that: currently all of your staging web content is visible on the filesystem in /var/www/example.com/private/stage/web/; if you either move or copy (or possibly even symlink) it to be visible in /var/www/example.com/private/stage/web/stage/, then you will be able to configure nginx using "root" instead of "alias", and your "staging" configuration can look much more like your "production" one. location ^~ /stage/ { root /var/www/example.com/private/stage/web/; # The files are read from /var/www/example.com/private/stage/web/stage/ index index.php index.html index.htm; try_files $uri $uri/ /stage/index.php?q=$uri; location ~ \.php$ { # use a different "root" here if you want; but make sure the php # files can be read from within "stage/" below that root. try_files $uri /stage/index.php?q=$uri; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } > I meant to say that there are certain cases in which I want index.php to > dispatch the request to dedicated CSS and JS controllers. If "dispatch to the controller" means "something within php, and therefore nginx doesn't have to know or care about it", then it probably should Just Work. > Maybe it would be helpful for me to demonstrate how I achieve this when > the "site" is not accessed in a subdirectory (with respect to the URL) > and the files actually exist in the virtual host's document root. 
> location / { > try_files $uri $uri/ @virtual; > } That'll become "/stage/" and "@stagevirtual", I guess. But it'll probably want a suitable "root" added -- your "location /" can safely inherit the server one, while "/stage/" can't. > location @virtual { > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; @stagevirtual will need to know its version of $document_root. So set a suitable "root" there too. > if ($uri ~ '/download/.*/$') { > rewrite ^/(.*)/$ /$1 permanent; "/stage/" will already be there in both parts of the rewrite, so no change needed. > if ($uri !~ '(/|\.[a-zA-Z0-9]{1,12}|/download/.*)$') { > return 301 $uri/$is_args$args; "/stage/" will already be there in $uri, so no change needed. > if ($uri ~ '(/|/download/.*|/css2/.*|/js2/.*)$') { > rewrite ^(.*)$ /index.php?q=$1 last; That should become "/stage/index.php". (Note that in this specific rewrite, $1 == $uri, so you could avoid the match-and-capture.) > This is all I'm trying to do. I just want to replicate the above with > the "less-friendly" directory structure that we've been discussing. I'm not seeing many changes being needed. Provided you avoid "alias". Which requires a filesystem change. Which hopefully won't break the non-nginx part of your workflow. (Actually, within your @stagevirtual, the only place root/alias seems to matter is in the SCRIPT_FILENAME thing. So if you can build your own from the variables you have to hand, you can probably get away with whatever file structure you like.) > If there is a better, more concise, or more secure way to accomplish the > above, then I am eager to learn. In your staging web directory: ln -s . stage. In your staging php directory: ln -s . stage. Then set "root" in your "location ^~/stage/" and in your "location @stagevirtual". So long as everything that tries to recurse into the directories can recognise the symlink loop, it should be fine. Then in production, either "rm stage", or don't create the link in the first place. > Hmm, I think I see what you're getting at here. But is there no means by > which to achieve the desired configuration with my particular directory > structure? I'm a bit surprised at how difficult it is to decouple the > request location from the filesystem directory from which it is served, > while maintaining this clean-URL setup. If you want nginx, and you want a directory structure that requires nginx's "alias", then you get to deal with nginx's "alias", which has some imperfections. Change one of your wants, and the problem disappears. All the best, f -- Francis Daly francis at daoine.org From aldernetwork at gmail.com Thu Nov 21 22:12:47 2013 From: aldernetwork at gmail.com (Alder Network) Date: Thu, 21 Nov 2013 14:12:47 -0800 Subject: Nginx Websocket proxy with Microsoft IE 10 client In-Reply-To: <20131121084841.GG41579@mdounin.ru> References: <20131121084841.GG41579@mdounin.ru> Message-ID: when client is Internet Explorer 10, the websocket session didn't get proxy'd to websocket server (websocketpp) other browser clients work fine. On Thu, Nov 21, 2013 at 12:48 AM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 20, 2013 at 03:17:11PM -0800, Alder Network wrote: > > > Is that a known issue? Any remedy available? > > What's the issue? > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From willpugh at gmail.com Thu Nov 21 23:50:08 2013 From: willpugh at gmail.com (Will Pugh) Date: Thu, 21 Nov 2013 15:50:08 -0800 Subject: Intermittant SSL Problems In-Reply-To: <20131121113459.GI41579@mdounin.ru> References: <20131121113459.GI41579@mdounin.ru> Message-ID: Cool. Thanks! Initial testing looks like this fixed it. --Will On Thu, Nov 21, 2013 at 3:34 AM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 20, 2013 at 06:03:11PM -0800, Will Pugh wrote: > > > Hi folks, > > > > We are using Nginx for SSL termination, and then it proxies to an ATS or > > Haproxy server depending on our environment. > > > > We're running into a problem where every now and then, Nginx closes a > > connection due to a timeout. When investigating, it looks like the > > connections that are being timed-out are not being forwarded to the > backend > > service. The scenario when we were able to best reproduce this is one > > where one of our Java client was running about 100 REST requests that > were > > fairly similar. I've attached files that contain both the tcpdump from > the > > client side as well as the debug log on the nginx side. > > > > I tried comparing a successful and unsuccessful request next to each > > other. From the client side, it looks like the messages back and forth > > look very consistent. On the nginx side, the first difference seems to > > happen when reading in the Http Request. The requests that fail, all > seem > > to do a partial read: > > [...] > > I think I see what happens here: > > - Due to a partial read the c->read->ready flag is reset. > > - While processing request headers rest of the body becomes available in a > socket buffer. > > - The ngx_http_do_read_client_request_body() function calls c->recv(), > ngx_ssl_recv() in case of SSL, and rest of the data is read from the > kernel to OpenSSL buffers. The c->recv() is called with a limited > buffer space though (in the debug log provided, only 16 bytes - as this > happens during reading http chunk header), and only part of the data > becomes available to nginx. > > - As c->read->ready isn't set, ngx_http_do_read_client_request_body() arms > a read event and returns to the event loop. > > - The socket buffer is empty, so no further events are reported by > the kernel till timeout. > > Please try the following patch: > > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -1025,6 +1025,7 @@ ngx_ssl_recv(ngx_connection_t *c, u_char > size -= n; > > if (size == 0) { > + c->read->ready = 1; > return bytes; > } > > @@ -1034,6 +1035,10 @@ ngx_ssl_recv(ngx_connection_t *c, u_char > } > > if (bytes) { > + if (c->ssl->last != NGX_AGAIN) { > + c->read->ready = 1; > + } > + > return bytes; > } > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Fri Nov 22 02:16:22 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 22 Nov 2013 02:16:22 +0000 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <20131121204502.GB943@craic.sysops.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> <528D7E7B.60207@indietorrent.org> <20131121204502.GB943@craic.sysops.org> Message-ID: <20131122021622.GC943@craic.sysops.org> On Thu, Nov 21, 2013 at 08:45:02PM +0000, Francis Daly wrote: > On Wed, Nov 20, 2013 at 10:31:07PM -0500, Ben Johnson wrote: ...and one more possibility... If your application directory structure is such that: /var/www/myapp/web/ contains only static files that should be served as-is if requested, with appropriate "index" files for some directories if wanted; and /var/www/myapp/php/ contains "myapp.php", which is the single controlling script that will always be called if the web request is not for a static file; and whatever other scripts, templates, and other things that are necessary for myapp.php to refer to are here, or are anywhere other than in /var/www/myapp/web/, and the application should be accessible from the web url "/app1/", then possibly all the locations you need are === # all requests for this app location ^~ /app1/ { alias /var/www/myapp/web/; error_page 404 403 = @app1; } # the main controller. Set whatever you want in the include file location @app1 { fastcgi_pass unix:php.sock; fastcgi_param SCRIPT_FILENAME /var/www/myapp/php/myapp.php; include fastcgi_params; } === > > if ($uri ~ '/download/.*/$') { > > rewrite ^/(.*)/$ /$1 permanent; > > if ($uri !~ '(/|\.[a-zA-Z0-9]{1,12}|/download/.*)$') { > > return 301 $uri/$is_args$args; > > if ($uri ~ '(/|/download/.*|/css2/.*|/js2/.*)$') { > > rewrite ^(.*)$ /index.php?q=$1 last; If you want to keep those "rewrites" within nginx.conf, just use the first two directly within @app1, and I think the third will not be needed. myapp.php should be able to find all it needs in _SERVER["DOCUMENT_URI"], so it shouldn't need _REQUEST["q"]. And a staging version would merely be another application -- copy the two locations, change the prefix and name, change the alias, error_page, and SCRIPT_FILENAME. All done. An application at / or at /sub/url/ would be no different. You might be able to use "root" instead of "alias" in some cases, but I don't think it matters here. > > If there is a better, more concise, or more secure way to accomplish the > > above, then I am eager to learn. The above is more concise -- partly because it tries to do less. The previous version allowed you to access the urls /app1/one.php and /app1/two.php in the client, and the separate files would be processed. This version would return the php files directly from web/, or would let php/myapp.php decide what to do if the files aren't in web/. That's the key to the smaller config -- nginx only tells the fastcgi server to process a single file. f -- Francis Daly francis at daoine.org From ianevans at digitalhit.com Fri Nov 22 07:01:52 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Fri, 22 Nov 2013 02:01:52 -0500 Subject: Debugging and ubuntu Message-ID: <528F0160.3060400@digitalhit.com> Been running nginx for _years_ on centos and am in the process of migrating to an Ubuntu (raring) server. 
I've always compiled from source before but figured I'd use the Ubuntu apt-get install. For whatever reason, testing is not working at all (I can serve static but not php) and I set the error_log level to debug, but I'm still getting notice level messages. Does the package version not include debug (don't see it with -V) and if so how can I get it through the package system? My previous CentOS migration was smooth, so I don't know why this one has me tearing my hair out. From yatiohi at ideopolis.gr Fri Nov 22 07:49:08 2013 From: yatiohi at ideopolis.gr (Christos Trochalakis) Date: Fri, 22 Nov 2013 09:49:08 +0200 Subject: Debian packages for CVE-2013-4547 In-Reply-To: <20131119150221.GL41579@mdounin.ru> References: <20131119150221.GL41579@mdounin.ru> Message-ID: <20131122074907.GA5843@luke.ws.skroutz.gr> On Tue, Nov 19, 2013 at 07:02:21PM +0400, Maxim Dounin wrote: >Hello! > >Ivan Fratric of the Google Security Team discovered a bug in nginx, >which might allow an attacker to bypass security restrictions in certain >configurations by using a specially crafted request, or might have >potential other impact (CVE-2013-4547). > I wanted to inform the list that debian has uploaded packages to handle the issue: nginx 1.2.1-2.2+wheezy2 for wheezy (includes the backported patch) nginx 1.4.4-1 for sid http://lists.debian.org/debian-security-announce/2013/msg00215.html From francis at daoine.org Fri Nov 22 09:06:17 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 22 Nov 2013 09:06:17 +0000 Subject: Debugging and ubuntu In-Reply-To: <528F0160.3060400@digitalhit.com> References: <528F0160.3060400@digitalhit.com> Message-ID: <20131122090617.GD943@craic.sysops.org> On Fri, Nov 22, 2013 at 02:01:52AM -0500, Ian Evans wrote: Hi there, > Does the package version not include debug (don't see it with -V) and if > so how can I get it through the package system? There's more than one nginx package in Ubuntu (raring). Perhaps you haven't installed the one you want to. > My previous CentOS migration was smooth, so I don't know why this one > has me tearing my hair out. You know The CentOS Way -- or you bypassed it by building yourself. You don't know The Ubuntu Way. I suspect that you'll be unhappy until you learn it, or until you learn to bypass it. f -- Francis Daly francis at daoine.org From farseas at gmail.com Fri Nov 22 09:14:12 2013 From: farseas at gmail.com (Bob S.) Date: Fri, 22 Nov 2013 04:14:12 -0500 Subject: Debugging and ubuntu In-Reply-To: <20131122090617.GD943@craic.sysops.org> References: <528F0160.3060400@digitalhit.com> <20131122090617.GD943@craic.sysops.org> Message-ID: We run Ubuntu on many servers and have come to prefer the binaries offered by nginx.org rather than the Ubuntu ppa's One advantage is that upgrades go smoother because the nginx upgrades don't ask unnecessary questions. On Fri, Nov 22, 2013 at 4:06 AM, Francis Daly wrote: > On Fri, Nov 22, 2013 at 02:01:52AM -0500, Ian Evans wrote: > > Hi there, > > > Does the package version not include debug (don't see it with -V) and if > > so how can I get it through the package system? > > There's more than one nginx package in Ubuntu (raring). > > Perhaps you haven't installed the one you want to. > > > My previous CentOS migration was smooth, so I don't know why this one > > has me tearing my hair out. > > You know The CentOS Way -- or you bypassed it by building yourself. > > You don't know The Ubuntu Way. > > I suspect that you'll be unhappy until you learn it, or until you learn > to bypass it. 
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianevans at digitalhit.com Fri Nov 22 09:44:36 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Fri, 22 Nov 2013 04:44:36 -0500 Subject: Debugging and ubuntu In-Reply-To: References: <528F0160.3060400@digitalhit.com> <20131122090617.GD943@craic.sysops.org> Message-ID: <528F2784.2080700@digitalhit.com> On 22/11/2013 4:14 AM, Bob S. wrote: > We run Ubuntu on many servers and have come to prefer the binaries > offered by nginx.org rather than the Ubuntu ppa's > One advantage is that upgrades go smoother because the nginx upgrades > don't ask unnecessary questions. > On Fri, Nov 22, 2013 at 4:06 AM, Francis Daly > wrote: > On Fri, Nov 22, 2013 at 02:01:52AM -0500, Ian Evans wrote: > Hi there, > > Does the package version not include debug (don't see it with -V) > and if > > so how can I get it through the package system? > > There's more than one nginx package in Ubuntu (raring). Well I uninstalled and changed to the ppa version, which comes with debug. Scratching my head on this one. I created a file that has just the phpinfo(); and looking at the debug log I can see that it appears to hit the fastcgi (it returns an x-powered-by header) but returns a blank page: 2013/11/22 04:23:51 [debug] 5453#0: *10 http upstream request: "/carsontest.php?" 2013/11/22 04:23:51 [debug] 5453#0: *10 http upstream process header 2013/11/22 04:23:51 [debug] 5453#0: *10 malloc: 0000000001BB71D0:4096 2013/11/22 04:23:51 [debug] 5453#0: *10 recv: fd:8 88 of 4096 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 01 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 06 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 00 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 01 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 00 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 3F 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 01 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record byte: 00 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi record length: 63 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi parser: 0 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi header: "X-Powered-By: PHP/5.4.9-4ubuntu2.3" 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi parser: 0 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi header: "Content-type: text/html" 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi parser: 1 2013/11/22 04:23:51 [debug] 5453#0: *10 http fastcgi header done 2013/11/22 04:23:51 [debug] 5453#0: *10 xslt filter header 2013/11/22 04:23:51 [debug] 5453#0: *10 posix_memalign: 0000000001BB81E0:4096 @16 2013/11/22 04:23:51 [debug] 5453#0: *10 HTTP/1.1 200 OK Server: nginx/1.4.3 Date: Fri, 22 Nov 2013 09:23:51 GMT Content-Type: text/html Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding X-Powered-By: PHP/5.4.9-4ubuntu2.3 Content-Encoding: gzip Again...a head scratcher since I've been running nginx and php-fpm for years. Perhaps it's worked so well that my memory is foggy from the last time I did it. 
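(For comparison, a minimal PHP location that normally works with php-fpm looks something like the sketch below. The backend address is a placeholder for wherever php-fpm is listening, and the point of interest is SCRIPT_FILENAME: the stock Ubuntu/Debian fastcgi_params file may not define it, so the backend is never told which file to execute, which can show up as exactly this kind of blank 200 response.)

location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    # Pass the script path explicitly; the packaged fastcgi_params may
    # only set SCRIPT_NAME, which php-fpm does not use to find the file.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}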
From ianevans at digitalhit.com Fri Nov 22 10:01:20 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Fri, 22 Nov 2013 05:01:20 -0500 Subject: Debugging and ubuntu In-Reply-To: <528F2784.2080700@digitalhit.com> References: <528F0160.3060400@digitalhit.com> <20131122090617.GD943@craic.sysops.org> <528F2784.2080700@digitalhit.com> Message-ID: <528F2B70.5000600@digitalhit.com> On 22/11/2013 4:44 AM, Ian Evans wrote:[snip] > Scratching my head on this one. I created a file that has just the > phpinfo(); and looking at the debug log I can see that it appears to hit > the fastcgi (it returns an x-powered-by header) but returns a blank page: > [snip] > Again...a head scratcher since I've been running nginx and php-fpm for > years. Perhaps it's worked so well that my memory is foggy from the last > time I did it. Found this... http://forum.nginx.org/read.php?3,197023,200719#msg-200719 Added fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; and so far so good. From yaoweibin at gmail.com Fri Nov 22 11:06:03 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Fri, 22 Nov 2013 19:06:03 +0800 Subject: [ANNOUNCE] Tengine-1.5.2 is released (fix CVE-2013-4547) Message-ID: Hi folks, Tengine-1.5.2 (stable version) has been released. You can either checkout the source code from github: https://github.com/alibaba/tengine/tree/stable or download the tar ball directly: http://tengine.taobao.org/download/tengine-1.5.2.tar.gz We have fixed the security problem CVE-2013-4547. A character following an unescaped space in a request line was handled incorrectly. This bug had appeared since 1.2.0. The full change log follows below: *) Security: a character following an unescaped space in a request line was handled incorrectly (CVE-2013-4547); the bug had appeared in 0.8.41. Thanks to Ivan Fratric of the Google Security Team. *) Bugfix: fix a bug of 'nodelay' might be ignored in limit_req module. (cfsego) *) Bugfix: fix a bug in trim module when processing javascript comment. (taoyuanyuan) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Regards, -- Weibin Yao Developer @ Server Platform Team of Taobao From mdounin at mdounin.ru Fri Nov 22 12:52:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 22 Nov 2013 16:52:53 +0400 Subject: Nginx Websocket proxy with Microsoft IE 10 client In-Reply-To: References: <20131121084841.GG41579@mdounin.ru> Message-ID: <20131122125253.GS41579@mdounin.ru> Hello! On Thu, Nov 21, 2013 at 02:12:47PM -0800, Alder Network wrote: > when client is Internet Explorer 10, the websocket session didn't get > proxy'd to websocket server (websocketpp) other browser > clients work fine. Works fine here, just tested: 192.168.56.1 - - [22/Nov/2013:16:48:39 +0400] "GET /websocket.html HTTP/1.1" 304 0 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)" 192.168.56.1 - - [22/Nov/2013:16:49:03 +0400] "GET /websocket/ HTTP/1.1" 101 21 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)" (Note "101" response logged after a tab is closed.) You may want to find out more details, see http://wiki.nginx.org/Debugging for some hints. 
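For reference, proxying WebSocket connections needs the Upgrade and Connection headers forwarded explicitly; a minimal location, with a placeholder backend address, looks something like this:

location /websocket/ {
    proxy_pass http://127.0.0.1:9000;
    proxy_http_version 1.1;
    # Forward the handshake headers so the upgrade to the WebSocket
    # protocol can reach the backend.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

If those headers were missing, the handshake would fail in every browser, not just IE 10, so a problem limited to one client most likely lies elsewhere.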
-- Maxim Dounin http://nginx.org/en/donation.html From JPomorski at wolterskluwer.pl Fri Nov 22 13:47:23 2013 From: JPomorski at wolterskluwer.pl (=?iso-8859-2?Q?Pomorski_Jaros=B3aw?=) Date: Fri, 22 Nov 2013 14:47:23 +0100 Subject: Abstract behavior of nginx Message-ID: <7DBEE2A69F5C5544B53298D4CE1C956937709B66B0@cerber.wkpolska.pl> Hi, I can't understand behavior of nginx. Version 1.2.1 on Debian Wheezy from official repository. I send requests to cdn.some_domain.pl server, and in log /var/log/nginx/cdn.some_domain.pl/test.log I see: image/gif:1 image/png:1 image/png:1 image/gif:1 image/png:1 image/gif:1 It is correct. If I remove hash sign in 3 last line of configuration file, nginx puts to /var/log/nginx/cdn.some_domain.pl/test.log below entries: -:0 -:0 -:0 -:0 -:0 -:0 I don't understand, why in this configuration, value of $sent_http_content_type variable is empty. Configuration: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; proxy_set_header Host $host; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; map $sent_http_content_type $cdn { default 0; text/css 1; text/javascript 1; image/x-icon 1; image/gif 1; image/jpeg 1; image/png 1; } log_format test $sent_http_content_type:$cdn; server { listen 82 default; ## listen for ipv4 listen [::]:82 default ipv6only=on; ## listen for ipv6 server_name localhost ""; access_log /var/log/nginx/localhost.access.log; error_log /var/log/nginx/localhost.error.log; location / { root /var/www; index index.html index.htm; } location /test.txt { proxy_pass http://$server_addr:8080; } } server { listen 82; server_name cdn.some_domain.pl; location / { proxy_pass http://$server_addr:8080; } location /test.jsp { proxy_pass http://$server_addr:8080; allow 10.0.0.0/8; deny all; } access_log /var/log/nginx/cdn.some_domain.pl/test.log test; # if ($cdn) { # return 404; # } } } Regards, Jarek -------------- next part -------------- An HTML attachment was scrubbed... URL: From fusca14 at gmail.com Fri Nov 22 14:24:44 2013 From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho) Date: Fri, 22 Nov 2013 12:24:44 -0200 Subject: Compile NGINX with Openssl statically Message-ID: Hi members... I need your help. I'm trying to compile NGINX 1.4.4 with openssl 1.0.1e statically but the make process fail. 
Step 1: ./configure --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/bin/nginx --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --user=http --group=http --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/client-body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-openssl=/tmp/compile/openssl-1.0.1e --with-md5=/tmp/compile/openssl-1.0.1e --with-md5-asm --with-sha1=/tmp/compile/openssl-1.0.1e --with-sha1-asm --with-pcre=/tmp/compile/pcre-8.33 --with-zlib=/tmp/compile/zlib-1.2.8 --with-imap --with-imap_ssl_module --with-pcre-jit --with-file-aio --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_realip_module --with-http_spdy_module --with-http_ssl_module --with-http_stub_status_module --with-http_addition_module --with-http_degradation_module --with-http_flv_module --with-http_mp4_module --with-http_secure_link_module --with-http_sub_module Step2: make ... and I get the GCC error message: ... making all in tools... make[3]: Entering directory '/tmp/compile/openssl-1.0.1e/tools' make[3]: Nothing to be done for 'all'. make[3]: Leaving directory '/tmp/compile/openssl-1.0.1e/tools' installing man1/asn1parse.1 installing man1/CA.pl.1 installing man1/ca.1 installing man1/ciphers.1 installing man1/cms.1 cms.pod around line 457: Expected text after =item, not a number cms.pod around line 461: Expected text after =item, not a number cms.pod around line 465: Expected text after =item, not a number cms.pod around line 470: Expected text after =item, not a number cms.pod around line 474: Expected text after =item, not a number POD document had syntax errors at /usr/bin/core_perl/pod2man line 71. Makefile:639: recipe for target 'install_docs' failed make[2]: *** [install_docs] Error 1 make[2]: Leaving directory '/tmp/compile/openssl-1.0.1e' objs/Makefile:1395: recipe for target '/tmp/compile/openssl-1.0.1e/.openssl/include/openssl/ssl.h' failed make[1]: *** [/tmp/compile/openssl-1.0.1e/.openssl/include/openssl/ssl.h] Error 2 make[1]: Leaving directory '/tmp/compile/nginx-1.4.4' Makefile:8: recipe for target 'build' failed make: *** [build] Error 2 What I am doing wrong? Thanks in advance and sorry about my poor english. Fabiano From agentzh at gmail.com Fri Nov 22 19:51:54 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 22 Nov 2013 11:51:54 -0800 Subject: [openresty-en] Re: [ANN] ngx_openresty stable version 1.4.3.6 released In-Reply-To: <4e37f0d8-b4a0-491f-9e59-6882d2abb958@googlegroups.com> References: <4e37f0d8-b4a0-491f-9e59-6882d2abb958@googlegroups.com> Message-ID: Hello! On Fri, Nov 22, 2013 at 10:07 AM, Roberto Ostinelli wrote: > can I ask you why the 'stable' version is more recent than the 'mainline' > one? > Because the new mainline version of openresty is not ready to release atm :) It's expected to be out soon. 
Best regards,
-agentzh

From rva at onvaoo.com Sat Nov 23 10:00:52 2013
From: rva at onvaoo.com (Ronald Van Assche)
Date: Sat, 23 Nov 2013 11:00:52 +0100
Subject: blank page on some micro-cached pages
Message-ID: 

Running Nginx 1.5.7 on FreeBSD 9.1 with php-fpm 5.5.3.

A WordPress site is "micro-cached", but for some strange reason, and not always, some pages (WordPress posts or pages, even 3 or 7 days after their publication) are rendered as blank pages for some users (me and others, not always the same ones, with different browsers), while the same connection can read other cached pages/posts with no problem.

The size of the whole site in /cache is 72 Kb. Very small indeed.

The bad cached pages are logged like this:

"GET /url HTTP/1.1" 200 31

Always 200 as the result code, and 31 for the size; 31 is not the actual size of the page, of course.

If I refresh my local cache (SHIFT CMD-R) in Chrome, the correct page is reloaded and cached in /cache/x/y/somefile by nginx.

Any help?

Some config:

fastcgi_cache_path /cache/nginx levels=1:2 keys_zone=microcache:5m max_size=2000m;
sendfile on;
sendfile_max_chunk 512K;
aio sendfile;
tcp_nopush on;
read_ahead 256K;
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
keepalive_timeout 120;
keepalive_requests 10000;
client_max_body_size 2m;
gzip on;
gzip_buffers 48 8k;
gzip_comp_level 4;
gzip_http_version 1.0;
gzip_vary on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

From ben at indietorrent.org Sat Nov 23 17:36:55 2013
From: ben at indietorrent.org (Ben Johnson)
Date: Sat, 23 Nov 2013 12:36:55 -0500
Subject: Clean-URL rewrite rule with nested "location" and alias directive
In-Reply-To: <20131121204502.GB943@craic.sysops.org>
References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> <528D7E7B.60207@indietorrent.org> <20131121204502.GB943@craic.sysops.org>
Message-ID: <5290E7B7.7040100@indietorrent.org>

On 11/21/2013 3:45 PM, Francis Daly wrote:
> I see; this seems to be the crux of my struggle. :) >> b.) I really prefer to keep PHP and PHP-template files out of the >> document root (on the filesystem). > > That seems mostly unrelated, unless I'm missing something. Put your php > files in "php/" which is parallel to "web/" if you like, nginx won't > care. In fact, if you don't use try_files, nginx won't even look. All > it needs to know is what filename to tell the fastcgi server to read. > >> I don't want user-agents to be able >> to request .tpl files directly, for example, by requesting >> /templates/home.tpl. > > I confess that I thought that was standard -- the php file itself should > have written in it "read your associated files from this directory > location", which should not be web-accessible (unless you specifically > want it to be). > It *should* be standard, but consider frameworks such as WordPress, Joomla, Drupal, etc. They all require (or at least assume) everything to be within the virtual host's document root (the "web" directory, per our discussion here). Not directly relevant; just an observation that we seem to be "ahead of the pack" in this best-practice. >> course, I build my applications such that every request is routed >> through a single point that has complete authority over how the request >> is handled. > > If you really mean "every", then the only nginx config you need is > > location ^~ /myapp/ { > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME /var/myapp.php; > fastcgi_pass unix:php.sock; > } > > Anything else is the web server getting in the way (and implementing > things that it can probably do more efficiently than your application > can). > You're right; I don't mean *every* request -- I mean only those that are not for a "real file" and that match the format of a resource that I may serve via PHP. > So the time spent arranging the web server configuration you want can > be counted as time saved not implementing the extra features in your > application. > >> In the second case, this seems risky; if I were to put the "stage" >> directory within the production site's document root, > > I may have been unclear, or I may be misunderstanding you now. > > What I intended to suggest was that: currently all of > your staging web content is visible on the filesystem in > /var/www/example.com/private/stage/web/; > > if you either move or copy (or possibly even symlink) it to be visible > in /var/www/example.com/private/stage/web/stage/, then you will be able > to configure nginx using "root" instead of "alias", and your "staging" > configuration can look much more like your "production" one. > Yes, this was a simple misunderstanding; I thought you meant to move everything to /var/www/example.com/web/stage/. Thanks for clarifying this point. This seems to be "the hot ticket". I have no reason for not doing this. > location ^~ /stage/ { > root /var/www/example.com/private/stage/web/; > # The files are read from /var/www/example.com/private/stage/web/stage/ > index index.php index.html index.htm; > try_files $uri $uri/ /stage/index.php?q=$uri; > > location ~ \.php$ { > # use a different "root" here if you want; but make sure the php > # files can be read from within "stage/" below that root. 
> try_files $uri /stage/index.php?q=$uri; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include /etc/nginx/fastcgi_params; > } > } > > >> I meant to say that there are certain cases in which I want index.php to >> dispatch the request to dedicated CSS and JS controllers. > > If "dispatch to the controller" means "something within php, and therefore > nginx doesn't have to know or care about it", then it probably should > Just Work. > >> Maybe it would be helpful for me to demonstrate how I achieve this when >> the "site" is not accessed in a subdirectory (with respect to the URL) >> and the files actually exist in the virtual host's document root. > >> location / { >> try_files $uri $uri/ @virtual; >> } > > That'll become "/stage/" and "@stagevirtual", I guess. But it'll probably > want a suitable "root" added -- your "location /" can safely inherit > the server one, while "/stage/" can't. > >> location @virtual { > >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > @stagevirtual will need to know its version of $document_root. So set > a suitable "root" there too. > >> if ($uri ~ '/download/.*/$') { >> rewrite ^/(.*)/$ /$1 permanent; > > "/stage/" will already be there in both parts of the rewrite, so no > change needed. > >> if ($uri !~ '(/|\.[a-zA-Z0-9]{1,12}|/download/.*)$') { >> return 301 $uri/$is_args$args; > > "/stage/" will already be there in $uri, so no change needed. > >> if ($uri ~ '(/|/download/.*|/css2/.*|/js2/.*)$') { >> rewrite ^(.*)$ /index.php?q=$1 last; > > That should become "/stage/index.php". > > (Note that in this specific rewrite, $1 == $uri, so you could avoid the > match-and-capture.) > >> This is all I'm trying to do. I just want to replicate the above with >> the "less-friendly" directory structure that we've been discussing. > > I'm not seeing many changes being needed. Provided you avoid > "alias". Which requires a filesystem change. Which hopefully won't break > the non-nginx part of your workflow. > > (Actually, within your @stagevirtual, the only place root/alias seems to > matter is in the SCRIPT_FILENAME thing. So if you can build your own from > the variables you have to hand, you can probably get away with whatever > file structure you like.) > >> If there is a better, more concise, or more secure way to accomplish the >> above, then I am eager to learn. > > In your staging web directory: ln -s . stage. In your staging php > directory: ln -s . stage. Then set "root" in your "location ^~/stage/" > and in your "location @stagevirtual". > > So long as everything that tries to recurse into the directories can > recognise the symlink loop, it should be fine. > > Then in production, either "rm stage", or don't create the link in the > first place. > Okay, I'm trying to implement this, but something seems to have gone terribly awry. I apologize for derailing our progress, but until this is resolved, I have no way to test your recommendations. It's bizarre. At some point while meddling with the configuration, requests for /stage/ began causing the browser to download index.php (and I can open the file and see the PHP code). And no matter what I change in the configuration, this behavior persists. I went so far as to rename the "stage" directory to "staging", and changed all references in nginx's configuration accordingly. Yet, for some reason, requests for /stage/ still download the index.php file! 
I'm not even sure where this index.php file is coming from, given that nginx should have absolutely no knowledge of this file's new/current location. The weirdest part is that the downloaded file is *not* the file that exists at /index.php (the "main site" index file). I confirmed this by adding some commented PHP code at the bottom of /index.php, and the comments do not appear in the downloaded file. Hell, I even tried deleting the entire "staging" directory and this still happens! I've restarted nginx, php5-fpm, etc. and nothing changes. Where is this file coming from??? >> Hmm, I think I see what you're getting at here. But is there no means by >> which to achieve the desired configuration with my particular directory >> structure? I'm a bit surprised at how difficult it is to decouple the >> request location from the filesystem directory from which it is served, >> while maintaining this clean-URL setup. > > If you want nginx, and you want a directory structure that requires > nginx's "alias", then you get to deal with nginx's "alias", which has > some imperfections. > > Change one of your wants, and the problem disappears. > > All the best, > > f > Thanks again for seeing me through this, Francis. A bottle of your favorite spirit is in order once this whole affair is resolved! -Ben From ianevans at digitalhit.com Sat Nov 23 17:55:09 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Sat, 23 Nov 2013 12:55:09 -0500 Subject: will request_uri get passed from cookieless server to 403 page on main server? In-Reply-To: <20131119093114.GE31289@craic.sysops.org> References: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> <52895D6E.1040608@digitalhit.com> <20131118232659.GB31289@craic.sysops.org> <528B2BBA.2090900@digitalhit.com> <20131119093114.GE31289@craic.sysops.org> Message-ID: <5290EBFD.5000604@digitalhit.com> On 19/11/2013 4:31 AM, Francis Daly wrote: > *If* that php is set to always return cookies, then you might want to > run a separate php-fpm that does not return cookies for the static > site. But that's a side issue. Still experimenting with this... What setting stops the php-fpm from not returning cookies? Is it 'fastcgi_ignore_headers'? Thanks. From ben at indietorrent.org Sat Nov 23 18:11:46 2013 From: ben at indietorrent.org (Ben Johnson) Date: Sat, 23 Nov 2013 13:11:46 -0500 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <5290E7B7.7040100@indietorrent.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> <528D7E7B.60207@indietorrent.org> <20131121204502.GB943@craic.sysops.org> <5290E7B7.7040100@indietorrent.org> Message-ID: <5290EFE2.8060109@indietorrent.org> On 11/23/2013 12:36 PM, Ben Johnson wrote: > Okay, I'm trying to implement this, but something seems to have gone > terribly awry. I apologize for derailing our progress, but until this is > resolved, I have no way to test your recommendations. > > It's bizarre. At some point while meddling with the configuration, > requests for /stage/ began causing the browser to download index.php > (and I can open the file and see the PHP code). And no matter what I > change in the configuration, this behavior persists. > > I went so far as to rename the "stage" directory to "staging", and > changed all references in nginx's configuration accordingly. 
Yet, for > some reason, requests for /stage/ still download the index.php file! I'm > not even sure where this index.php file is coming from, given that nginx > should have absolutely no knowledge of this file's new/current location. > > The weirdest part is that the downloaded file is *not* the file that > exists at /index.php (the "main site" index file). I confirmed this by > adding some commented PHP code at the bottom of /index.php, and the > comments do not appear in the downloaded file. > > Hell, I even tried deleting the entire "staging" directory and this > still happens! I've restarted nginx, php5-fpm, etc. and nothing changes. > Where is this file coming from??? I removed everything related to this effort (setting-up the staging site) from nginx's configuration, and deleted all of the staging site files from the filesystem, yet nginx is still serving an "index.php" file whenever I request /stage/! How is this possible? This "index.php" is indeed "mine"; it contains the code from my application. But, as I said, if I add some random, commented PHP code to the bottom of the "real index file" at /index.php, it's not present in the downloaded file. I just don't see where this file could be coming from at this point. If I request some other URL, e.g., /stag/, this does not happen; the problem is specific to the location /stage/. grep -ir "stage" /etc/nginx does not return any results, either, which proves that all references to this location have been expunged from the nginx configuration. Furthermore, any effort to modify the configuration to use a different location, such as /staging/ has no effect. It's like the nginx configuration is stuck in some cached state. And restarting nginx doesn't fix the problem. Am I out of my mind? Thanks for any insight as to what could be happening here... -Ben From anotherworldofworld at gmail.com Sat Nov 23 18:47:57 2013 From: anotherworldofworld at gmail.com (Alex toyer) Date: Sun, 24 Nov 2013 00:47:57 +0600 Subject: Headers at the html page Message-ID: Hello All, I have erlang web application with cowboy web server and i launched it behind nginx reverse proxy. My nginx configuration: server { listen 9090; location /test { proxy_pass http://localhost:8080; proxy_http_version 1.1; } } Where http://localhost:8080 is my cowboy web server. When i'm openning: http://localhost:8080/test i see my html page, it's ok, Browser shows response headers: Transfer-Encoding:chunked Server:nginx/1.2.6 (Ubuntu) Content-Type:text/html Content-Encoding:gzip Connection:keep-alive It's normal, but... I see the following string right on the my html page: "HTTP/1.1 204 No Content connection: close server: Cowboy content-length: 0" Why nginx adds this string with headers to the html page? And it occurs only if i set up Content-Type: text/html header. Thank you. -- best regards, twitter: @0xAX github: 0xAX -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sat Nov 23 20:47:33 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Nov 2013 20:47:33 +0000 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <5290EFE2.8060109@indietorrent.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> <528D7E7B.60207@indietorrent.org> <20131121204502.GB943@craic.sysops.org> <5290E7B7.7040100@indietorrent.org> <5290EFE2.8060109@indietorrent.org> Message-ID: <20131123204733.GE943@craic.sysops.org> On Sat, Nov 23, 2013 at 01:11:46PM -0500, Ben Johnson wrote: > On 11/23/2013 12:36 PM, Ben Johnson wrote: Hi there, > > It's bizarre. At some point while meddling with the configuration, > > requests for /stage/ began causing the browser to download index.php > > (and I can open the file and see the PHP code). And no matter what I > > change in the configuration, this behavior persists. I tend to test with "curl", because it avoids any chance of a browser cache being involved. Any intermediate proxies or caches might still be in the way, of course. But the nginx access log and/or debug log should show what nginx thinks is happening. > > I went so far as to rename the "stage" directory to "staging", and > > changed all references in nginx's configuration accordingly. Yet, for > > some reason, requests for /stage/ still download the index.php file! I'm > > not even sure where this index.php file is coming from, given that nginx > > should have absolutely no knowledge of this file's new/current location. The usual reason for this kind of thing is that the nginx.conf that you are writing and the nginx.conf that nginx is reading are not the same. > > The weirdest part is that the downloaded file is *not* the file that > > exists at /index.php (the "main site" index file). I confirmed this by > > adding some commented PHP code at the bottom of /index.php, and the > > comments do not appear in the downloaded file. The other likely reason is that the web server you are talking to and the web server you think you are talking to are not the same. (Either a different machine entirely, or else a different server{} block in the same config file.) > > Hell, I even tried deleting the entire "staging" directory and this > > still happens! I've restarted nginx, php5-fpm, etc. and nothing changes. Possibly you did a "nginx -s reload", but an error in the conf file meant that nginx didn't stop using the older file, or something like that? > I removed everything related to this effort (setting-up the staging > site) from nginx's configuration, and deleted all of the staging site > files from the filesystem, yet nginx is still serving an "index.php" > file whenever I request /stage/! How is this possible? This "index.php" > is indeed "mine"; it contains the code from my application. But, as I > said, if I add some random, commented PHP code to the bottom of the > "real index file" at /index.php, it's not present in the downloaded > file. I just don't see where this file could be coming from at this point. Add something like location = /stage/ { return 200 "Just checking...\n"; } and restart nginx. If you don't see that response for that request, you're not using that conf file. 
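A quick way to make that request with no browser or proxy cache in the way (hostname is a placeholder):

curl -i http://www.example.com/stage/

The -i flag includes the response headers, so you can also see the status line and Server header of whatever actually answered.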
If that much does work, then either read the debug log, or trace through nginx.conf to see what should happen for this request -- server-level rewrite module directives, plus the one location that the request "/stage/" should be handled in, are the most likely sources. > Thanks for any insight as to what could be happening here... Make sure nginx is really completely stopped. Then start it again with a known config file. And look at the responses to a few requests, and see which are not what you expect. I'm afraid I'm reduced to generalities with the information available. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Nov 23 21:03:17 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Nov 2013 21:03:17 +0000 Subject: will request_uri get passed from cookieless server to 403 page on main server? In-Reply-To: <5290EBFD.5000604@digitalhit.com> References: <2c961eddf9714b6a01f3a75e27ae41f8.squirrel@www.digitalhit.com> <52895D6E.1040608@digitalhit.com> <20131118232659.GB31289@craic.sysops.org> <528B2BBA.2090900@digitalhit.com> <20131119093114.GE31289@craic.sysops.org> <5290EBFD.5000604@digitalhit.com> Message-ID: <20131123210317.GF943@craic.sysops.org> On Sat, Nov 23, 2013 at 12:55:09PM -0500, Ian Evans wrote: > On 19/11/2013 4:31 AM, Francis Daly wrote: Hi there, > > *If* that php is set to always return cookies, then you might want to > > run a separate php-fpm that does not return cookies for the static > > site. But that's a side issue. > > Still experimenting with this... > > What setting stops the php-fpm from not returning cookies? By "return" here, I meant "generate a new one if the request did not include one". It should be a php configuration, probably involving "session", possibly "session.use_cookies". > Is it 'fastcgi_ignore_headers'? No, but you could probably "fastcgi_hide_header Set-Cookie;", if your fastcgi server insists on sending that header. I don't know if it is sufficient to try in nginx -- Set-Cookie probably comes with some "please do not cache" headers that you would want to remove as well, if they aren't otherwise needed. f -- Francis Daly francis at daoine.org From ianevans at digitalhit.com Sun Nov 24 12:54:56 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Sun, 24 Nov 2013 07:54:56 -0500 Subject: debugging ssl and php-fpm Message-ID: <5291F720.5020907@digitalhit.com> Okay, so rule #1 is to never think a server migration will go easy. As I've said in another thread, I've been running nginx and php-fpm for years on my site. But I'm moving from a CentOS to an Ubuntu server and things aren't going as smooth as they should be. I've got the non-ssl server working just fine. Tested out the SSL pages and I'm getting blank pages but I can't seem to see anything in the logs or at least nothing that's clear to me. Here's a snippet of the SSL server: server { server_name www.example.com; listen 443; root /usr/share/nginx/html; index index.shtml index.php index.html; include /etc/nginx/fastcgi_params; error_log /var/log/nginx/sslerror.log debug; ssl on; ssl_certificate /etc/nginx/certs/example.pem; ssl_certificate_key /etc/nginx/certs/example.key; ssl_session_timeout 5m; error_page 404 /dhe404.shtml; location / { rewrite ^ http://www.example.com$request_uri? 
permanent; } location ~ \.(shtml|php|inc)$ { fastcgi_pass 127.0.0.1:9000; } location ^~ /rather/ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_param HTTPS on; fastcgi_index index.shtml; auth_basic "DHENEWS"; auth_basic_user_file .htpasswd; } ... } So, I'm trying to go a php page under /rather, a page I've used thousands of times on the old server. I get prompted for my username and password by the auth. That works, but then I get a blank page. so: - PHP is working on the non-ssl side - we've got fastcgi_pass in the locations. And most importantly...it works on the old server so why am I pulling my hair out? ;-) Is there something I'm missing in regards to ssl and php-fpm? Here's the fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; # cache stuff fastcgi_cache MYCACHE; fastcgi_keep_conn on; fastcgi_cache_bypass $no_cache $no_cache_dirs; fastcgi_no_cache $no_cache $no_cache_dirs; fastcgi_cache_valid 200 301 5m; fastcgi_cache_valid 302 5m; fastcgi_cache_valid 404 1m; fastcgi_cache_use_stale error timeout invalid_header updating http_500; fastcgi_ignore_headers Cache-Control Expires; fastcgi_cache_lock on; Thanks to the list for a fresh pair of eyes. From francis at daoine.org Sun Nov 24 14:43:14 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 24 Nov 2013 14:43:14 +0000 Subject: debugging ssl and php-fpm In-Reply-To: <5291F720.5020907@digitalhit.com> References: <5291F720.5020907@digitalhit.com> Message-ID: <20131124144314.GG943@craic.sysops.org> On Sun, Nov 24, 2013 at 07:54:56AM -0500, Ian Evans wrote: Hi there, > location ^~ /rather/ { > fastcgi_intercept_errors on; > fastcgi_pass 127.0.0.1:9000; > fastcgi_param HTTPS on; Does it work if you remove that line? It looks unnecessary to me. And it breaks your config. > fastcgi_index index.shtml; > auth_basic "DHENEWS"; > auth_basic_user_file .htpasswd; > } > And most importantly...it works on the old server so why am I pulling my > hair out? ;-) What does "diff" say about the config on the old server and the config on the new server? > Is there something I'm missing in regards to ssl and > php-fpm? Here's the fastcgi_params: fastcgi_params is included at server level, not within /rather/. Only some parts of its contents are inherited into /rather/. f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Sun Nov 24 14:51:21 2013 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Sun, 24 Nov 2013 15:51:21 +0100
Subject: debugging ssl and php-fpm
In-Reply-To: <20131124144314.GG943@craic.sysops.org>
References: <5291F720.5020907@digitalhit.com> <20131124144314.GG943@craic.sysops.org>
Message-ID: 

Hello,

On Sun, Nov 24, 2013 at 3:43 PM, Francis Daly wrote:

> On Sun, Nov 24, 2013 at 07:54:56AM -0500, Ian Evans wrote:
>
> Hi there,
>
> > location ^~ /rather/ {
> > fastcgi_intercept_errors on;
> > fastcgi_pass 127.0.0.1:9000;
> > fastcgi_param HTTPS on;
>
> Does it work if you remove that line? It looks unnecessary to me. And
> it breaks your config.
>

Sorry to interrupt, but could you explain a little bit more about what breaks the config?
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ianevans at digitalhit.com Sun Nov 24 15:16:33 2013
From: ianevans at digitalhit.com (Ian Evans)
Date: Sun, 24 Nov 2013 10:16:33 -0500
Subject: debugging ssl and php-fpm
In-Reply-To: <20131124144314.GG943@craic.sysops.org>
References: <5291F720.5020907@digitalhit.com> <20131124144314.GG943@craic.sysops.org>
Message-ID: <52921851.80103@digitalhit.com>

On 24/11/2013 9:43 AM, Francis Daly wrote:
> What does "diff" say about the config on the old server and the config
> on the new server?

As I moved to a new server, I split everything from one file into the whole sites-available format, so I'd have to recombine everything to diff it. However...

> fastcgi_params is included at server level, not within /rather/. Only
> some parts of its contents are inherited into /rather/.

Tossing the params into /rather did the trick. Many, many thanks.

From francis at daoine.org Sun Nov 24 16:09:52 2013
From: francis at daoine.org (Francis Daly)
f -- Francis Daly francis at daoine.org From ben at indietorrent.org Sun Nov 24 16:14:11 2013 From: ben at indietorrent.org (Ben Johnson) Date: Sun, 24 Nov 2013 11:14:11 -0500 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <20131123204733.GE943@craic.sysops.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> <528D7E7B.60207@indietorrent.org> <20131121204502.GB943@craic.sysops.org> <5290E7B7.7040100@indietorrent.org> <5290EFE2.8060109@indietorrent.org> <20131123204733.GE943@craic.sysops.org> Message-ID: <529225D3.2030505@indietorrent.org> On 11/23/2013 3:47 PM, Francis Daly wrote: > On Sat, Nov 23, 2013 at 01:11:46PM -0500, Ben Johnson wrote: >> On 11/23/2013 12:36 PM, Ben Johnson wrote: > > Hi there, > >>> It's bizarre. At some point while meddling with the configuration, >>> requests for /stage/ began causing the browser to download index.php >>> (and I can open the file and see the PHP code). And no matter what I >>> change in the configuration, this behavior persists. > > I tend to test with "curl", because it avoids any chance of a browser > cache being involved. > That's very good advice, and yielded a somewhat surprising result when I added your test string (from your previous reply, down below) to my configuration: location = /stage/ { return 200 "Just checking...\n"; } When I request the /stage/ URL via curl, *from* the web-server on which this problem is occurring, I see our test string, "Just checking...". I am using the FQDN when I perform the curl test. And, when I hit the URL with curl from my local workstation, I see "Just checking..." Yet, when I hit the same URL in a web-browser (Chrome in Incognito mode, Firefox, doesn't seem to matter), from my local workstation, I am presented with a binary file download for this stale index.php that is coming from who-knows-where. The MIME-type registers as application/octet-stream. I tried hitting the URL in a browser on another machine that is in another physical location and the result is the same. When I ping the FQDN from my home workstation, the IP address is as I expect. > Any intermediate proxies or caches might still be in the way, of > course. But the nginx access log and/or debug log should show what nginx > thinks is happening. > The last time a DNS change was made for this domain was several weeks ago. But because it may be relevant, that DNS change did directly affect web services (the change was made as part of a server migration -- this problem is on the new server). >>> I went so far as to rename the "stage" directory to "staging", and >>> changed all references in nginx's configuration accordingly. Yet, for >>> some reason, requests for /stage/ still download the index.php file! I'm >>> not even sure where this index.php file is coming from, given that nginx >>> should have absolutely no knowledge of this file's new/current location. > > The usual reason for this kind of thing is that the nginx.conf that you > are writing and the nginx.conf that nginx is reading are not the same. > I tried intentionally throwing bad syntax into the nginx config file for this vhost and restarting nginx, and indeed nginx complains about the problem and won't restart, which is expected. This seems to confirm that the configuration file that I am editing is indeed effective. 
>>> The weirdest part is that the downloaded file is *not* the file that >>> exists at /index.php (the "main site" index file). I confirmed this by >>> adding some commented PHP code at the bottom of /index.php, and the >>> comments do not appear in the downloaded file. > > The other likely reason is that the web server you are talking to and > the web server you think you are talking to are not the same. > > (Either a different machine entirely, or else a different server{} > block in the same config file.) > I would be surprised if I am still hitting the "old" server, away from which I migrated over three weeks ago. Every machine from which I "ping" the FQDN returns the expected IP address (including the machine on which this is happening). >>> Hell, I even tried deleting the entire "staging" directory and this >>> still happens! I've restarted nginx, php5-fpm, etc. and nothing changes. > > Possibly you did a "nginx -s reload", but an error in the conf file > meant that nginx didn't stop using the older file, or something like that? > I'm using my OS-specific command, "service nginx reload" (which uses /etc/init.d/nginx) to test configuration changes. Restarting with "service nginx restart" doesn't seem to make any difference. Historically, "reload" has always been sufficient to render my changes effective. No errors appear in the log, unless I intentionally insert invalid syntax into the configuration. >> I removed everything related to this effort (setting-up the staging >> site) from nginx's configuration, and deleted all of the staging site >> files from the filesystem, yet nginx is still serving an "index.php" >> file whenever I request /stage/! How is this possible? This "index.php" >> is indeed "mine"; it contains the code from my application. But, as I >> said, if I add some random, commented PHP code to the bottom of the >> "real index file" at /index.php, it's not present in the downloaded >> file. I just don't see where this file could be coming from at this point. > > Add something like > > location = /stage/ { return 200 "Just checking...\n"; } > > and restart nginx. If you don't see that response for that request, > you're not using that conf file. > When I hit the URL with curl, the request is indeed logged to the vhost's access log: "GET /stage/ HTTP/1.1" 200 17 "-" "curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3" When I hit the URL with the browser, the request is logged, too: "GET /stage/ HTTP/1.1" 200 17 "-" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36" Given that the requests look identical, excepting the user-agent, I'm left wondering why the browser downloads some stale index.php, yet curl displays our test message. Again, I've tried hitting the URL from different computers in different locations, using different browsers, and the behavior is consistent. This implies that the problem is with the server, not the user-agents with which I'm testing. > If that much does work, then either read the debug log, or trace through > nginx.conf to see what should happen for this request -- server-level > rewrite module directives, plus the one location that the request > "/stage/" should be handled in, are the most likely sources. > So, I just went to comment-out the test location and see what curl returns in that scenario, given that a browser is clearly unreliable, and now we're back to normal! 
All I did was comment-out the test line in the vhost's config file and "service nginx reload"! >> Thanks for any insight as to what could be happening here... > > Make sure nginx is really completely stopped. > > Then start it again with a known config file. > > And look at the responses to a few requests, and see which are not what > you expect. > > I'm afraid I'm reduced to generalities with the information available. > > Good luck with it, > > f > In summary, all I did today was comment-out one line of the config and reload nginx, and now all is back to normal. Needless to say, I had tried all of this and much more yesterday, and nothing I tried had any effect. I see no explanation for this behavior other than nginx caching its configuration in some fashion. Maybe after 20-something hours and a reload, it "let go" of whatever cached version it was using. If this ever happens again, what is the first avenue to explore? There has to be a better way to troubleshoot this. Maybe one of the developers can comment as to how nginx stores its configuration while running, and how to prevent this issue going forward (or at least fix it whenever it happens). Thanks again for your excellent suggestions and sound troubleshooting logic, Francis. Now, back to the original problem... :P -Ben From multiformeingegno at gmail.com Sun Nov 24 16:34:32 2013 From: multiformeingegno at gmail.com (Lorenzo Raffio) Date: Sun, 24 Nov 2013 08:34:32 -0800 (PST) Subject: access_log both compressed and uncompressed Message-ID: <1385310872396.0d4c9644@Nodemailer> In my "vhost" declaration I have: access_log /WEBSITE_DIR/logs/access.log.gz combined gzip; Problem is I get 2 files, an access.log.gz and an access.log Why? I want just the .log.gz one.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at indietorrent.org Sun Nov 24 21:31:24 2013 From: ben at indietorrent.org (Ben Johnson) Date: Sun, 24 Nov 2013 16:31:24 -0500 Subject: Clean-URL rewrite rule with nested "location" and alias directive In-Reply-To: <5290E7B7.7040100@indietorrent.org> References: <528B8595.6050603@indietorrent.org> <20131119173809.GG31289@craic.sysops.org> <528BB1BB.1010208@indietorrent.org> <20131119203916.GH31289@craic.sysops.org> <528BEADB.2070608@indietorrent.org> <20131120091038.GI31289@craic.sysops.org> <528D7E7B.60207@indietorrent.org> <20131121204502.GB943@craic.sysops.org> <5290E7B7.7040100@indietorrent.org> Message-ID: <5292702C.2020006@indietorrent.org> On 11/23/2013 12:36 PM, Ben Johnson wrote: > location ^~ /stage/ { >> root /var/www/example.com/private/stage/web/; >> # The files are read from /var/www/example.com/private/stage/web/stage/ >> index index.php index.html index.htm; >> try_files $uri $uri/ /stage/index.php?q=$uri; >> >> location ~ \.php$ { >> # use a different "root" here if you want; but make sure the php >> # files can be read from within "stage/" below that root. >> try_files $uri /stage/index.php?q=$uri; >> fastcgi_pass unix:/var/run/php5-fpm.sock; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> include /etc/nginx/fastcgi_params; >> } >> } Francis, Yes! After making the changes you recommended on the filesystem (enabling me to ditch "alias"), the staging site is now working perfectly with a slight variation of the above configuration. For the sake of academic curiosity, I would like to try some of the alternate configurations that you cooked-up, but I'll save that for later. For now, I'm just thrilled to have this working! 
Again, I can't thank you enough for your patience, thoroughness, and generosity with your time. Best regards, and cheers to a working configuration! -Ben From mdounin at mdounin.ru Mon Nov 25 13:31:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 17:31:48 +0400 Subject: Abstract behavior of nginx In-Reply-To: <7DBEE2A69F5C5544B53298D4CE1C956937709B66B0@cerber.wkpolska.pl> References: <7DBEE2A69F5C5544B53298D4CE1C956937709B66B0@cerber.wkpolska.pl> Message-ID: <20131125133148.GY41579@mdounin.ru> Hello! On Fri, Nov 22, 2013 at 02:47:23PM +0100, Pomorski Jaros?aw wrote: > Hi, > > I can't understand behavior of nginx. > Version 1.2.1 on Debian Wheezy from official repository. I send > requests to cdn.some_domain.pl server, and in log > /var/log/nginx/cdn.some_domain.pl/test.log I see: > > image/gif:1 > image/png:1 > image/png:1 > image/gif:1 > image/png:1 > image/gif:1 > > It is correct. If I remove hash sign in 3 last line of > configuration file, nginx puts to > /var/log/nginx/cdn.some_domain.pl/test.log below entries: > > -:0 > -:0 > -:0 > -:0 > -:0 > -:0 > > I don't understand, why in this configuration, value of > $sent_http_content_type variable is empty. [...] > map $sent_http_content_type $cdn { > default 0; > text/css 1; > text/javascript 1; > image/x-icon 1; > image/gif 1; > image/jpeg 1; > image/png 1; > } [...] > # if ($cdn) { > # return 404; > # } The "if ($cdn)" is evaluated while processing rewrite rules, and at this point value of $sent_http_content_type isn't yet known (because response isn't sent). But due to this evaluation calculated values of the $sent_http_content_type and $cdn variables are cached, and not re-evaluated again later. That is, what you see in logs is a value of the $sent_http_content_type variable at the time the variable was evaluated for the first time during request processing. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 25 14:02:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 18:02:57 +0400 Subject: Compile NGINX with Openssl statically In-Reply-To: References: Message-ID: <20131125140257.GZ41579@mdounin.ru> Hello! On Fri, Nov 22, 2013 at 12:24:44PM -0200, Fabiano Furtado Pessoa Coelho wrote: [...] > ... > making all in tools... > make[3]: Entering directory '/tmp/compile/openssl-1.0.1e/tools' > make[3]: Nothing to be done for 'all'. > make[3]: Leaving directory '/tmp/compile/openssl-1.0.1e/tools' > installing man1/asn1parse.1 > installing man1/CA.pl.1 > installing man1/ca.1 > installing man1/ciphers.1 > installing man1/cms.1 > cms.pod around line 457: Expected text after =item, not a number > cms.pod around line 461: Expected text after =item, not a number > cms.pod around line 465: Expected text after =item, not a number > cms.pod around line 470: Expected text after =item, not a number > cms.pod around line 474: Expected text after =item, not a number > POD document had syntax errors at /usr/bin/core_perl/pod2man line 71. > Makefile:639: recipe for target 'install_docs' failed > make[2]: *** [install_docs] Error 1 > make[2]: Leaving directory '/tmp/compile/openssl-1.0.1e' > objs/Makefile:1395: recipe for target > '/tmp/compile/openssl-1.0.1e/.openssl/include/openssl/ssl.h' failed > make[1]: *** [/tmp/compile/openssl-1.0.1e/.openssl/include/openssl/ssl.h] > Error 2 > make[1]: Leaving directory '/tmp/compile/nginx-1.4.4' > Makefile:8: recipe for target 'build' failed > make: *** [build] Error 2 > > > What I am doing wrong? 
Errors suggest compilation of the OpenSSL library fails as pod2man on your system (perl 5.18?) doesn't like one of OpenSSL's pod files. Try looking, e.g., here: https://bugs.archlinux.org/task/35868 There seems to be a patch for OpenSSL there. Alternatively, do something like cd /tmp/compile/openssl-1.0.1e && make install_sw and then continue building nginx. It should bypass OpenSSL's documentation processing this way, and nginx should be built without problems (not tested though). -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 25 14:04:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 18:04:59 +0400 Subject: access_log both compressed and uncompressed In-Reply-To: <1385310872396.0d4c9644@Nodemailer> References: <1385310872396.0d4c9644@Nodemailer> Message-ID: <20131125140459.GA41579@mdounin.ru> Hello! On Sun, Nov 24, 2013 at 08:34:32AM -0800, Lorenzo Raffio wrote: > In my "vhost" declaration I have: > access_log /WEBSITE_DIR/logs/access.log.gz combined gzip; > Problem is I get 2 files, an access.log.gz and an access.log Why? I want just the .log.gz one.. Likely you have access.log configured elsewhere. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 25 14:13:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 18:13:36 +0400 Subject: blank page on some micro-cached pages In-Reply-To: References: Message-ID: <20131125141336.GB41579@mdounin.ru> Hello! On Sat, Nov 23, 2013 at 11:00:52AM +0100, Ronald Van Assche wrote: > Running Nginx 1.5.7 on Freebsd 9.1 > with php-pfm 5.5.3 > > a wordpress site is "micro-cached" , but for some strange reason > and not always , some pages ( wordpress post or pages even 3 or > 7 days after their publication) are rendered for some users ( > me and others and not always the same, with different browsers) > as blank pages and the same connexion can read other cached > pages/post with no problems. > > The size of the global site in /cache is 72 Kb. Very small > indeed. > > The bad cached pages are logged like this : > "GET /url HTTP/1.1" 200 31 > > always 200 as a result code, and 31 for its size.. 31 is not the > actual size of the page of course. > > If a refresh my local cache ( SHIFT CMD-R) on chrome , the > correct page is reloaded and cached in /cache/x/y/somefile by > nginx. > > Any help ? Size is suspiciously small, and I would suggest it's some error returned by php. You may try adding $upstream_cache_status to access log to get some more details (likely there will be MISS or similar state there, indicating the response was returned by a php). If it doesn't help, some more hints about debugging can be found here: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 25 14:20:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 18:20:37 +0400 Subject: Headers at the html page In-Reply-To: References: Message-ID: <20131125142036.GC41579@mdounin.ru> Hello! On Sun, Nov 24, 2013 at 12:47:57AM +0600, Alex toyer wrote: > Hello All, > > I have erlang web application with cowboy web server and i launched it > behind nginx reverse proxy. > > My nginx configuration: > > server { > listen 9090; > > location /test { > proxy_pass http://localhost:8080; > proxy_http_version 1.1; > } > } > > Where http://localhost:8080 is my cowboy web server. 
When i'm openning: > http://localhost:8080/test i see my html page, it's ok, > > Browser shows response headers: > > Transfer-Encoding:chunked > > Server:nginx/1.2.6 (Ubuntu) > > Content-Type:text/html > > Content-Encoding:gzip > > Connection:keep-alive > > It's normal, but... I see the following string right on the my html page: > > "HTTP/1.1 204 No Content connection: close server: Cowboy content-length: 0" > > Why nginx adds this string with headers to the html page? And it occurs > only if i set up Content-Type: text/html header. Looks like your backend returns malformed response for some reason. You may try digging further into what your backend returns. -- Maxim Dounin http://nginx.org/en/donation.html From rva at onvaoo.com Mon Nov 25 15:47:04 2013 From: rva at onvaoo.com (Ronald Van Assche) Date: Mon, 25 Nov 2013 16:47:04 +0100 Subject: blank page on some micro-cached pages In-Reply-To: <20131125141336.GB41579@mdounin.ru> References: <20131125141336.GB41579@mdounin.ru> Message-ID: It seems related to every update on a post and the use of the WP plugin Nginx-Helper if i add $upstream_cache_status to the log format , nothing is logged. Should i recompile Nginx 1.5 with some additional modules ? On the Freebsd port we have : [ ] HTTP_UPSTREAM_FAIR 3rd party upstream fair module [ ] HTTP_UPSTREAM_HASH 3rd party upstream hash module [ ] HTTP_UPSTREAM_STICKY 3rd party upstream sticky module which do I have to select ? Le 25 nov. 2013 ? 15:13, Maxim Dounin a ?crit : > Hello! > > On Sat, Nov 23, 2013 at 11:00:52AM +0100, Ronald Van Assche wrote: > >> Running Nginx 1.5.7 on Freebsd 9.1 >> with php-pfm 5.5.3 >> >> a wordpress site is "micro-cached" , but for some strange reason >> and not always , some pages ( wordpress post or pages even 3 or >> 7 days after their publication) are rendered for some users ( >> me and others and not always the same, with different browsers) >> as blank pages and the same connexion can read other cached >> pages/post with no problems. >> >> The size of the global site in /cache is 72 Kb. Very small >> indeed. >> >> The bad cached pages are logged like this : >> "GET /url HTTP/1.1" 200 31 >> >> always 200 as a result code, and 31 for its size.. 31 is not the >> actual size of the page of course. >> >> If a refresh my local cache ( SHIFT CMD-R) on chrome , the >> correct page is reloaded and cached in /cache/x/y/somefile by >> nginx. >> >> Any help ? > > Size is suspiciously small, and I would suggest it's some error > returned by php. > > You may try adding $upstream_cache_status to access log to get > some more details (likely there will be MISS or similar state > there, indicating the response was returned by a php). If it > doesn't help, some more hints about debugging can be found here: > > http://wiki.nginx.org/Debugging > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From fusca14 at gmail.com Mon Nov 25 16:03:40 2013 From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho) Date: Mon, 25 Nov 2013 14:03:40 -0200 Subject: Compile NGINX with Openssl statically In-Reply-To: <20131125140257.GZ41579@mdounin.ru> References: <20131125140257.GZ41579@mdounin.ru> Message-ID: Hi Maxim! It worked! I applyed de openssl-1.0.1e-fix_pod_syntax-1.patch and the openssl compiled! Now, I will try this procedure on my RHEL6. Thanks. On Mon, Nov 25, 2013 at 12:02 PM, Maxim Dounin wrote: > Hello! 
> > On Fri, Nov 22, 2013 at 12:24:44PM -0200, Fabiano Furtado Pessoa Coelho wrote: > > [...] > >> ... >> making all in tools... >> make[3]: Entering directory '/tmp/compile/openssl-1.0.1e/tools' >> make[3]: Nothing to be done for 'all'. >> make[3]: Leaving directory '/tmp/compile/openssl-1.0.1e/tools' >> installing man1/asn1parse.1 >> installing man1/CA.pl.1 >> installing man1/ca.1 >> installing man1/ciphers.1 >> installing man1/cms.1 >> cms.pod around line 457: Expected text after =item, not a number >> cms.pod around line 461: Expected text after =item, not a number >> cms.pod around line 465: Expected text after =item, not a number >> cms.pod around line 470: Expected text after =item, not a number >> cms.pod around line 474: Expected text after =item, not a number >> POD document had syntax errors at /usr/bin/core_perl/pod2man line 71. >> Makefile:639: recipe for target 'install_docs' failed >> make[2]: *** [install_docs] Error 1 >> make[2]: Leaving directory '/tmp/compile/openssl-1.0.1e' >> objs/Makefile:1395: recipe for target >> '/tmp/compile/openssl-1.0.1e/.openssl/include/openssl/ssl.h' failed >> make[1]: *** [/tmp/compile/openssl-1.0.1e/.openssl/include/openssl/ssl.h] >> Error 2 >> make[1]: Leaving directory '/tmp/compile/nginx-1.4.4' >> Makefile:8: recipe for target 'build' failed >> make: *** [build] Error 2 >> >> >> What I am doing wrong? > > Errors suggest compilation of the OpenSSL library fails as pod2man > on your system (perl 5.18?) doesn't like one of OpenSSL's pod > files. Try looking, e.g., here: > > https://bugs.archlinux.org/task/35868 > > There seems to be a patch for OpenSSL there. Alternatively, > do something like > > cd /tmp/compile/openssl-1.0.1e && make install_sw > > and then continue building nginx. It should bypass OpenSSL's > documentation processing this way, and nginx should be built > without problems (not tested though). > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Nov 25 16:11:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 20:11:09 +0400 Subject: blank page on some micro-cached pages In-Reply-To: References: <20131125141336.GB41579@mdounin.ru> Message-ID: <20131125161109.GD41579@mdounin.ru> Hello! On Mon, Nov 25, 2013 at 04:47:04PM +0100, Ronald Van Assche wrote: > It seems related to every update on a post and the use of the WP plugin Nginx-Helper >From a description of the plugin, it looks like it is expected to work with 3rd party cache purge module. It may explain things if you don't have the module compiled. Just switching off the pluging is probably a best solution. > if i add $upstream_cache_status to the log format , nothing is logged. > Should i recompile Nginx 1.5 with some additional modules ? > > On the Freebsd port we have : > > [ ] HTTP_UPSTREAM_FAIR 3rd party upstream fair module > [ ] HTTP_UPSTREAM_HASH 3rd party upstream hash module > [ ] HTTP_UPSTREAM_STICKY 3rd party upstream sticky module > > which do I have to select ? You don't need these modules. If $upstream_cache_status isn't logged, it's most likely a problem in your configuration - e.g., the log format you've modified isn't used by your access_log directive. 
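For example, roughly like this (the format name and log path are only illustrative) - note that log_format lives in the http{} block, and the access_log directive that writes the log you are actually reading has to name that format explicitly:

    log_format cache '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $upstream_cache_status '
                     '$body_bytes_sent "$http_referer" "$http_user_agent"';

    access_log /var/log/nginx-access.log cache;
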
-- Maxim Dounin http://nginx.org/en/donation.html From rva at onvaoo.com Mon Nov 25 16:12:00 2013 From: rva at onvaoo.com (Ronald Van Assche) Date: Mon, 25 Nov 2013 17:12:00 +0100 Subject: blank page on some micro-cached pages In-Reply-To: References: <20131125141336.GB41579@mdounin.ru> Message-ID: <0DF5171B-CA47-4DF2-BA78-D267D3D0BABB@onvaoo.com> Oups anwser to myself : ADDED the $upstream_cache_status to the LOG : the page are empty , still a 200 RESULT CODE and a HIT with a 31 byte size. myIP - - [25/Nov/2013:17:04:35 +0100] "GET /allairgoo HTTP/1.1" 200 HIT 31 "http://truc/" " Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36" "-" myIP - - [25/Nov/2013:17:04:50 +0100] "GET /allairgoo HTTP/1.1" 200 HIT 31 "http://truc/" " Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36" "-" Le 25 nov. 2013 ? 16:47, Ronald Van Assche a ?crit : > It seems related to every update on a post and the use of the WP plugin Nginx-Helper > > if i add $upstream_cache_status to the log format , nothing is logged. > Should i recompile Nginx 1.5 with some additional modules ? > > On the Freebsd port we have : > > [ ] HTTP_UPSTREAM_FAIR 3rd party upstream fair module > [ ] HTTP_UPSTREAM_HASH 3rd party upstream hash module > [ ] HTTP_UPSTREAM_STICKY 3rd party upstream sticky module > > which do I have to select ? > > > > > Le 25 nov. 2013 ? 15:13, Maxim Dounin a ?crit : > >> Hello! >> >> On Sat, Nov 23, 2013 at 11:00:52AM +0100, Ronald Van Assche wrote: >> >>> Running Nginx 1.5.7 on Freebsd 9.1 >>> with php-pfm 5.5.3 >>> >>> a wordpress site is "micro-cached" , but for some strange reason >>> and not always , some pages ( wordpress post or pages even 3 or >>> 7 days after their publication) are rendered for some users ( >>> me and others and not always the same, with different browsers) >>> as blank pages and the same connexion can read other cached >>> pages/post with no problems. >>> >>> The size of the global site in /cache is 72 Kb. Very small >>> indeed. >>> >>> The bad cached pages are logged like this : >>> "GET /url HTTP/1.1" 200 31 >>> >>> always 200 as a result code, and 31 for its size.. 31 is not the >>> actual size of the page of course. >>> >>> If a refresh my local cache ( SHIFT CMD-R) on chrome , the >>> correct page is reloaded and cached in /cache/x/y/somefile by >>> nginx. >>> >>> Any help ? >> >> Size is suspiciously small, and I would suggest it's some error >> returned by php. >> >> You may try adding $upstream_cache_status to access log to get >> some more details (likely there will be MISS or similar state >> there, indicating the response was returned by a php). 
If it >> doesn't help, some more hints about debugging can be found here: >> >> http://wiki.nginx.org/Debugging >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rva at onvaoo.com Mon Nov 25 16:24:32 2013 From: rva at onvaoo.com (Ronald Van Assche) Date: Mon, 25 Nov 2013 17:24:32 +0100 Subject: blank page on some micro-cached pages In-Reply-To: <20131125161109.GD41579@mdounin.ru> References: <20131125141336.GB41579@mdounin.ru> <20131125161109.GD41579@mdounin.ru> Message-ID: <0EDED5A7-5F20-430A-B610-547BBCD32C0B@onvaoo.com> Well nginx 1.5.7 is compiled with this 3 party module : configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --with-ipv6 --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --add-module=/usr/ports/www/nginx-devel/work/ngx_cache_purge-2.1 --with-http_geoip_module --with-http_stub_status_module --with-pcre Le 25 nov. 2013 ? 17:11, Maxim Dounin a ?crit : > Hello! > > On Mon, Nov 25, 2013 at 04:47:04PM +0100, Ronald Van Assche wrote: > >> It seems related to every update on a post and the use of the WP plugin Nginx-Helper > > From a description of the plugin, it looks like it is expected to > work with 3rd party cache purge module. It may explain things if > you don't have the module compiled. Just switching off the > pluging is probably a best solution. > >> if i add $upstream_cache_status to the log format , nothing is logged. >> Should i recompile Nginx 1.5 with some additional modules ? >> >> On the Freebsd port we have : >> >> [ ] HTTP_UPSTREAM_FAIR 3rd party upstream fair module >> [ ] HTTP_UPSTREAM_HASH 3rd party upstream hash module >> [ ] HTTP_UPSTREAM_STICKY 3rd party upstream sticky module >> >> which do I have to select ? > > You don't need these modules. If $upstream_cache_status isn't > logged, it's most likely a problem in your configuration - e.g., > the log format you've modified isn't used by your access_log > directive. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jason.barnabe at gmail.com Mon Nov 25 16:44:15 2013 From: jason.barnabe at gmail.com (Jason Barnabe) Date: Mon, 25 Nov 2013 10:44:15 -0600 Subject: merge_slashes and decoding of the path Message-ID: On my site, I accept full URL-encoded URLs as part of the path, for example: http://www.mysite.com/search/http%3A%2F%2Fexample.com%2F I recently moved my site to nginx and I found that it was decoding and collapsing the slashes before passing it on to Passenger. 
It would pass along the URL like this: http://www.mysite.com/search/http:/example.com/ I found the merge_slashes setting, and on setting it to off, Passenger now receives URLs like this: http://www.mysite.com/search/http://example.com/ . So the slashes are kept, but the path is still decoded. The nginx documentation [1] says "However, for security considerations, it is better to avoid turning the compression off." What are the security considerations here? Why does nginx not allow the encoded slashes to be passed through (like Apache does[2]), and if it did so, would that negate the security concerns? [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#merge_slashes [2] http://httpd.apache.org/docs/2.2/mod/core.html#allowencodedslashes -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 25 17:08:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 21:08:02 +0400 Subject: merge_slashes and decoding of the path In-Reply-To: References: Message-ID: <20131125170802.GE41579@mdounin.ru> Hello! On Mon, Nov 25, 2013 at 10:44:15AM -0600, Jason Barnabe wrote: > On my site, I accept full URL-encoded URLs as part of the path, for example: > > http://www.mysite.com/search/http%3A%2F%2Fexample.com%2F > > I recently moved my site to nginx and I found that it was decoding and > collapsing the slashes before passing it on to Passenger. It would pass > along the URL like this: http://www.mysite.com/search/http:/example.com/ > > I found the merge_slashes setting, and on setting it to off, Passenger now > receives URLs like this: http://www.mysite.com/search/http://example.com/ . > So the slashes are kept, but the path is still decoded. The nginx > documentation [1] says "However, for security considerations, it is better > to avoid turning the compression off." > > What are the security considerations here? Example of a vulnerable configuration is given in the directive description you've linked (http://nginx.org/r/merge_slashes): : Note that compression is essential for the correct matching of : prefix string and regular expression locations. Without it, the : ?//scripts/one.php? request would not match : : location /scripts/ { : ... : } : and might be processed as a static file. So it gets converted to : ?/scripts/one.php?. That is, with merge_slashes switched off, restrictions like location /protected/ { deny all; } can be easily bypassed by using a request to "//protected/file". It should be taken into account while writing a configuration for a server with merge_slashes switched off. > Why does nginx not allow the > encoded slashes to be passed through (like Apache does[2]), and if it did > so, would that negate the security concerns? While not decoding slashes is probably a better than not merging them, it's not really a good aproach either. This way, the http://www.mysite.com/search/http%3A%2F%2Fexample.com%2F URL becomes equivalent to http://www.mysite.com/search/http%3A%252F%252Fexample.com%252F which isn't really consistent and may produce unexpected results. 
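(For completeness: a server that really does need merge_slashes switched off can at least write its restricted prefixes defensively, for example as a regular expression that tolerates repeated slashes - purely illustrative:

    location ~ ^/+protected/ {
        deny all;
    }

so that a request for "//protected/file" is still caught.)
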
-- Maxim Dounin http://nginx.org/en/donation.html From jason.barnabe at gmail.com Mon Nov 25 17:30:46 2013 From: jason.barnabe at gmail.com (Jason Barnabe) Date: Mon, 25 Nov 2013 11:30:46 -0600 Subject: merge_slashes and decoding of the path In-Reply-To: <20131125170802.GE41579@mdounin.ru> References: <20131125170802.GE41579@mdounin.ru> Message-ID: On Mon, Nov 25, 2013 at 11:08 AM, Maxim Dounin wrote: > Example of a vulnerable configuration is given in the directive > description you've linked (http://nginx.org/r/merge_slashes): > > : Note that compression is essential for the correct matching of > : prefix string and regular expression locations. Without it, the > : ?//scripts/one.php? request would not match > : > : location /scripts/ { > : ... > : } > : and might be processed as a static file. So it gets converted to > : ?/scripts/one.php?. > > That is, with merge_slashes switched off, restrictions like > > location /protected/ { > deny all; > } > > can be easily bypassed by using a request to "//protected/file". > It should be taken into account while writing a configuration for > a server with merge_slashes switched off. > I'm not sure that applies in my configuration, where I'm using Passenger and have no "protected" locations, but I can see how this could lead to problems. > > Why does nginx not allow the > > encoded slashes to be passed through (like Apache does[2]), and if it did > > so, would that negate the security concerns? > > While not decoding slashes is probably a better than not merging > them, it's not really a good aproach either. This way, the > > http://www.mysite.com/search/http%3A%2F%2Fexample.com%2F > > URL becomes equivalent to > > http://www.mysite.com/search/http%3A%252F%252Fexample.com%252F > > which isn't really consistent and may produce unexpected results. > I don't see how this would necessarily be the case, but I'll defer to your knowledge. However, I don't think we should let perfect be the enemy of "better", especially if the thing "better" would replace is a potential security concern. I do have this worked around in my app code (look for http:/, replace with http://) so it's not a functional issue for me at the moment. It would be a nice feature if it could be done sensibly - should I file an issue in trac? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 25 17:48:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Nov 2013 21:48:24 +0400 Subject: merge_slashes and decoding of the path In-Reply-To: References: <20131125170802.GE41579@mdounin.ru> Message-ID: <20131125174824.GG41579@mdounin.ru> Hello! On Mon, Nov 25, 2013 at 11:30:46AM -0600, Jason Barnabe wrote: > On Mon, Nov 25, 2013 at 11:08 AM, Maxim Dounin wrote: > > > Example of a vulnerable configuration is given in the directive > > description you've linked (http://nginx.org/r/merge_slashes): > > > > : Note that compression is essential for the correct matching of > > : prefix string and regular expression locations. Without it, the > > : ?//scripts/one.php? request would not match > > : > > : location /scripts/ { > > : ... > > : } > > : and might be processed as a static file. So it gets converted to > > : ?/scripts/one.php?. > > > > That is, with merge_slashes switched off, restrictions like > > > > location /protected/ { > > deny all; > > } > > > > can be easily bypassed by using a request to "//protected/file". 
> > It should be taken into account while writing a configuration for > > a server with merge_slashes switched off. > > > > I'm not sure that applies in my configuration, where I'm using Passenger > and have no "protected" locations, but I can see how this could lead to > problems. > > > > > Why does nginx not allow the > > > encoded slashes to be passed through (like Apache does[2]), and if it did > > > so, would that negate the security concerns? > > > > While not decoding slashes is probably a better than not merging > > them, it's not really a good aproach either. This way, the > > > > http://www.mysite.com/search/http%3A%2F%2Fexample.com%2F > > > > URL becomes equivalent to > > > > http://www.mysite.com/search/http%3A%252F%252Fexample.com%252F > > > > which isn't really consistent and may produce unexpected results. > > > > I don't see how this would necessarily be the case, but I'll defer to your > knowledge. However, I don't think we should let perfect be the enemy of > "better", especially if the thing "better" would replace is a potential > security concern. E.g., consider a configuration using an imaginary "decode_slashes" directive: decode_slashes off; location / { proxy_pass http://backend; } location /protected/ { auth_basic ... proxy_pass http://backend; } A request to "/protected%2Ffile" would be passed to a backend with non-decoded slash, and if there is an nginx in default configuration, %2F will be decoded there. That is, a "/protected/file" will be returned, bypassing security restrictions configured. Given the above, I think that not decoding slashes isn't a solution. It may be useful in some configurations, but it isn't safe from security point of view either. > I do have this worked around in my app code (look for http:/, replace with > http://) so it's not a functional issue for me at the moment. It would be a > nice feature if it could be done sensibly - should I file an issue in trac? I don't think that we need this feature due to inconsistencies it introduce, see above. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 25 20:45:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Nov 2013 00:45:36 +0400 Subject: blank page on some micro-cached pages In-Reply-To: <0EDED5A7-5F20-430A-B610-547BBCD32C0B@onvaoo.com> References: <20131125141336.GB41579@mdounin.ru> <20131125161109.GD41579@mdounin.ru> <0EDED5A7-5F20-430A-B610-547BBCD32C0B@onvaoo.com> Message-ID: <20131125204536.GH41579@mdounin.ru> Hello! On Mon, Nov 25, 2013 at 05:24:32PM +0100, Ronald Van Assche wrote: > Well nginx 1.5.7 is compiled with this 3 party module : > > configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --with-ipv6 --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --add-module=/usr/ports/www/nginx-devel/work/ngx_cache_purge-2.1 --with-http_geoip_module --with-http_stub_status_module --with-pcre So the cache purge is compiled in. It may need to be configured to work properly with Nginx-Helper plugin though, take a look at plugin docs. 
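A typical purge location for a fastcgi_cache setup looks roughly like this (the zone name "WORDPRESS" is only an example and has to match your fastcgi_cache_path zone and your fastcgi_cache_key):

    location ~ /purge(/.*) {
        fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }

If the cache is populated via proxy_pass rather than fastcgi_pass, the equivalent directive from the same module is proxy_cache_purge.
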
I suspect the reason for small responses returned from cache (as your other messages indicate) is a special purge request which isn't handled cache purge module but passed back to php, and an error message returned is cached. As already suggested, if you are not familiar with nginx configuration it might be good idea to just switch off the plugin. -- Maxim Dounin http://nginx.org/en/donation.html From JPomorski at wolterskluwer.pl Tue Nov 26 10:56:45 2013 From: JPomorski at wolterskluwer.pl (=?utf-8?B?UG9tb3Jza2kgSmFyb3PFgmF3?=) Date: Tue, 26 Nov 2013 11:56:45 +0100 Subject: Abstract behavior of nginx In-Reply-To: <20131125133148.GY41579@mdounin.ru> References: <7DBEE2A69F5C5544B53298D4CE1C956937709B66B0@cerber.wkpolska.pl> <20131125133148.GY41579@mdounin.ru> Message-ID: <7DBEE2A69F5C5544B53298D4CE1C956937C25228F5@cerber.wkpolska.pl> Hello, It makes sense... Thank you for your help. Regards, Jarek -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: Monday, November 25, 2013 2:32 PM To: nginx at nginx.org Subject: Re: Abstract behavior of nginx Hello! The "if ($cdn)" is evaluated while processing rewrite rules, and at this point value of $sent_http_content_type isn't yet known (because response isn't sent). But due to this evaluation calculated values of the $sent_http_content_type and $cdn variables are cached, and not re-evaluated again later. That is, what you see in logs is a value of the $sent_http_content_type variable at the time the variable was evaluated for the first time during request processing. -- Maxim Dounin http://nginx.org/en/donation.html _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From radvenka at cisco.com Tue Nov 26 19:19:55 2013 From: radvenka at cisco.com (Radha Venkatesh (radvenka)) Date: Tue, 26 Nov 2013 19:19:55 +0000 Subject: Need to compare client certificate CN with an entry in /etc/hosts Message-ID: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> I am a newbie to Nginx. We plan to use nginx as a reverse proxy to tomcat and node js on our systems. We plan to use MTLS to secure server to server communication (between nginx on different servers). An additional requirement is that we have to match the client certificate CN with an existing entry in /etc/hosts. What would be the simplest mechanism to do this? HttpPerlModule? Uwsgi? Below is the config we have used to prototype nginx as reverse proxy with MTLS. 
server { listen 443 ssl; server_name localhost; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } #SSL Certs #SSL Certs ssl_certificate /etc/nginx/locations.d/b7k-vma170.crt; ssl_certificate_key /etc/nginx/locations.d/b7k-vma170.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers RC4:HIGH:!aNULL:!MD5:AES128-SHA:AES256-SHA:RC4-SHA:@STRENGTH; ssl_client_certificate /etc/nginx/locations.d/root-ca.crt; ssl_verify_client on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; keepalive_timeout 70; include /etc/nginx/locations.d/*.conf; include /var/nginx/locations.d/*.conf; deny all; } ip-allow.conf contents allow 10.94.12.148; allow 10.94.12.165; deny all; webapps.conf contents location / { root /var/lib/tomcat/webapps; proxy_pass http://127.0.0.1:8082; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; proxy_connect_timeout 1200; proxy_send_timeout 1200; proxy_read_timeout 1200; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Tue Nov 26 19:55:12 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 26 Nov 2013 19:55:12 +0000 Subject: Need to compare client certificate CN with an entry in /etc/hosts In-Reply-To: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> References: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> Message-ID: On 26 November 2013 19:19, Radha Venkatesh (radvenka) wrote: > we have to match the client certificate CN with an > existing entry in /etc/hosts. Please could you specify *exactly* what you need to ensure matches? It's not obvious (to me!), given what you wrote and given the minimal information available in /etc/hosts. Jonathan From nginx-forum at nginx.us Tue Nov 26 20:34:45 2013 From: nginx-forum at nginx.us (ruben.herold) Date: Tue, 26 Nov 2013 15:34:45 -0500 Subject: SPDY + proxy cache static content failures In-Reply-To: <5233c5af2a8644288d22d953aeb63e11.NginxMailingListEnglish@forum.nginx.org> References: <32F6D688-5016-451D-A43A-72FC22C5DBBB@gwynne.id.au> <928247ac3646300bd8845120ca3aae9b.NginxMailingListEnglish@forum.nginx.org> <5233c5af2a8644288d22d953aeb63e11.NginxMailingListEnglish@forum.nginx.org> Message-ID: <710ae15830ebaf1c5625aae7749a23eb.NginxMailingListEnglish@forum.nginx.org> It would be nice to get some more informations about the current status and work. Since RHEL 6.5 / Centos 6.5 there is openssl 1.0.1e available for me and I would like to activate SPDY on our systems. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,244983#msg-244983 From nginx-forum at nginx.us Tue Nov 26 20:43:28 2013 From: nginx-forum at nginx.us (ruben.herold) Date: Tue, 26 Nov 2013 15:43:28 -0500 Subject: Any rough ETA on SPDY/3 & push? In-Reply-To: References: Message-ID: <4c9b9ef6e621968ea31e5ee43a7f0f15.NginxMailingListEnglish@forum.nginx.org> Ist there any new spdy code available for testing? I can't find any new code in the version control since spdy/2. Since RHEL 6.5 / Centos 6.5 (currently building but openssl 1.0.1 available in CR repro) this major distibutions are also capable to run spdy. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243684,244984#msg-244984 From radvenka at cisco.com Tue Nov 26 22:48:43 2013 From: radvenka at cisco.com (Radha Venkatesh (radvenka)) Date: Tue, 26 Nov 2013 22:48:43 +0000 Subject: Need to compare client certificate CN with an entry in /etc/hosts In-Reply-To: References: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> Message-ID: <80E8928689149A48BC47B755D8AE25E6123E91D5@xmb-rcd-x14.cisco.com> Jonathan, The requirement is that we match an existing hostname entry in /etc/hosts with the Client certificate CN (CN has to be the hostname of the client). Thanks, Radha. -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Jonathan Matthews Sent: Tuesday, November 26, 2013 11:55 AM To: nginx at nginx.org Subject: Re: Need to compare client certificate CN with an entry in /etc/hosts On 26 November 2013 19:19, Radha Venkatesh (radvenka) wrote: > we have to match the client certificate CN with an > existing entry in /etc/hosts. Please could you specify *exactly* what you need to ensure matches? It's not obvious (to me!), given what you wrote and given the minimal information available in /etc/hosts. Jonathan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From steve at greengecko.co.nz Tue Nov 26 22:55:19 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 27 Nov 2013 11:55:19 +1300 Subject: Need to compare client certificate CN with an entry in /etc/hosts In-Reply-To: <80E8928689149A48BC47B755D8AE25E6123E91D5@xmb-rcd-x14.cisco.com> References: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> <80E8928689149A48BC47B755D8AE25E6123E91D5@xmb-rcd-x14.cisco.com> Message-ID: <1385506519.9428.1919.camel@steve-new> Well, this has absolutely nothing to do with nginx, but openssl x509 -in -text -noout will tell you which domain ( or hostname ) the certificate is for. Steve On Tue, 2013-11-26 at 22:48 +0000, Radha Venkatesh (radvenka) wrote: > Jonathan, > > The requirement is that we match an existing hostname entry in /etc/hosts with the Client certificate CN (CN has to be the hostname of the client). > > Thanks, > Radha. > > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Jonathan Matthews > Sent: Tuesday, November 26, 2013 11:55 AM > To: nginx at nginx.org > Subject: Re: Need to compare client certificate CN with an entry in /etc/hosts > > On 26 November 2013 19:19, Radha Venkatesh (radvenka) > wrote: > > we have to match the client certificate CN with an > > existing entry in /etc/hosts. > > Please could you specify *exactly* what you need to ensure matches? > It's not obvious (to me!), given what you wrote and given the minimal > information available in /etc/hosts. 
> > Jonathan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From contact at jpluscplusm.com Tue Nov 26 23:00:17 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 26 Nov 2013 23:00:17 +0000 Subject: Need to compare client certificate CN with an entry in /etc/hosts In-Reply-To: <80E8928689149A48BC47B755D8AE25E6123E91D5@xmb-rcd-x14.cisco.com> References: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> <80E8928689149A48BC47B755D8AE25E6123E91D5@xmb-rcd-x14.cisco.com> Message-ID: On 26 November 2013 22:48, Radha Venkatesh (radvenka) wrote: > Jonathan, > > The requirement is that we match an existing hostname entry in /etc/hosts with the Client certificate CN (CN has to be the hostname of the client). That's not really saying anything /new/, is it? ;-) Here are some examples of different things that your requirement could mean: 1) Do you want to ensure that the CN that is presented merely *exists* in /etc/hosts? 2) Do you want to ensure that the connection came from an IP that the CN's entry in /etc/hosts matches? 3) Both of #1 and #2 combined? Please give some representative examples of CNs being presented, /etc/hosts contents, and the allow/deny behaviour you want to see based on those combinations. Your requirement, whilst obvious and clear to yourself, is not clear to some people (well, me at least!) as they don't have their head deep inside your project. Regards, Jonathan From francis at daoine.org Tue Nov 26 23:15:37 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 26 Nov 2013 23:15:37 +0000 Subject: Need to compare client certificate CN with an entry in /etc/hosts In-Reply-To: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> References: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> Message-ID: <20131126231537.GA11495@craic.sysops.org> On Tue, Nov 26, 2013 at 07:19:55PM +0000, Radha Venkatesh (radvenka) wrote: Hi there, > An additional requirement is that we have to match the client certificate > CN with an existing entry in /etc/hosts. What would be the simplest > mechanism to do this? HttpPerlModule? Uwsgi? In nginx terms, you have $remote_addr as the client IP address, and you have the variables described in http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables as "things from the certificate". I don't see CN listed there, so I suspect that whatever you do is going to involve some extra parsing of the certificate, which probably means something external or dynamic within nginx.conf. The "simplest" mechanism is probably whichever one you are most familiar with already. Whether you use an embedded language or something external, you can make sure to send the appropriate raw information to it, and let it decide whether this is good or not. You may be interested in trying http://nginx.org/r/auth_request as one possibly way of communicating the success or failure state of your check back to nginx, but it all depends on the extra code that you must write. 
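As a very rough sketch of the auth_request wiring only - the checker service on 127.0.0.1:9000 and the header names are made up, and the actual lookup against /etc/hosts is the code you still have to write:

    location / {
        auth_request /cn-check;
        proxy_pass http://127.0.0.1:8082;
    }

    location = /cn-check {
        internal;
        proxy_pass http://127.0.0.1:9000;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        # hand the raw material to the checker; it answers 2xx to allow,
        # 401/403 to deny the original request
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-Client-IP $remote_addr;
    }
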
Good luck with it, f -- Francis Daly francis at daoine.org From radvenka at cisco.com Wed Nov 27 00:01:16 2013 From: radvenka at cisco.com (Radha Venkatesh (radvenka)) Date: Wed, 27 Nov 2013 00:01:16 +0000 Subject: Need to compare client certificate CN with an entry in /etc/hosts In-Reply-To: <20131126231537.GA11495@craic.sysops.org> References: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> <20131126231537.GA11495@craic.sysops.org> Message-ID: <80E8928689149A48BC47B755D8AE25E6123E9218@xmb-rcd-x14.cisco.com> I found the below snippet which could provide me the cn from the certificate. What would be the easiest way to compare this with an entry in /etc/hosts? Do we need an external module to do this? The "map" directive with regex can be used instead of "if", something like this: map $ssl_client_s_dn $ssl_client_s_dn_cn { default ""; ~/CN=(?[^/]+) $CN; }; -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Francis Daly Sent: Tuesday, November 26, 2013 3:16 PM To: nginx at nginx.org Subject: Re: Need to compare client certificate CN with an entry in /etc/hosts On Tue, Nov 26, 2013 at 07:19:55PM +0000, Radha Venkatesh (radvenka) wrote: Hi there, > An additional requirement is that we have to match the client certificate > CN with an existing entry in /etc/hosts. What would be the simplest > mechanism to do this? HttpPerlModule? Uwsgi? In nginx terms, you have $remote_addr as the client IP address, and you have the variables described in http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables as "things from the certificate". I don't see CN listed there, so I suspect that whatever you do is going to involve some extra parsing of the certificate, which probably means something external or dynamic within nginx.conf. The "simplest" mechanism is probably whichever one you are most familiar with already. Whether you use an embedded language or something external, you can make sure to send the appropriate raw information to it, and let it decide whether this is good or not. You may be interested in trying http://nginx.org/r/auth_request as one possibly way of communicating the success or failure state of your check back to nginx, but it all depends on the extra code that you must write. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Nov 27 02:00:23 2013 From: nginx-forum at nginx.us (btpoole) Date: Tue, 26 Nov 2013 21:00:23 -0500 Subject: Is there a compile version NGINX RTMP for Windows Message-ID: <1c2a8edbb350153b1a4290770308c51a.NginxMailingListEnglish@forum.nginx.org> I am very new to NGINX, actually just came across it the other day but have been reading alot about it. Is there a compiled version with rtmp streaming for windows available. I know I can download the tools and files to create a compiled version. I have attempted to do this numerous times but can't seem to get everything just right for it to work. Thought it may be already be out there. Thanks in advance for any help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244990,244990#msg-244990 From lists at ruby-forum.com Wed Nov 27 04:14:07 2013 From: lists at ruby-forum.com (Beng D.) 
Date: Wed, 27 Nov 2013 05:14:07 +0100 Subject: How to deal with Windows 7 password recovery In-Reply-To: References: Message-ID: <1581afbfdb93361a2394309f9ac13c1d@ruby-forum.com> Forgot Windows 7 administrator password and locked out of computer? Have a try this Windows 7 password recovery without or with software video guide: http://youtu.be/WVUGqVFn9yo -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Wed Nov 27 04:18:07 2013 From: lists at ruby-forum.com (Beng D.) Date: Wed, 27 Nov 2013 05:18:07 +0100 Subject: Forgot Windows 8 Login Password After Upgrade to Windows 8.1, How to Reset it? Message-ID: I upgrade to Windows 8.1, but forgot my Windows 8 administrator login password. I have brought it to computer repair shop, but they fail to helped me reset my forgotten Win 8 password. My best friend, Tony, told me that I could took a fresh reinstallation on my win 8 OS. However, there are so much important data on this computer, and I was afraid of data loss after reinstallation. Then Tony helped me searched on ?how to reset Windows 8 password? on the internet. At first we found this video on YouTube : Forgot Windows 8 Password? How to Reset it without Data Loss? http://youtu.be/1a97woJXUpQ That seems to tell us to fix this issue. We do as the video guide step by step. Download and install the demo program called Windows Password Key on Tony?s computer. Then burn it to USB flash drive. It works prefect. After that, we insert it into my locked Win 8 computer, and boot it from USB. Just here, we came across a problem: how to get into the bios thing? Well, it seems troublesome for us, but we make it finally! At last, I just share, not teach! If you find it really helpful, share it with you friend, family member who are using Windows 8 OS, maybe they need Windows 8 password reset one day! -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Nov 27 07:32:15 2013 From: nginx-forum at nginx.us (joshua1991) Date: Wed, 27 Nov 2013 02:32:15 -0500 Subject: Compiled nginx doesn't work with PHP Message-ID: <5ed2b048626cdb2bf28761800a5ff406.NginxMailingListEnglish@forum.nginx.org> Hello there, I have posted this question on stackoverflow already but thought this place would be more appropriate. I have a properly installed (apt-get) nginx that works perfectly with html and php (php5-fpm) but when I download the source of the same version (I don't modify it for testing purposes), I configure it with "sudo ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin --without-http_gzip_module" then make, install, reload and restart it. All the PHP files stop working after that, they are being served as binary so the browser prompts to save them, HTML files continue to be served properly. Any insights on where to start debugging this issue? All the configs remained the same from the working version, the only difference is that .php files don't work. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244997,244997#msg-244997 From lists at ruby-forum.com Wed Nov 27 07:39:33 2013 From: lists at ruby-forum.com (Berry Huston) Date: Wed, 27 Nov 2013 08:39:33 +0100 Subject: Forgot Windows 8 Login Password After Upgrade to Windows 8.1, How to Reset it? In-Reply-To: References: Message-ID: Do not worry too much about that. There are tools for you to reset Windows password. try this one ever helped me. http://windows8password.com/ It would help you, too. -- Posted via http://www.ruby-forum.com/. 
From lists at ruby-forum.com Wed Nov 27 07:41:34 2013 From: lists at ruby-forum.com (Berry Huston) Date: Wed, 27 Nov 2013 08:41:34 +0100 Subject: How to deal with Windows 7 password recovery In-Reply-To: References: Message-ID: I have ever used this professional application for windows 8 password recovery. http://windows8password.com/ And it is compatible for Windows XP, windows vista, windows 7 and windows 8. So I think it is a right tool for you. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Nov 27 07:53:37 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 27 Nov 2013 02:53:37 -0500 Subject: Is there a compile version NGINX RTMP for Windows In-Reply-To: <1c2a8edbb350153b1a4290770308c51a.NginxMailingListEnglish@forum.nginx.org> References: <1c2a8edbb350153b1a4290770308c51a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <743a92c99a49ffeb4340d7e194ba06c8.NginxMailingListEnglish@forum.nginx.org> Here is one with rtmp(1.06) compiled in, http://nginx-win.ecsds.eu/ rtmp 1.07 upgrade is on the todo list for the next release. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244990,245000#msg-245000 From mdounin at mdounin.ru Wed Nov 27 09:48:23 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Nov 2013 13:48:23 +0400 Subject: Compiled nginx doesn't work with PHP In-Reply-To: <5ed2b048626cdb2bf28761800a5ff406.NginxMailingListEnglish@forum.nginx.org> References: <5ed2b048626cdb2bf28761800a5ff406.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131127094823.GH93176@mdounin.ru> Hello! On Wed, Nov 27, 2013 at 02:32:15AM -0500, joshua1991 wrote: > Hello there, I have posted this question on stackoverflow already but > thought this place would be more appropriate. > > I have a properly installed (apt-get) nginx that works perfectly with html > and php (php5-fpm) but when I download the source of the same version (I > don't modify it for testing purposes), I configure it with "sudo ./configure > --prefix=/etc/nginx --sbin-path=/usr/sbin --without-http_gzip_module" then > make, install, reload and restart it. All the PHP files stop working after > that, they are being served as binary so the browser prompts to save them, > HTML files continue to be served properly. > > Any insights on where to start debugging this issue? All the configs > remained the same from the working version, the only difference is that .php > files don't work. Compare "nginx -V" output for both versions, most likely they use diferent prefixes and/or conf paths. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Nov 27 11:38:09 2013 From: nginx-forum at nginx.us (Peleke) Date: Wed, 27 Nov 2013 06:38:09 -0500 Subject: Subdomains no longer work In-Reply-To: References: Message-ID: <3aae8ee36fdad1d1779e39dd23765ee7.NginxMailingListEnglish@forum.nginx.org> Should I add different config files or what other information are needed to solve this problem? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244807,245003#msg-245003 From nginx-forum at nginx.us Wed Nov 27 12:13:14 2013 From: nginx-forum at nginx.us (bjorntj) Date: Wed, 27 Nov 2013 07:13:14 -0500 Subject: Session is not kept when using Chrome, works for Firefox and IE. 
Message-ID: <91259d7bb3824dec2106b6ae0194306c.NginxMailingListEnglish@forum.nginx.org> I have the following config..: server { listen 80; server_name site.example.com; access_log /var/log/nginx/site.example.com_access.log main; location / { rewrite ^ http://site.example.com/webapp; } location /webapp/ { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://10.10.10.10:8080/webapp/; proxy_cookie_domain 10.10.10.10 site.example.com; proxy_cookie_path /webapp/ /; client_max_body_size 32m; client_body_buffer_size 128k; proxy_connect_timeout 240; proxy_send_timeout 240; proxy_read_timeout 240; proxy_buffers 32 4k; } } This config works as it should when using Firefox and IE, but not Google Chrome.. When using Chrome, I can login to the webapp but after I am logged in and try to do something, the webapp tells me I am not logged in.. I can see the jsessionid cookie under Settings in Chrome and it looks as it should but somehow the cookie is not used for the current session... Are there some special config needed for Chrome or am I missing something else? Regards, BTJ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245004,245004#msg-245004 From nginx-forum at nginx.us Wed Nov 27 13:34:21 2013 From: nginx-forum at nginx.us (btpoole) Date: Wed, 27 Nov 2013 08:34:21 -0500 Subject: Is there a compile version NGINX RTMP for Windows In-Reply-To: <743a92c99a49ffeb4340d7e194ba06c8.NginxMailingListEnglish@forum.nginx.org> References: <1c2a8edbb350153b1a4290770308c51a.NginxMailingListEnglish@forum.nginx.org> <743a92c99a49ffeb4340d7e194ba06c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thank you for the info. Is there documentation on how to set it up? Thanks again Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244990,245005#msg-245005 From nginx-forum at nginx.us Wed Nov 27 14:22:16 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 27 Nov 2013 09:22:16 -0500 Subject: Is there a compile version NGINX RTMP for Windows In-Reply-To: References: <1c2a8edbb350153b1a4290770308c51a.NginxMailingListEnglish@forum.nginx.org> <743a92c99a49ffeb4340d7e194ba06c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3c2997fb6f888cee57dae4ac8dbe9c46.NginxMailingListEnglish@forum.nginx.org> Follow the github pages, plenty of examples; https://github.com/arut/nginx-rtmp-module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244990,245007#msg-245007 From nginx-forum at nginx.us Wed Nov 27 15:10:42 2013 From: nginx-forum at nginx.us (Jugurtha) Date: Wed, 27 Nov 2013 10:10:42 -0500 Subject: Proxy_pass with decode_base64 result In-Reply-To: References: Message-ID: Hello, If you have just a clue, I'm interested. Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244811,245009#msg-245009 From nginx-forum at nginx.us Wed Nov 27 17:56:41 2013 From: nginx-forum at nginx.us (joshua1991) Date: Wed, 27 Nov 2013 12:56:41 -0500 Subject: Compiled nginx doesn't work with PHP In-Reply-To: <20131127094823.GH93176@mdounin.ru> References: <20131127094823.GH93176@mdounin.ru> Message-ID: Ah the capital -V thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244997,245010#msg-245010 From mkasinski.poczta at gmail.com Wed Nov 27 19:18:30 2013 From: mkasinski.poczta at gmail.com (=?iso-8859-2?Q?Marcin_Kasi=F1ski?=) Date: Wed, 27 Nov 2013 20:18:30 +0100 Subject: redirect url in nginx Message-ID: <013f01ceeba5$75e3aef0$61ab0cd0$@gmail.com> Hello, I have a problem with redirect url in nginx. I want to redirect http://almelle.atmserv.pl/poczatek na http://almelle.atmserv.pl/index.php?cat=poczatek In vhost conf file I have: server { listen 80; ## listen for ipv4; this line is default and implied # listen [::]:80 default_server ipv6only=on; ## listen for ipv6 server_name almelle.atmserv.pl; rewrite_log on; root /usr/share/nginx/www/almelle.atmserv.pl; index index.php index.html,index.htm; # server_name almelle.atmserv.pl; # rewrite_log on; access_log /var/log/nginx/almelle.atmserv.pl.access.log; # try_files $uri $uri/ @rewrite; error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } location ~ \.php$ { try_files $uri = 404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # location @rewrite { # # rewrite ^/(.*)$ /index.php?cat=$1; # # } location /poczatek { rewrite ^/(.*) http://almelle.atmserv.pl/index.php?cat=poczatek permanent; } } Please, help me. Thanks. Marcin Kasi?ski -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Wed Nov 27 19:47:02 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 28 Nov 2013 08:47:02 +1300 Subject: redirect url in nginx In-Reply-To: <013f01ceeba5$75e3aef0$61ab0cd0$@gmail.com> References: <013f01ceeba5$75e3aef0$61ab0cd0$@gmail.com> Message-ID: <1385581622.9428.1988.camel@steve-new> rewrite /poczatek(.*) $scheme://$host$uri/index.php?cat=poczatek permanent; *should* work. I've not used rewriites with gets though... Steve On Wed, 2013-11-27 at 20:18 +0100, Marcin Kasi?ski wrote: > Hello, > > > > I have a problem with redirect url in nginx. > > > > I want to redirect http://almelle.atmserv.pl/poczatek na > http://almelle.atmserv.pl/index.php?cat=poczatek > > > > In vhost conf file I have: > > > > server { > > listen 80; ## listen for ipv4; this line is default and > implied > > # listen [::]:80 default_server ipv6only=on; ## listen for > ipv6 > > > > server_name almelle.atmserv.pl; > > rewrite_log on; > > > > > > > > root /usr/share/nginx/www/almelle.atmserv.pl; > > index index.php index.html,index.htm; > > # server_name almelle.atmserv.pl; > > # rewrite_log on; > > > > access_log /var/log/nginx/almelle.atmserv.pl.access.log; > > > > # try_files $uri $uri/ @rewrite; > > > > error_page 404 /404.html; > > > > error_page 500 502 503 504 /50x.html; > > > > > > > > location = /50x.html { > > root /usr/share/nginx/www; > > } > > > > > > location ~ \.php$ { > > try_files $uri = 404; > > fastcgi_pass unix:/var/run/php5-fpm.sock; > > fastcgi_index index.php; > > fastcgi_param SCRIPT_FILENAME $document_root > $fastcgi_script_name; > > include fastcgi_params; > > } > > > > # location @rewrite { > > # > > # rewrite ^/(.*)$ /index.php?cat=$1; > > # > > # } > > > > location /poczatek { > > > > rewrite ^/(.*) > http://almelle.atmserv.pl/index.php?cat=poczatek permanent; > > > > } > > > > } > > > > Please, help me? > > > > Thanks? 
> > > > > > Marcin Kasi?ski > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From appa at perusio.net Wed Nov 27 19:47:48 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 27 Nov 2013 20:47:48 +0100 Subject: redirect url in nginx In-Reply-To: <013f01ceeba5$75e3aef0$61ab0cd0$@gmail.com> References: <013f01ceeba5$75e3aef0$61ab0cd0$@gmail.com> Message-ID: location poczatek { return 301 $scheme://$host/index.PHP?cat=$uri; } Le 27 nov. 2013 20:19, "Marcin Kasi?ski" a ?crit : > Hello, > > > > I have a problem with redirect url in nginx. > > > > I want to redirect http://almelle.atmserv.pl/poczatek na > http://almelle.atmserv.pl/index.php?cat=poczatek > > > > In vhost conf file I have: > > > > server { > > listen 80; ## listen for ipv4; this line is default and implied > > # listen [::]:80 default_server ipv6only=on; ## listen for ipv6 > > > > server_name almelle.atmserv.pl; > > rewrite_log on; > > > > > > > > root /usr/share/nginx/www/almelle.atmserv.pl; > > index index.php index.html,index.htm; > > # server_name almelle.atmserv.pl; > > # rewrite_log on; > > > > access_log /var/log/nginx/almelle.atmserv.pl.access.log; > > > > # try_files $uri $uri/ @rewrite; > > > > error_page 404 /404.html; > > > > error_page 500 502 503 504 /50x.html; > > > > > > > > location = /50x.html { > > root /usr/share/nginx/www; > > } > > > > > > location ~ \.php$ { > > try_files $uri = 404; > > fastcgi_pass unix:/var/run/php5-fpm.sock; > > fastcgi_index index.php; > > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > > include fastcgi_params; > > } > > > > # location @rewrite { > > # > > # rewrite ^/(.*)$ /index.php?cat=$1; > > # > > # } > > > > location /poczatek { > > > > rewrite ^/(.*) http://almelle.atmserv.pl/index.php?cat=poczatekpermanent; > > > > } > > > > } > > > > Please, help me? > > > > Thanks? > > > > > > Marcin Kasi?ski > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Wed Nov 27 19:49:53 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 27 Nov 2013 20:49:53 +0100 Subject: redirect url in nginx In-Reply-To: References: <013f01ceeba5$75e3aef0$61ab0cd0$@gmail.com> Message-ID: It's location /poczatek { Le 27 nov. 2013 20:47, "Ant?nio P. P. Almeida" a ?crit : > location poczatek { > return 301 $scheme://$host/index.PHP?cat=$uri; > } > Le 27 nov. 2013 20:19, "Marcin Kasi?ski" a > ?crit : > >> Hello, >> >> >> >> I have a problem with redirect url in nginx. 
>> >> >> >> I want to redirect http://almelle.atmserv.pl/poczatek na >> http://almelle.atmserv.pl/index.php?cat=poczatek >> >> >> >> In vhost conf file I have: >> >> >> >> server { >> >> listen 80; ## listen for ipv4; this line is default and implied >> >> # listen [::]:80 default_server ipv6only=on; ## listen for ipv6 >> >> >> >> server_name almelle.atmserv.pl; >> >> rewrite_log on; >> >> >> >> >> >> >> >> root /usr/share/nginx/www/almelle.atmserv.pl; >> >> index index.php index.html,index.htm; >> >> # server_name almelle.atmserv.pl; >> >> # rewrite_log on; >> >> >> >> access_log /var/log/nginx/almelle.atmserv.pl.access.log; >> >> >> >> # try_files $uri $uri/ @rewrite; >> >> >> >> error_page 404 /404.html; >> >> >> >> error_page 500 502 503 504 /50x.html; >> >> >> >> >> >> >> >> location = /50x.html { >> >> root /usr/share/nginx/www; >> >> } >> >> >> >> >> >> location ~ \.php$ { >> >> try_files $uri = 404; >> >> fastcgi_pass unix:/var/run/php5-fpm.sock; >> >> fastcgi_index index.php; >> >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> >> include fastcgi_params; >> >> } >> >> >> >> # location @rewrite { >> >> # >> >> # rewrite ^/(.*)$ /index.php?cat=$1; >> >> # >> >> # } >> >> >> >> location /poczatek { >> >> >> >> rewrite ^/(.*) http://almelle.atmserv.pl/index.php?cat=poczatekpermanent; >> >> >> >> } >> >> >> >> } >> >> >> >> Please, help me? >> >> >> >> Thanks? >> >> >> >> >> >> Marcin Kasi?ski >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Nov 27 23:07:12 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 27 Nov 2013 23:07:12 +0000 Subject: Need to compare client certificate CN with an entry in /etc/hosts In-Reply-To: <80E8928689149A48BC47B755D8AE25E6123E9218@xmb-rcd-x14.cisco.com> References: <80E8928689149A48BC47B755D8AE25E6123E9115@xmb-rcd-x14.cisco.com> <20131126231537.GA11495@craic.sysops.org> <80E8928689149A48BC47B755D8AE25E6123E9218@xmb-rcd-x14.cisco.com> Message-ID: <20131127230712.GA15722@craic.sysops.org> On Wed, Nov 27, 2013 at 12:01:16AM +0000, Radha Venkatesh (radvenka) wrote: Hi there, > I found the below snippet which could provide me the cn from the certificate. Great, now you have a variable to hold the CN that you want to do something with. > What would be the easiest way to compare this with an entry in /etc/hosts? Do we need an external module to do this? > I think you need some form of programming, if you want to read /etc/hosts "live" each time -- you can try whatever language you have compiled in to your nginx, or you can use any one of the *_pass directives to talk to whatever you write in the language of your choice. If you are happy to statically write the contents of /etc/hosts into your nginx.conf, so that it is only read on startup, you could probably do it all in config: use another "map" to check that $ssl_client_s_dn_cn is one of your expected values: map $ssl_client_s_dn_cn $is_cn_in_etc_hosts { default "no"; hostname1 "yes"; host2.example.com "yes"; } Or you could check that the matching ip address is the same as $remote_addr, if that is what you want: map $ssl_client_s_dn_cn $what_ip_should_cn_have { default ""; hostname1 "127.0.0.3"; host2.example.com "127.0.0.4"; } and then compare $what_ip_should_cn_have with $remote_addr. 
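A minimal, untested sketch of how those maps might then be used, assuming they are declared at http level and that $ssl_client_s_dn_cn is populated as described earlier in the thread; the server and location details here are purely illustrative:

    server {
        listen 443 ssl;
        ...
        location / {
            # reject requests whose client certificate CN is not one of
            # the names written into the first map above
            if ($is_cn_in_etc_hosts = "no") {
                return 403;
            }
            ...
        }
    }

The second map works the same way, except that the test would compare $what_ip_should_cn_have against $remote_addr before deciding to return 403.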
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 27 23:10:35 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 27 Nov 2013 23:10:35 +0000 Subject: Subdomains no longer work In-Reply-To: <3aae8ee36fdad1d1779e39dd23765ee7.NginxMailingListEnglish@forum.nginx.org> References: <3aae8ee36fdad1d1779e39dd23765ee7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131127231035.GB15722@craic.sysops.org> On Wed, Nov 27, 2013 at 06:38:09AM -0500, Peleke wrote: Hi there, > Should I add different config files or what other information are needed to > solve this problem? What does "no longer work" mean? Ideally your answer would be of the form "I issue this curl request; I get this response; but I expect that response; before I changed things, I got that response; and here are the things that I changed since it was working". f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 27 23:14:07 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 27 Nov 2013 23:14:07 +0000 Subject: Proxy_pass with decode_base64 result In-Reply-To: References: Message-ID: <20131127231407.GC15722@craic.sysops.org> On Wed, Nov 27, 2013 at 10:10:42AM -0500, Jugurtha wrote: Hi there, > If you have just a clue, I'm interested. The clue is in the error message: "no resolver defined". http://nginx.org/r/resolver f -- Francis Daly francis at daoine.org From lists at ruby-forum.com Thu Nov 28 01:57:45 2013 From: lists at ruby-forum.com (Honeyer B.) Date: Thu, 28 Nov 2013 02:57:45 +0100 Subject: Forgot Windows 8 Login Password After Upgrade to Windows 8.1, How to Reset it? In-Reply-To: References: Message-ID: <93cd4d64131c926d7d32475342ead46f@ruby-forum.com> I used Windows Password Recovery Tool Ultimate for my Samsung Windows 8 login password recovery a couple months ago. Get it from http://t.co/qzsOIFcaYT -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Thu Nov 28 02:03:30 2013 From: lists at ruby-forum.com (Honeyer B.) Date: Thu, 28 Nov 2013 03:03:30 +0100 Subject: How to crack Windows 7 administrator user password Message-ID: ?I was locked out of my Dell Inspiron Windows 7 Pro. When I open it and wanted to login computer, the screen showed the message "This computer is locked. Need Administrator to open." How could I crack this admin login password to access pc earlier?? how to unlock Windows 7 admin
password unlock Windows 7 login password Do you anyone ever meet such password problem? Well, it is not hard to remove the locked Windows 7 password. You can fix your netbook as well as laptop password problems quickly with the right software. If you have already searched terms on the internet via Google: ?Forgot Windows 7 password?, you might have found and read various articles like ?3 Free Windows 7 Password Recovery Tools?, ?5 Easy Ways to Recover Lost Windows 7 Password?, ?How to Bypass Windows 7 Password via Safe Mode??,etc. Most Windows users are not always looking for a technical guide on how to unlock computer after you have locked yourself out of computer or forgotten laptop login password, but just looking for a simple quick way to regain access to the locked pc. The Windows Password Recovery Tool Ultimate below when used will efficiently get you login locked pc again. Whatever which Windows system you are running like Windows 8/7/Vista/XP/Server 2012/2008/2003 and even the newest Windows 8.1 version, the tool can work for them smoothly. If you need unlock your locked Windows 7 login password, then unlock laptop with Windows Password Recovery Tool Ultimate. Then you can get your Windows 7 login password within clicks. What you need is: 1. A spare CD/DVD or USB disk 2. This Windows Password Recovery Tool Ultimate 3. A few minutes of your time!

Remove Windows 7 Password Efficiently in 3 Steps:

Step 1, download Windows Password Recovery Tool Ultimate from http://www.windowspasswordsrecovery.com , install and run it on another available computer. Step 2, Get a spare CD/DVD/USB to burn the iSO image file into a Windows 7 password reset disk. Do BIOS setting for the locked Windows 7 laptop and insert the newly burned Windows 7 password recovery disk to remove or reset locked unknown Windows 7 login account password. Step 3, Restart locked Windows 7 computer and login with blank password or create a new account easily.
Follow video tutorial to crack Windows 7 Password step by step!
Once you buy this software you don?t need to worry about ever forgetting your windows password as you will have this software handy and you could possibly help out t friend as well who has got into trouble with forgetting their password!! And best of all is that you don?t need any technical or programming skills to use this software. You can download the software and reset your password in just 3 easy steps! -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Thu Nov 28 02:35:13 2013 From: lists at ruby-forum.com (Hang R.) Date: Thu, 28 Nov 2013 03:35:13 +0100 Subject: Forgot Windows 8 Login Password After Upgrade to Windows 8.1, How to Reset it? In-Reply-To: References: Message-ID: <2820479c8e7a7655be1980cbe49074c0@ruby-forum.com> The most powerful Windows 8 password reset tool I have ever used is Windows Password Key. It is really helpful. http://lnkd.in/ixJ2T5 -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Nov 28 06:48:15 2013 From: nginx-forum at nginx.us (pwrlove) Date: Thu, 28 Nov 2013 01:48:15 -0500 Subject: How can I use IOCP module on windows? Message-ID: Hi there, Can I use iocp module (ngx_iocp_module.c) ? If possible, how can I configure it? Could anyone knows about it? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245034,245034#msg-245034 From nginx-forum at nginx.us Thu Nov 28 08:10:37 2013 From: nginx-forum at nginx.us (Jugurtha) Date: Thu, 28 Nov 2013 03:10:37 -0500 Subject: Proxy_pass with decode_base64 result In-Reply-To: <20131127231407.GC15722@craic.sysops.org> References: <20131127231407.GC15722@craic.sysops.org> Message-ID: <0ae8c1543c99194441b13b7118e6a7d2.NginxMailingListEnglish@forum.nginx.org> Hello, Thank you for the response Francis, I saw this after several searches on the board but without success. I will continue my investigations to try to solve this problem. Thank you for the clue ;) Merci. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244811,245035#msg-245035 From shahzaib.cb at gmail.com Thu Nov 28 08:16:23 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 28 Nov 2013 13:16:23 +0500 Subject: Disable hotlinking protection for specific file !! Message-ID: Hello, The root directory path is /var/www/html/domain and every file within it is hotlink protected for (mp4). Now here's a file named /var/www/html/domain/videos/test.mp4 and i want this file to be available for public with no hotlinking restriction. Is that possible with nginx ? vhost config is given below. Please help me regarding it. server { listen 80; server_name mydomain.com; client_max_body_size 800m; limit_rate 250k; access_log /websites/theos.in/logs/access.log main; location / { root /var/www/html/domain; index index.html index.htm index.php; autoindex off; } location ~ -720\.(mp4)$ { mp4; expires 7d; limit_rate 1000k; root /var/www/html/domain; valid_referers none blocked test.com *.test.com *. facebook.com *.twitter.com; if ($invalid_referer) { return 403; } } location ~ -480\.(mp4)$ { mp4; expires 7d; limit_rate 250k; root /var/www/html/domain; valid_referers none blocked test.com *.test.com *. facebook.com *.twitter.com; if ($invalid_referer) { return 403; } } location ~ \.(mp4)$ { mp4; expires 7d; root /var/www/html/domain; valid_referers none blocked test.com *.test.com *. facebook.com *.twitter.com; if ($invalid_referer) { return 403; } } Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Thu Nov 28 08:53:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Nov 2013 12:53:14 +0400 Subject: How can I use IOCP module on windows? In-Reply-To: References: Message-ID: <20131128085314.GW93176@mdounin.ru> Hello! On Thu, Nov 28, 2013 at 01:48:15AM -0500, pwrlove wrote: > Hi there, > > Can I use iocp module (ngx_iocp_module.c) ? > If possible, how can I configure it? > > Could anyone knows about it? It's incomplete and doesn't work. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Nov 28 08:55:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Nov 2013 12:55:53 +0400 Subject: Disable hotlinking protection for specific file !! In-Reply-To: References: Message-ID: <20131128085553.GX93176@mdounin.ru> Hello! On Thu, Nov 28, 2013 at 01:16:23PM +0500, shahzaib shahzaib wrote: > Hello, > > The root directory path is /var/www/html/domain and every file > within it is hotlink protected for (mp4). Now here's a file named > /var/www/html/domain/videos/test.mp4 and i want this file to be available > for public with no hotlinking restriction. Is that possible with nginx ? > vhost config is given below. Please help me regarding it. Adding an exact match location will help, e.g.: location = /test.mp4 { # no hotlink protection here ... } See http://nginx.org/r/location for details on location matching. -- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Thu Nov 28 09:09:21 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 28 Nov 2013 09:09:21 +0000 Subject: Proxy_pass with decode_base64 result In-Reply-To: <0ae8c1543c99194441b13b7118e6a7d2.NginxMailingListEnglish@forum.nginx.org> References: <20131127231407.GC15722@craic.sysops.org> <0ae8c1543c99194441b13b7118e6a7d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131128090921.GD15722@craic.sysops.org> On Thu, Nov 28, 2013 at 03:10:37AM -0500, Jugurtha wrote: Hi there, > Thank you for the response Francis, I saw this after several searches on the > board but without success. > I will continue my investigations to try to solve this problem. > > Thank you for the clue ;) You're welcome. I confess I'm not sure why you're still seeing a problem. You have "proxy_pass $variable" -- this means you must have a "resolver" configured which is a DNS server that nginx can use to find the IP address associated with whatever hostname is included in $variable. If you don't have a resolver, you get the error message you reported. If you do have a resolver, you should see success, or a different error message indicating why it failed. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Nov 28 11:21:49 2013 From: nginx-forum at nginx.us (fatine,al) Date: Thu, 28 Nov 2013 06:21:49 -0500 Subject: NGINX 500 http error In-Reply-To: <5d059809a78b8fea62f6d010a7c6650b.NginxMailingListEnglish@forum.nginx.org> References: <20131115132151.GQ95765@mdounin.ru> <3d57ede6d4d086be412204da3a43654e.NginxMailingListEnglish@forum.nginx.org> <719e036e94e70427f03196a4f080325a.NginxMailingListEnglish@forum.nginx.org> <5d059809a78b8fea62f6d010a7c6650b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56c3928692421282e9f1938dd8a7e758.NginxMailingListEnglish@forum.nginx.org> Hi, The problem is solved. 
I installed nginx-1.5.6 and naxsi-core-0.50 from sources, and compile nginx with naxsi module and some options : ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-pcre-jit --with-http_ssl_module --with-debug --with-ipv6 --with-http_stub_status_module --add-module=../gnosek-nginx-upstream-fair-a18b409/ --add-module=../ngx_cache_purge-2.1 --add-module=../naxsi-core-0.50/naxsi_src/ It works fine now. Thank you for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,245052#msg-245052 From nginx-forum at nginx.us Thu Nov 28 11:47:11 2013 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 28 Nov 2013 06:47:11 -0500 Subject: NGINX 500 http error In-Reply-To: <56c3928692421282e9f1938dd8a7e758.NginxMailingListEnglish@forum.nginx.org> References: <20131115132151.GQ95765@mdounin.ru> <3d57ede6d4d086be412204da3a43654e.NginxMailingListEnglish@forum.nginx.org> <719e036e94e70427f03196a4f080325a.NginxMailingListEnglish@forum.nginx.org> <5d059809a78b8fea62f6d010a7c6650b.NginxMailingListEnglish@forum.nginx.org> <56c3928692421282e9f1938dd8a7e758.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8cb2846812e4dc52b0eba105531f2e9a.NginxMailingListEnglish@forum.nginx.org> Note: --add-module=../naxsi... should be the first one. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,245053#msg-245053 From mkasinski.poczta at gmail.com Thu Nov 28 12:36:39 2013 From: mkasinski.poczta at gmail.com (=?iso-8859-2?Q?Marcin_Kasi=F1ski?=) Date: Thu, 28 Nov 2013 13:36:39 +0100 Subject: redirect url in nginx Message-ID: <016001ceec36$7ceb5630$76c20290$@gmail.com> Witam, Sorry, but solutions that you send to me isn't working. Any more idea's? Marcin Kasi?ski mobile: (+48) 512 - 370 - 209 skype: marcin-kasinski GG: 1258720 JabberID: marcin-kasinski at aqq.eu mail: mkasinski.poczta at gmail.com mail: marcin-kasinski at wp.pl website: http://markasblog.pl -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of nginx-request at nginx.org Sent: Thursday, November 28, 2013 1:00 PM To: nginx at nginx.org Subject: nginx Digest, Vol 49, Issue 46 Send nginx mailing list submissions to nginx at nginx.org To subscribe or unsubscribe via the World Wide Web, visit http://mailman.nginx.org/mailman/listinfo/nginx or, via email, send a message with subject or body 'help' to nginx-request at nginx.org You can reach the person managing the list at nginx-owner at nginx.org When replying, please edit your Subject line so it is more specific than "Re: Contents of nginx digest..." Today's Topics: 1. Re: Proxy_pass with decode_base64 result (Jugurtha) 2. Disable hotlinking protection for specific file !! (shahzaib shahzaib) 3. Re: How can I use IOCP module on windows? (Maxim Dounin) 4. Re: Disable hotlinking protection for specific file !! (Maxim Dounin) 5. Re: Proxy_pass with decode_base64 result (Francis Daly) 6. Re: NGINX 500 http error (fatine,al) 7. 
Re: NGINX 500 http error (itpp2012) ---------------------------------------------------------------------- Message: 1 Date: Thu, 28 Nov 2013 03:10:37 -0500 From: "Jugurtha" To: nginx at nginx.org Subject: Re: Proxy_pass with decode_base64 result Message-ID: <0ae8c1543c99194441b13b7118e6a7d2.NginxMailingListEnglish at forum.nginx.org> Content-Type: text/plain; charset=UTF-8 Hello, Thank you for the response Francis, I saw this after several searches on the board but without success. I will continue my investigations to try to solve this problem. Thank you for the clue ;) Merci. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244811,245035#msg-245035 ------------------------------ Message: 2 Date: Thu, 28 Nov 2013 13:16:23 +0500 From: shahzaib shahzaib To: nginx at nginx.org Subject: Disable hotlinking protection for specific file !! Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hello, The root directory path is /var/www/html/domain and every file within it is hotlink protected for (mp4). Now here's a file named /var/www/html/domain/videos/test.mp4 and i want this file to be available for public with no hotlinking restriction. Is that possible with nginx ? vhost config is given below. Please help me regarding it. server { listen 80; server_name mydomain.com; client_max_body_size 800m; limit_rate 250k; access_log /websites/theos.in/logs/access.log main; location / { root /var/www/html/domain; index index.html index.htm index.php; autoindex off; } location ~ -720\.(mp4)$ { mp4; expires 7d; limit_rate 1000k; root /var/www/html/domain; valid_referers none blocked test.com *.test.com *. facebook.com *.twitter.com; if ($invalid_referer) { return 403; } } location ~ -480\.(mp4)$ { mp4; expires 7d; limit_rate 250k; root /var/www/html/domain; valid_referers none blocked test.com *.test.com *. facebook.com *.twitter.com; if ($invalid_referer) { return 403; } } location ~ \.(mp4)$ { mp4; expires 7d; root /var/www/html/domain; valid_referers none blocked test.com *.test.com *. facebook.com *.twitter.com; if ($invalid_referer) { return 403; } } Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Thu, 28 Nov 2013 12:53:14 +0400 From: Maxim Dounin To: nginx at nginx.org Subject: Re: How can I use IOCP module on windows? Message-ID: <20131128085314.GW93176 at mdounin.ru> Content-Type: text/plain; charset=us-ascii Hello! On Thu, Nov 28, 2013 at 01:48:15AM -0500, pwrlove wrote: > Hi there, > > Can I use iocp module (ngx_iocp_module.c) ? > If possible, how can I configure it? > > Could anyone knows about it? It's incomplete and doesn't work. -- Maxim Dounin http://nginx.org/en/donation.html ------------------------------ Message: 4 Date: Thu, 28 Nov 2013 12:55:53 +0400 From: Maxim Dounin To: nginx at nginx.org Subject: Re: Disable hotlinking protection for specific file !! Message-ID: <20131128085553.GX93176 at mdounin.ru> Content-Type: text/plain; charset=us-ascii Hello! On Thu, Nov 28, 2013 at 01:16:23PM +0500, shahzaib shahzaib wrote: > Hello, > > The root directory path is /var/www/html/domain and every file > within it is hotlink protected for (mp4). Now here's a file named > /var/www/html/domain/videos/test.mp4 and i want this file to be available > for public with no hotlinking restriction. Is that possible with nginx ? > vhost config is given below. Please help me regarding it. Adding an exact match location will help, e.g.: location = /test.mp4 { # no hotlink protection here ... 
} See http://nginx.org/r/location for details on location matching. -- Maxim Dounin http://nginx.org/en/donation.html ------------------------------ Message: 5 Date: Thu, 28 Nov 2013 09:09:21 +0000 From: Francis Daly To: nginx at nginx.org Subject: Re: Proxy_pass with decode_base64 result Message-ID: <20131128090921.GD15722 at craic.sysops.org> Content-Type: text/plain; charset=us-ascii On Thu, Nov 28, 2013 at 03:10:37AM -0500, Jugurtha wrote: Hi there, > Thank you for the response Francis, I saw this after several searches on the > board but without success. > I will continue my investigations to try to solve this problem. > > Thank you for the clue ;) You're welcome. I confess I'm not sure why you're still seeing a problem. You have "proxy_pass $variable" -- this means you must have a "resolver" configured which is a DNS server that nginx can use to find the IP address associated with whatever hostname is included in $variable. If you don't have a resolver, you get the error message you reported. If you do have a resolver, you should see success, or a different error message indicating why it failed. Good luck with it, f -- Francis Daly francis at daoine.org ------------------------------ Message: 6 Date: Thu, 28 Nov 2013 06:21:49 -0500 From: "fatine,al" To: nginx at nginx.org Subject: Re: NGINX 500 http error Message-ID: <56c3928692421282e9f1938dd8a7e758.NginxMailingListEnglish at forum.nginx.org> Content-Type: text/plain; charset=UTF-8 Hi, The problem is solved. I installed nginx-1.5.6 and naxsi-core-0.50 from sources, and compile nginx with naxsi module and some options : ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-pcre-jit --with-http_ssl_module --with-debug --with-ipv6 --with-http_stub_status_module --add-module=../gnosek-nginx-upstream-fair-a18b409/ --add-module=../ngx_cache_purge-2.1 --add-module=../naxsi-core-0.50/naxsi_src/ It works fine now. Thank you for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,245052#msg-245052 ------------------------------ Message: 7 Date: Thu, 28 Nov 2013 06:47:11 -0500 From: "itpp2012" To: nginx at nginx.org Subject: Re: NGINX 500 http error Message-ID: <8cb2846812e4dc52b0eba105531f2e9a.NginxMailingListEnglish at forum.nginx.org> Content-Type: text/plain; charset=UTF-8 Note: --add-module=../naxsi... should be the first one. 
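Applied to the configure line quoted a little earlier in this digest, that note means moving the naxsi module to the front of the --add-module list, roughly as follows (same paths as in the original post, split across lines only for readability):

    ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx \
      --conf-path=/etc/nginx/nginx.conf --lock-path=/var/lock/nginx.lock \
      --pid-path=/var/run/nginx.pid \
      --error-log-path=/var/log/nginx/error.log \
      --http-log-path=/var/log/nginx/access.log \
      --http-client-body-temp-path=/var/lib/nginx/body \
      --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
      --http-proxy-temp-path=/var/lib/nginx/proxy \
      --http-scgi-temp-path=/var/lib/nginx/scgi \
      --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \
      --with-pcre-jit --with-http_ssl_module --with-debug --with-ipv6 \
      --with-http_stub_status_module \
      --add-module=../naxsi-core-0.50/naxsi_src/ \
      --add-module=../gnosek-nginx-upstream-fair-a18b409/ \
      --add-module=../ngx_cache_purge-2.1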
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,245053#msg-245053 ------------------------------ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx End of nginx Digest, Vol 49, Issue 46 ************************************* From debasish at geekyarticles.com Thu Nov 28 12:57:37 2013 From: debasish at geekyarticles.com (Debasish Ray Chawdhuri) Date: Thu, 28 Nov 2013 18:27:37 +0530 Subject: How to allow 404 server error to propagate to the user Message-ID: I am currently raising a 404 error from the backend server (running on port 9000), but nginx, instead of giving the client the same 404 error, responds with a 505, but still send the body of the response that is returned from the server with the 404. How do I fix this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From debasish at geekyarticles.com Thu Nov 28 13:25:58 2013 From: debasish at geekyarticles.com (Debasish Ray Chawdhuri) Date: Thu, 28 Nov 2013 18:55:58 +0530 Subject: How to allow 404 server error to propagate to the user In-Reply-To: References: Message-ID: The log created are as follows 14.141.60.139 - - [28/Nov/2013:13:24:13 +0000] "GET /afasdfadsfsadf HTTP/1.1" 505 5 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0" 14.141.60.139 - - [28/Nov/2013:13:24:19 +0000] "-" 400 0 "-" "-" On Thu, Nov 28, 2013 at 6:27 PM, Debasish Ray Chawdhuri < debasish at geekyarticles.com> wrote: > I am currently raising a 404 error from the backend server (running on > port 9000), but nginx, instead of giving the client the same 404 error, > responds with a 505, but still send the body of the response that is > returned from the server with the 404. > > How do I fix this? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 29 01:25:48 2013 From: nginx-forum at nginx.us (Magissia) Date: Thu, 28 Nov 2013 20:25:48 -0500 Subject: Bandwidth limiting per virtualhost In-Reply-To: <4e21c200334232a830bb059613adce28@ruby-forum.com> References: <4e21c200334232a830bb059613adce28@ruby-forum.com> Message-ID: <0d2da54957f91216d57b5560562f71b4.NginxMailingListEnglish@forum.nginx.org> I'm interested by the answer too, after lot of searching, it seems we can only limit clients bandwidth, or whatever we want request rate. Documentation on throttling bandwidth of a location or a virtual-host is lacking. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236319,245060#msg-245060 From andre at digirati.com.br Fri Nov 29 11:12:39 2013 From: andre at digirati.com.br (Andre Nathan) Date: Fri, 29 Nov 2013 09:12:39 -0200 Subject: Service availability during reload Message-ID: <529876A7.3060301@digirati.com.br> Hello In an apache webserver with many virtual hosts, a "reload" command can cause a (quick) unavailability of the service while apache re-reads its configuration files. Does anyone have any experience with this on Nginx? Will sending it a SIGHUP cause the main process to block and not be able to handle connections during that instant? Thank you, Andre -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 555 bytes Desc: OpenPGP digital signature URL: From oleg.khrustov at gmail.com Fri Nov 29 11:36:18 2013 From: oleg.khrustov at gmail.com (Oleg V. 
Khrustov) Date: Fri, 29 Nov 2013 15:36:18 +0400 Subject: Nginx fastcgi_intercept_errors Message-ID: nginx/1.5.4 doesnt intercept fastcgi errors. location / { .... fastcgi_pass bg; error_page 500 502 503 504 408 404 =204 /204.htm; fastcgi_intercept_errors on; ..... } However in root location access log we still see 91.192.148.232 - - [29/Nov/2013:13:39:20 +0400] "POST / HTTP/1.1" 504 182 "-" "-" "0.138" And tcpdump capture shows that we still send 504: Py...HTTP/1.1 504 Gateway Time-out Server: nginx/1.5.4 Date: Fri, 29 Nov 2013 09:42:12 GMT Content-Type: text/html Content-Length: 182 Connection: keep-alive 504 Gateway Time-out

504 Gateway Time-out
nginx/1.5.4
What can be wrong here? Thanks, OK -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg.khrustov at gmail.com Fri Nov 29 11:50:52 2013 From: oleg.khrustov at gmail.com (Oleg V. Khrustov) Date: Fri, 29 Nov 2013 15:50:52 +0400 Subject: Nginx fastcgi_next_upstream off doesn't work Message-ID: nginx/1.5.4 location / { ... fastcgi_pass bg; fastcgi_next_upstream off; ... } upstream bg { server unix:/tmp/dsp.1.sock; server unix:/tmp/dsp.2.sock; server unix:/tmp/dsp.3.sock; server unix:/tmp/dsp.4.sock; server unix:/tmp/dsp.5.sock; server unix:/tmp/dsp.6.sock; server unix:/tmp/dsp.7.sock; server unix:/tmp/dsp.8.sock; server unix:/tmp/dsp.9.sock; server unix:/tmp/dsp.10.sock; server unix:/tmp/dsp.11.sock; server unix:/tmp/dsp.12.sock; server unix:/tmp/dsp.13.sock; } log_format combined-r '$time_local: upstream $upstream_addr responded $upstream_status in $upstream_response_time ms, request status $status is in $request_time ms'; 29/Nov/2013:13:57:51 +0400: upstream unix:/tmp/dsp.4.sock : unix:/tmp/dsp.9.sock responded 504 : 504 in 0.148 : 0.050 ms, request status 504 is in 0.198 ms So nginx still pass request to next upstream dsp.4 -> dsp.9 What can be wrong with this config? Thanks, OK -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Nov 29 11:59:12 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 29 Nov 2013 12:59:12 +0100 Subject: Service availability during reload In-Reply-To: <529876A7.3060301@digirati.com.br> References: <529876A7.3060301@digirati.com.br> Message-ID: Hello, Sometimes reading the documentation might help : http://nginx.org/en/docs/control.html If you look at your service file, you'll notice that a 'reload' means sending SIGHUP to the master process for the particular case of nginx. I'll leave the conclusion to you, assuming that the doc is clear enough. As a general thinking methodology, considering implicitly that other webserver (especially modern ones with huge shifts in paradigms) behave the same as Apache only leads to deception... --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 29 12:00:56 2013 From: nginx-forum at nginx.us (Jugurtha) Date: Fri, 29 Nov 2013 07:00:56 -0500 Subject: Proxy_pass with decode_base64 result In-Reply-To: <20131128090921.GD15722@craic.sysops.org> References: <20131128090921.GD15722@craic.sysops.org> Message-ID: <44277b00d4a12a9a81e2c20a0212875d.NginxMailingListEnglish@forum.nginx.org> Hi and thank you for your help Francis ;) I had already tested the IP resolver with Google DNS, but I just realized that the firewall of the company blocked the DNS traffic !!! Also, my server has a DNS entry (/etc/resolv.conf) so I do not understand why I had to add this tag in nginx. Anyway, with this 2 lines, everything is OK now : resolver 172.16.42.60; (Our DNS) proxy_set_header X-Forwarded-Host $host; Again thank you for your help and for the work you do on this board. Ps: sorry for my bad english ;) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244811,245068#msg-245068 From mdounin at mdounin.ru Fri Nov 29 12:13:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 16:13:17 +0400 Subject: Service availability during reload In-Reply-To: <529876A7.3060301@digirati.com.br> References: <529876A7.3060301@digirati.com.br> Message-ID: <20131129121316.GI93176@mdounin.ru> Hello! 
On Fri, Nov 29, 2013 at 09:12:39AM -0200, Andre Nathan wrote: > Hello > > In an apache webserver with many virtual hosts, a "reload" command can > cause a (quick) unavailability of the service while apache re-reads its > configuration files. > > Does anyone have any experience with this on Nginx? Will sending it a > SIGHUP cause the main process to block and not be able to handle > connections during that instant? Short answer: This is not a problem with nginx. Long answer: Connections are handled by worker processes. On SIGHUP, master process parses new configuration and spawns new worker processes (while old workers still handle requests). Then it asks old workers to gracefully shutdown. That is, all the time requests are handled by worker processes and service is available. Moreover, it is possible to upgrade nginx binary on the fly without loosing any single request. See here for more details: http://nginx.org/en/docs/control.html -- Maxim Dounin http://nginx.org/en/donation.html From andre at digirati.com.br Fri Nov 29 12:27:54 2013 From: andre at digirati.com.br (Andre Nathan) Date: Fri, 29 Nov 2013 10:27:54 -0200 Subject: Service availability during reload In-Reply-To: <20131129121316.GI93176@mdounin.ru> References: <529876A7.3060301@digirati.com.br> <20131129121316.GI93176@mdounin.ru> Message-ID: <5298884A.3020508@digirati.com.br> On 11/29/2013 10:13 AM, Maxim Dounin wrote: > Connections are handled by worker processes. On SIGHUP, master > process parses new configuration and spawns new worker processes > (while old workers still handle requests). Then it asks old > workers to gracefully shutdown. That is, all the time requests > are handled by worker processes and service is available. Thank you so much Maxim. That was the impression I had skimming through ngx_cycle.c and ngx_process_cycle.c but since I'm not familiar with the code, I just wanted to make sure. Cheers, Andre -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 555 bytes Desc: OpenPGP digital signature URL: From mdounin at mdounin.ru Fri Nov 29 12:59:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 16:59:56 +0400 Subject: Nginx fastcgi_intercept_errors In-Reply-To: References: Message-ID: <20131129125956.GJ93176@mdounin.ru> Hello! On Fri, Nov 29, 2013 at 03:36:18PM +0400, Oleg V. Khrustov wrote: > nginx/1.5.4 doesnt intercept fastcgi errors. > location / { > .... > fastcgi_pass bg; > error_page 500 502 503 504 408 404 =204 /204.htm; > fastcgi_intercept_errors on; > > ..... > } > > However in root location access log we still see > > 91.192.148.232 - - [29/Nov/2013:13:39:20 +0400] "POST / HTTP/1.1" 504 182 > "-" "-" "0.138" > > And tcpdump capture shows that we still send 504: > > Py...HTTP/1.1 504 Gateway Time-out > Server: nginx/1.5.4 [...] > What can be wrong here? Most likely reasons, in no particular order: - You've forgot to reload configuration, or the configuration was rejected due to errors and you've missed it. - The request is handled in other location and/or server, not the one you are looking at. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Nov 29 13:02:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 17:02:34 +0400 Subject: Nginx fastcgi_intercept_errors In-Reply-To: <20131129125956.GJ93176@mdounin.ru> References: <20131129125956.GJ93176@mdounin.ru> Message-ID: <20131129130234.GK93176@mdounin.ru> Hello! 
On Fri, Nov 29, 2013 at 04:59:56PM +0400, Maxim Dounin wrote: > Hello! > > On Fri, Nov 29, 2013 at 03:36:18PM +0400, Oleg V. Khrustov wrote: > > > nginx/1.5.4 doesnt intercept fastcgi errors. > > location / { > > .... > > fastcgi_pass bg; > > error_page 500 502 503 504 408 404 =204 /204.htm; > > fastcgi_intercept_errors on; > > > > ..... > > } > > > > However in root location access log we still see > > > > 91.192.148.232 - - [29/Nov/2013:13:39:20 +0400] "POST / HTTP/1.1" 504 182 > > "-" "-" "0.138" > > > > And tcpdump capture shows that we still send 504: > > > > Py...HTTP/1.1 504 Gateway Time-out > > Server: nginx/1.5.4 > > [...] > > > What can be wrong here? > > Most likely reasons, in no particular order: > > - You've forgot to reload configuration, or the configuration was > rejected due to errors and you've missed it. > > - The request is handled in other location and/or server, not > the one you are looking at. Another possible one, after looking into your next question: - You've forgot to add a location to handle /204.htm, and it's passed to fastcgi backend again, and this again results in 504. As by default recursive_error_pages is disabled, the error is returned to a client. -- Maxim Dounin http://nginx.org/en/donation.html From oleg.khrustov at gmail.com Fri Nov 29 13:05:33 2013 From: oleg.khrustov at gmail.com (Oleg V. Khrustov) Date: Fri, 29 Nov 2013 17:05:33 +0400 Subject: Nginx fastcgi_intercept_errors In-Reply-To: <20131129125956.GJ93176@mdounin.ru> References: <20131129125956.GJ93176@mdounin.ru> Message-ID: 1) /etc/init.d/nginx configtest nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful It was definitely reloaded prior doing any tests. 2) I have a separate access log file for this specific (root) location with custon log format where I captured 504 response. It is set only for root location where I captured unexpected response code. Thanks, OK On Fri, Nov 29, 2013 at 4:59 PM, Maxim Dounin wrote: > Hello! > > On Fri, Nov 29, 2013 at 03:36:18PM +0400, Oleg V. Khrustov wrote: > > > nginx/1.5.4 doesnt intercept fastcgi errors. > > location / { > > .... > > fastcgi_pass bg; > > error_page 500 502 503 504 408 404 =204 /204.htm; > > fastcgi_intercept_errors on; > > > > ..... > > } > > > > However in root location access log we still see > > > > 91.192.148.232 - - [29/Nov/2013:13:39:20 +0400] "POST / HTTP/1.1" 504 182 > > "-" "-" "0.138" > > > > And tcpdump capture shows that we still send 504: > > > > Py...HTTP/1.1 504 Gateway Time-out > > Server: nginx/1.5.4 > > [...] > > > What can be wrong here? > > Most likely reasons, in no particular order: > > - You've forgot to reload configuration, or the configuration was > rejected due to errors and you've missed it. > > - The request is handled in other location and/or server, not > the one you are looking at. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Nov 29 13:05:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 17:05:57 +0400 Subject: Nginx fastcgi_next_upstream off doesn't work In-Reply-To: References: Message-ID: <20131129130557.GL93176@mdounin.ru> Hello! On Fri, Nov 29, 2013 at 03:50:52PM +0400, Oleg V. Khrustov wrote: > nginx/1.5.4 > location / { > ... 
> fastcgi_pass bg; > fastcgi_next_upstream off; > ... > } > > upstream bg { > > server unix:/tmp/dsp.1.sock; > server unix:/tmp/dsp.2.sock; > server unix:/tmp/dsp.3.sock; > server unix:/tmp/dsp.4.sock; > server unix:/tmp/dsp.5.sock; > server unix:/tmp/dsp.6.sock; > server unix:/tmp/dsp.7.sock; > server unix:/tmp/dsp.8.sock; > server unix:/tmp/dsp.9.sock; > server unix:/tmp/dsp.10.sock; > server unix:/tmp/dsp.11.sock; > server unix:/tmp/dsp.12.sock; > server unix:/tmp/dsp.13.sock; > > } > > log_format combined-r '$time_local: upstream $upstream_addr responded > $upstream_status in $upstream_response_time ms, request status $status is > in $request_time ms'; > > > 29/Nov/2013:13:57:51 +0400: upstream unix:/tmp/dsp.4.sock : > unix:/tmp/dsp.9.sock responded 504 : 504 in 0.148 : 0.050 ms, request > status 504 is in 0.198 ms > > > So nginx still pass request to next upstream dsp.4 -> dsp.9 > > > What can be wrong with this config? The "504 : 504" in $upstream_status indicate that there were two requests to the upstream, with an internal redirect between them (note ":"). See here for the variable format description: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables That is, everything works as expected, you just have an error_page redirection which does an additional request and confuses you. -- Maxim Dounin http://nginx.org/en/donation.html From oleg.khrustov at gmail.com Fri Nov 29 13:32:08 2013 From: oleg.khrustov at gmail.com (Oleg V. Khrustov) Date: Fri, 29 Nov 2013 17:32:08 +0400 Subject: Nginx fastcgi_intercept_errors In-Reply-To: <20131129130234.GK93176@mdounin.ru> References: <20131129125956.GJ93176@mdounin.ru> <20131129130234.GK93176@mdounin.ru> Message-ID: Many thanks, works like a charm. The reason is Wrong location settings. On Fri, Nov 29, 2013 at 5:02 PM, Maxim Dounin wrote: > Hello! > > On Fri, Nov 29, 2013 at 04:59:56PM +0400, Maxim Dounin wrote: > > > Hello! > > > > On Fri, Nov 29, 2013 at 03:36:18PM +0400, Oleg V. Khrustov wrote: > > > > > nginx/1.5.4 doesnt intercept fastcgi errors. > > > location / { > > > .... > > > fastcgi_pass bg; > > > error_page 500 502 503 504 408 404 =204 /204.htm; > > > fastcgi_intercept_errors on; > > > > > > ..... > > > } > > > > > > However in root location access log we still see > > > > > > 91.192.148.232 - - [29/Nov/2013:13:39:20 +0400] "POST / HTTP/1.1" 504 > 182 > > > "-" "-" "0.138" > > > > > > And tcpdump capture shows that we still send 504: > > > > > > Py...HTTP/1.1 504 Gateway Time-out > > > Server: nginx/1.5.4 > > > > [...] > > > > > What can be wrong here? > > > > Most likely reasons, in no particular order: > > > > - You've forgot to reload configuration, or the configuration was > > rejected due to errors and you've missed it. > > > > - The request is handled in other location and/or server, not > > the one you are looking at. > > Another possible one, after looking into your next question: > > - You've forgot to add a location to handle /204.htm, and it's > passed to fastcgi backend again, and this again results in 504. > As by default recursive_error_pages is disabled, the error is > returned to a client. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg.khrustov at gmail.com Fri Nov 29 13:35:18 2013 From: oleg.khrustov at gmail.com (Oleg V. 
Khrustov) Date: Fri, 29 Nov 2013 17:35:18 +0400 Subject: Nginx fastcgi_next_upstream off doesn't work In-Reply-To: <20131129130557.GL93176@mdounin.ru> References: <20131129130557.GL93176@mdounin.ru> Message-ID: Yes, you are absolutely right. Thanks again! -------------- next part -------------- An HTML attachment was scrubbed... URL: From chigga101 at gmail.com Fri Nov 29 16:59:24 2013 From: chigga101 at gmail.com (Matthew Ngaha) Date: Fri, 29 Nov 2013 16:59:24 +0000 Subject: new to web Message-ID: Hi guys i'm very new to web dev, and getting started with nginx i just wanted to ask a few things. Location blocks. Is the idea here to have as many location blocks as you have web pages? if a location has many files in it will nginx search if the requested file is in that location? i put a file "flood.jpg" inside my default root location (below) but i get permission denied when trying to access the file on my web browser. I do however see my nginx index homepage. To see "flood.jpg" do i need to add something to this block? location / { root html; index index.html index.htm; } i also made a file in my root html folder: html/test/filename.html This is also permission denied, but i want to ask. Is it ok accessing files like this or would it be better to make a specific location block for them, like: location /test/filename.html or location /test/ should it be INSIDE the root location block: location / or should it be a separate block? From nginx-forum at nginx.us Fri Nov 29 22:44:41 2013 From: nginx-forum at nginx.us (Todd@VRG) Date: Fri, 29 Nov 2013 17:44:41 -0500 Subject: Nginx/iptables passing ipclient ip Message-ID: Hi, I have nginx reverse proxy setup on a ubuntu server to pass to webservers... Setup.. Ubuntu-nginx eth1 = external_ ip eth0 = internal_ ip Webserver 1 ip XX1 webserver 2 ip XX2 nginx is forwarding traffic to webservers.. I tried both.. proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; I have iptables for my firewall with proper ports and IPs open.. the ip address I still see in the logs is the eth0 = internal_ ip I can change the ip the webserver is seeing using POSTROUTING SNAT iptables.. This lead me to believe I have something miss configured that the Nginx can not pass the real client IP to the webservers.. Thanks, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245092,245092#msg-245092 From francis at daoine.org Fri Nov 29 23:02:53 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 29 Nov 2013 23:02:53 +0000 Subject: Nginx/iptables passing ipclient ip In-Reply-To: References: Message-ID: <20131129230253.GE15722@craic.sysops.org> On Fri, Nov 29, 2013 at 05:44:41PM -0500, Todd at VRG wrote: Hi there, > nginx is forwarding traffic to webservers.. > proxy_set_header X-Forwarded-For $remote_addr; > the ip address I still see in the logs is the eth0 = internal_ ip Using tcpdump, or otherwise, watch the request going from nginx to the web server. Does is have what you expect to see in the X-Forwarded-For: header? If so, nginx is doing all it can do; you must configure the web server to make use of that header value instead of the address that it actually sees the connection coming from. That web server documentation should say how to do that. 
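A possible form of that check, run on the proxy host itself; eth0 and XX1 are the internal interface and the first webserver's address as named in the original post, and the backends are assumed to speak plain HTTP on port 80:

    # print outgoing requests to webserver 1 as ASCII so the
    # X-Forwarded-For header, if nginx is adding it, is visible
    tcpdump -A -n -i eth0 dst host XX1 and tcp port 80

If the header is there with the client address in it, nginx is doing its part and the rest is backend configuration, for example logging %{X-Forwarded-For}i in an Apache LogFormat, or ngx_http_realip_module if the backend happens to be nginx as well.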
f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Nov 29 23:27:57 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 29 Nov 2013 23:27:57 +0000 Subject: new to web In-Reply-To: References: Message-ID: <20131129232757.GF15722@craic.sysops.org> On Fri, Nov 29, 2013 at 04:59:24PM +0000, Matthew Ngaha wrote: Hi there, One of the best things you can learn about is your server error log. Read that, and it should tell you why nginx thinks something failed, every time. > Location blocks. Is the idea here to have as many location blocks as > you have web pages? The documentation for the directive "location" is at http://nginx.org/r/location Briefly, when a request comes in, nginx picks only one location to process it, and nginx does whatever that location says. The default action is "serve a file from the filesystem". So if all you want to do is send files from the filesystem, you don't need any location; or you can use one location that will match all requests. > i put a file > "flood.jpg" inside my default root location (below) but i get > permission denied when trying to access the file on my web browser. What does the error log say? It should tell you which file nginx tried to open, and it should tell you why it failed. (My guess is "Permission denied", which indicates that the user that nginx runs as does not have permission to read the file. If that is the case, you will want to change something about your system -- possibly the method by which you put the file there, or possibly the user account that nginx runs as.) > This is also permission denied, but i want to ask. Is it ok accessing > files like this or would it be better to make a specific location > block for them, like: >From the list of location blocks that you have, you should be able to say which one will be used for each request that is made. I tend to have one location block for one different type of request -- and all requests of the form "serve from the filesystem below /usr/local/nginx/html" are the same type. f -- Francis Daly francis at daoine.org From glicerinu at gmail.com Sat Nov 30 11:21:26 2013 From: glicerinu at gmail.com (Marc Aymerich) Date: Sat, 30 Nov 2013 12:21:26 +0100 Subject: Nginx/iptables passing ipclient ip In-Reply-To: References: Message-ID: On Fri, Nov 29, 2013 at 11:44 PM, Todd at VRG wrote: > Hi, > > I have nginx reverse proxy setup on a ubuntu server to pass to webservers... > > > Setup.. > > Ubuntu-nginx > eth1 = external_ ip > eth0 = internal_ ip > > Webserver 1 ip XX1 > webserver 2 ip XX2 > > nginx is forwarding traffic to webservers.. > > I tried both.. > > proxy_set_header X-Forwarded-For $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > I have iptables for my firewall with proper ports and IPs open.. > > the ip address I still see in the logs is the eth0 = internal_ ip > I can change the ip the webserver is seeing using POSTROUTING SNAT > iptables.. > > This lead me to believe I have something miss configured that the Nginx can > not pass the real client IP to the webservers.. Disclaimer: maybe I've misunderstood you :) Do you realize that "proxy_set_header X-Forwarded-For $remote_addr;" what would do is change the HTTP header, not the IP header. 
What this means is that your internal facing web servers will see IP traffic with SRC=internal_ip, however if you inspect the HTTP headers of those requests, you will find that there is and HTTP.X-Forwarded-For set to $remote_addr; no more, no less than that :) br -- Marc From nginx-forum at nginx.us Sat Nov 30 21:58:18 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 30 Nov 2013 16:58:18 -0500 Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar Message-ID: <82f08a63fed0ac7d3fefdf3b8752ac2d.NginxMailingListEnglish@forum.nginx.org> 19:18 30-11-2013: nginx 1.5.8.1 Caterpillar Based on nginx 1.5.8 (29-11-2013) with (mainly bugfixes in add-on's); + Naxsi WAF (Web Application Firewall) v0.53-1 (upgraded) + lua-nginx-module v0.9.2 (upgraded 30-11) + Streaming with nginx-rtmp-module, v1.0.8 (upgraded 29-11) + Source changes back ported + Source changes add-on's back ported * Additional specifications are like 20:32 19-11-2013: nginx 1.5.7.2 Caterpillar Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245105,245105#msg-245105