From nginx-forum at nginx.us Fri Feb 1 09:45:05 2013 From: nginx-forum at nginx.us (rg00) Date: Fri, 01 Feb 2013 04:45:05 -0500 Subject: Nginx randomly crashes In-Reply-To: <20130131121814.GO40753@mdounin.ru> References: <20130131121814.GO40753@mdounin.ru> Message-ID: <08b814654d09a5a7dc5430b77971c15d.NginxMailingListEnglish@forum.nginx.org> I compiled 1.2.6 version and switched to it, here's my new nginx -V: nginx version: nginx/1.2.6 built by gcc 4.7.2 (Ubuntu/Linaro 4.7.2-2ubuntu1) TLS SNI support enabled configure arguments: --prefix=/opt/nginx-1.2.6 --with-http_flv_module --with-http_ssl_module --with-http_gzip_static_module --add-module=/root/src/ngx_http_auth_pam_module-1.2 --with-debug but *same* problem....................................... I'm really really confused.................................. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235763,235798#msg-235798 From vbart at nginx.com Fri Feb 1 10:50:26 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 1 Feb 2013 14:50:26 +0400 Subject: Nginx randomly crashes In-Reply-To: <08b814654d09a5a7dc5430b77971c15d.NginxMailingListEnglish@forum.nginx.org> References: <20130131121814.GO40753@mdounin.ru> <08b814654d09a5a7dc5430b77971c15d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201302011450.26636.vbart@nginx.com> On Friday 01 February 2013 13:45:05 rg00 wrote: > I compiled 1.2.6 version and switched to it, here's my new nginx -V: > > nginx version: nginx/1.2.6 > built by gcc 4.7.2 (Ubuntu/Linaro 4.7.2-2ubuntu1) > TLS SNI support enabled > configure arguments: --prefix=/opt/nginx-1.2.6 --with-http_flv_module > --with-http_ssl_module --with-http_gzip_static_module > --add-module=/root/src/ngx_http_auth_pam_module-1.2 --with-debug > > > but *same* problem....................................... I'm really really > confused.................................. > It is very likely that the cause of your problem is the http_auth_pam 3rd-party module. 
This module is known to be able to block worker processes while it is in use. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Feb 1 11:07:33 2013 From: nginx-forum at nginx.us (rg00) Date: Fri, 01 Feb 2013 06:07:33 -0500 Subject: Nginx randomly crashes In-Reply-To: <201302011450.26636.vbart@nginx.com> References: <201302011450.26636.vbart@nginx.com> Message-ID: <8411a8689b417341d6476ed44d099fa0.NginxMailingListEnglish@forum.nginx.org> I really need ldap authentication, is there an alternative to that module? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235763,235800#msg-235800 From nginx-list at puzzled.xs4all.nl Fri Feb 1 11:11:58 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Fri, 01 Feb 2013 12:11:58 +0100 Subject: Nginx randomly crashes In-Reply-To: <8411a8689b417341d6476ed44d099fa0.NginxMailingListEnglish@forum.nginx.org> References: <201302011450.26636.vbart@nginx.com> <8411a8689b417341d6476ed44d099fa0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <510BA2FE.4030204@puzzled.xs4all.nl> On 02/01/2013 12:07 PM, rg00 wrote: > I really need ldap authentication, is there an alternative to that module? A quick google gave: https://github.com/kvspb/nginx-auth-ldap http://forum.nginx.org/read.php?2,18552 Regards, Patrick From vbart at nginx.com Fri Feb 1 11:31:00 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 1 Feb 2013 15:31:00 +0400 Subject: Nginx randomly crashes In-Reply-To: <510BA2FE.4030204@puzzled.xs4all.nl> References: <201302011450.26636.vbart@nginx.com> <8411a8689b417341d6476ed44d099fa0.NginxMailingListEnglish@forum.nginx.org> <510BA2FE.4030204@puzzled.xs4all.nl> Message-ID: <201302011531.00636.vbart@nginx.com> On Friday 01 February 2013 15:11:58 Patrick Lists wrote: > On 02/01/2013 12:07 PM, rg00 wrote: > > I really need ldap authentication, is there an alternative to that > > module?
> > A quick google gave: > > https://github.com/kvspb/nginx-auth-ldap > http://forum.nginx.org/read.php?2,18552 > Unfortunately, this module also uses blocking I/O operations. So, you should be ready for poor performance and arbitrary hangs. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html > Regards, > Patrick > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Fri Feb 1 11:34:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Feb 2013 15:34:41 +0400 Subject: Nginx randomly crashes In-Reply-To: <510BA2FE.4030204@puzzled.xs4all.nl> References: <201302011450.26636.vbart@nginx.com> <8411a8689b417341d6476ed44d099fa0.NginxMailingListEnglish@forum.nginx.org> <510BA2FE.4030204@puzzled.xs4all.nl> Message-ID: <20130201113441.GW40753@mdounin.ru> Hello! On Fri, Feb 01, 2013 at 12:11:58PM +0100, Patrick Lists wrote: > On 02/01/2013 12:07 PM, rg00 wrote: > >I really need ldap authentication, is there an alternative to that module? > > A quick google gave: > > https://github.com/kvspb/nginx-auth-ldap This one blocks as well. There is no non-blocking API in PAM, hence a correct nginx module using PAM for authentication is impossible to write. And the LDAP client libraries out there are blocking too, so writing an LDAP authentication module isn't simple. > http://forum.nginx.org/read.php?2,18552 Yep, using X-Accel-Redirect is a good way to handle any needed authentication/authorization. Alternatively, one may use the auth request module: http://mdounin.ru/hg/ngx_http_auth_request_module I wrote it after getting tired of reading questions about PAM/LDAP/whatever authentication modules and the various blocking solutions people write and try to use. Compared to X-Accel-Redirect it is simpler for general use.
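For reference, a minimal sketch of how the auth request module described above is typically wired up (the /protected/ and /auth location names and the 127.0.0.1:9000 backend are illustrative assumptions, not taken from this thread):

```nginx
# Sketch only: location names and the auth backend address are assumed.
location /protected/ {
    # For every request here, nginx first fires a subrequest to /auth.
    # A 2xx reply allows the request; 401/403 is returned to the client;
    # anything else is treated as an error.
    auth_request /auth;
    root /var/www;
}

location = /auth {
    internal;
    # Any small HTTP service that can talk to LDAP/PAM and answer
    # with 2xx / 401 / 403 can sit behind this.
    proxy_pass http://127.0.0.1:9000/verify;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

The blocking LDAP/PAM work then lives in the external auth service, which can block freely without stalling nginx worker processes.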
-- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Feb 1 13:59:38 2013 From: nginx-forum at nginx.us (shrikeh) Date: Fri, 01 Feb 2013 08:59:38 -0500 Subject: $upstream_http_* variables exist but do not seem to be readable Message-ID: <52234cc778514fa0c5ff7c66470ad801.NginxMailingListEnglish@forum.nginx.org> Hi, I'm currently using OpenResty, and one of the things I am trying to do is have the backend send a specific header, and if that header is present, run a body_filter_by_lua call on the output. However, while I can use the $upstream_http_* vars for populating (i.e. I can go add_header SomeHeader $upstream_http_foo, and if Foo has been sent see SomeHeader: FooVar in the output. However, all tests for them seem to break. Example: location @backend { expires off; proxy_pass http://_varnish; set $test $upstream_http_csrf; # definitely exists, I can see it in the response headers add_header SomeHeader $upstream_http_csrf; # And I can read it, but.... if ($upstream_http_csrf = 1) { # doesn't matter if it's 1, "one" ~* "one", anything.... # this block never gets called } add_header someOtherHeader $test; # Not present in output as empty set_by_lua $use_token ' if not ngx.var.sent_http_csrf == "" then return ngx.var.upstream_http_csrf end return "baz" '; # Always return 'baz' add_header WIllThisWork $use_token; # Again, empty } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235808,235808#msg-235808 From nginx-forum at nginx.us Fri Feb 1 14:09:42 2013 From: nginx-forum at nginx.us (rg00) Date: Fri, 01 Feb 2013 09:09:42 -0500 Subject: Nginx randomly crashes In-Reply-To: <20130201113441.GW40753@mdounin.ru> References: <20130201113441.GW40753@mdounin.ru> Message-ID: Compiled nginx without auth_pam and using it as a reverse proxy. Now I'm monitoring and hoping there are no more service crashes. I'm using Apache to manage ldap authentication on certain locations. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235763,235809#msg-235809 From mdounin at mdounin.ru Fri Feb 1 14:48:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Feb 2013 18:48:30 +0400 Subject: $upstream_http_* variables exist but do not seem to be readable In-Reply-To: <52234cc778514fa0c5ff7c66470ad801.NginxMailingListEnglish@forum.nginx.org> References: <52234cc778514fa0c5ff7c66470ad801.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130201144829.GC40753@mdounin.ru> Hello! On Fri, Feb 01, 2013 at 08:59:38AM -0500, shrikeh wrote: > Hi, > > I'm currently using OpenResty, and one of the things I am trying to do is > have the backend send a specific header, and if that header is present, run > a body_filter_by_lua call on the output. However, while I can use the > $upstream_http_* vars for populating (i.e. I can go add_header SomeHeader > $upstream_http_foo, and if Foo has been sent see SomeHeader: FooVar in the > output. However, all tests for them seem to break. > > Example: > > location @backend { > expires off; > proxy_pass http://_varnish; > set $test $upstream_http_csrf; # definitely exists, I can > see it in the response headers The "set" directive is a rewrite module directive, and it is executed during the rewrite request processing phase, which happens before a request is passed to the upstream. Unless the request was previously processed using an upstream server, the $test variable will be empty. > add_header SomeHeader $upstream_http_csrf; # And I can read > it, but.... This will work. > if ($upstream_http_csrf = 1) { # doesn't matter if it's 1, > "one" ~* "one", anything.... > # this block never gets called > } This won't, for the same reasons as outlined above - the "if" directive can't see upstream response headers as it's executed before a response is available. > add_header someOtherHeader $test; # Not present in output as > empty See above.
> set_by_lua $use_token > ' > if not ngx.var.sent_http_csrf == "" then > return ngx.var.upstream_http_csrf > end > return "baz" > '; # Always return 'baz' > add_header WIllThisWork $use_token; # Again, empty I'm not familiar with lua module, but likely the explanation is the same - set_by_lua is executed before a response is available. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Feb 1 15:38:37 2013 From: nginx-forum at nginx.us (billmanhillman) Date: Fri, 01 Feb 2013 10:38:37 -0500 Subject: Too Many Redirects Message-ID: Proxy Pass is causing to many redirects when web.xml is upshifting to SSL via security-constraint. It seems like tomcat doesn't like receiving proxy_pass with http://localhost:8080 and tries to convert to SSL again. What gives? Configs follow... Nginx 1.2.6 Config: server { listen www.mydomain.com:80; listen www.mydomain.com:443 ssl; ssl_certificate my.crt; ssl_certificate_key my.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; location / { proxy_pass http://localhost:8080; } location /images { root /var/www; } } ---------------------------------------------------------------------------- Web.xml Billing /billing/* Shipping /shipping/* Register /subscription/* Contact /contactus.url CONFIDENTIAL ------------------------------------------------------------------------------------------ Tomcat Server.xml proxyName="www.mydomain.com" proxyPort="80"/> Please help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235822,235822#msg-235822 From nginx-forum at nginx.us Fri Feb 1 17:33:46 2013 From: nginx-forum at nginx.us (jwilson) Date: Fri, 01 Feb 2013 12:33:46 -0500 Subject: Caching Objects, Passing Through and Rewrites Message-ID: <15f291bc493939e000b24d04ed01faa9.NginxMailingListEnglish@forum.nginx.org> I'm trying to set up nginx to reverse proxy for our CDN to prevent unauthorized access to raw video feeds. 
The idea is to restrict it to a set user-agent and referer, and if doesn't match, to instead call the page for that video. I would also like it to cache said video objects as well as any other cachable objects, and to just pass other URLs through to origin. Here's my config so far: upstream mainsite { server www.example.com; } upstream cdn { server example.cdnprovider.com; } server { listen *:80; # cachable objects, no restrictions location ~ (^/img|^/css|^/js|^/video/thumbnail|^/user/avatar) { proxy_pass http://cdn$request_uri; proxy_set_header Host "content.example.com"; } # raw video requests location ~ ^/video/raw { rewrite_log on; valid_referers *.example.com example.com; # get the video id from the end of the string if ($uri ~* ^/video/raw/(.*)$) { set $vidid $1; } # The app is automatically passed if ($http_user_agent ~* Example-App) { proxy_pass http://cdn$request_uri; } # redirect requests for raw video to page for that video if ($invalid_referer) { rewrite ^(.*)$ /!$vidid break; # example.com/!vidid } proxy_pass http://mainsite$request_uri; proxy_set_header Host "www.example.com"; } # everything else goes to origin, no caching location / { proxy_pass http://mainsite$request_uri; proxy_set_header Host "www.example.com"; } } The issue is that even without providing the correct user-agent or referer, I still get the raw video returned. Any help appreciated! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235825,235825#msg-235825 From nginx-forum at nginx.us Fri Feb 1 19:21:05 2013 From: nginx-forum at nginx.us (zacharyalexstern) Date: Fri, 01 Feb 2013 14:21:05 -0500 Subject: set arbitrary http header? Message-ID: <9785e4086386acd683c74160e9dc5e4b.NginxMailingListEnglish@forum.nginx.org> Currently in our setup, we are using the X-Real-IP header setting in nginx, to tell in our backend apache logs the IP address of the client. Is it possible to set up nginx to insert an arbitrary HTTP header with text of my choosing, or with it's own IP address? 
I'd like to set this up for the purposes of doing some easier log analysis. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235827,235827#msg-235827 From francis at daoine.org Fri Feb 1 19:36:46 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 1 Feb 2013 19:36:46 +0000 Subject: set arbitrary http header? In-Reply-To: <9785e4086386acd683c74160e9dc5e4b.NginxMailingListEnglish@forum.nginx.org> References: <9785e4086386acd683c74160e9dc5e4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130201193646.GD4332@craic.sysops.org> On Fri, Feb 01, 2013 at 02:21:05PM -0500, zacharyalexstern wrote: Hi there, > Currently in our setup, we are using the X-Real-IP header setting in nginx, > to tell in our backend apache logs the IP address of the client. Which line in your nginx configuration is doing that? > Is it possible to set up nginx to insert an arbitrary HTTP header with text > of my choosing, or with it's own IP address? Yes. What happens if you copy the current line, and change some parts of it? It should be straightforward enough to test. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Feb 1 19:44:17 2013 From: nginx-forum at nginx.us (zacharyalexstern) Date: Fri, 01 Feb 2013 14:44:17 -0500 Subject: set arbitrary http header? In-Reply-To: <20130201193646.GD4332@craic.sysops.org> References: <20130201193646.GD4332@craic.sysops.org> Message-ID: Ah, so, we have this line: proxy_set_header X-Real-IP $remote_addr; Is there a list of the variables similar to $remote_addr somewhere? Is $proxy_host possibly what we need? 
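(A hedged aside on the question above: $proxy_host expands to the name and port of the proxied server from the proxy_pass directive, not to a local address. For "the IP address the connection arrived on", $server_addr is the usual candidate; whether it is the address you actually want depends on how many interfaces the machine has. A sketch, assuming an already-defined "backend" upstream:)

```nginx
# Pass both the client address and the local address nginx
# accepted the connection on ("backend" upstream is assumed).
location / {
    proxy_pass http://backend;
    proxy_set_header X-Real-IP  $remote_addr;   # client address
    proxy_set_header X-Nginx-IP $server_addr;   # local address of this nginx
}
```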
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235827,235830#msg-235830 From sven at darkman.de Fri Feb 1 19:51:53 2013 From: sven at darkman.de (Sven 'Darkman' Michels) Date: Fri, 01 Feb 2013 20:51:53 +0100 Subject: nginx and pseudostreaming with embedded perl and proxy Message-ID: <510C1CD9.8090800@darkman.de> Hi there, i've the following setup: nginx as proxy in front of a central "storage" server which holds movies and pictures. So if a user requests some stuff, the proxy module fetches it from the central server and delivers it to the user. To protect some of the pictures and movies, i've written a small perl module which does some checks before the content is delivered. It's mainly an implementation of mod_secdownload from lighty for creating expiring links. This works fine so far with one small limitation: movies cannot be seeked. I enabled the mp4 module for the locations without any luck. When i put a movie on the nginx server directly, it works. So i noticed that it might not work due to the proxy_cache stuff (since the movie is not directly stored on the disc, it has the http headers added). So i switched to proxy_store, which should work with the mp4 module. But still no luck. Every time i seek, the movie starts from the beginning. Next idea: maybe the internal redirect stuff of the perl module breaks the pseudostreaming...?
nginx.conf looks like: server { location ^~ /modcheck/ { internal; mp4; rewrite ^/modcheck/(.+)$ /$1 break; root /cache1/wwwroot; error_page 404 = @fetch; } location @fetch { internal; proxy_pass http://central_store_ip:8080; proxy_set_header Host central.server.tld; proxy_store on; proxy_store_access user:rw group:r all:r; proxy_temp_path /cache1/temp; root /cache1/wwwroot; } location ~ "^/prot/someregexp/$" { perl modcheck::handler; } location / { proxy_pass http://central_store_ip/; proxy_set_header Host central.domain.tld; proxy_cache statics; proxy_cache_valid 200 1d; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; } } So when a request for images.domain.tld/prot/stuff/file.mp4 hits nginx, it will be captured by the perl module, which checks whether the request is valid; if so, it redirects the final url to the modcheck block. This block removes the modcheck part to get only the real filename and tries to deliver the local file. If the file is not found, it will be fetched first, then delivered. Generally this works fine. The redirect/perl stuff is also there to avoid caching the same file multiple times. Apart from the video seeking, this has worked fine; the movies only work when i put one in a local dir and deliver it directly. I can also copy the file from the proxy_store, so the file itself is fine. nginx in this case is 1.1.19 - but the development branch showed the same issue. Any ideas how to get this setup working as expected? Did i miss something? Won't that work at all, maybe? If there is anything missing, just ask. Thanks for your help! Best wishes, Sven From francis at daoine.org Fri Feb 1 19:53:10 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 1 Feb 2013 19:53:10 +0000 Subject: set arbitrary http header?
In-Reply-To: References: <20130201193646.GD4332@craic.sysops.org> Message-ID: <20130201195310.GE4332@craic.sysops.org> On Fri, Feb 01, 2013 at 02:44:17PM -0500, zacharyalexstern wrote: > Ah, so, we have this line: > proxy_set_header X-Real-IP $remote_addr; http://nginx.org/r/proxy_set_header for details. > Is there a list of the variables similar to $remote_addr somewhere? http://nginx.org/en/docs/http/ngx_http_core_module.html#variables Other modules may provide other variables. > Is $proxy_host possibly what we need? Possibly. It really depends on what you wish to achieve, which I don't think is clear yet from this thread. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Feb 1 19:57:00 2013 From: nginx-forum at nginx.us (zacharyalexstern) Date: Fri, 01 Feb 2013 14:57:00 -0500 Subject: set arbitrary http header? In-Reply-To: <20130201195310.GE4332@craic.sysops.org> References: <20130201195310.GE4332@craic.sysops.org> Message-ID: <7cbc1badc7d3c1bbfc130498ded1358e.NginxMailingListEnglish@forum.nginx.org> I'd like nginx to set a header that contains the IP address of the server nginx is running on. So something like: proxy_set_header X-Nginx-IP $proxy_host; Assuming $proxy_host evaluates to the IP of the server nginx is running on. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235827,235833#msg-235833 From agentzh at gmail.com Fri Feb 1 20:26:16 2013 From: agentzh at gmail.com (agentzh) Date: Fri, 1 Feb 2013 12:26:16 -0800 Subject: $upstream_http_* variables exist but do not seem to be readable In-Reply-To: <20130201144829.GC40753@mdounin.ru> References: <52234cc778514fa0c5ff7c66470ad801.NginxMailingListEnglish@forum.nginx.org> <20130201144829.GC40753@mdounin.ru> Message-ID: Hello! 
On Fri, Feb 1, 2013 at 6:48 AM, Maxim Dounin wrote: >> set_by_lua $use_token >> ' >> if not ngx.var.sent_http_csrf == "" then >> return ngx.var.upstream_http_csrf >> end >> return "baz" >> '; # Always return 'baz' >> add_header WIllThisWork $use_token; # Again, empty > > I'm not familiar with lua module, but likely the explanation is > the same - set_by_lua is executed before a response is available. > Yes, set_by_lua runs at exactly the same phase of the standard ngx_rewrite module's "set" directive. Actually set_by_lua is injected into ngx_rewrite so that it can be mixed with other rewrite module directives. @shrikeh: you're recommended to read my Nginx tutorials to learn more about the Nginx config directive running order: http://openresty.org/#eBooks Best regards, -agentzh From francis at daoine.org Fri Feb 1 20:41:24 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 1 Feb 2013 20:41:24 +0000 Subject: set arbitrary http header? In-Reply-To: <7cbc1badc7d3c1bbfc130498ded1358e.NginxMailingListEnglish@forum.nginx.org> References: <20130201195310.GE4332@craic.sysops.org> <7cbc1badc7d3c1bbfc130498ded1358e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130201204124.GF4332@craic.sysops.org> On Fri, Feb 01, 2013 at 02:57:00PM -0500, zacharyalexstern wrote: > I'd like nginx to set a header that contains the IP address of the server > nginx is running on. There is (almost certainly) not exactly one IP address that fits that description. Maybe $server_addr is adequate? Easiest is probably "proxy_set_header X-Nginx-IP 10.11.12.13;" on the server that you want to identify as 10.11.12.13. But if you wind back to *why* you want that -- apache already knows what ip address the connection to it came from (which should be an address of the server that nginx is running on). It would have logged it, except that you configured apache to discard that address and instead use the content of the X-Real-IP header. 
Possibly changing the apache configuration to log the content of the X-Real-IP header as well as its client ip address is easiest of all. Whether that is appropriate in your environment depends on what else your apache does with the information. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Feb 1 20:47:34 2013 From: nginx-forum at nginx.us (zacharyalexstern) Date: Fri, 01 Feb 2013 15:47:34 -0500 Subject: set arbitrary http header? In-Reply-To: <20130201204124.GF4332@craic.sysops.org> References: <20130201204124.GF4332@craic.sysops.org> Message-ID: <973dbc741240f6b1bd538c16b0828dd5.NginxMailingListEnglish@forum.nginx.org> There are several nodes in between nginx and apache. It's not internet -> nginx -> apache. It's internet -> nginx -> varnish -> haproxy - > apache1 - > apache2 - > apacheN So far, everything is in the apache access logs except the IP of the nginx server connecting. But you're right, I could just set it manually, for each nginx server. I might just do that. And you're also again right that our servers have multiple IPs. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235827,235837#msg-235837 From francis at daoine.org Fri Feb 1 20:49:28 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 1 Feb 2013 20:49:28 +0000 Subject: Too Many Redirects In-Reply-To: References: Message-ID: <20130201204928.GG4332@craic.sysops.org> On Fri, Feb 01, 2013 at 10:38:37AM -0500, billmanhillman wrote: > Proxy Pass is causing to many redirects when web.xml is upshifting to SSL > via security-constraint. It seems like tomcat doesn't like receiving > proxy_pass with http://localhost:8080 and tries to convert to SSL again. > What gives? Configs follow... Your nginx accepts requests over http and https, and sends them both identically to your tomcat over http. 
If your tomcat cares about whether the request from the client came over http or over https, then you'll need (a) nginx to indicate the difference; and (b) tomcat to accept the difference. nginx could be configured to send a http header indicating whether the incoming request to it was over https or not. Or nginx could be configured to send from-http requests to one ip:port, and from-https requests to another ip:port. When you can configure your tomcat to respond to one of those differences, you can configure nginx appropriately. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Feb 2 00:27:31 2013 From: nginx-forum at nginx.us (billmanhillman) Date: Fri, 01 Feb 2013 19:27:31 -0500 Subject: Too Many Redirects In-Reply-To: <20130201204928.GG4332@craic.sysops.org> References: <20130201204928.GG4332@craic.sysops.org> Message-ID: I created another HTTP/1.1 connector in tomcat listening on another port 8443. I then separated the server settings in nginx for both http and https. I had the http server def proxy_pass to http://localhost:8080 I had the https server def proxy_pass to http://localhost:8443 I also put headers notifying tomcat the request was coming from http or https. Still no dice. Redirect loops can't seem to be fixed. 
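A sketch of the header-forwarding half of this (the ports come from the thread; the Tomcat-side handling is an assumption — if memory serves, Tomcat's RemoteIpValve can be told to trust such a header via its protocolHeader attribute, but check the Tomcat documentation for your version):

```nginx
# Separate server blocks so each can state its own scheme explicitly.
server {
    listen 80;
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Forwarded-Proto http;
    }
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key as in the original config
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Without the Tomcat side consuming the header, the loop remains: Tomcat keeps seeing plain http and keeps redirecting to https.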
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235822,235846#msg-235846 From ruilue.zengrl at alibaba-inc.com Sat Feb 2 02:42:24 2013 From: ruilue.zengrl at alibaba-inc.com (Zeng Ruilue) Date: Sat, 2 Feb 2013 02:42:24 +0000 Subject: Re: nginx 411 error In-Reply-To: <0da0d257a3a6c0c7212e9a4c83d4e36d@webmail.xlrs.de> References: <0EF79984908C3844AA4E8E15EFDB335B2F4EC422@CNHZ-EXMAIL-07.ali.com> <20130130123757.GF40753@mdounin.ru> <05d116ac6a1a6edc55343247816035c8@webmail.xlrs.de> <20130130150952.GG40753@mdounin.ru>, <0da0d257a3a6c0c7212e9a4c83d4e36d@webmail.xlrs.de> Message-ID: <0EF79984908C3844AA4E8E15EFDB335B2F4ECE5A@CNHZ-EXMAIL-07.ali.com> Thanks all, I will upgrade to nginx 1.3.x. ________________________________________ From: nginx-bounces at nginx.org [nginx-bounces at nginx.org] on behalf of Axel [ar at xlrs.de] Sent: 31 January 2013 0:15 To: nginx at nginx.org Subject: Re: nginx 411 error Thanks for clarification! rgds, Axel On 30.01.2013 16:09, Maxim Dounin wrote: > Hello! > > On Wed, Jan 30, 2013 at 01:57:27PM +0100, Axel wrote: > >> Hi Maxim, >> >> are you sure that an upgrade to nginx 1.3.x is required? >> >> I had this issue a while ago and I solved it by adding >> >> chunkin on; >> error_page 411 = @my_411_error; >> location @my_411_error { >> chunkin_resume; >> } >> >> to my vHost configuration. >> I never had this error again. > > This uses agentzh chunkin module, which is probably good enough if > you have no other options, but a) not something officially > supported and b) known to have limitations (e.g., AFAIR it doesn't > work with DAV module). > > With support for chunked Transfer-Encoding available in 1.3.9+ I > would recommend using nginx 1.3.x instead.
-- Never argue with an idiot; people watching may not tell the difference _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx ________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you. From francis at daoine.org Sat Feb 2 09:29:24 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Feb 2013 09:29:24 +0000 Subject: Too Many Redirects In-Reply-To: References: <20130201204928.GG4332@craic.sysops.org> Message-ID: <20130202092924.GH4332@craic.sysops.org> On Fri, Feb 01, 2013 at 07:27:31PM -0500, billmanhillman wrote: Hi there, > I created another HTTP/1.1 connector in tomcat listening on another port > 8443. I then separated the server settings in nginx for both http and > https. > > I had the http server def proxy_pass to http://localhost:8080 > I had the https server def proxy_pass to http://localhost:8443 > > I also put headers notifying tomcat the request was coming from http or > https. You changed the nginx config so that tomcat could be able to tell whether the original request was https or not. Did you change the tomcat config so that it would recognise this signal, and would accept that "originally https" was enough to consider it as secure? > Still no dice. Redirect loops can't seem to be fixed. It looks to me like the redirect loops are coming from tomcat, not nginx. If you can't configure tomcat the way you want to, perhaps configuring nginx to proxy_pass to a https:// url when appropriate would be an adequate workaround, at least for testing purposes?
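That workaround might look like the following sketch (it assumes the extra 8443 connector is reconfigured as an actual SSL connector in Tomcat, which is not what a plain HTTP/1.1 connector gives you):

```nginx
server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key as before
    location / {
        # Re-encrypt to Tomcat so Tomcat itself sees a secure
        # connection and stops redirecting to https.
        proxy_pass https://localhost:8443;
    }
}
```

A 502 Bad Gateway from this setup usually means the connector behind proxy_pass https:// is not actually speaking TLS.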
f -- Francis Daly francis at daoine.org From crirus at gmail.com Sat Feb 2 14:58:48 2013 From: crirus at gmail.com (Cristian Rusu) Date: Sat, 2 Feb 2013 16:58:48 +0200 Subject: HDD util is 100% - aio questions In-Reply-To: <201301282146.38266.vbart@nginx.com> References: <201301282146.38266.vbart@nginx.com> Message-ID: I have Centos Linux with 64GB RAM and 18TB RAID 10 HDDs; CPU and load are practically 0, and it's the HDDs that hold the server back. Should I remove directio? The server is mainly for streaming large video files or for direct download. Any particular setting I should make to nginx in this case to lower util? Thank you --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com On Mon, Jan 28, 2013 at 7:46 PM, Valentin V. Bartenev wrote: > On Monday 28 January 2013 10:53:52 Cristian Rusu wrote: > > Hello > > > > Right now nginx manages to put hdds in the server to high util rate. > > > > I try to run Nginx 1.2.3 with aio support to deliver mp4 videos with the > > streaming module. > > I compiled the server with aio and it starts fine. > > In config I set it like this > [...] > > directio 512; > > > > So, you effectively switched off the page cache for any response longer > than 512 bytes. > > > I read that sendfile should be off, but it won't send video unless I turn > > it on. > > No, it should not for Linux. > > > In this case does aio work at all? How can I tell, before I wait a week > and > see that maybe HDD util is not 100% all the time anymore :P > > > > It seems you have almost all the data read directly from the drive, > which results in 100% disk utilization. > > wbr, Valentin V.
Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > > > > --------------------------------------------------------------- > > Cristian Rusu > > Web Developement & Electronic Publishing > > > > ====== > > Crilance.com > > Crilance.blogspot.com > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Feb 2 15:34:53 2013 From: nginx-forum at nginx.us (billmanhillman) Date: Sat, 02 Feb 2013 10:34:53 -0500 Subject: Too Many Redirects In-Reply-To: <20130202092924.GH4332@craic.sysops.org> References: <20130202092924.GH4332@craic.sysops.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Fri, Feb 01, 2013 at 07:27:31PM -0500, billmanhillman wrote: > > Hi there, > > > I created another HTTP/1.1 connector in tomcat listening on another > port > > 8443. I then separated the server settings in nginx for both http > and > > https. > > > > I had the http server def proxy_pass to http://localhost:8080 > > I had the https server def proxy_pass to http://localhost:8443 > > > > I also put headers notifying tomcat the request was coming from http > or > > https. > > You changed the nginx config so that tomcat could be able to tell > whether > the original request was https or not. Agreed. > > Did you change the tomcat config so that it would recognise this > signal, > and would accept that "originally https" was enough to consider it > as secure? The connection is secured on the Nginx side. Tomcat should be able to handle this since I'm just swapping out overblown apache for Nginx and it worked fine on apache before switching to Nginx. I've tried X-Proxy-For and X-Real-IP headers. Am I missing any other headers? The Java Application "tells" the container that the request has entered a secured area.
I don't want to go down the road of creating Rewrites for https since the config for the application will reside in the Nginx config (bad practice). > > > Still no dice. Redirect loops can't seem to be fixed. > > It looks to me like the redirect loops are coming from tomcat, not > nginx. > > If you can't configure tomcat the way you want to, perhaps configuring > nginx to proxy_pass to a https:// url when appropriate would be an > adequate workaround, at least for testing purposes? I tried proxy_pass with https:// before but I always get a Bad Gateway. This is frustrating because I'm doing a write up for Nginx integration along with other servers to help others like myself to have a step by step guide for configuring reverse proxies and any flavor of application server (Tomcat, Jetty, Geronimo, WebSphere, JBoss, etc...) for PCI compliance. You'll simply download the .deb(debian only) and it will compile, install, secure, configure, and add a new node if it's in a clustered environment. I'm simply trying to get this right. Thanks for your help and suggestions. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235822,235853#msg-235853 From contact at jpluscplusm.com Sat Feb 2 18:58:30 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 2 Feb 2013 18:58:30 +0000 Subject: Too Many Redirects In-Reply-To: References: <20130202092924.GH4332@craic.sysops.org> Message-ID: On 2 February 2013 15:34, billmanhillman wrote: > Francis Daly Wrote: > ------------------------------------------------------- >> On Fri, Feb 01, 2013 at 07:27:31PM -0500, billmanhillman wrote: >> >> Hi there, >> >> > I created another HTTP/1.1 connector in tomcat listening on another >> port >> > 8443. I then separated the server settings in nginx for both http >> and >> > https. 
>> > >> > I had the http server def proxy_pass to http://localhost:8080 >> > I had the https server def proxy_pass to http://localhost:8443 >> > >> > I also put headers notifying tomcat the request was coming from http >> or >> > https. >> >> You changed the nginx config so that tomcat could be able to tell >> whether >> the original request was https or not. > > Agreed. > >> >> Did you change the tomcat config so that it would recognise this >> signal, >> and would accept that "originally https" was enough to consider it >> as secure? > > The connection is secured on the Nginx side. Tomcat should be able to handle > this since I'm just swapping out overblown apache for Nginx and it worked > fine on apache before switching to Nginx. I've tried X-Proxy-For and > X-Real-IP headers. Am I missing any other headers? You haven't mentioned X-Forwarded-For (IP address) or X-Forwarded-Proto ("http" or "https"), both of which I routinely set up, but why don't you just swap out tomcat for a simple netcat listener in a non-prod environment. Then you can just see what Apache passes through to it, and don't have to try and understand the Apache setup - just replicate it precisely in nginx. Then you can start to understand the setup and modify its behaviour ... Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Sun Feb 3 03:27:40 2013 From: nginx-forum at nginx.us (jdiana) Date: Sat, 02 Feb 2013 22:27:40 -0500 Subject: Restricting access to specific subdirectories Message-ID: <7f2870ebe24ee8cb7198d042e23e96aa.NginxMailingListEnglish@forum.nginx.org> Hey all, I'm a little stumped about what I'm doing wrong here. Basically I have a subdirectory that I want to restrict access to specific IP's, otherwise return a 403. If I do the following (inside my server {} block): server { // normal processing code here ... 
location ~ ^/my_ws$ { allow XX.XX.XX.XX; allow XX.XX.XX.XX/24; deny all; } } Hitting the following URL works as intended and I get a 403 if I try from anywhere other than the specified URL's: http://www.mydomain.com/my_ws However, if there's anything AFTER that (i.e. my_ws/, my_ws/page2, my_ws?parameter1, etc.) it allows them to proceed regardless of IP. I'm sure it's something required before or after the $, but I can't figure it out. Thanks in advance! Justin Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235864,235864#msg-235864 From steve at greengecko.co.nz Sun Feb 3 03:46:03 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sun, 03 Feb 2013 16:46:03 +1300 Subject: Restricting access to specific subdirectories In-Reply-To: <7f2870ebe24ee8cb7198d042e23e96aa.NginxMailingListEnglish@forum.nginx.org> References: <7f2870ebe24ee8cb7198d042e23e96aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <510DDD7B.9030401@greengecko.co.nz> On 03/02/13 16:27, jdiana wrote: > Hey all, > > I'm a little stumped about what I'm doing wrong here. Basically I have a > subdirectory that I want to restrict access to specific IP's, otherwise > return a 403. > > If I do the following (inside my server {} block): > > server { > // normal processing code here > ... > > location ~ ^/my_ws$ { > allow XX.XX.XX.XX; > allow XX.XX.XX.XX/24; > deny all; > } > } > > Hitting the following URL works as intended and I get a 403 if I try from > anywhere other than the specified URL's: http://www.mydomain.com/my_ws > > However, if there's anything AFTER that (i.e. my_ws/, my_ws/page2, > my_ws?parameter1, etc.) it allows them to proceed regardless of IP. > > I'm sure it's something required before or after the $, but I can't figure > it out. > > Thanks in advance! > > Justin > > do you need a $ at all? It's a placeholder for the end of the string, and all you care about is the start??
Steve From luky-37 at hotmail.com Sun Feb 3 10:57:58 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 3 Feb 2013 11:57:58 +0100 Subject: HDD util is 100% - aio questions In-Reply-To: References: , <201301282146.38266.vbart@nginx.com>, Message-ID: Yes, remove directio and aio, and let the pagecache handle load. Monitor the performance and load carefully and report back the results. ________________________________ > From: crirus at gmail.com > Date: Sat, 2 Feb 2013 16:58:48 +0200 > Subject: Re: HDD util is 100% - aio questions > To: nginx at nginx.org > > I have Centos Linux with RAM 64GB, 18TB RAID 10 HDDs, > CPU and load is practically 0, everything is in HDDs that hold the > server back > > Should I remove directio? > The server is mainly for streaming large video files or for direct download. > > Any particular setting I should make to nginx in this case to lower util? > > Thank you > > --------------------------------------------------------------- > Cristian Rusu > Web Developement & Electronic Publishing > > ====== > Crilance.com > Crilance.blogspot.com > From anoopalias01 at gmail.com Sun Feb 3 11:00:18 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sun, 3 Feb 2013 16:30:18 +0530 Subject: Strange php-fpm ini parsing Message-ID: Hi, This is probably off-topic, but I think most people use php-fpm for php with nginx and thought people here could really help. I have a strange issue of php-fpm parsing a file ending in .ini, whereas this .ini file doesn't have anything to do with php, as per the nginx error log: *26 FastCGI sent in stderr: "PHP message: WARNING: You have errors in you INI file (/home/xxxxx/public_html/xxx/etc/xxxxx/magiczoomplus.settings.ini) on line 1!" I renamed the file and the error was gone. So the question is - does php-fpm look for files ending in .ini; if so, how do I disable that, and does this default behavior slow things down?
Thanks -- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun Feb 3 11:53:09 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 3 Feb 2013 11:53:09 +0000 Subject: Restricting access to specific subdirectories In-Reply-To: <7f2870ebe24ee8cb7198d042e23e96aa.NginxMailingListEnglish@forum.nginx.org> References: <7f2870ebe24ee8cb7198d042e23e96aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 3 February 2013 03:27, jdiana wrote: [snip] > location ~ ^/my_ws$ { [snip] > However, if there's anything AFTER that (i.e. my_ws/, my_ws/page2, > my_ws?parameter1, etc.) it allows them to proceed regardless of IP. > > I'm sure it's something required before or after the $, but I can't figure > it out. Your problem is absolutely to do with the "$", and if you don't yet understand regex well enough to fix it, have a read through a simple intro such as http://www.zytrax.com/tech/web/regex.htm#positioning. BTW that's a pointer to the exact section you need, but I recommend you digest at least the first half of that guide - it's not long or difficult, and will serve you well in the future. HTH, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From mdounin at mdounin.ru Sun Feb 3 21:53:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Feb 2013 01:53:04 +0400 Subject: Restricting access to specific subdirectories In-Reply-To: <7f2870ebe24ee8cb7198d042e23e96aa.NginxMailingListEnglish@forum.nginx.org> References: <7f2870ebe24ee8cb7198d042e23e96aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130203215304.GG40753@mdounin.ru> Hello! On Sat, Feb 02, 2013 at 10:27:40PM -0500, jdiana wrote: > Hey all, > > I'm a little stumped about what I'm doing wrong here. 
Basically I have a > subdirectory that I want to restrict access to specific IP's, otherwise > return a 403. > > If I do the following (inside my server {} block): > > server { > // normal processing code here > ... > > location ~ ^/my_ws$ { > allow XX.XX.XX.XX; > allow XX.XX.XX.XX/24; > deny all; > } > } > > Hitting the following URL works as intended and I get a 403 if I try from > anywhere other than the specified URL's: http://www.mydomain.com/my_ws > > However, if there's anything AFTER that (i.e. my_ws/, my_ws/page2, > my_ws?parameter1, etc.) it allows them to proceed regardless of IP. > > I'm sure it's something required before or after the $, but I can't figure > it out. You don't need regular expressions, just use a normal prefix location: location /my_ws { allow ... deny all; } See http://nginx.org/r/location for details. -- Maxim Dounin http://nginx.com/support.html From crirus at gmail.com Mon Feb 4 06:58:45 2013 From: crirus at gmail.com (Cristian Rusu) Date: Mon, 4 Feb 2013 08:58:45 +0200 Subject: Features Message-ID: Hello I added some features to version 1.2.3, but since nginx moves forward, I'd like to see if there's a way to put those in as a permanent patch. The features refer to the limit_rate_after directive and a new one, limit_rate_max, both used to control bandwidth. I changed limit_rate_after to take a variable as well, not just a config value. I also added limit_rate_max to control the speed of the initial bytes sent that are defined with limit_rate_after. So basically right now I can send 2MB at a speed of 600KB and then slow down to 100KB. This is extremely useful for video streaming websites with huge traffic: I save a lot of bandwidth and am able to serve many more users at the same time.
My config looks something like: limit_rate_max 600k; if ($arg_u) { set $limit_rate_max $arg_u; } limit_rate_after 2m; if ($arg_burst) { set $limit_rate_after $arg_burst; } Here I can tell through the URL what speed to use and when to slow down, based on user account, so premium users receive the video and start playback faster. Is anyone interested in pushing these changes/features to the nginx dev team? --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From crirus at gmail.com Mon Feb 4 07:00:39 2013 From: crirus at gmail.com (Cristian Rusu) Date: Mon, 4 Feb 2013 09:00:39 +0200 Subject: HDD util is 100% - aio questions In-Reply-To: References: <201301282146.38266.vbart@nginx.com> Message-ID: Pagecache? --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com On Sun, Feb 3, 2013 at 12:57 PM, Lukas Tribus wrote: > > Yes, remove directio and aio, and let the pagecache handle load. Monitor > the performance and load carefully and report back the results. > > > > ________________________________ > > From: crirus at gmail.com > > Date: Sat, 2 Feb 2013 16:58:48 +0200 > > Subject: Re: HDD util is 100% - aio questions > > To: nginx at nginx.org > > > > I have Centos Linux with RAM 64GB, 18TB RAID 10 HDDs, > > CPU and load is practically 0, everything is in HDDs that hold the > > server back > > > > Should I remove directio? > > The server is mainly for streaming large video files or for direct > download. > > > > Any particular setting I should make to nginx in this case to lower util?
> > > > Thank you > > > > --------------------------------------------------------------- > > Cristian Rusu > > Web Developement & Electronic Publishing > > > > ====== > > Crilance.com > > Crilance.blogspot.com > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Feb 4 07:26:36 2013 From: agentzh at gmail.com (agentzh) Date: Sun, 3 Feb 2013 23:26:36 -0800 Subject: [ANN] ngx_openresty devel version 1.2.6.3 released In-Reply-To: References: Message-ID: I am delighted to announce the new development version of ngx_openresty, 1.2.6.3: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (development) release, 1.2.6.1: * upgraded LuaNginxModule to 0.7.14. * feature: implemented named subpattern support in ngx.re.match, ngx.re.gmatch, ngx.re.sub, and ngx.re.gsub; also added new regex option "D" to allow duplicate named subpattern names. thanks Ray Bejjani for the patch. * feature: implemented the "J" regex option for the PCRE Javascript compatible mode in the ngx.re API. thanks lhmwzy for requesting this. * feature: setting ngx.header.HEADER after sending out the response headers now only produced an error message in the Nginx error logs and does not throw out a Lua exception. this should be handy for Lua development. thanks Matthieu Tourne for requesting this. * feature: automatic Lua 5.1 interpreter detection on OpenBSD 5.2. thanks Ilya Shipitsin for the patch. 
* refactor: when the Nginx core fails to send the "100 Continue" response in case of the "Expect: 100-continue" request header (or just running out of memory), ngx.req.read_body() will no longer throw out a "failed to read request body" Lua error but will just terminate the current request and returns the 500 error page immediately, just as what the Nginx core currently does in this case. * bugfix: because of the recent API behaviour changes in Nginx 1.2.6+ and 1.3.9+, the "http request count is zero" alert might happen when ngx.req.read_body() was called to read the request body and Nginx failed to send out the "100 Continue" response (like client connection early abortion and etc). thanks stonehuzhan for reporting this issue. * bugfix: setting the "eof" argument (i.e., "ngx.arg[2]") in body_filter_by_lua* for a subrequest could truncate the main request's response data stream. * bugfix: in body_filter_by_lua*, the "eof" argument (i.e., "ngx.arg[2]") was never set in Nginx subrequests. * bugfix: for nginx 1.3.9+ compatibility, we return an error while using ngx.req.socket() to read the chunked request body (for now), because chunked support in the downstream cosocket API is still a TODO. * bugfix: for nginx 1.3.9+ compatibility, rewrite_by_lua* or access_by_lua* handlers might hang if the request body was read there, because the Nginx core now overwrites "r->write_event_handler" to "ngx_http_request_empty_handler" in its "ngx_http_read_client_request_body" API. * bugfix: for nginx 1.3.9+ compatibility, we now throw an error in ngx.req.init_body(), ngx.req.set_body_data(), and ngx.req.set_body_file() when calling them without calling ngx.req.read_body() or after calling ngx.req.discard_body(). * bugfix: a compilation error would happen when building with an Nginx core patched by the SPDY patch 58_1.3.11 because the patch had removed a request field from the Nginx core. thanks Chris Lea for reporting this. 
* bugfix: we did not get the request reference counter (i.e., "r->main->count") right when lua_need_request_body was turned on and Nginx versions older than 1.2.6 or 1.2.9 were used. * optimize: we no longer traverse the captured body chain every time we append a new link to it in ngx.location.capture and ngx.location.capture_multi. * docs: documented the ngx.quote_sql_str API. * upgraded SrcacheNginxModule to 0.18. * bugfix: we might serve a truncated srcache_fetch subrequest's response body as the cached response. * upgraded EchoNginxModule to 0.42. * feature: the echo_after_body directive is now enabled in Nginx subrequests (again). * bugfix: we did not set the "last_in_chain" buffer flag when echo_after_body was used in subrequests. * upgraded FormInputNginxModule to 0.07. * bugfix: Nginx might hang when it failed to send the "100 Continue" response for Nginx versions older than 1.2.6 (and those older than 1.3.9 in the 1.3.x series). * upgraded NginxDevelKit to 0.2.18. * bugfix: various fixes for C89 compliance. also stripped some line-trailing spaces. * bugfix: guard macros were missing in the "ndk_set_var.h" header file. * bugfix: the "ndk_string" submodule failed to compile with gcc 4.6. thanks Jon Kolb for the patch. * bugfix: the "ndk_set_var" example did not use the new way in its "config" file. thanks Amos Wenger for the patch. * docs: fixes in README to reflect recent changes. thanks Amos Wenger for the patch. * applied Ruslan Ermilov's resolver_wev_handler_segfault_with_poll patch to the Nginx core bundled. see the related nginx-devel thread for details. * excluded the allow_request_body_updating patch from the Nginx core bundled. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002006 OpenResty (aka.
ngx_openresty) is a full-fledged web application server built by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From nginx-forum at nginx.us Mon Feb 4 08:36:39 2013 From: nginx-forum at nginx.us (twine) Date: Mon, 04 Feb 2013 03:36:39 -0500 Subject: SSL proxy client certificate rewrite to username/password Message-ID: <1ae2ced7731d3fd0d7a7abea4bc9c319.NginxMailingListEnglish@forum.nginx.org> I want to use VoltDB's HTTP JSON interface http://voltdb.com/docs/UsingVoltDB/ProgLangJson.php section 15.2.1 I also want to use certificate-based authentication of the client. I am thinking that I can use nginx as an SSL frontend and have it rewrite the URL to include a randomly generated username and password as required by VoltDB. But, I can't figure out how to write the rewrite rule in nginx's conf file. I need to write multiple rules like this: if the client certificate common name is X, add to the URL the arguments user=PRF(x, u)&password=PRF(x, p), where PRF is a pseudo-random function. Is this possible? How? I appreciate your help! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235879,235879#msg-235879 From nginx-forum at nginx.us Mon Feb 4 11:09:39 2013 From: nginx-forum at nginx.us (shrikeh) Date: Mon, 04 Feb 2013 06:09:39 -0500 Subject: $upstream_http_* variables exist but do not seem to be readable In-Reply-To: References: Message-ID: <2d52c5c6e8043bb1d44b299195ae14d4.NginxMailingListEnglish@forum.nginx.org> Here is what I'm trying to achieve: 1. Backend sends response header saying "this page requires filtering". 2.
Lua script generates a token, and populates the appropriate tag in the body copy with the token. 3. Token is pushed into user's cookies and also stored in Redis. From what I can gather, to do this, I need to: 1. Add conditional logic within body_filter_by_lua based on the value of ngx.var.sent_http_csrf 2. Generate the token 3. Filter the body. 4. Make a subrequest with the token to a separate internal location block so that Redis can save the token (as redis will be disabled within the context of body_filter_by_lua) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235808,235881#msg-235881 From contact at jpluscplusm.com Mon Feb 4 12:34:18 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 4 Feb 2013 12:34:18 +0000 Subject: HDD util is 100% - aio questions In-Reply-To: References: <201301282146.38266.vbart@nginx.com> Message-ID: On 4 February 2013 07:00, Cristian Rusu wrote: > Pagecache? Yes, page cache. http://bit.ly/12mJ61j From nginx-forum at nginx.us Mon Feb 4 12:46:32 2013 From: nginx-forum at nginx.us (shrikeh) Date: Mon, 04 Feb 2013 07:46:32 -0500 Subject: $upstream_http_* variables exist but do not seem to be readable In-Reply-To: References: Message-ID: Hi! Thanks for the responses. Is there any way for nginx to work conditionally based on headers in the upstream response? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235808,235878#msg-235878 From crirus at gmail.com Mon Feb 4 16:28:49 2013 From: crirus at gmail.com (Cristian Rusu) Date: Mon, 4 Feb 2013 18:28:49 +0200 Subject: HDD util is 100% - aio questions In-Reply-To: References: <201301282146.38266.vbart@nginx.com> Message-ID: I read already, pagecache is a plugin for Magento? --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com On Mon, Feb 4, 2013 at 2:34 PM, Jonathan Matthews wrote: > On 4 February 2013 07:00, Cristian Rusu wrote: > > Pagecache?
> > Yes, page cache. http://bit.ly/12mJ61j > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Feb 4 16:53:34 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 4 Feb 2013 17:53:34 +0100 Subject: HDD util is 100% - aio questions In-Reply-To: References: , <201301282146.38266.vbart@nginx.com>, , , , , Message-ID: No, read this (first hit for page cache @Google): http://en.wikipedia.org/wiki/Page_cache ________________________________ > From: crirus at gmail.com > Date: Mon, 4 Feb 2013 18:28:49 +0200 > Subject: Re: HDD util is 100% - aio questions > To: nginx at nginx.org > > I read alreayd, pagecache is a plugin for Magento? > --------------------------------------------------------------- > Cristian Rusu > Web Developement & Electronic Publishing > > ====== > Crilance.com > Crilance.blogspot.com > > > On Mon, Feb 4, 2013 at 2:34 PM, Jonathan Matthews > > wrote: > On 4 February 2013 07:00, Cristian Rusu > > wrote: > > Pagecache? > > Yes, page cache. 
http://bit.ly/12mJ61j > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From shahzaib.cb at gmail.com Mon Feb 4 17:38:55 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 4 Feb 2013 22:38:55 +0500 Subject: hard-disk util% got higher on enabling aio for nginx-1.2.1 Message-ID: Hello, I followed this post http://stackoverflow.com/questions/11250798/best-file-system-for-serving-1gb-files-using-nginx-under-moderate-write-read-p to optimize nginx for large static files, i.e. (flv, mp4), and enabled aio in the nginx config, which you can see below. After enabling aio, directio, and output_buffers, I could see (iostat -x -d 3) that util% got higher from 10.00 to 35.00 and svctime got reduced from 4.00 to 1.00. So I came to the conclusion that after enabling these directives, the i/o util% gets higher and svctime gets reduced. 1. Can someone guide me on whether the aio directive helps improve nginx flv streaming; if yes, then why is it utilizing the hard disk so much? 2. Is reducing the svctime (iostat -x -d 3) for i/o a good thing or not? http { include mime.types; default_type application/octet-stream; client_body_buffer_size 128K; sendfile_max_chunk 128k; access_log off; sendfile off; client_header_timeout 3m; client_body_timeout 3m; server { listen 80; server_name domain.com; client_max_body_size 800m; limit_rate 100k; location / { root /var/www/html/content; index index.html index.htm index.php; } location ~ \.(flv|jpeg|jpg)$ { flv; root /var/www/html/content; aio on; directio 512; output_buffers 1 8m; expires 15d; valid_referers none blocked domain.com; if ($invalid_referer) { return 403; } } Best Regards. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Feb 4 18:08:44 2013 From: nginx-forum at nginx.us (f.corriga) Date: Mon, 04 Feb 2013 13:08:44 -0500 Subject: Looking for help in setting up Magento Full Page Cache using NGINX. Message-ID: <844ecc9d8063111417aad94eb09dec58.NginxMailingListEnglish@forum.nginx.org> Hello. I've set up a custom Magento store on NGINX+PHP5-FPM+Percona DB. I would like to use the NGINX cache instead of Varnish or other FPCs. I'm looking for help to set up NGINX FCGI_Cache for Magento the correct way (if it's possible). Is anyone willing to help in setting up the NGINX configuration? Thanks in advance, Francesco Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235898,235898#msg-235898 From jgehrcke at googlemail.com Mon Feb 4 18:19:09 2013 From: jgehrcke at googlemail.com (Jan-Philip Gehrcke) Date: Mon, 04 Feb 2013 19:19:09 +0100 Subject: HDD util is 100% - aio questions In-Reply-To: References: , <201301282146.38266.vbart@nginx.com>, , , , , Message-ID: <510FFB9D.20401@googlemail.com> 64 GB of RAM might not be sufficient for keeping a significant part of his video data in memory. Hence, depending on the number of concurrent users and the average size of the videos Cristian wants to stream it is entirely possible that caching videos in memory does not help at all. In this case, he needs proper disk I/O settings. On 02/04/2013 05:53 PM, Lukas Tribus wrote: > > No, read this (first hit for page cache @Google): > > http://en.wikipedia.org/wiki/Page_cache > > > > > ________________________________ >> From: crirus at gmail.com >> Date: Mon, 4 Feb 2013 18:28:49 +0200 >> Subject: Re: HDD util is 100% - aio questions >> To: nginx at nginx.org >> >> I read alreayd, pagecache is a plugin for Magento? 
>> --------------------------------------------------------------- >> Cristian Rusu >> Web Developement & Electronic Publishing >> >> ====== >> Crilance.com >> Crilance.blogspot.com >> >> >> On Mon, Feb 4, 2013 at 2:34 PM, Jonathan Matthews >> > wrote: >> On 4 February 2013 07:00, Cristian Rusu >> > wrote: >>> Pagecache? >> >> Yes, page cache. http://bit.ly/12mJ61j >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ nginx mailing list >> nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From luky-37 at hotmail.com Mon Feb 4 18:29:14 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 4 Feb 2013 19:29:14 +0100 Subject: HDD util is 100% - aio questions In-Reply-To: <510FFB9D.20401@googlemail.com> References: , , <201301282146.38266.vbart@nginx.com>, , , , , , , , , , , , <510FFB9D.20401@googlemail.com> Message-ID: Yes, I'm aware, that's why I told him to monitor the box carefully. However, async IO is not so easy to accomplish under Linux, and since he is also using the streaming module, things can get complicated. I wonder if switching to FreeBSD would be a better idea to avoid the Linux AIO limitations (as in the nginx documentation). Anyway, he needs to do some serious testing/thinking/spending time with it. Nobody will come up with the perfect configuration that resolves all the issues for him. ---------------------------------------- > Date: Mon, 4 Feb 2013 19:19:09 +0100 > From: jgehrcke at googlemail.com > To: nginx at nginx.org > Subject: Re: HDD util is 100% - aio questions > > 64 GB of RAM might not be sufficient for keeping a significant part of > his video data in memory.
Hence, depending on the number of concurrent > users and the average size of the videos Cristian wants to stream it is > entirely possible that caching videos in memory does not help at all. In > this case, he needs proper disk I/O settings. From nginx-forum at nginx.us Tue Feb 5 00:54:44 2013 From: nginx-forum at nginx.us (rtsai) Date: Mon, 04 Feb 2013 19:54:44 -0500 Subject: HTTP/1.1 505 HTTP Version Not Supported Server: nginx/1.2.6 Message-ID: Hi, I am using nginx as an SSL offloader in front of HAProxy. The issue I am having is I get a 505 error on one of the calls I make to Nginx/haproxy. If I make that same call to haproxy directly, I get a 200. Server OS: centos 5.6 Nginx server version: 1.2.6 Here is my config: worker_processes 8; #worker_cpu_affinity 0001 0010 0100 1000; worker_rlimit_nofile 70000; events { worker_connections 6144; } http { include mime.types; default_type application/octet-stream; #sendfile on; #tcp_nopush on; #keepalive_timeout 0; #keepalive_timeout 65; #gzip on; log_format upstream '$remote_addr - - [$time_local] "$request" $status ' 'upstream $upstream_response_time request $request_time ' '[for $host via $upstream_addr]'; upstream haproxy { # POINT TO HAPROXY:1443 server 127.0.0.1:1443; } server { listen 443 ssl; # CERT server_name my.testinstant.com; ssl_certificate_key /etc/nginx/certs/mytestinstant.pem; ssl_certificate /etc/nginx/certs/mytestinstant.crt; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; # SINGLE PROCESS CACHE ssl_session_cache builtin:1000; ssl_ciphers !aNULL:!eNULL:!EXPORT:!DSS:!DES:!kEDH:!ADH:!EXPORT:!LOW:!SSLv2:!RC4-MD5:RC4+RSA:DES-CBC3-SHA:AES+RSA:+HIGH:+MEDIUM; # POINT TO HAPROXY UPSTREAM location / { proxy_pass http://haproxy; } # DEFINE NGINX STATUS PAGE location /nginx_status { stub_status on; access_log off; allow all; } } #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /var/www/nginx-default; #} } Any help with this would be appreciated.
Thanks Rob Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235903,235903#msg-235903 From nginx-forum at nginx.us Tue Feb 5 01:53:33 2013 From: nginx-forum at nginx.us (rtsai) Date: Mon, 04 Feb 2013 20:53:33 -0500 Subject: HTTP/1.1 505 HTTP Version Not Supported Server: nginx/1.2.6 In-Reply-To: References: Message-ID: <4654ea2b190226dac71849186ef4962c.NginxMailingListEnglish@forum.nginx.org> Here is the error from the curl Fault Name: HttpRequestReceiveError Error Type: Default Description: Http request received failed Root Cause Code: -19002 Root Cause : HTTP Transport: Http version not supported Binding State: CLIENT_CONNECTION_ESTABLISHED Service: null Endpoint: null Operation (Client): Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235903,235904#msg-235904 From nginx-forum at nginx.us Tue Feb 5 03:23:46 2013 From: nginx-forum at nginx.us (otherjohn) Date: Mon, 04 Feb 2013 22:23:46 -0500 Subject: YA Converting .htaccess to nginx, I can't get it to work! Message-ID: <9845e529a8828eab65cd225972294d6f.NginxMailingListEnglish@forum.nginx.org> Hi all, I am trying to convert this htaccess file to nginx config and it is not working. Here is my .htaccess Header unset ETag FileETag None Header unset Last-Modified Header set Expires "Fri, 21 Dec 2020 00:00:00 GMT" Header set Cache-Control "public, no-transform" RewriteEngine On RewriteCond %{QUERY_STRING} ^$ RewriteRule ^((.)?)$ index.php?p=home [L] RewriteCond %{REQUEST_FILENAME} -f RewriteRule ^(.*)$ $1 [QSA,L] RewriteCond $1 !^(\#(.)*|\?(.)*|\.htaccess(.)*|\.htaccess\.back(.)*|.idea\/(.)*|.svn\/(.)*|admin\.php(.)*|content\/(.)*|download\.php(.)*|ecc\/(.)*|images\/(.)*|index\.php(.)*|install\/(.)*|login\.php(.)*|readme\.txt(.)*|robots\.txt(.)*) RewriteRule ^(.+)$ index.php?url=$1&%{QUERY_STRING} [L] SetOutputFilter DEFLATE and here is my config. I can get it to somewhat work if I only use one of the following lines but then the rest is broke! 
location / {
    if ($query_string ~ "^$") {
        rewrite ^/((.)?)$ /index.php?p=home break;
    }
    if (-e $request_filename) {
        rewrite ^(.*)$ /$1 break;
    }
    rewrite ^(.+)$ /index.php?url=$1&$query_string break;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/tmp/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}

Not to mention that I hate using if clauses in my nginx config, because I've read that they are bad for performance. Can someone give me a hand here? I have been working on this for hours without any luck.

John

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235905,235905#msg-235905

From ianevans at digitalhit.com Tue Feb 5 04:20:42 2013
From: ianevans at digitalhit.com (Ian Evans)
Date: Mon, 04 Feb 2013 23:20:42 -0500
Subject: Updated hotlink protection with new Google Image Search
Message-ID: <5110889A.9050209@digitalhit.com>

As you can see from this thread on Webmasterworld (http://www.webmasterworld.com/google/4537063.htm), a lot of people are upset about the new Google Image Search changes that allow visitors to see much larger previews of images without visiting the site. Google has also stopped showing the site in the background, which removes any potential visits from people seeing the context of the image's "home". I currently use the following in my nginx.conf:

error_page 403 = /403.shtml;
expires 30d;
valid_referers none blocked *.example.com example.com;
if ($invalid_referer) {
    return 403;
}

The 403 page actually looks up the requested image in my database and then redirects the person to the page that it's part of. That's worked for years, but it's not working as well with the new Google Image Search, and we're now serving up more image requests without their surrounding pages, and, yes, without the advertising that allows me to travel to take the shots at events.
From the thread (and looking at my logs) it appears that Google seems to be sending the referrer less and less, especially in situations where people are logged in to Google. While reading about this new search that has people up in arms, I came across this site and how they're handling it. When you see the larger image preview pop up on the new Google search, the image flashes for a second, then gets replaced by the image covered by a warning and a note to click to see it without it: https://www.google.com/search?q=site:fansshare.com&hl=en&safe=off&tbo=d&source=lnms&tbm=isch&sa=X&ei=SIEQUdDAPMnkygGKrYGACg&ved=0CAoQ_AUoAA&biw=1266&bih=878

Any idea how they're doing this? Any idea how to implement it with nginx? From what I've read elsewhere, some of the new hotlink options go as far as intercepting the image requests and using PHP and GD to create the overlays when the requests don't come from the host site. Thanks for any thoughts.

From crirus at gmail.com Tue Feb 5 06:48:02 2013
From: crirus at gmail.com (Cristian Rusu)
Date: Tue, 5 Feb 2013 08:48:02 +0200
Subject: HDD util is 100% - aio questions
In-Reply-To: References: <201301282146.38266.vbart@nginx.com> <510FFB9D.20401@googlemail.com>
Message-ID:

Hello

Well, we don't have a single box; we have a few setups with large slow HDDs and a couple of edge servers running on 1.5TB RAID SSD for the actual streaming. Right now it's stable (70% max util), as we managed to write caching code so that basically the slow HDDs mainly feed the edges, and only a few users at first. The edge servers are way faster and cope with 10Gbit of bandwidth so far. I just hoped that aio would be a solution for less strain on HDD util%. As for the page cache, that is pretty useless, as the web/mysql boxes do just fine serving up to 2500 connections per second. The main issue is getting large videos in and out of the HDDs fast.
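(For reference, a storage-to-edge arrangement like the one described above can also be expressed with nginx's own proxy cache. This is only a sketch with assumed names and paths — `/ssd/cache`, `storage-backend`, `10.0.0.10` are placeholders, not the actual setup:)

```nginx
# http context: SSD edge pulling from the slow storage tier.
proxy_cache_path /ssd/cache levels=1:2 keys_zone=videos:100m
                 max_size=1400g inactive=7d;

upstream storage-backend {
    server 10.0.0.10;           # slow-HDD storage box (placeholder address)
}

server {
    location /videos/ {
        proxy_cache videos;     # cache hits never touch the storage HDDs
        proxy_cache_valid 200 7d;
        proxy_pass http://storage-backend;
    }
}
```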
---------------------------------------------------------------
Cristian Rusu
Web Developement & Electronic Publishing
======
Crilance.com
Crilance.blogspot.com

On Mon, Feb 4, 2013 at 8:29 PM, Lukas Tribus wrote: > > Yes, I'm aware, thats why I told him to monitor the box carefully. > However, async IO is not so easy to accomplish under linux, and since he is > also using the streaming module, things can get complicated. > > I wonder if switching to FreeBSD would be a better idea to avoid the linux > AIO limitations (as in the nginx documentation). > > Anyway, he needs to do some serious testing/thinking/spending time with > it. Nobody will come up with the perfect configuration for him resolving > all the issues. > > > > ---------------------------------------- > > Date: Mon, 4 Feb 2013 19:19:09 +0100 > > From: jgehrcke at googlemail.com > > To: nginx at nginx.org > > Subject: Re: HDD util is 100% - aio questions > > > > 64 GB of RAM might not be sufficient for keeping a significant part of > > his video data in memory. Hence, depending on the number of concurrent > > users and the average size of the videos Cristian wants to stream it is > > entirely possible that caching videos in memory does not help at all. In > > this case, he needs proper disk I/O settings. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From crirus at gmail.com Tue Feb 5 06:53:03 2013
From: crirus at gmail.com (Cristian Rusu)
Date: Tue, 5 Feb 2013 08:53:03 +0200
Subject: hard-disk util% got higher on enabling aio for nginx-1.2.1
In-Reply-To: References: Message-ID:

Hello

It looks like you're on my spot right now. I learned that directio is useless for large files, as it sets nginx to skip caching for anything larger than 512 Kb.
If you have enough RAM (128MB maybe) you get to cache videos and avoid reading from the HDD each time. Some other people here said aio is not ideal either, at least on CentOS; they mentioned FreeBSD, but I have to google a bit on that.

I made it work using more servers: a large one with slow HDDs for storage and a fast server with SSD for caching.

So I have a setup of 1 www, 1 mysql, 2 storage, 2 edge servers and 2 converter servers, each dealing with their specific tasks. It works fine for 10Gbit and about 2500 users per second.

---------------------------------------------------------------
Cristian Rusu
Web Developement & Electronic Publishing
======
Crilance.com
Crilance.blogspot.com

On Mon, Feb 4, 2013 at 7:38 PM, shahzaib shahzaib wrote: > Hello, > > I followed this post > http://stackoverflow.com/questions/11250798/best-file-system-for-serving-1gb-files-using-nginx-under-moderate-write-read-p > to optimize nginx for large static files i.e (flv,mp4) and enabled aio on > nginx config which you can see below, and after enabling aio, directio, and > output_buffers, i could notice(iostat -x -d 3) that cpu util% got higher > from 10.00 to 35.00 and svctime got reduced to 1.00 from 4.00. So i came to > the conclusion that after enabling these directives , the i/o util% starts > getting higher and svctime start getting reduced. > > 1.Can someone guide me if aio directive helps improving nginx flv stream, > if yes than why it is utilizing too much hard-disk? > 2. Reducing the svctime(iostat -x -d 3) for i/o is a good thing or not ?
> > > http { > include mime.types; > default_type application/octet-stream; > client_body_buffer_size 128K; > sendfile_max_chunk 128k; > access_log off; > sendfile off; > client_header_timeout 3m; > client_body_timeout 3m; > > server { > listen 80; > server_name domain.com; > client_max_body_size 800m; > limit_rate 100k; > > > location / { > root /var/www/html/content; > index index.html index.htm index.php; > > } > location ~ \.(flv|jpeg|jpg)$ { > flv; > root /var/www/html/content; > aio on; > directio 512; > output_buffers 1 8m; > expires 15d; > valid_referers none blocked domain.com; > if ($invalid_referer) { > return 403; > } > } > > > Best Regards. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at nginx.us Tue Feb 5 07:15:04 2013
From: nginx-forum at nginx.us (dencivi)
Date: Tue, 05 Feb 2013 02:15:04 -0500
Subject: [Nginx&TLS] How to make log show a successful exchange of digital certificates.
Message-ID: <646aa295fee80412dfcd650ac3a1ca2f.NginxMailingListEnglish@forum.nginx.org>

Hello, I've made nginx support TLS (mutual authentication). It works and is very cool, but I would like some detailed logging about the exchange of digital certificates: for example, whether the exchange succeeded, and the client certificate information involved in it.

My system looks like: Browser <--TLS--> Nginx 1.0.8 <--HTTP--> Tomcat

So, what can I do? Thanks for your work.
=============== nginx.conf ================

server {
    listen 8889;
    server_name 192.168.10.251;
    index index.jsp index.html index.htm;
    charset utf-8;

    log_format tls_log '$remote_addr $remote_user [$time_local] "$request" $http_host '
                       '$status $upstream_status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" $ssl_protocol $ssl_cipher $upstream_addr '
                       '$request_time $upstream_response_time';
    access_log /usr/local/nginx/logs/http_8889_access.log tls_log;

    #TLS start
    ssl on;
    ssl_certificate ssl/server.crt;
    ssl_certificate_key ssl/server.key;
    ssl_client_certificate ssl/ca.crt;
    ssl_verify_client on;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    #TLS end

    #chunkin for XTOM
    chunkin on;
    error_page 411 = @my_411_error;
    location @my_411_error {
        chunkin_resume;
    }

    location ~ /mux-.+ {
        proxy_pass http://192.168.10.123:8080;
        proxy_redirect default;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
    }
}
===========

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235909,235909#msg-235909

From shahzaib.cb at gmail.com Tue Feb 5 07:15:33 2013
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Tue, 5 Feb 2013 12:15:33 +0500
Subject: hard-disk util% got higher on enabling aio for nginx-1.2.1
In-Reply-To: References: Message-ID:

We've got a similar setup to yours: 1 DB server, 3 www servers (app clustering), and 5 content servers (nginx-1.2.1) for static content, i.e. (jpg, flv, mp4), but we don't have a separate conversion server; conversion and streaming are both served by these 5 content servers. We have 32GB RAM with RAID10 SAS drives and a 1Gbps port for each content server, and also one large content storage server with slow HDDs, i.e. software RAID. Please keep in mind that all the servers are running Linux (CentOS 6).

1. So should I disable aio for CentOS 6?
2. What about sendfile, should I keep it off for all content servers?
3. Can I enable aio for the storage server?

Best Regards.
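(As a reference point for questions 1-3: in this era of nginx on Linux, the usual split is sendfile for small hot files and aio combined with directio — which Linux requires — for the large videos. The fragment below is only a sketch with illustrative extensions and sizes, not a tested recommendation:)

```nginx
# Small, frequently-hit assets: let the page cache and sendfile do the work.
location ~ \.(jpg|jpeg)$ {
    sendfile   on;
    tcp_nopush on;
}

# Large videos: bypass the page cache. On Linux, "aio on" only takes
# effect together with directio.
location ~ \.(flv|mp4)$ {
    flv;
    sendfile       off;
    aio            on;
    directio       2m;       # only files larger than 2m use O_DIRECT + AIO
    output_buffers 1 512k;
}
```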
On Tue, Feb 5, 2013 at 11:53 AM, Cristian Rusu wrote: > Hello > > It looks like you'r eon my spot right now. > I learned directio is useless for large files as it set nginx to skip > caching for anything larger than 512 Kb. > If you have enough RAM (128MB maybe) you get to cache videos and avoid > reading from HDD each time. > Some other people here said aio is not ideally either at least on Centos, > they said about FreeBSD but I have to google a bit on that. > > I made it work using more servers, alrge one with slow HDDs for storage > and fast server with SSD for caching. > > So I have a setup of 1 www, 1 mysql, 2 storage, 2 edge servers and 2 > converter servers each dealing with their specific tasks. > It works fine for 10Gbit and about 2500 users per second. > > --------------------------------------------------------------- > Cristian Rusu > Web Developement & Electronic Publishing > > ====== > Crilance.com > Crilance.blogspot.com > > > On Mon, Feb 4, 2013 at 7:38 PM, shahzaib shahzaib wrote: > >> Hello, >> >> I followed this post >> http://stackoverflow.com/questions/11250798/best-file-system-for-serving-1gb-files-using-nginx-under-moderate-write-read-p >> to optimize nginx for large static files i.e (flv,mp4) and enabled aio on >> nginx config which you can see below, and after enabling aio, directio, and >> output_buffers, i could notice(iostat -x -d 3) that cpu util% got higher >> from 10.00 to 35.00 and svctime got reduced to 1.00 from 4.00. So i came to >> the conclusion that after enabling these directives , the i/o util% starts >> getting higher and svctime start getting reduced. >> >> 1.Can someone guide me if aio directive helps improving nginx flv stream, >> if yes than why it is utilizing too much hard-disk? >> 2. Reducing the svctime(iostat -x -d 3) for i/o is a good thing or not ? 
>> >> >> http { >> include mime.types; >> default_type application/octet-stream; >> client_body_buffer_size 128K; >> sendfile_max_chunk 128k; >> access_log off; >> sendfile off; >> client_header_timeout 3m; >> client_body_timeout 3m; >> >> server { >> listen 80; >> server_name domain.com; >> client_max_body_size 800m; >> limit_rate 100k; >> >> >> location / { >> root /var/www/html/content; >> index index.html index.htm index.php; >> >> } >> location ~ \.(flv|jpeg|jpg)$ { >> flv; >> root /var/www/html/content; >> aio on; >> directio 512; >> output_buffers 1 8m; >> expires 15d; >> valid_referers none blocked domain.com; >> if ($invalid_referer) { >> return 403; >> } >> } >> >> >> Best Regards. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at nginx.us Tue Feb 5 08:00:01 2013
From: nginx-forum at nginx.us (dencivi)
Date: Tue, 05 Feb 2013 03:00:01 -0500
Subject: [Nginx&TLS] How to make log show a successful exchange of digital certificates.
In-Reply-To: <646aa295fee80412dfcd650ac3a1ca2f.NginxMailingListEnglish@forum.nginx.org>
References: <646aa295fee80412dfcd650ac3a1ca2f.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi, I found a way to show the exchanged digital certificate information. I'm sorry, I had not read the document http://wiki.nginx.org/HttpSslModule

========ref=========
Module ngx_http_ssl_module supports the following built-in variables:

$ssl_cipher
returns the cipher suite being used for the currently established SSL/TLS connection

$ssl_client_serial
returns the serial number of the client certificate for the currently established SSL/TLS connection -
if applicable, i.e., if client authentication is activated in the connection

$ssl_client_s_dn
returns the subject Distinguished Name (DN) of the client certificate for the currently established SSL/TLS connection - if applicable, i.e., if client authentication is activated in the connection

$ssl_client_i_dn
returns the issuer DN of the client certificate for the currently established SSL/TLS connection - if applicable, i.e., if client authentication is activated in the connection

$ssl_protocol
returns the protocol of the currently established SSL/TLS connection - depending on the configuration and client available options it's one of SSLv2, SSLv3 or TLSv1

$ssl_session_id
the Session ID of the established secure connection - requires Nginx version greater or equal to 0.8.20

$ssl_client_cert
$ssl_client_raw_cert

$ssl_client_verify
takes the value "SUCCESS" when the client certificate is successfully verified

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235909,235912#msg-235912

From nginx-forum at nginx.us Tue Feb 5 08:10:16 2013
From: nginx-forum at nginx.us (dencivi)
Date: Tue, 05 Feb 2013 03:10:16 -0500
Subject: [Nginx&TLS] How to make log show a successful exchange of digital certificates.
In-Reply-To: References: <646aa295fee80412dfcd650ac3a1ca2f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1e12fd058cb4c39e2758b6a445e6d8ee.NginxMailingListEnglish@forum.nginx.org>

My log format:

log_format tls_log '$remote_addr $remote_user [$time_local] "$request" $http_host '
                   '$status $upstream_status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" $upstream_addr [$request_time/$upstream_response_time] '
                   '[SSL]: $ssl_protocol $ssl_cipher SSL_CLIENT{Verify:$ssl_client_verify, Serial:$ssl_client_serial, SDN:$ssl_client_s_dn, IDN:$ssl_client_i_dn}';

And thank you for your work; nginx is very cool.
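(One way to use `$ssl_client_verify` beyond logging is to gate requests on it. A sketch — certificate paths and the backend address are assumed from the earlier config, not prescribed — that accepts the handshake with `ssl_verify_client optional` and then rejects unverified clients per request:)

```nginx
server {
    listen 8889;
    ssl on;
    ssl_certificate        ssl/server.crt;
    ssl_certificate_key    ssl/server.key;
    ssl_client_certificate ssl/ca.crt;
    ssl_verify_client optional;   # handshake succeeds even without a cert

    location / {
        # $ssl_client_verify is "SUCCESS" only for a verified client cert
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
        proxy_pass http://192.168.10.123:8080;
    }
}
```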
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235909,235913#msg-235913

From mdounin at mdounin.ru Tue Feb 5 10:19:48 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Feb 2013 14:19:48 +0400
Subject: HTTP/1.1 505 HTTP Version Not Supported Server: nginx/1.2.6
In-Reply-To: References: Message-ID: <20130205101948.GS40753@mdounin.ru>

Hello!

On Mon, Feb 04, 2013 at 07:54:44PM -0500, rtsai wrote: > I am using nginx as a ssl offloader in front of HAProxy. The issue I am > having is I get a 505 error on one of the calls I make to Nginx/haproxy. If > I make that same call to haproxy directory, I get a 200.

To understand what is going on, you first of all have to debug what happens on the wire, either with tcpdump/wireshark, or using the nginx debug log; see http://nginx.org/en/docs/debugging_log.html.

-- Maxim Dounin http://nginx.com/support.html

From r at roze.lv Tue Feb 5 13:01:28 2013
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 5 Feb 2013 15:01:28 +0200
Subject: HTTP/1.1 505 HTTP Version Not Supported Server: nginx/1.2.6
In-Reply-To: References: Message-ID: <084D80AB53634CB0AB5337ED517C735F@MasterPC>

> I am using nginx as a ssl offloader in front of HAProxy.

It is a bit off topic, but the 1.5.x tree of haproxy has built-in SSL support (and it works fine), so that's one less moving part in the network setup.

rr

From aweber at comcast.net Tue Feb 5 13:45:17 2013
From: aweber at comcast.net (AJ Weber)
Date: Tue, 05 Feb 2013 08:45:17 -0500
Subject: MaxMind-GeoIP Question
Message-ID: <51110CED.10303@comcast.net>

I see older pages referencing the ability to use geoip_org (I assume if you purchase MaxMind's Organization DB), but the latest documentation that I looked at only specifies country/city directives.

Can anyone confirm whether recent versions of Nginx (like 1.3.9 and later) still allow the organization directives when the geoip module is compiled in?
Thanks, AJ

From igor at sysoev.ru Tue Feb 5 13:49:43 2013
From: igor at sysoev.ru (Igor Sysoev)
Date: Tue, 5 Feb 2013 17:49:43 +0400
Subject: MaxMind-GeoIP Question
In-Reply-To: <51110CED.10303@comcast.net> References: <51110CED.10303@comcast.net>
Message-ID: <886E6338-77FE-43B1-A5E5-F30E57A33178@sysoev.ru>

On Feb 5, 2013, at 17:45 , AJ Weber wrote: > I see older pages referencing the ability to use geoip_org (I assume if you purchase MaxMind's Organization DB), but the latest documentation that I looked at only specifies country/city directives. > > Can anyone confirm whether the recent versions of Nginx (like 1.3.9 and later) still allow the organization directives when the geoip module is compiled-in?

http://nginx.org/en/docs/http/ngx_http_geoip_module.html#geoip_org

-- Igor Sysoev http://nginx.com/support.html

From dast at c-base.org Tue Feb 5 14:08:03 2013
From: dast at c-base.org (dast@c-base)
Date: Tue, 5 Feb 2013 15:08:03 +0100
Subject: Problem with client_max_body_size
Message-ID: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org>

Hi, I want to use Nginx with apache2 and mod_dav_svn for hosting my SVN repository via HTTPS, but I have problems when committing large files.

On an 8MB ffmpeg binary commit, my SVN client reports this error:

Commit failed (details follow):
Server sent unexpected return value (413 Request Entity Too Large) in response to PUT request for '/svn/repo1/!svn/wrk/b2f0560a-05fd-427c-9039-d47dea9ff9c4/path/ffmpeg'

The Nginx error log says:

2013/02/05 14:20:25 [error] 22931#0: *2693 client intended to send too large body: 8309431 bytes, client: 93.220.123.123, server: mydomain.com, request: "PUT /svn/repo1/!svn/wrk/b2f0560a-05fd-427c-9039-ababea9ff9c4/path/ffmpeg HTTP/1.1", host: "mydomain.com"

And there is nothing about the request in the Apache logs, so I think nginx blocks the request before it is proxied to Apache.
The requests to Nginx go over HTTPS:

https://public-domain.com/svn/ (nginx) <> routing to http://localhost:8080 (apache2)

My Nginx config already has client_max_body_size 256M; in nginx.conf inside http { }, and in server { } in the vhost site config. But it does not help, or is ignored.

I have searched all the other nginx config files for "client_max_body_size" without success:

#> grep -R 'client_max_body_size' ./*
./nginx.conf: client_max_body_size 256M;
./sites-available/443_mydomain.com: client_max_body_size 256M;
./sites-available/443_mydomain.com: client_max_body_size 256M;
./sites-enabled/443_mydomain.com: client_max_body_size 256M;
./sites-enabled/443_mydomain.com: client_max_body_size 256M;

my site config file:

server {
    listen 443;
    server_name mydomain.com;

    client_max_body_size 256M;

    ssl on;
    ssl_certificate /path/ssl-cert/nginx/mydomain.com.2013-01.cacert.crt;
    ssl_certificate_key /path/ssl-cert/nginx/mydomain.com.2013-01.key;

    access_log /path/logs/nginx.https.mydomain.com.access.log;
    error_log /path/logs/nginx.https.mydomain.com.error.log debug;

    root /path/htdocs/mydomain.com;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location /svn {
        client_max_body_size 256M;
        keepalive_timeout 60;
        include /etc/nginx/proxy_params;
        proxy_pass http://127.0.0.1:8080;
        set $dest $http_destination;
        if ($http_destination ~ "^https://(.+)") {
            set $dest http://$1;
        }
        proxy_set_header Destination $dest;
    }
}

So, what can I check? What is wrong in my config? Why is client_max_body_size ignored? Does client_max_body_size not work with HTTPS? Does client_max_body_size not work for PUT requests?

After 2 days of testing I have no idea what to check. :(

best regards, Daniel.

-------------- next part -------------- An HTML attachment was scrubbed...
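(For what it's worth, `client_max_body_size` is merged per context and the most specific setting wins, so a single location-level value is sufficient once the running nginx has actually loaded the edited file. An illustrative fragment, with arbitrary values:)

```nginx
http {
    client_max_body_size 16m;              # default for all servers below

    server {
        listen 443;
        # inherits 16m unless overridden here or in a location

        location /svn {
            client_max_body_size 256m;     # effective limit for PUTs to /svn
        }
    }
}
```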
URL: From mdounin at mdounin.ru Tue Feb 5 14:22:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Feb 2013 18:22:28 +0400 Subject: nginx-1.3.12 Message-ID: <20130205142227.GY40753@mdounin.ru> Changes with nginx 1.3.12 05 Feb 2013 *) Feature: variables support in the "proxy_bind", "fastcgi_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives. *) Feature: the $pipe, $request_length, $time_iso8601, and $time_local variables can now be used not only in the "log_format" directive. Thanks to Kiril Kalchev. *) Feature: IPv6 support in the ngx_http_geoip_module. Thanks to Gregor Kali?nik. *) Bugfix: in the "proxy_method" directive. *) Bugfix: a segmentation fault might occur in a worker process if resolver was used with the poll method. *) Bugfix: nginx might hog CPU during SSL handshake with a backend if the select, poll, or /dev/poll methods were used. *) Bugfix: the "[crit] SSL_write() failed (SSL:)" error. *) Bugfix: in the "client_body_in_file_only" directive; the bug had appeared in 1.3.9. *) Bugfix: in the "fastcgi_keep_conn" directive. -- Maxim Dounin http://nginx.com/support.html From aweber at comcast.net Tue Feb 5 14:27:37 2013 From: aweber at comcast.net (AJ Weber) Date: Tue, 05 Feb 2013 09:27:37 -0500 Subject: MaxMind-GeoIP Question In-Reply-To: <886E6338-77FE-43B1-A5E5-F30E57A33178@sysoev.ru> References: <51110CED.10303@comcast.net> <886E6338-77FE-43B1-A5E5-F30E57A33178@sysoev.ru> Message-ID: <511116D9.90400@comcast.net> http://nginx.org/en/docs/http/ngx_http_geoip_module.html#geoip_org OK, that's great! I was looking here: http://wiki.nginx.org/HttpGeoipModule Should I always assume that the nginx.org/en/docs... URLs have the most up-to-date information? 
Thx, AJ From dast at c-base.org Tue Feb 5 14:41:08 2013 From: dast at c-base.org (dast@c-base) Date: Tue, 5 Feb 2013 15:41:08 +0100 Subject: Problem with client_max_body_size In-Reply-To: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> References: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> Message-ID: <53821586-1E3C-4D5C-8BED-B1D546D7F929@c-base.org> forgot my version: # >nginx -V nginx version: nginx/1.1.19 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.1.19/debian/modules/chunkin-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/headers-more-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-development-kit --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-http-push --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-lua --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-module 
--add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-progress --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-dav-ext-module Am 05.02.2013 um 15:08 schrieb dast at c-base : > Hi, > > i want to use Nginx with apache2 and mod_dav_svn for hosting my SVN Repository via https. > > But i have problems on commit large files. > ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From black.fledermaus at arcor.de Tue Feb 5 14:51:48 2013 From: black.fledermaus at arcor.de (basti) Date: Tue, 05 Feb 2013 15:51:48 +0100 Subject: Problem with client_max_body_size In-Reply-To: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> References: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> Message-ID: <51111C84.6060409@arcor.de> If your site use PHP so have a look on your php.ini there are 2 param's: upload_max_filesize post_max_size Am 05.02.2013 15:08, schrieb dast at c-base: > Hi, > > i want to use Nginx with apache2 and mod_dav_svn for hosting my SVN > Repository via https. > > But i have problems on commit large files. > > On a 8MB ffmpeg binary commit, my SVN client brings this error: > > Commit failed (details follow): > Server sent unexpected return value (413 Request Entity Too Large) in > response to PUT request for > '/svn/repo1/!svn/wrk/b2f0560a-05fd-427c-9039-d47dea9ff9c4/path/ffmpeg' > > > The Nginx error log says: > > > 2013/02/05 14:20:25 [error] 22931#0: *2693 client intended to send too > large body: 8309431 bytes, client: 93.220.123.123, server: > mydomain.com , request: "PUT > /svn/repo1/!svn/wrk/b2f0560a-05fd-427c-9039-ababea9ff9c4/path/ffmpeg > HTTP/1.1", host: "mydomain.com " > > > And nothing about the request in the apache logs. > So i think the nginx blocks the request, not the proxy to apache. 
> > > The Requests to the Nginx goes over HTTPS: > > https://public-domain.com/svn/ (nginx) <> routing to > http://localhost:8080 (apache2) > > > My Nginx config already has *client_max_body_size 256M;* in the > nginx.conf inside http { } and server { } in the vost site config. > But it does not helps or is ignored. > > i have searched all other nginx configfiles for "client_max_body_size" > without succes: > > *#> grep -R 'client_max_body_size' ./** > ./nginx.conf: client_max_body_size 256M; > ./sites-available/443_mydomain.com : > client_max_body_size 256M; > ./sites-available/443_mydomain.com : > client_max_body_size 256M; > ./sites-enabled/443_mydomain.com : > client_max_body_size 256M; > ./sites-enabled/443_mydomain.com : > client_max_body_size 256M; > > > my site config file: > > > server { > listen 443; > server_name mydomain.com ; > > client_max_body_size 256M; > > ssl on; > ssl_certificate /path/ssl-cert/nginx/mydomain.com.2013-01.cacert.crt; > ssl_certificate_key /path/ssl-cert/nginx/mydomain.com.2013-01.key; > > access_log /path/logs/nginx.https.mydomain.com.access.log; > error_log /path/logs/nginx.https.mydomain.com.error.log debug; > > root /path/htdocs/mydomain.com ; > index index.php index.html; > > location / { > try_files $uri $uri/ /index.php; > } > > location /svn { > client_max_body_size 256M; > keepalive_timeout 60; > include /etc/nginx/proxy_params; > proxy_pass http://127.0.0.1:8080 ; > set $dest $http_destination; > if ($http_destination ~ "^https://(.+)") { > set $dest http://$1; > } > proxy_set_header Destination $dest; > } > > } > > > So, what can i check? > What is wrong in my config? > Why is client_max_body_size ignored? > Does client_max_body_size not work on https? > Does client_max_body_size not work on PUT requests? > > After 2 days of testing i hav no idea that to check. :( > > best regards, > Daniel. 
> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From dast at c-base.org Tue Feb 5 15:02:47 2013 From: dast at c-base.org (dast@c-base) Date: Tue, 5 Feb 2013 16:02:47 +0100 Subject: Problem with client_max_body_size In-Reply-To: <51111C84.6060409@arcor.de> References: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> <51111C84.6060409@arcor.de> Message-ID: Hi Basti, thanks for ur answer. no - its no PHP involved. Its only proxy to apache localhost:8080 where the mod_dav_svn handles the request. best regards, daniel. Am 05.02.2013 um 15:51 schrieb basti : > If your site use PHP so have a look on your php.ini > > there are 2 param's: > > upload_max_filesize > post_max_size > > > > Am 05.02.2013 15:08, schrieb dast at c-base: >> >> Hi, >> >> i want to use Nginx with apache2 and mod_dav_svn for hosting my SVN Repository via https. >> >> But i have problems on commit large files. >> >> On a 8MB ffmpeg binary commit, my SVN client brings this error: >> Commit failed (details follow): >> Server sent unexpected return value (413 Request Entity Too Large) in response to PUT request for '/svn/repo1/!svn/wrk/b2f0560a-05fd-427c-9039-d47dea9ff9c4/path/ffmpeg' >> >> >> The Nginx error log says: >> >> >> 2013/02/05 14:20:25 [error] 22931#0: *2693 client intended to send too large body: 8309431 bytes, client: 93.220.123.123, server: mydomain.com, request: "PUT /svn/repo1/!svn/wrk/b2f0560a-05fd-427c-9039-ababea9ff9c4/path/ffmpeg HTTP/1.1", host: "mydomain.com" >> >> >> And nothing about the request in the apache logs. >> So i think the nginx blocks the request, not the proxy to apache. 
>> >> >> The Requests to the Nginx goes over HTTPS: >> >> https://public-domain.com/svn/ (nginx) <> routing to http://localhost:8080 (apache2) >> >> >> My Nginx config already has client_max_body_size 256M; in the nginx.conf inside http { } and server { } in the vost site config. >> But it does not helps or is ignored. >> >> i have searched all other nginx configfiles for "client_max_body_size" without succes: >> >> #> grep -R 'client_max_body_size' ./* >> ./nginx.conf: client_max_body_size 256M; >> ./sites-available/443_mydomain.com: client_max_body_size 256M; >> ./sites-available/443_mydomain.com: client_max_body_size 256M; >> ./sites-enabled/443_mydomain.com: client_max_body_size 256M; >> ./sites-enabled/443_mydomain.com: client_max_body_size 256M; >> >> >> my site config file: >> >> >> server { >> listen 443; >> server_name mydomain.com; >> >> client_max_body_size 256M; >> >> ssl on; >> ssl_certificate /path/ssl-cert/nginx/mydomain.com.2013-01.cacert.crt; >> ssl_certificate_key /path/ssl-cert/nginx/mydomain.com.2013-01.key; >> >> access_log /path/logs/nginx.https.mydomain.com.access.log; >> error_log /path/logs/nginx.https.mydomain.com.error.log debug; >> >> root /path/htdocs/mydomain.com; >> index index.php index.html; >> >> location / { >> try_files $uri $uri/ /index.php; >> } >> >> location /svn { >> client_max_body_size 256M; >> keepalive_timeout 60; >> include /etc/nginx/proxy_params; >> proxy_pass http://127.0.0.1:8080; >> set $dest $http_destination; >> if ($http_destination ~ "^https://(.+)") { >> set $dest http://$1; >> } >> proxy_set_header Destination $dest; >> } >> >> } >> >> >> So, what can i check? >> What is wrong in my config? >> Why is client_max_body_size ignored? >> Does client_max_body_size not work on https? >> Does client_max_body_size not work on PUT requests? >> >> After 2 days of testing i hav no idea that to check. :( >> >> best regards, >> Daniel. 
>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 5 15:08:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Feb 2013 19:08:02 +0400 Subject: Problem with client_max_body_size In-Reply-To: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> References: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> Message-ID: <20130205150802.GC40753@mdounin.ru> Hello! On Tue, Feb 05, 2013 at 03:08:03PM +0100, dast at c-base wrote: [...] > So, what can I check? > What is wrong in my config? > Why is client_max_body_size ignored? > Does client_max_body_size not work on https? > Does client_max_body_size not work on PUT requests? > > After 2 days of testing I have no idea what to check. :( Try starting with "nginx -t". Or, more precisely, making sure the configuration you have on disk is actually loaded into nginx. I suspect your problem is that there is some error in your config, and nginx refuses to reload the configuration because of it (nginx will complain in the global error log in such a case, but this has proven easy to overlook). -- Maxim Dounin http://nginx.com/support.html From dast at c-base.org Tue Feb 5 15:23:08 2013 From: dast at c-base.org (dast@c-base) Date: Tue, 5 Feb 2013 16:23:08 +0100 Subject: Problem with client_max_body_size In-Reply-To: <20130205150802.GC40753@mdounin.ru> References: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> <20130205150802.GC40753@mdounin.ru> Message-ID: Hi Maxim, I have checked /var/log/syslog - but no entries about nginx. Same in /var/log/nginx/error.log. 
I only found this in the nginx error.log on restart: 2013/02/05 16:21:13 [info] 26394#0: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:80 # >/etc/init.d/nginx configtest Testing nginx configuration: Enter PEM pass phrase: nginx. # >echo $? 0 and # >nginx -t Enter PEM pass phrase: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Any ideas? best regards, Daniel. On 05.02.2013 at 16:08, Maxim Dounin wrote: > Try starting with "nginx -t". Or, more precisely, making sure the > configuration you have on disk is actually loaded into nginx. > > I suspect your problem is that there is some error in your config, > and nginx refuses to reload the configuration because of it (nginx will > complain in the global error log in such a case, but this has proven > easy to overlook). -------------- next part -------------- An HTML attachment was scrubbed... URL: From WBrown at e1b.org Tue Feb 5 15:38:35 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Tue, 5 Feb 2013 10:38:35 -0500 Subject: Newbie question on ip_hash Message-ID: Why does ip_hash only use the first 3 octets of the IP address? The reason I ask is that we run web servers for a number of schools. Each school is going to be on its own subnet, ranging from a /24 to a /20 in size. Since ip_hash will lump everyone from a /24 in the same hash, it will direct them to the same server, correct? If I am correct above, is there any way to create persistent connections based on the full IPv4 address? -- William Brown Core Hosted Application Technical Team and Messaging Team Technology Services, WNYRIC, Erie 1 BOCES Confidentiality Notice: This electronic message and any attachments may contain confidential or privileged information, and is intended only for the individual or entity identified above as the addressee. 
If you are not the addressee (or the employee or agent responsible to deliver it to the addressee), or if this message has been addressed to you in error, you are hereby notified that you may not copy, forward, disclose or use any part of this message or any attachments. Please notify the sender immediately by return e-mail or telephone and delete this message from your system. From mdounin at mdounin.ru Tue Feb 5 15:42:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Feb 2013 19:42:38 +0400 Subject: Problem with client_max_body_size In-Reply-To: References: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> <20130205150802.GC40753@mdounin.ru> Message-ID: <20130205154237.GE40753@mdounin.ru> Hello! On Tue, Feb 05, 2013 at 04:23:08PM +0100, dast at c-base wrote: > Hi Maxim, > > I have checked /var/log/syslog - but no entries about nginx. > Same in /var/log/nginx/error.log. > > I only found this in the nginx error.log on restart: > > 2013/02/05 16:21:13 [info] 26394#0: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:80 > > > # >/etc/init.d/nginx configtest > Testing nginx configuration: Enter PEM pass phrase: > nginx. > # >echo $? > 0 > > and > > # >nginx -t > Enter PEM pass phrase: > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful > > any ideas? Try showing the full nginx config you use (that is, nginx.conf and all included files). 
-- Maxim Dounin http://nginx.com/support.html From dast at c-base.org Tue Feb 5 16:09:58 2013 From: dast at c-base.org (dast@c-base) Date: Tue, 5 Feb 2013 17:09:58 +0100 Subject: Problem with client_max_body_size In-Reply-To: <20130205154237.GE40753@mdounin.ru> References: <7683FB94-FFD0-46A8-9E0A-5DF67690176E@c-base.org> <20130205150802.GC40753@mdounin.ru> <20130205154237.GE40753@mdounin.ru> Message-ID: <195373C4-D1D2-4BD5-81A4-3E64136F65CC@c-base.org> WTF, it works now :( What I have done: - backed up /etc/nginx - removed all commented lines in the config files in /etc/nginx - stopped nginx - started nginx (from now on nginx asks twice for the ssl-cert passphrase, on restart too; before, it only asked once for the passphrase on restart) - tested an svn commit - and now committing the 8MB file works fine Now I was confused. I restored the backed-up config (/etc/nginx) and restarted nginx - still works. Stopped and started nginx - still works. So: can it be that "/etc/init.d/nginx restart" did not load my changed config, and that a stop+start loaded the new config, so that it now uses the client_max_body_size setting? In all my tests over the last 2 days I only used the "restart" command - not stop+start. best regards, Daniel. On 05.02.2013 at 16:42, Maxim Dounin wrote: > Try showing the full nginx config you use (that is, nginx.conf and > all included files). From ru at nginx.com Tue Feb 5 17:25:18 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 5 Feb 2013 21:25:18 +0400 Subject: MaxMind-GeoIP Question In-Reply-To: <511116D9.90400@comcast.net> References: <51110CED.10303@comcast.net> <886E6338-77FE-43B1-A5E5-F30E57A33178@sysoev.ru> <511116D9.90400@comcast.net> Message-ID: <20130205172518.GE49024@lo0.su> On Tue, Feb 05, 2013 at 09:27:37AM -0500, AJ Weber wrote: > http://nginx.org/en/docs/http/ngx_http_geoip_module.html#geoip_org > > OK, that's great! I was looking here: http://wiki.nginx.org/HttpGeoipModule > > Should I always assume that the nginx.org/en/docs... 
URLs have the most > up-to-date information? These are official docs. We *try* to keep them up-to-date. From nginx-forum at nginx.us Tue Feb 5 20:51:23 2013 From: nginx-forum at nginx.us (rtsai) Date: Tue, 05 Feb 2013 15:51:23 -0500 Subject: HTTP/1.1 505 HTTP Version Not Supported Server: nginx/1.2.6 In-Reply-To: <084D80AB53634CB0AB5337ED517C735F@MasterPC> References: <084D80AB53634CB0AB5337ED517C735F@MasterPC> Message-ID: <6258c6bdc093ab9c59e0ad531a09ca4d.NginxMailingListEnglish@forum.nginx.org> Thanks, I'll give that a shot. I assume HAProxy 1.5 is still in dev. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235903,235951#msg-235951 From wenwei.li at dianping.com Wed Feb 6 03:23:24 2013 From: wenwei.li at dianping.com (=?GB2312?B?wO7OxM6w?=) Date: Wed, 6 Feb 2013 11:23:24 +0800 Subject: how proxy_next_upstream control retry times Message-ID: hi, now I am looking for a 50x retry method, config like this: upstream jboss8080 { server 10.1.2.164:8080 weight=1 max_fails=1 fail_timeout=2s; server 10.1.2.174:8080 weight=1 max_fails=1 fail_timeout=2s; server 10.1.2.209:8080 weight=1 max_fails=1 fail_timeout=2s; server 10.1.7.136:8080 weight=1 max_fails=1 fail_timeout=2s; server 10.1.7.137:8080 weight=1 max_fails=1 fail_timeout=2s; server 10.1.7.138:8080 weight=1 max_fails=1 fail_timeout=2s; } server { ........ location / { proxy_next_upstream http_500 http_502 http_503 http_504 timeout error invalid_header; ....... if ( !-f $request_filename ) { proxy_pass http://jboss8080; break; } } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } So how does proxy_next_upstream control the number of retries? 
By the way, I used error_page, config like this: upstream backend { server localhost:8080 weight=5; } upstream backup1 { server localhost:8081 weight=5; } upstream backup2 { server localhost:8082 weight=5; } server { listen 80; server_name localhost; proxy_intercept_errors on; location / { error_page 502 @backup1; proxy_pass http://backend; } location @backup1 { error_page 502 @backup2; proxy_pass http://backup1; } location @backup2 { proxy_pass http://backup2; } } @backup1 works, but @backup2 doesn't. How can I make @backup2 work? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 6 06:57:27 2013 From: nginx-forum at nginx.us (longnt) Date: Wed, 06 Feb 2013 01:57:27 -0500 Subject: shared memory zone "media" conflicts with already declared size 0 In-Reply-To: <20090929134604.GR1229@mdounin.ru> References: <20090929134604.GR1229@mdounin.ru> Message-ID: <06c1a04f6e536dd0ae23f7147d96454e.NginxMailingListEnglish@forum.nginx.org> So cool! I tested it and it's fixed, thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,9716,235955#msg-235955 From steffen.weber at gmail.com Wed Feb 6 10:00:42 2013 From: steffen.weber at gmail.com (Steffen Weber) Date: Wed, 6 Feb 2013 11:00:42 +0100 Subject: fastcgi_keep_conn + PHP-FPM Message-ID: The changelog of nginx 1.3.12 mentions a bugfix in the "fastcgi_keep_conn" directive. Therefore I decided to give this feature a try. 
Snippet from nginx.conf: fastcgi_keep_conn on; upstream php { server 127.0.0.1:9000; keepalive 6; } Snippet from php-fpm.conf: listen = 127.0.0.1:9000 pm = static pm.max_children = 4 pm.max_requests = 5 # very low for demo purpose Now I run the following command: while true; do wget http://localhost/test.php -O- > /dev/null; done After 5 requests (pm.max_requests = 5) wget hangs and nginx logs the following error: 2013/02/06 10:47:09 [error] 6795#0: *6 readv() failed (104: Connection reset by peer) while reading upstream, client: ::ffff:127.0.0.1, server: localhost, request: "GET /test.php HTTP/1.1", upstream: "fastcgi:// 127.0.0.1:9000", host: "localhost" It seems that nginx cannot cope with a restarting php-fpm process. Anything I can do? Thanks, Steffen -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 6 10:47:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Feb 2013 14:47:28 +0400 Subject: how proxy_next_upstream control retry times In-Reply-To: References: Message-ID: <20130206104728.GM40753@mdounin.ru> Hello! On Wed, Feb 06, 2013 at 11:23:24AM +0800, ??? wrote: > hi? > now i am looking for a 50x retry method, config like this: > > > > upstream jboss8080 { > server 10.1.2.164:8080 weight=1 max_fails=1 > fail_timeout=2s; > server 10.1.2.174:8080 weight=1 max_fails=1 > fail_timeout=2s; > server 10.1.2.209:8080 weight=1 max_fails=1 > fail_timeout=2s; > server 10.1.7.136:8080 weight=1 max_fails=1 > fail_timeout=2s; > server 10.1.7.137:8080 weight=1 max_fails=1 > fail_timeout=2s; > server 10.1.7.138:8080 weight=1 max_fails=1 > fail_timeout=2s; > } > > server { > ........ > location / { > proxy_next_upstream http_500 http_502 http_503 http_504 timeout error > invalid_header; > ....... 
> if ( !-f $request_filename ) { > proxy_pass http://jboss8080; > break; > } > } > > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root html; > } > } > > > So how does proxy_next_upstream control the number of retries? Quote from http://nginx.org/r/upstream: : If an error occurs when communicating with the server, a request : will be passed to the next server, and so on until all of the : functioning servers will be tried. If a successful response could : not be obtained from any of the servers, the client will be : returned the result of contacting the last server. That is, errors are handled as they appear, and there are no retry-specific controls except proxy_next_upstream itself. In the worst case all servers will be tried. > By the way, I used error_page, config like this: > > > upstream backend { > server localhost:8080 weight=5; > } > > upstream backup1 { > server localhost:8081 weight=5; > } > > upstream backup2 { > server localhost:8082 weight=5; > } > > server { > listen 80; > server_name localhost; > proxy_intercept_errors on; > > location / { > error_page 502 @backup1; > proxy_pass http://backend; > } > > location @backup1 { > error_page 502 @backup2; > proxy_pass http://backup1; > } > > location @backup2 { > proxy_pass http://backup2; > } > } > > @backup1 works, but @backup2 doesn't. > How can I make @backup2 work? http://nginx.org/r/recursive_error_pages -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Wed Feb 6 16:22:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Feb 2013 20:22:34 +0400 Subject: fastcgi_keep_conn + PHP-FPM In-Reply-To: References: Message-ID: <20130206162234.GP40753@mdounin.ru> Hello! On Wed, Feb 06, 2013 at 11:00:42AM +0100, Steffen Weber wrote: > The changelog of nginx 1.3.12 mentions a bugfix in the "fastcgi_keep_conn" > directive. Therefore I decided to give this feature a try. 
> > Snippet from nginx.conf: > > fastcgi_keep_conn on; > > upstream php { > server 127.0.0.1:9000; > keepalive 6; > } > > Snippet from php-fpm.conf: > > listen = 127.0.0.1:9000 > pm = static > pm.max_children = 4 > pm.max_requests = 5 # very low for demo purpose > > Now I run the following command: > > while true; do wget http://localhost/test.php -O- > /dev/null; done > > After 5 requests (pm.max_requests = 5) wget hangs and nginx logs the > following error: > > 2013/02/06 10:47:09 [error] 6795#0: *6 readv() failed (104: Connection > reset by peer) while reading upstream, client: ::ffff:127.0.0.1, server: > localhost, request: "GET /test.php HTTP/1.1", upstream: "fastcgi:// > 127.0.0.1:9000", host: "localhost" > > It seems that nginx cannot cope with a restarting php-fpm process. Anything > I can do? From the error message it looks like php-fpm closed the connection uncleanly (socket closed with unread data in the socket buffer?). No idea about the details as I can't reproduce it here on FreeBSD (connection reset on close if there is unread data in the socket buffer is Linux-specific), but I don't think there is room for improvement on the nginx side. You may want to ask someone from the php-fpm side to look into this (anight, are you here?). -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Wed Feb 6 16:47:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Feb 2013 20:47:22 +0400 Subject: Newbie question on ip_hash In-Reply-To: References: Message-ID: <20130206164722.GR40753@mdounin.ru> Hello! On Tue, Feb 05, 2013 at 10:38:35AM -0500, WBrown at e1b.org wrote: > Why does ip_hash only use the first 3 octets of the IP address? > > The reason I ask is that we run web servers for a number of schools. Each > school is going to be on its own subnet, ranging from a /24 to a /20 in > size. Since ip_hash will lump everyone from a /24 in the same hash, it > will direct them to the same server, correct? Yes. 
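As a quick illustration (a shell sketch, not nginx's actual hash code): any two client addresses that share their first three octets produce the same key, and therefore map to the same backend.

```shell
# Sketch of the keying: strip the last octet and compare what remains.
# Clients that share the first three octets get the same key, and
# with ip_hash that means the same upstream server.
for ip in 10.1.2.5 10.1.2.200 10.1.3.5; do
  key="${ip%.*}"   # -> "10.1.2", "10.1.2", "10.1.3"
  echo "$ip -> key $key"
done
```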
The ip_hash balancing was designed to work with internet services, and use of /24 networks allows it to keep users from migrating between backend servers as they get new IP address on reconnect/reboot (typically from the same /24 network, at least at the time ip_hash was introduced) while still providing good distribution between backend servers. This probably isn't very useful nowadays, but this is how it works. > If I am correct above, is there any way to create persistent connections > based on the full IPv4 address? There is a number of 3rd party modules available which do hash calculation based on arbitrary variables, and these may be used if you need a hash based on full client's IPv4 address (there is $remote_addr variable). -- Maxim Dounin http://nginx.com/support.html From WBrown at e1b.org Wed Feb 6 17:00:05 2013 From: WBrown at e1b.org (WBrown at e1b.org) Date: Wed, 6 Feb 2013 12:00:05 -0500 Subject: Newbie question on ip_hash In-Reply-To: <20130206164722.GR40753@mdounin.ru> References: <20130206164722.GR40753@mdounin.ru> Message-ID: Maxim wrote on 02/06/2013 11:47:22 AM: > The ip_hash balancing was designed to work with internet services, > and use of /24 networks allows it to keep users from migrating > between backend servers as they get new IP address on > reconnect/reboot (typically from the same /24 network, at least at > the time ip_hash was introduced) while still providing good > distribution between backend servers. This probably isn't very > useful nowadays, but this is how it works. Thank you for the explanation. > > If I am correct above, is there any way to create persistent connections > > based on the full IPv4 address? > > There is a number of 3rd party modules available which do hash > calculation based on arbitrary variables, and these may be used if > you need a hash based on full client's IPv4 address (there is > $remote_addr variable). 
The backend I am hoping to use nginx for works fine without persistence; I was just thinking it would help with troubleshooting by keeping all of a user's activity on one server. That way I would have one log to check. I will look into those modules. From primoz at slo-tech.com Wed Feb 6 17:24:28 2013 From: primoz at slo-tech.com (Primoz Bratanic) Date: Wed, 6 Feb 2013 18:24:28 +0100 Subject: RSA+DSA+ECC bundles Message-ID: <006701ce048e$d1b9e090$752da1b0$@slo-tech.com> Hi, Apache supports specifying multiple certificates (of different types) for the same host, in line with OpenSSL support (RSA, DSA, ECC). This allows using ECC key exchange methods with clients that support it, and it's backwards compatible. I wonder how much work it would be to add support for this to nginx. Is it just a matter of allowing 2-3 certificates to be specified (and checking that they have different key types) plus adding support for returning the proper chain, or are there any other obvious roadblocks (that are not obvious to me)? Thanks, Primoz From nginx-forum at nginx.us Wed Feb 6 18:45:00 2013 From: nginx-forum at nginx.us (automatix) Date: Wed, 06 Feb 2013 13:45:00 -0500 Subject: cache configuration In-Reply-To: <201301282155.14117.vbart@nginx.com> References: <201301282155.14117.vbart@nginx.com> Message-ID: Hello! I have to reopen this topic, because the issue is back... I have a basic vhost file that defines the root folder for a host as: # file ax-common-vhost server { if ($host ~ ^(?<area>.+)\.(?<project>.+)\.loc$) { set $folder "$area/$project"; } ... root /var/www/$folder/; } Now I want to add another basic vhost file for Zend Framework projects. There the webroot should be in the folder "public" directly under the root. 
So I created a copy of my basic vhost file with a new root rule: # ax-zf-vhost server { if ($host ~ ^(?<area>.+)\.(?<project>.+)\.loc$) { set $folder "$area/$project"; } ... root /var/www/$folder/public/; } # file zf1sandbox.sandbox.loc include /etc/nginx/sites-available/ax-zf-vhost; But it's not working. The server tries to open the path of the zf1sandbox project root, cannot find any index file there and throws a 403 error. I've already set the sendfile setting to "off" in all relevant files/segments: # file nginx.conf http { ... sendfile off; ... } # file ax-zf-vhost server { location / { index index.html index.php; sendfile off; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235971#msg-235971 From nginx-forum at nginx.us Wed Feb 6 18:55:12 2013 From: nginx-forum at nginx.us (automatix) Date: Wed, 06 Feb 2013 13:55:12 -0500 Subject: cache configuration In-Reply-To: References: <201301282155.14117.vbart@nginx.com> Message-ID: Sorry, my mistake. It's another problem. The server is not analyzing my zf1sandbox.sandbox.loc file, which includes the template for Zend Framework vhosts (include /etc/nginx/sites-available/ax-zf-vhost;), but is using my common template (ax-common-vhost). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235601,235972#msg-235972 From wenwei.li at dianping.com Thu Feb 7 03:07:34 2013 From: wenwei.li at dianping.com (=?GB2312?B?wO7OxM6w?=) Date: Thu, 7 Feb 2013 11:07:34 +0800 Subject: how proxy_next_upstream control retry times In-Reply-To: <20130206104728.GM40753@mdounin.ru> References: <20130206104728.GM40753@mdounin.ru> Message-ID: Thank you very much. With *recursive_error_pages* on, it works. 2013/2/6 Maxim Dounin > Hello! > > On Wed, Feb 06, 2013 at 11:23:24AM +0800, ??? wrote: > > > hi? 
> > now i am looking for a 50x retry method, config like this: > > > > > > > > upstream jboss8080 { > > server 10.1.2.164:8080 weight=1 max_fails=1 > > fail_timeout=2s; > > server 10.1.2.174:8080 weight=1 max_fails=1 > > fail_timeout=2s; > > server 10.1.2.209:8080 weight=1 max_fails=1 > > fail_timeout=2s; > > server 10.1.7.136:8080 weight=1 max_fails=1 > > fail_timeout=2s; > > server 10.1.7.137:8080 weight=1 max_fails=1 > > fail_timeout=2s; > > server 10.1.7.138:8080 weight=1 max_fails=1 > > fail_timeout=2s; > > } > > > > server { > > ........ > > location / { > > proxy_next_upstream http_500 http_502 http_503 http_504 timeout error > > invalid_header; > > ....... > > if ( !-f $request_filename ) { > > proxy_pass http://jboss8080; > > break; > > } > > } > > > > error_page 500 502 503 504 /50x.html; > > location = /50x.html { > > root html; > > } > > } > > > > > > then how proxy_next_upstream control retry times. > > Quote from http://nginx.org/r/upstream: > > : If an error occurs when communicating with the server, a request > : will be passed to the next server, and so on until all of the > : functioning servers will be tried. If a successful response could > : not be obtained from any of the servers, the client will be > : returned the result of contacting the last server. > > That is, errors are handled as they appear, and there is no > retry-specific controls except proxy_next_upstream itself. In a > worst case all servers will be tried. 
> > > by the way, i used error_page, config like this: > > > > > > upstream backend { > > server localhost:8080 weight=5; > > } > > > > upstream backup1 { > > server localhost:8081 weight=5; > > } > > > > upstream backup2 { > > server localhost:8082 weight=5; > > } > > > > server { > > listen 80; > > server_name localhost; > > proxy_intercept_errors on; > > > > location / { > > error_page 502 @backup1; > > proxy_pass http://backend; > > } > > > > location @backup1 { > > error_page 502 @backup2; > > proxy_pass http://backup1; > > } > > > > location @backup2 { > > proxy_pass http://backup2; > > } > > } > > > > @backup1works, but @backup2 doesn't. > > how can i let backup2 works. > > http://nginx.org/r/recursive_error_pages > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- ???? Regards, ??? software engineer ?????-???-????? Tel?(021)53559777-1700 Mobile:15921585268 QQ:363603327 MSN:muzi666boy at hotmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From quandary82 at hailmail.net Thu Feb 7 11:31:02 2013 From: quandary82 at hailmail.net (SirNoSkill) Date: Thu, 07 Feb 2013 03:31:02 -0800 Subject: Is HTTP 1.1 chuncked file encoding on upstream fastcgi servers working for nginx 1.2.4 ? Message-ID: <1360236662.32133.140661188047653.1D881D6E@webmail.messagingengine.com> Hi, I have a question regarding nginx-fastcgi. I use nginx 1.2.4 from here: -------------------------------------------------------------------------------------------------- sudo -s nginx=stable # use nginx=development for latest development version add-apt-repository ppa:nginx/$nginx apt-get update apt-get install nginx -------------------------------------------------------------------------------------------------- The, I use ASP.NET MVC3 on Linux with nginx. 
To do that, I forward requests to nginx via fastcgi-mono-server4. So far it works fine, except for this little problem here: http://stackoverflow.com/questions/14662795/why-do-i-have-unwanted-extra-bytes-at-the-beginning-of-image which I have also forwarded to the mono mailing-list, here: http://mono.1490590.n4.nabble.com/Bug-in-mono-3-0-1-MVC3-File-FileResult-td4658382.html It seems to have something to do with HTTP 1.1's chunked transfer encoding via the fastcgi-mono-server4. So my question to the nginx people: Does this version of nginx (1.2.4) support HTTP 1.1 + chunked transfer encoding, and that also for the fastcgi-upstream servers? Because according to links found via quora, it does. Second, do I have to add any nginx config file variables to make it work? I googled a bit and found these options: chunked_transfer_encoding on; fastcgi_keep_conn on; proxy_http_version 1.1; If so, is this the correct place (see mono mailing list link for complete entry): location / { root /home/danillo/www/HomePage; #index index.html index.htm default.aspx Default.aspx; #fastcgi_index Default.aspx; fastcgi_pass 127.0.0.1:9000; include /etc/nginx/fastcgi_params; chunked_transfer_encoding on; proxy_http_version 1.1; #fastcgi_keep_conn on; } Or are the 3 options mentioned above "ON" by default, so that I do not need to specify them? Am I missing something configuration-wise? Because I just want to rule out the possibility of this actually being an nginx bug or a missing-nginx-configuration-option bug. -- NoSkillz quandary82 at hailmail.net -- http://www.fastmail.fm - Or how I learned to stop worrying and love email again From mdounin at mdounin.ru Thu Feb 7 12:04:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 Feb 2013 16:04:09 +0400 Subject: Is HTTP 1.1 chuncked file encoding on upstream fastcgi servers working for nginx 1.2.4 ? 
In-Reply-To: <1360236662.32133.140661188047653.1D881D6E@webmail.messagingengine.com> References: <1360236662.32133.140661188047653.1D881D6E@webmail.messagingengine.com> Message-ID: <20130207120408.GE66348@mdounin.ru> Hello! On Thu, Feb 07, 2013 at 03:31:02AM -0800, SirNoSkill wrote: > Hi, > > I have a question regarding nginx-fastcgi. > > I use nginx 1.2.4 from here: > -------------------------------------------------------------------------------------------------- > sudo -s > nginx=stable # use nginx=development for latest development version > add-apt-repository ppa:nginx/$nginx > apt-get update > apt-get install nginx > -------------------------------------------------------------------------------------------------- > > Then I use ASP.NET MVC3 on Linux with nginx. > To do that, I forward requests to nginx via fastcgi-mono-server4. > > So far it works fine, except for this little problem here: > http://stackoverflow.com/questions/14662795/why-do-i-have-unwanted-extra-bytes-at-the-beginning-of-image > which I have also forwarded to the mono mailing-list, here: > http://mono.1490590.n4.nabble.com/Bug-in-mono-3-0-1-MVC3-File-FileResult-td4658382.html > > > It seems to have something to do with HTTP 1.1's chunked transfer > encoding via the fastcgi-mono-server4. > So my question to the nginx people: > Does this version of nginx (1.2.4) support HTTP 1.1 + chunked transfer > encoding, and that also for the fastcgi-upstream servers? It's a bad idea to use "Transfer-Encoding" while working via CGI and derived protocols like FastCGI. Quote from RFC 3875, http://tools.ietf.org/html/rfc3875#section-6.3.4: The script MUST NOT return any header fields that relate to client-side communication issues and could affect the server's ability to send the response to the client. As you are talking to nginx via FastCGI, not HTTP, it won't try to dig into the content returned and decode it according to any Transfer-Encoding. 
Instead, the "Transfer-Encoding" header returned will be just dropped by nginx as per RFC 3875. -- Maxim Dounin http://nginx.com/support.html From kworthington at gmail.com Thu Feb 7 16:51:20 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 7 Feb 2013 11:51:20 -0500 Subject: [nginx-announce] nginx-1.3.12 In-Reply-To: <20130205142236.GZ40753@mdounin.ru> References: <20130205142236.GZ40753@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.12 For Windows http://goo.gl/Hfv3V (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream (http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Feb 5, 2013, at 9:22 AM, Maxim Dounin wrote: > Changes with nginx 1.3.12 05 Feb 2013 > > *) Feature: variables support in the "proxy_bind", "fastcgi_bind", > "memcached_bind", "scgi_bind", and "uwsgi_bind" directives. > > *) Feature: the $pipe, $request_length, $time_iso8601, and $time_local > variables can now be used not only in the "log_format" directive. > Thanks to Kiril Kalchev. > > *) Feature: IPv6 support in the ngx_http_geoip_module. > Thanks to Gregor Kališnik. > > *) Bugfix: in the "proxy_method" directive. > > *) Bugfix: a segmentation fault might occur in a worker process if > resolver was used with the poll method. > > *) Bugfix: nginx might hog CPU during SSL handshake with a backend if > the select, poll, or /dev/poll methods were used. > > *) Bugfix: the "[crit] SSL_write() failed (SSL:)" error. > > *) Bugfix: in the "client_body_in_file_only" directive; the bug had > appeared in 1.3.9. > > *) Bugfix: in the "fastcgi_keep_conn" directive.
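[Editor's note: the first changelog item above, variables support in "proxy_bind" and related directives, can be exercised with a configuration along these lines. This is a sketch only; the split_clients mapping and the addresses are illustrative, not part of the announcement.]

```nginx
# Sketch: spread outgoing upstream connections across two local source
# addresses, choosing one per client. Requires nginx 1.3.12+, where
# proxy_bind accepts variables. The IPs below are placeholders.
split_clients "$remote_addr" $bind_ip {
    50%   192.0.2.10;
    *     192.0.2.11;
}

server {
    listen 80;
    location / {
        proxy_bind $bind_ip;
        proxy_pass http://upstream.example.com;
    }
}
```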
> > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce From illuminating.me at gmail.com Fri Feb 8 03:20:35 2013 From: illuminating.me at gmail.com (Fufeng Yao) Date: Fri, 8 Feb 2013 11:20:35 +0800 Subject: set port range for nginx Message-ID: Hi, all I've got an nginx server in an internal network, and the server will forward requests to the outer net using proxy_pass; it looks like: proxy_pass http://[public ip]:[port] Unfortunately, the firewall blocks most of the ports, so the proxy_pass failed. I have two questions: How does proxy_pass choose the port used to forward the request? Does it pick a random port? Would that be possible to set a port range (10000~20000 e.g) for proxy_pass to use? Regards, Yao -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Feb 8 10:05:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Feb 2013 14:05:05 +0400 Subject: set port range for nginx In-Reply-To: References: Message-ID: <20130208100505.GL66348@mdounin.ru> Hello! On Fri, Feb 08, 2013 at 11:20:35AM +0800, Fufeng Yao wrote: > Hi, all > I've got an nginx server in an internal network, and the server will > forward requests to the outer net using proxy_pass; it looks like: > proxy_pass http://[public ip]:[port] > Unfortunately, the firewall blocks most of the ports, so the proxy_pass > failed. > I have two questions: > How does proxy_pass choose the port used to forward the request? Does it pick a random port? The outgoing port (as well as the IP address, unless proxy_bind is used) is selected by your OS. Use your system configuration options to tune the port range used. E.g. on FreeBSD it can be done with the net.inet.ip.portrange.first and net.inet.ip.portrange.last sysctls. On Linux it's tuned with the net.ipv4.ip_local_port_range sysctl or /proc/sys/net/ipv4/ip_local_port_range.
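[Editor's note: on Linux, the tuning described above can be sketched as follows. Assumptions: the 10000-20000 range comes from the question, and applying the sysctl requires root, so those commands are shown commented out; the arithmetic at the end only illustrates how the range size caps concurrent connections to a single upstream address.]

```shell
# Restrict the kernel's ephemeral (source) port range so that outgoing
# proxy_pass connections only use ports the upstream firewall permits.
# Requires root; shown commented out for illustration:
#   sysctl -w net.ipv4.ip_local_port_range="10000 20000"
#   echo "10000 20000" > /proc/sys/net/ipv4/ip_local_port_range

# Each connection to the same upstream ip:port needs a distinct source
# port, so the size of the range bounds concurrent upstream connections:
range="10000 20000"
first=${range% *}   # part before the space
last=${range#* }    # part after the space
echo $((last - first + 1))
```

To persist the setting across reboots, the usual place is a `net.ipv4.ip_local_port_range = 10000 20000` line in /etc/sysctl.conf.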
> Would that be possible to set a port range (10000~20000 e.g) for proxy_pass > to use? In theory, nginx can use bind() syscall to select some particular port, but only one of them, and this doesn't make sense with proxy_pass - as this will not allow more than one connection to the same destination address. That is, tuning the OS as suggested above is the only way to go. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Feb 8 15:01:30 2013 From: nginx-forum at nginx.us (atipico) Date: Fri, 08 Feb 2013 10:01:30 -0500 Subject: shared memory zone "media" conflicts with already declared size 0 In-Reply-To: <4AC0E136.4070309@puffy.pl> References: <4AC0E136.4070309@puffy.pl> Message-ID: <188bdfbd671d842e572900dbf75b267e.NginxMailingListEnglish@forum.nginx.org> I am using nginx version: nginx/1.2.6 and facing a similar error: Starting nginx: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: [emerg] zero size shared memory zone "limit_per_ip" nginx: configuration file /etc/nginx/nginx.conf test failed invoke-rc.d: initscript nginx, action "start" failed. 
Here's my nginx.conf file: user www-data; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { # Cloudflare set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; real_ip_header CF-Connecting-IP; sendfile on; tcp_nopush on; tcp_nodelay on; types_hash_max_size 2048; client_body_timeout 10; client_header_timeout 10; keepalive_timeout 10; send_timeout 10; #limit_zone limit_per_ip $binary_remote_addr 16m; server_tokens off; #charset utf-8; expires -1; #A negative time sets the Cache-Control header to no-cache client_max_body_size 10m; client_body_buffer_size 128k; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; gzip_vary on; #you instruct proxies to store both a compressed and uncompressed version of the content # gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Thanks for any help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,9716,236021#msg-236021 From steffen.weber at gmail.com Fri Feb 8 16:49:03 2013 From: steffen.weber at gmail.com (Steffen Weber) Date: Fri, 8 Feb 2013 17:49:03 +0100 Subject: fastcgi_keep_conn + PHP-FPM In-Reply-To: <20130206162234.GP40753@mdounin.ru> References: <20130206162234.GP40753@mdounin.ru> Message-ID: Yes, might be a problem in PHP (I'm using 5.4.11). 
Maybe these two PHP bugs are related: - https://bugs.php.net/bug.php?id=60961 - https://bugs.php.net/bug.php?id=63395 On Wed, Feb 6, 2013 at 5:22 PM, Maxim Dounin wrote: > Hello! > > On Wed, Feb 06, 2013 at 11:00:42AM +0100, Steffen Weber wrote: > >> The changelog of nginx 1.3.12 mentions a bugfix in the "fastcgi_keep_conn" >> directive. Therefore I decided to give this feature a try. >> >> Snippet from nginx.conf: >> >> fastcgi_keep_conn on; >> >> upstream php { >> server 127.0.0.1:9000; >> keepalive 6; >> } >> >> Snippet from php-fpm.conf: >> >> listen = 127.0.0.1:9000 >> pm = static >> pm.max_children = 4 >> pm.max_requests = 5 # very low for demo purpose >> >> Now I run the following command: >> >> while true; do wget http://localhost/test.php -O- > /dev/null; done >> >> After 5 requests (pm.max_requests = 5) wget hangs and nginx logs the >> following error: >> >> 2013/02/06 10:47:09 [error] 6795#0: *6 readv() failed (104: Connection >> reset by peer) while reading upstream, client: ::ffff:127.0.0.1, server: >> localhost, request: "GET /test.php HTTP/1.1", upstream: "fastcgi:// >> 127.0.0.1:9000", host: "localhost" >> >> It seems that nginx cannot cope with a restarting php-fpm process. Anything >> I can do? > > From the error message it looks like php-fpm closed the connection > uncleanly (socket closed with unread data in the socket buffer?). > > No idea about the details as I can't reproduce it here on FreeBSD > (connection reset on close if there are unread data in the socket > buffer is Linux-specific), but I don't think there is room for > improvement on the nginx side. You may want to ask someone from > the php-fpm side to look into this (anight, are you here?).
> > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Feb 8 16:56:22 2013 From: nginx-forum at nginx.us (shrikeh) Date: Fri, 08 Feb 2013 11:56:22 -0500 Subject: $upstream_http_* variables exist but do not seem to be readable In-Reply-To: <52234cc778514fa0c5ff7c66470ad801.NginxMailingListEnglish@forum.nginx.org> References: <52234cc778514fa0c5ff7c66470ad801.NginxMailingListEnglish@forum.nginx.org> Message-ID: <712a1722721406f7aca54924b5246920.NginxMailingListEnglish@forum.nginx.org> I resolved this issue; the result is in this gist should anyone wish to use it: https://gist.github.com/shrikeh/4722427 Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235808,236023#msg-236023 From mureninc at gmail.com Fri Feb 8 17:39:13 2013 From: mureninc at gmail.com (Constantine A. Murenin) Date: Fri, 8 Feb 2013 09:39:13 -0800 Subject: set port range for nginx In-Reply-To: <20130208100505.GL66348@mdounin.ru> References: <20130208100505.GL66348@mdounin.ru> Message-ID: On 8 February 2013 02:05, Maxim Dounin wrote: > Hello! > > On Fri, Feb 08, 2013 at 11:20:35AM +0800, Fufeng Yao wrote: > >> Hi, all >> I've got an nginx server in an internal network, and the server will >> forward requests to the outer net using proxy_pass; it looks like: >> proxy_pass http://[public ip]:[port] >> Unfortunately, the firewall blocks most of the ports, so the proxy_pass >> failed. >> I have two questions: >> How does proxy_pass choose the port used to forward the request? Does it pick a random port? > > The outgoing port (as well as the IP address, unless proxy_bind is used) is > selected by your OS. Use your system configuration options to > tune the port range used. > > E.g. on FreeBSD it can be done with the net.inet.ip.portrange.first > and net.inet.ip.portrange.last sysctls.
On Linux it's tuned with > net.ipv4.ip_local_port_range sysctl or > /proc/sys/net/ipv4/ip_local_port_range. > >> Would that be possible to set a port range (10000~20000 e.g) for proxy_pass >> to use? > > In theory, nginx can use bind() syscall to select some particular > port, but only one of them, and this doesn't make sense with > proxy_pass - as this will not allow more than one connection to > the same destination address. That is, tuning the OS as suggested > above is the only way to go. Or, alternatively, a local firewall with port translation can be used to ensure that all outgoing ports that are used would be the ones that would pass the upstream firewall. See http://www.openbsd.org/faq/pf/rdr.html for some details, which has a couple of examples of port redirection/translation within the firewall. C. From ru at nginx.com Fri Feb 8 18:45:16 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 8 Feb 2013 22:45:16 +0400 Subject: shared memory zone "media" conflicts with already declared size 0 In-Reply-To: <188bdfbd671d842e572900dbf75b267e.NginxMailingListEnglish@forum.nginx.org> References: <4AC0E136.4070309@puffy.pl> <188bdfbd671d842e572900dbf75b267e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130208184516.GA13040@lo0.su> On Fri, Feb 08, 2013 at 10:01:30AM -0500, atipico wrote: > I am using nginx version: nginx/1.2.6 and facing a similar error: > > Starting nginx: nginx: the configuration file /etc/nginx/nginx.conf syntax > is ok > nginx: [emerg] zero size shared memory zone "limit_per_ip" > nginx: configuration file /etc/nginx/nginx.conf test failed > invoke-rc.d: initscript nginx, action "start" failed. > > Here's my nginx.conf file: > [...] > http { [...] > #limit_zone limit_per_ip $binary_remote_addr 16m; The "limit_per_ip" zone, including its size of 16 megabytes, was originally configured here. > include /etc/nginx/mime.types; [...] 
> include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; At least one of the included files refers to this zone through the "limit_conn" directive. The diagnostics is rather limited in this case, but the message tells you that the size of the shared memory zone "limit_per_ip" is unknown (because the corresponding directive is commented out). JFYI, in modern versions of nginx the "limit_conn_zone" directive should be used instead. See http://nginx.org/r/limit_zone for details. From agentzh at gmail.com Fri Feb 8 20:26:15 2013 From: agentzh at gmail.com (agentzh) Date: Fri, 8 Feb 2013 12:26:15 -0800 Subject: [ANN] ngx_openresty devel version 1.2.6.5 released In-Reply-To: References: Message-ID: Hello, folks! I am happy to announce the new development version of ngx_openresty, 1.2.6.5: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (development) release, 1.2.6.3: * upgraded SrcacheNginxModule to 0.19. * bugfix: HEAD and conditional GET requests would still fall back to content handler execution (leading to backend accesses) even in case of a cache hit. thanks Wang Lichao for reporting this issue. * style: massive coding style fixes. * upgraded LuaRestyUploadLibrary to 0.07. * bugfix: the boundary string could not be parsed if no space was present before the "boundary=xxx" parameter in the "Content-Type" request header. thanks chenshu for reporting this issue. OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. 
The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From nginx-forum at nginx.us Sat Feb 9 01:11:35 2013 From: nginx-forum at nginx.us (mottwsc) Date: Fri, 08 Feb 2013 20:11:35 -0500 Subject: nginx or apache for game Message-ID: <4fc016515da305db189ff274063c949f.NginxMailingListEnglish@forum.nginx.org> We are building a web based game on a LAMP stack, but it could just as well be on a LEMP stack. The game does not involve a lot of typical game stuff - character movement, etc. - like in most video games. Instead, it involves some video clips and some html pages, with updates to content and chat supplied primarily using Ajax and jQuery. You can think of it like the app 'Are you smarter than a 5th grader', but this is an HTML5 web-based application, not an app you download, and it has much more variety and functionality. I've read that nginx is much better for static web pages and that apache is better for processing on the back end. Can anyone provide some guidance as to why it would be better to switch to nginx from apache for this type of game application? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236033,236033#msg-236033 From tim-nginx at bitgems.com Sat Feb 9 07:32:41 2013 From: tim-nginx at bitgems.com (Tim Mensch) Date: Sat, 09 Feb 2013 00:32:41 -0700 Subject: nginx or apache for game In-Reply-To: <4fc016515da305db189ff274063c949f.NginxMailingListEnglish@forum.nginx.org> References: <4fc016515da305db189ff274063c949f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5115FB99.9020905@bitgems.com> On 2/8/2013 6:11 PM, mottwsc wrote: > I've read that nginx is much better for static web pages and that > apache is better for processing on the back end. I'm using OpenResty, and I would say that it gives me more performance than Apache (as an app server) would in ANY configuration. 
Possibly by a factor of 10-40, though I haven't actually done an apples-to-apples benchmark -- and I'm also using LuaJIT now instead of PHP, so that's a HUGE speed advantage by itself (Lua being the FASTEST dynamic language around, and PHP being one of the slowest). I've heard of people getting 70k+ concurrent connections per second using Lua (or LuaJIT maybe) and OpenResty. My low-end VPS can't handle that many connections, but it easily handles 3k+ connections per second, including complex app logic. YMMV. Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-list at puzzled.xs4all.nl Sat Feb 9 08:54:19 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Sat, 09 Feb 2013 09:54:19 +0100 Subject: fastcgi_keep_conn + PHP-FPM In-Reply-To: References: <20130206162234.GP40753@mdounin.ru> Message-ID: <51160EBB.5010808@puzzled.xs4all.nl> On 02/08/2013 05:49 PM, Steffen Weber wrote: > Yes, might be a problem in PHP (I'm using 5.4.11). Maybe these two PHP > bugs are related: > > - https://bugs.php.net/bug.php?id=60961 > - https://bugs.php.net/bug.php?id=63395 Thanks for that info. Looking at the two bug reports and #60961 in particular it's unclear in which 5.3 release (if at all) it's been fixed. Anyone know? Regards, Patrick From nginx-forum at nginx.us Sat Feb 9 18:40:24 2013 From: nginx-forum at nginx.us (Sil68) Date: Sat, 09 Feb 2013 13:40:24 -0500 Subject: worker process (cache manager process) exiting on signal 11 Message-ID: I've compiled nginx 1.3.11 on my RaspberryPI (512MB) running Debian Wheezy/ARM, and all appears to be quite okay, that is until nginx is getting fired up, then it's aborting with some error message. 
*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~* 2013/02/09 19:39:17 [notice] 32149#0: using the "epoll" event method 2013/02/09 19:39:17 [debug] 32149#0: counter: 40157080, 1 2013/02/09 19:39:17 [notice] 32149#0: nginx/1.3.12 2013/02/09 19:39:17 [notice] 32149#0: built by gcc 4.6.3 (Debian 4.6.3-14+rpi1) 2013/02/09 19:39:17 [notice] 32149#0: OS: Linux 3.2.27+ 2013/02/09 19:39:17 [notice] 32149#0: sysctl(KERN_RTSIGMAX): 0 2013/02/09 19:39:17 [notice] 32149#0: getrlimit(RLIMIT_NOFILE): 1024:4096 2013/02/09 19:39:17 [debug] 32151#0: write: 6, BED7FA58, 6, 0 2013/02/09 19:39:17 [debug] 32151#0: setproctitle: "nginx: master process nginx" 2013/02/09 19:39:17 [notice] 32151#0: start worker processes 2013/02/09 19:39:17 [debug] 32151#0: channel 3:6 2013/02/09 19:39:17 [notice] 32151#0: start worker process 32152 2013/02/09 19:39:17 [debug] 32151#0: channel 7:8 2013/02/09 19:39:17 [notice] 32151#0: start cache manager process 32153 2013/02/09 19:39:17 [debug] 32151#0: pass channel s:1 pid:32153 fd:7 to s:0 pid:32152 fd:3 2013/02/09 19:39:17 [debug] 32151#0: channel 9:10 2013/02/09 19:39:17 [notice] 32151#0: start cache loader process 32154 2013/02/09 19:39:17 [debug] 32151#0: pass channel s:2 pid:32154 fd:9 to s:0 pid:32152 fd:3 2013/02/09 19:39:17 [debug] 32151#0: pass channel s:2 pid:32154 fd:9 to s:1 pid:32153 fd:7 2013/02/09 19:39:17 [debug] 32151#0: sigsuspend 2013/02/09 19:39:18 [notice] 32151#0: signal 17 (SIGCHLD) received 2013/02/09 19:39:18 [alert] 32151#0: worker process 32152 exited on signal 11 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [alert] 32151#0: cache manager process 32153 exited on signal 11 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 
[debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [alert] 32151#0: cache loader process 32154 exited on signal 11 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock 2013/02/09 19:39:18 [debug] 32151#0: shmtx forced unlock : : : *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~* 'uname -a' results in *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~* Linux myhost 3.2.27+ #250 PREEMPT Thu Oct 18 19:03:02 BST 2012 armv6l GNU/Linux *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~* 'nginx -V' output *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~* TLS SNI support enabled configure arguments: --prefix=/usr/local --conf-path=/usr/local/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/run/nginx/nginx.lck --user=www --group=www --with-rtsig_module --with-select_module --with-poll_module --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-mail_ssl_module --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre --with-pcre-jit --with-debug 
--add-module=/data/src/nginx/modules/ngx_devel_kit --add-module=/data/src/nginx/modules/nginx-rtmp-module --add-module=/data/src/nginx/modules/lua-nginx-module --add-module=/data/src/nginx/modules/nginx-upload-progress-module --add-module=/data/src/nginx/modules/ngx_auto_lib *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~* Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236043,236043#msg-236043 From mdounin at mdounin.ru Sat Feb 9 21:34:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 10 Feb 2013 01:34:30 +0400 Subject: worker process (cache manager process) exiting on signal 11 In-Reply-To: References: Message-ID: <20130209213430.GQ66348@mdounin.ru> Hello! On Sat, Feb 09, 2013 at 01:40:24PM -0500, Sil68 wrote: > I've compiled nginx 1.3.11 on my RaspberryPI (512MB) running Debian > Wheezy/ARM, and all appears to be quite okay, that is until nginx is getting > fired up, then it's aborting with some error message. [...] > --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre --with-pcre-jit > --with-debug --add-module=/data/src/nginx/modules/ngx_devel_kit > --add-module=/data/src/nginx/modules/nginx-rtmp-module > --add-module=/data/src/nginx/modules/lua-nginx-module > --add-module=/data/src/nginx/modules/nginx-upload-progress-module > --add-module=/data/src/nginx/modules/ngx_auto_lib Are you able to reproduce the problem without 3rd party modules? If yes, please follow the instructions here to obtain a coredump and a backtrace: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Sat Feb 9 22:12:28 2013 From: nginx-forum at nginx.us (piotr.dobrogost) Date: Sat, 09 Feb 2013 17:12:28 -0500 Subject: Upgrading Executable on the Fly - wrong docs? Message-ID: <19de30008465a1410755891b63c84485.NginxMailingListEnglish@forum.nginx.org> Hi! 
After reading the section titled "Upgrading Executable on the Fly" in the docs (at http://nginx.org/en/docs/control.html) I have an impression the information given is wrong. In the first bullet one reads "Send the HUP signal to the old master process. The old process will start new worker processes without re-reading the configuration. (...)" then in the second and third bullet one reads "When the new master process exits, the old master process will start new worker processes." If the old master process already started new worker processes after it had received the HUP signal then it means it didn't have to wait until the new master process exited, right? Doesn't this contradict the subsequent information that the old master process waits with starting new worker processes until after the new master process exited? Regards Piotr Dobrogost Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236047,236047#msg-236047 From nginx-forum at nginx.us Sun Feb 10 10:03:40 2013 From: nginx-forum at nginx.us (Sil68) Date: Sun, 10 Feb 2013 05:03:40 -0500 Subject: worker process (cache manager process) exiting on signal 11 In-Reply-To: <20130209213430.GQ66348@mdounin.ru> References: <20130209213430.GQ66348@mdounin.ru> Message-ID: Just re-added "--add-module=/data/src/nginx/modules/nginx-upload-progress-module" and it's still working! :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236043,236060#msg-236060 From nginx-forum at nginx.us Sun Feb 10 10:17:02 2013 From: nginx-forum at nginx.us (Sil68) Date: Sun, 10 Feb 2013 05:17:02 -0500 Subject: worker process (cache manager process) exiting on signal 11 In-Reply-To: References: <20130209213430.GQ66348@mdounin.ru> Message-ID: That is, I've de-activated all 3rd party modules, re-compiled nginx (1.3.12), and everything was fine! Then, next iteration, adding the upload progress module again, re-compile, test, cool! 
:) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236043,236061#msg-236061 From nginx-forum at nginx.us Sun Feb 10 10:50:10 2013 From: nginx-forum at nginx.us (Sil68) Date: Sun, 10 Feb 2013 05:50:10 -0500 Subject: worker process (cache manager process) exiting on signal 11 In-Reply-To: <20130209213430.GQ66348@mdounin.ru> References: <20130209213430.GQ66348@mdounin.ru> Message-ID: Re-activating this module "--add-module=/data/src/nginx/modules/nginx-rtmp-module" causes the signal 11 terminations Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236043,236062#msg-236062 From steffen.weber at gmail.com Sun Feb 10 12:29:44 2013 From: steffen.weber at gmail.com (Steffen Weber) Date: Sun, 10 Feb 2013 13:29:44 +0100 Subject: fastcgi_keep_conn + PHP-FPM In-Reply-To: <51160EBB.5010808@puzzled.xs4all.nl> References: <20130206162234.GP40753@mdounin.ru> <51160EBB.5010808@puzzled.xs4all.nl> Message-ID: On Sat, Feb 9, 2013 at 9:54 AM, Patrick Lists wrote: > Thanks for that info. Looking at the two bug reports and #60961 in > particular it's unclear in which 5.3 release (if at all) it's been fixed. > Anyone know? Both bugs are still open. From nginx-forum at nginx.us Sun Feb 10 12:44:46 2013 From: nginx-forum at nginx.us (Sil68) Date: Sun, 10 Feb 2013 07:44:46 -0500 Subject: worker process (cache manager process) exiting on signal 11 In-Reply-To: <20130209213430.GQ66348@mdounin.ru> References: <20130209213430.GQ66348@mdounin.ru> Message-ID: <00cd6ecf0d344ee7ad5b2beb4fe170fc.NginxMailingListEnglish@forum.nginx.org> Re-activating "--add-module=/data/src/nginx/modules/lua-nginx-module" works, too! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236043,236064#msg-236064 From nginx-forum at nginx.us Sun Feb 10 13:29:14 2013 From: nginx-forum at nginx.us (Sil68) Date: Sun, 10 Feb 2013 08:29:14 -0500 Subject: worker process (cache manager process) exiting on signal 11 In-Reply-To: <20130209213430.GQ66348@mdounin.ru> References: <20130209213430.GQ66348@mdounin.ru> Message-ID: Re-adding " --add-module=/data/src/nginx/modules/ngx_auto_lib" also works perfectly fine! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236043,236065#msg-236065 From nginx-forum at nginx.us Sun Feb 10 14:15:12 2013 From: nginx-forum at nginx.us (Sil68) Date: Sun, 10 Feb 2013 09:15:12 -0500 Subject: worker process (cache manager process) exiting on signal 11 In-Reply-To: <20130209213430.GQ66348@mdounin.ru> References: <20130209213430.GQ66348@mdounin.ru> Message-ID: <33ec76ca5789117da0d2fe62dac1fbff.NginxMailingListEnglish@forum.nginx.org> Finally re-added "--add-module=/data/src/nginx/modules/ngx_devel_kit" and all is peachy! So apparently there's some issue with the rtmp module. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236043,236066#msg-236066 From nginx-forum at nginx.us Sun Feb 10 14:15:23 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sun, 10 Feb 2013 09:15:23 -0500 Subject: nginx or apache for game In-Reply-To: <5115FB99.9020905@bitgems.com> References: <5115FB99.9020905@bitgems.com> Message-ID: <2e5465efc97e13340373e7569d72818a.NginxMailingListEnglish@forum.nginx.org> At this point for the beta, I'm using PHP - could switch to something else like Lua for a future production version. My best course for right now might be to use nginx on the front end and apache on the back end; could extend nginx to OpenResty, and could possibly use Varnish as well. Anyone with experience in these areas, please comment. Thanks. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236033,236067#msg-236067 From nginx-forum at nginx.us Mon Feb 11 01:06:24 2013 From: nginx-forum at nginx.us (mdwalter) Date: Sun, 10 Feb 2013 20:06:24 -0500 Subject: upstream with vps Message-ID: <55fe9ef73b9697ece516697a8a7e4b6b.NginxMailingListEnglish@forum.nginx.org> Hello guys, sorry for my English. So I have 4 vps: 1. 4gb ram 1 processor 1 ipv4 2. 2gb ram 4 processors 3 ipv4 + 3 ipv6 3. 1gb ram 4 processors 3 ipv4 + 3 ipv6 4. 1gb ram 4 processors 3 ipv4 + 3 ipv6 I thought of this configuration: vps 1 with nginx and fastcgi php 5.4 vps 2 with nginx for upstream and with mysql 5.5 with replicator vps 3 with nginx and fastcgi php 5.4 vps 4 with nginx and mysql 5.5 and replicator First, I wanted to ask whether these vps can hold 500-1000 visitors a day with a dozen domains. I would use vps 2 for the upstream because it has 3 ipv4 addresses available. Here is the configuration of nginx.conf: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; gzip_types text/xml application/xml application/xml+rss text/javascript; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; upstream backend { server 79.xxx.190:80; <-- This is vps1 server 199.xxx.72.100:80; <-- this is vps2 (same vps of this file) } } Now the setup for one domain: server { listen 199.xxx.72.100; root /usr/share/nginx/computereconomy.com; index index.html index.htm; server_name computereconomy.com; location / { try_files $uri $uri/ /index.html; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass
unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } gzip on; ssi on; ssi_silent_errors off; } Do you think a vps can be used for the upstream while also serving the site? Or is there something more (or less) I should do? Is vps 1 fine as a normal nginx setup, or should I configure it differently? If so, how? Thanx!! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236068,236068#msg-236068 From ru at nginx.com Mon Feb 11 07:07:22 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 11 Feb 2013 11:07:22 +0400 Subject: Upgrading Executable on the Fly - wrong docs? In-Reply-To: <19de30008465a1410755891b63c84485.NginxMailingListEnglish@forum.nginx.org> References: <19de30008465a1410755891b63c84485.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130211070722.GA45362@lo0.su> On Sat, Feb 09, 2013 at 05:12:28PM -0500, piotr.dobrogost wrote: > Hi! > > After reading the section titled "Upgrading Executable on the Fly" in the > docs (at http://nginx.org/en/docs/control.html) I have an impression the > information given is wrong. > In the first bullet one reads > "Send the HUP signal to the old master process. The old process will start > new worker processes without re-reading the configuration. (...)" > then in the second and third bullet one reads > "When the new master process exits, the old master process will start new > worker processes." The instructions in the bullets are not supposed to be executed in a sequence. Instead, they document two possible actions to perform: 1) Start old workers with old configuration, then gracefully stop new master/workers (bullet #1). 2) Stop new master/workers immediately (*). Old master will restart workers automatically when new master exits (bullet #2). (*) send KILL to new workers if they don't exit normally (bullet #3). > If the old master process already started new worker processes after it had > received the HUP signal then it means it didn't have to wait until the new > master process exited, right?
Doesn't this contradict the subsequent > information that the old master process waits with starting new worker > processes until after the new master process exited? I can see where your confusion comes from. How's this instead? http://pp.nginx.com/ru/libxslt/en/docs/control.html#upgrade From nginx-forum at nginx.us Mon Feb 11 08:19:27 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 11 Feb 2013 03:19:27 -0500 Subject: nginx or apache for game In-Reply-To: <2e5465efc97e13340373e7569d72818a.NginxMailingListEnglish@forum.nginx.org> References: <5115FB99.9020905@bitgems.com> <2e5465efc97e13340373e7569d72818a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <679b8b240b8319eb7dda57a64e447cc2.NginxMailingListEnglish@forum.nginx.org> tl;dr: yes :) we run nginx as frontend (caching/loadbalancing/waf-functions via naxsi) in front of different setups (rails, tomcat, apache+php) and can recommend it, esp. the caching-function, if used with caution, might be integrated in parts of your application. i'll give a talk on this topic on an oss-conference soon and will release a paper with benchmarks and different implementations of nginx as frontend-gateway. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236033,236071#msg-236071 From nginx-forum at nginx.us Mon Feb 11 08:48:51 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 11 Feb 2013 03:48:51 -0500 Subject: upstream with vps In-Reply-To: <55fe9ef73b9697ece516697a8a7e4b6b.NginxMailingListEnglish@forum.nginx.org> References: <55fe9ef73b9697ece516697a8a7e4b6b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8251187d94b98c2ace09e6a20f2f7b97.NginxMailingListEnglish@forum.nginx.org> you should be able to handle 1000 visitors/day with a modern smartphone :) if your vps are running on the same host i'd suggest stripping down your setup to 2 machines and applying more ram/cpus for each machine, maybe in a hot-standby-scenario with a switchable failover-ip.
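For illustration, the two-machine hot-standby idea above could be sketched like this (the upstream name and addresses are placeholders, not from the original setup; the switchable failover IP itself would be handled outside nginx, e.g. by keepalived/VRRP):

```nginx
# Sketch only: one primary machine plus a hot standby.
upstream app_pool {
    server 10.0.0.1:8080;          # primary
    server 10.0.0.2:8080 backup;   # used only when the primary is down
}

server {
    listen 80;

    location / {
        proxy_pass http://app_pool;
    }
}
```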
regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236068,236072#msg-236072 From rainer at ultra-secure.de Mon Feb 11 09:08:19 2013 From: rainer at ultra-secure.de (Rainer Duffner) Date: Mon, 11 Feb 2013 10:08:19 +0100 Subject: nginx or apache for game In-Reply-To: <679b8b240b8319eb7dda57a64e447cc2.NginxMailingListEnglish@forum.nginx.org> References: <5115FB99.9020905@bitgems.com> <2e5465efc97e13340373e7569d72818a.NginxMailingListEnglish@forum.nginx.org> <679b8b240b8319eb7dda57a64e447cc2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130211100819.6c9519b5@suse3> On Mon, 11 Feb 2013 03:19:27 -0500, "mex" wrote: > tl;dr: yes :) > > we run nginx as frontend (caching/loadbalancing/waf-functions via > naxsi) in front of different setups > (rails, tomcat, apache+php) and can recommend it, esp. the > caching-function, if used with caution, > might be integrated in parts of your application. > > i'll give a talk on this topic on an oss-conference soon and will > release a paper with benchmarks > and different implementations of nginx as frontend-gateway. And which conference would that be? Just out of curiosity? ;-) From nginx-forum at nginx.us Mon Feb 11 09:12:30 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 11 Feb 2013 04:12:30 -0500 Subject: nginx or apache for game In-Reply-To: <20130211100819.6c9519b5@suse3> References: <20130211100819.6c9519b5@suse3> Message-ID: <9f5a8f3a24c38f30ac808cde1bbaf995.NginxMailingListEnglish@forum.nginx.org> > > And which conference would that be? > Just out of curiosity? > ;-) cebit :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236033,236074#msg-236074 From nginx-forum at nginx.us Mon Feb 11 09:49:46 2013 From: nginx-forum at nginx.us (amodpandey) Date: Mon, 11 Feb 2013 04:49:46 -0500 Subject: set $cookie_abc "$cookie_abc"; Message-ID: I have some lua code where I play with the value of ngx.var.cookie_abc. The variable must exist if some assignment is done on it.
To achieve this I did set $cookie_abc "$cookie_abc"; The above line clears the value of $cookie_abc, where in I assumed it to be defaulted if the value already exists. i.e. if request has cookie abc="test" set, the $cookie_abc will have value test. But after executing the above expression the value is set to blank "". Where in it should have been "test". If the cookie value is not present in the request then obviously the value should be blank. Is there a way around it? If not, can it be fixed. Note even this won't work set $tmp_abc "$cookie_abc"; set $cookie_abc "$tmp_abc"; There variable values are referential so even $tmp_abc become blank. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236075,236075#msg-236075 From ru at nginx.com Mon Feb 11 10:12:56 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 11 Feb 2013 14:12:56 +0400 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: References: Message-ID: <20130211101256.GE65322@lo0.su> On Mon, Feb 11, 2013 at 04:49:46AM -0500, amodpandey wrote: > I have some lua code where I play with the value of ngx.var.cookie_abc. The > variable must exist if some assignment is done on it. > > To achieve this I did > > set $cookie_abc "$cookie_abc"; > > The above line clears the value of $cookie_abc, where in I assumed it to be > defaulted if the value already exists. > > i.e. if request has cookie abc="test" set, the $cookie_abc will have value > test. But after executing the above expression the value is set to blank "". > Where in it should have been "test". If the cookie value is not present in > the request then obviously the value should be blank. > > Is there a way around it? If not, can it be fixed. > > Note even this won't work > > set $tmp_abc "$cookie_abc"; > set $cookie_abc "$tmp_abc"; > > There variable values are referential so even $tmp_abc become blank. 
map $cookie_abc $abc { '' default; default $cookie_abc; } will set $abc to the value of $cookie_abc if not empty, or to "default" if the cookie is unset or empty. http://nginx.org/r/map From nginx-forum at nginx.us Mon Feb 11 12:22:23 2013 From: nginx-forum at nginx.us (mdwalter) Date: Mon, 11 Feb 2013 07:22:23 -0500 Subject: upstream with vps In-Reply-To: <8251187d94b98c2ace09e6a20f2f7b97.NginxMailingListEnglish@forum.nginx.org> References: <55fe9ef73b9697ece516697a8a7e4b6b.NginxMailingListEnglish@forum.nginx.org> <8251187d94b98c2ace09e6a20f2f7b97.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks for answering me. The VPSes are all in different datacenters, in Europe and the USA. Yes, I know 1000 visitors in 24h is not a problem for this server: 0.69 visitors every second, but in reality there can be 500 visitors at the same moment; maybe that is the reason I set up the load balancing, and you never know... sometimes a datacenter can go down. I want to know if the back server can work as a front server too. Thanks again for the answer :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236068,236078#msg-236078 From nginx-forum at nginx.us Mon Feb 11 13:01:40 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 11 Feb 2013 08:01:40 -0500 Subject: upstream with vps In-Reply-To: References: <55fe9ef73b9697ece516697a8a7e4b6b.NginxMailingListEnglish@forum.nginx.org> <8251187d94b98c2ace09e6a20f2f7b97.NginxMailingListEnglish@forum.nginx.org> Message-ID: <97d3499fbb6e2d0cf3b498abe3cdab16.NginxMailingListEnglish@forum.nginx.org> depending on your webapp and ability to cache (static) files and content (e.g. the content changes every 10 minutes -> you can deploy a 5 min cache) every server should be able to handle that amount of requests.
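A minimal sketch of such a short-lived cache (the zone name, cache path, sizes, and the backend address are all illustrative assumptions, not from the original posts):

```nginx
# proxy_cache_path belongs in the http{} context.
proxy_cache_path /var/cache/nginx/app levels=1:2 keys_zone=app_cache:10m
                 max_size=256m inactive=15m;

server {
    listen 80;

    location / {
        proxy_cache       app_cache;
        proxy_cache_valid 200 301 5m;     # content changes ~every 10 min, so 5 min is safe
        proxy_pass        http://backend; # hypothetical upstream
    }
}
```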
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236068,236083#msg-236083 From andrejaenisch at googlemail.com Mon Feb 11 13:14:19 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Mon, 11 Feb 2013 14:14:19 +0100 Subject: nginx or apache for game In-Reply-To: <679b8b240b8319eb7dda57a64e447cc2.NginxMailingListEnglish@forum.nginx.org> References: <5115FB99.9020905@bitgems.com> <2e5465efc97e13340373e7569d72818a.NginxMailingListEnglish@forum.nginx.org> <679b8b240b8319eb7dda57a64e447cc2.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2013/2/11 mex : > i'll give a talk on this topic on an oss-conference soon and will release a paper with benchmarks and different implementations of nginx as frontend-gateway. Hello, I would like to ask you to link the papers, when they're online. I'm interested in reading them :) Regards, André From nginx-forum at nginx.us Mon Feb 11 13:41:05 2013 From: nginx-forum at nginx.us (amodpandey) Date: Mon, 11 Feb 2013 08:41:05 -0500 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: <20130211101256.GE65322@lo0.su> References: <20130211101256.GE65322@lo0.su> Message-ID: <940be7978324c18636eaf3e1e2d9da5f.NginxMailingListEnglish@forum.nginx.org> Thank you. But this is not what I am looking for. The question is about defining $cookie_abc and default it to $cookie_abc itself (if the value exists). the map will give me the value in $abc which I am not looking for.
BTW set $cookie_abc "$abc"; will still reset the value of $abc to default Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236075,236085#msg-236085 From andrew at nginx.com Mon Feb 11 14:56:56 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Mon, 11 Feb 2013 18:56:56 +0400 Subject: Proxing webservices (Webservices, WSDL, SOAP) In-Reply-To: <98f2207d4940a759786c6fc8ed53579b.NginxMailingListEnglish@forum.nginx.org> References: <98f2207d4940a759786c6fc8ed53579b.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Jan 30, 2013, at 3:57 PM, pricne5 wrote: > What is the correct way to proxing to any remote webservice? How to use > nginx in front of IIS or other web server, who serves webservices? > > As an example we have any remote SOAP webservice at > http://B:8089/getClientService/getClientService?wsdl. In SOAP document of > these webservice we have endpoint location: > ..... > > binding="tns:getClientServiceBinding"> > location="http://B:8089/getClientService/getClientService"/> > > > ..... > > If we use proxy_pass: > > server { > listen 80; > server_name A; > location /{ > proxy_pass http://B:8089/; > } > > nginx won't rewrite(or change) SOAP's endpoint address to itself, so any > futher SOAP requests will fail, because requesting side makes request to > direct host described at SOAP endpoint location :( You want to change "soap:address" URL on-the-fly to point next SOAP request(s) to http://A:80/getClientService/getClientService, right? Did you try sub_module? 
http://nginx.org/en/docs/http/ngx_http_sub_module.html From nginx-forum at nginx.us Mon Feb 11 15:44:03 2013 From: nginx-forum at nginx.us (jeangouytch) Date: Mon, 11 Feb 2013 10:44:03 -0500 Subject: Nginx seems to ignore proxy_set_header Host directive Message-ID: Hi all I am trying to use nginx to get remote access to various service on my home server, via proxy_pass redirection to various subdirectorys For example, owncloud server with apache, pyload with it's builtin http server, xbmc web interface, ... I access the webserver via a static IP adress, and not via a domain name. Here's a simple conf file for nginx. ######################################################## server { listen 80; ## listen for ipv4; this line is default and implied root /hdd/www; index index.html index.htm; server_name localhost; location /pyload/ { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://192.168.1.100:8001/; } } ######################################################## I think this setup would be straighforward, but for some reason, all relative path are not rewriten and I get 502 error for all images, css and js file. I would be very grateful if someone had a clue to help me understanding what I am doing wrong. Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236099,236099#msg-236099 From vbart at nginx.com Mon Feb 11 16:03:13 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 11 Feb 2013 20:03:13 +0400 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: <940be7978324c18636eaf3e1e2d9da5f.NginxMailingListEnglish@forum.nginx.org> References: <20130211101256.GE65322@lo0.su> <940be7978324c18636eaf3e1e2d9da5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201302112003.13319.vbart@nginx.com> On Monday 11 February 2013 17:41:05 amodpandey wrote: > Thank you. But this is not what I am looking for. 
The question is about > defining $cookie_abc and default it to $cookie_abc itself (if the value > exists). > > the map will give me the value in $abc which I am not looking for. > > BTW set $cookie_abc "$abc"; will still reset the value of $abc to default > All the magic variables $cookie_*, $http_*, $upstream_http_* always exist. By defining "set $cookie_abc" you just hid the original variable, so don't do that. It looks more like a bug in the lua module. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon Feb 11 18:07:18 2013 From: nginx-forum at nginx.us (amodpandey) Date: Mon, 11 Feb 2013 13:07:18 -0500 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: <201302112003.13319.vbart@nginx.com> References: <201302112003.13319.vbart@nginx.com> Message-ID: <3a69e4d51273a480aed429bd1c671c87.NginxMailingListEnglish@forum.nginx.org> The variable $cookie_abc will exist only if the client request cookie has "abc". In my case the first request won't have this cookie set and I am setting this value for some magic. Lua module is very thin on this feature. If we do not have the variable defined you should set it before using. set $cookie_abc "$cookie_abc" If the above works as expected I am good. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236075,236108#msg-236108 From vbart at nginx.com Mon Feb 11 18:17:16 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 11 Feb 2013 22:17:16 +0400 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: <3a69e4d51273a480aed429bd1c671c87.NginxMailingListEnglish@forum.nginx.org> References: <201302112003.13319.vbart@nginx.com> <3a69e4d51273a480aed429bd1c671c87.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201302112217.16268.vbart@nginx.com> On Monday 11 February 2013 22:07:18 amodpandey wrote: > The variable $cookie_abc will exist only if the client request cookie has > "abc". In my case the first request won't have this cookie set and I am > setting this value for some magic. > It *always* exists.
For clients without the cookie it has empty value. > Lua module is very thin on this feature. If we do not have the variable > defined you should set it before using. > > set $cookie_abc "$cookie_abc" > > If the above works as expected I am good. > Yes. It works as expected. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Feb 11 18:21:29 2013 From: nginx-forum at nginx.us (amodpandey) Date: Mon, 11 Feb 2013 13:21:29 -0500 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: <201302112217.16268.vbart@nginx.com> References: <201302112217.16268.vbart@nginx.com> Message-ID: <1691b23b91c90a6460de45c083156ca3.NginxMailingListEnglish@forum.nginx.org> It should not and it does not! If the client does not send a cookie with name "abc" or "def" I do not assume nginx to have any variable with that name. Am I missing anything? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236075,236110#msg-236110 From rkearsley at blueyonder.co.uk Mon Feb 11 18:51:44 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 11 Feb 2013 18:51:44 +0000 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: <1691b23b91c90a6460de45c083156ca3.NginxMailingListEnglish@forum.nginx.org> References: <201302112217.16268.vbart@nginx.com> <1691b23b91c90a6460de45c083156ca3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51193DC0.5010007@blueyonder.co.uk> You should give a better (code test case) example of what you want to do. If you are using lua then I'm sure there will be a solution On 11/02/13 18:21, amodpandey wrote: > It should not and it does not! If the client does not send a cookie with > name "abc" or "def" I do not assume nginx to have any variable with that > name. Am I missing anything?
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236075,236110#msg-236110 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Feb 11 19:09:39 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Feb 2013 19:09:39 +0000 Subject: Nginx seems to ignore proxy_set_header Host directive In-Reply-To: References: Message-ID: <20130211190939.GB32392@craic.sysops.org> On Mon, Feb 11, 2013 at 10:44:03AM -0500, jeangouytch wrote: Hi there, > I am trying to use nginx to get remote access to various service on my home > server, via proxy_pass redirection to various subdirectorys That's fairly standard. Although having different levels of subdirectories on the proxied and the proxy servers usually gets messy, unless the proxied server is very careful. > I access the webserver via a static IP adress, and not via a domain name. The "listen" and "server_name" directives determine which "server" is used for each request. So using an IP address is fine, so long as things are set correctly. (The easy way to ensure that, is to have exactly one server{} block.) > location /pyload/ { > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_pass http://192.168.1.100:8001/; > } > I think this setup would be straighforward, but for some reason, all > relative path are not rewriten and I get 502 error for all images, css and > js file. Which relative paths? How do you want them to be rewritten? What part of the nginx config do you expect will do the rewriting? And how does that match the subject of this mail? proxy_redirect (http://nginx.org/r/proxy_redirect) can do rewriting of http headers, except you've turned it off here. proxy_set_header doesn't do rewriting of anything. Neither does proxy_pass. 
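For example — a sketch only, reusing the backend address from the config quoted above — re-enabling that rewriting would make nginx translate a backend redirect to http://192.168.1.100:8001/login into /pyload/login in the Location header, while leaving URLs inside HTML bodies untouched:

```nginx
location /pyload/ {
    proxy_pass     http://192.168.1.100:8001/;
    # Rewrites only the "Location" and "Refresh" response headers;
    # this mapping is also what the default proxy_redirect would do here.
    proxy_redirect http://192.168.1.100:8001/ /pyload/;
}
```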
> I would be very grateful if someone had a clue to help me understanding what > I am doing wrong. Can you give an example of what you do, what you see, and what you expect to see? "curl -i" is usually a good way of showing the headers and content returned. (*Usually*, it's best if nginx does not mess with the content. You can look at http://nginx.org/r/sub_filter if you think you want to change the content.) f -- Francis Daly francis at daoine.org From agentzh at gmail.com Mon Feb 11 19:47:37 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 11 Feb 2013 11:47:37 -0800 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: References: Message-ID: Hello! On Mon, Feb 11, 2013 at 1:49 AM, amodpandey wrote: > I have some lua code where I play with the value of ngx.var.cookie_abc. The > variable must exist if some assignment is done on it. > > To achieve this I did > > set $cookie_abc "$cookie_abc"; > > The above line clears the value of $cookie_abc, where in I assumed it to be > defaulted if the value already exists. > Are you sure? Which version of ngx_lua and Nginx are you using? I've tried the following minimal example on my side with Nginx 1.2.6 + ngx_lua 0.7.14 and it works as expected: location = /t { content_by_lua ' ngx.say("cookie abc: ", ngx.var.cookie_abc) '; } And let's use curl to access location = /t (assuming the Nginx is listening on the local port 8080): $ curl -H 'Cookie: abc=32' localhost:8080/t cookie abc: 32 $ curl localhost:8080/t cookie abc: nil Please note that your request must actually take a Cookie request header, otherwise the value of $cookie_abc will surely be empty (or nil, to be more accurate). 
Best regards, -agentzh From nginx-forum at nginx.us Mon Feb 11 21:13:28 2013 From: nginx-forum at nginx.us (jeangouytch) Date: Mon, 11 Feb 2013 16:13:28 -0500 Subject: Nginx seems to ignore proxy_set_header Host directive In-Reply-To: <20130211190939.GB32392@craic.sysops.org> References: <20130211190939.GB32392@craic.sysops.org> Message-ID: Hi Francis. Thank you for your help. > Which relative paths? How do you want them to be rewritten? > > What part of the nginx config do you expect will do the rewriting? > > And how does that match the subject of this mail? > > proxy_redirect (http://nginx.org/r/proxy_redirect) can do rewriting of > http headers, except you've turned it off here. > > proxy_set_header doesn't do rewriting of anything. Neither does > proxy_pass. > > > I would be very grateful if someone had a clue to help me > understanding what > > I am doing wrong. > > Can you give an example of what you do, what you see, and what you > expect to see? "curl -i" is usually a good way of showing the headers > and content returned. > Ok, I think i'm not using the right words, maybe "rewriting" is not the exact term. 
My problem is : relative path for images are normal in the html files, looking like "/media/myimage.jpg" I understand that the proxy_set_header directive would make the relative path to be treated as serverIP/pyload/media/myimage.jpg, but the chrome console shows that the relative path are treated as serverIP/media/myimage.jpg Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236099,236115#msg-236115 From francis at daoine.org Mon Feb 11 21:43:45 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Feb 2013 21:43:45 +0000 Subject: Nginx seems to ignore proxy_set_header Host directive In-Reply-To: References: <20130211190939.GB32392@craic.sysops.org> Message-ID: <20130211214345.GD32392@craic.sysops.org> On Mon, Feb 11, 2013 at 04:13:28PM -0500, jeangouytch wrote: Hi there, > > Can you give an example of what you do, what you see, and what you > > expect to see? "curl -i" is usually a good way of showing the headers > > and content returned. > > > Ok, I think i'm not using the right words, maybe "rewriting" is not the > exact term. My problem is : > relative path for images are normal in the html files, looking like > "/media/myimage.jpg" > I understand that the proxy_set_header directive would make the relative > path to be treated as serverIP/pyload/media/myimage.jpg, No, proxy_set_header doesn't do that. http://nginx.org/r/proxy_set_header for details of what it does do. I don't know of any reliable way to get nginx to correctly change the content of html files, and anything else that the browser might interpret as containing local urls, so that everything works as you wish. 
The easiest two ways of proxying multiple servers are: convince the "pyload" server that all of its content is *actually* below /pyload, so that it generates "correct" urls itself (and do something similar for each other internal server); or use different hostnames for each internal server, and have different server{} blocks in nginx.conf proxy_pass to different internal servers. The third way is to ensure that all of the content on all of the internal servers never refers to any local url that starts with "/" -- so instead of "/media/myimage.jpg" it would be "../../media/myimage.jpg", with the correct number of "../" components each time. The first two ways are "matching subdirectories on the proxy and proxied servers"; the third is "great care on the proxied server". If you have a restricted, controlled set of file contents on each server, then you might be able to use one of the substitution filter modules to make enough changes that it works well enough for you. f -- Francis Daly francis at daoine.org From list_nginx at bluerosetech.com Tue Feb 12 02:18:02 2013 From: list_nginx at bluerosetech.com (Darren Pilgrim) Date: Mon, 11 Feb 2013 18:18:02 -0800 Subject: Converting Apache configs to nginx, why is a NameVirtualHost workalike is a bad thing? Message-ID: <5119A65A.2020706@bluerosetech.com> I'm switching some servers from Apache 2.2.x to nginx 1.2.6. In Apache, I use NameVirtualHost. I also use Rewrite directives to manage www. prefixing via a 301 redirect. All the user has to do is create /wwwroot/$hostname, upload the site files, and point DNS at my server. On my end, all I have to do is create the Apache config once as part of the new-user setup.
{ return 301 http://www.$http_host$request_uri; } Which, in testing, works a treat. Various articles say this style of configuration is bad. Instead I should: 1. have per-domain server blocks; 2. have a server block for www.example.com that redirects to example.com. or vice versa. In other words, for the user who has 37 domains, I'll need 74 server blocks in their nginx config. That's a significant regression in terms of workload and simplicity. If it's bad, ok, I won't do that; however, I can't seem to find an explanation *why* it's bad. Would someone please clarify that point? From unai at leanservers.com Tue Feb 12 02:59:46 2013 From: unai at leanservers.com (Unai Rodriguez) Date: Tue, 12 Feb 2013 10:59:46 +0800 Subject: Converting Apache configs to nginx, why is a NameVirtualHost workalike is a bad thing? Message-ID: Having one block per server is the best for performance because you only test a condition once; the way you're doing it would require NGINX to evaluate hostnames twice or maybe more. In my opinion config maintenance is important as well so either stick to what you have (unless those sites have thousands of requests per second) or generate the configs using Puppet, Chef or the like... Sent from Samsung Mobile -------- Original message -------- From: Darren Pilgrim Date: To: nginx at nginx.org Subject: Converting Apache configs to nginx, why is a NameVirtualHost workalike is a bad thing? I'm switching some servers from Apache 2.2.x to nginx 1.2.6. In Apache, I use NameVirtualHost. I also use Rewrite directives to manage www. prefixing via a 301 redirect. All the user has to do is create /wwwroot/$hostname, upload the site files, and point DNS at my server. On my end, all I have to do is create the Apache config once as part of the new-user setup.
When I converted that to nginx, this is what I came up with: root /www/user/wwwroot/$http_host; For the user who wants to drop the www.: if ($http_host ~ ^www\.(.+)$) { return 301 http://$1$request_uri; } For the user who wants to always have the www.: if ($http_host !~ ^www\.) { return 301 http://www.$http_host$request_uri; } Which, in testing, works a treat. Various articles say this style of configuration is bad. Instead I should: 1. have per-domain server blocks; 2. have a server block for www.example.com that redirects to example.com. or vice versa. In other words, for the user who has 37 domains, I'll need 74 server blocks in their nginx config. That's a significant regression in terms of workload and simplicity. If it's bad, ok, I won't do that; however, I can't seem to find an explanation *why* it's bad. Would someone please clarify that point? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Feb 12 03:21:56 2013 From: nginx-forum at nginx.us (amodpandey) Date: Mon, 11 Feb 2013 22:21:56 -0500 Subject: set $cookie_abc "$cookie_abc"; In-Reply-To: References: Message-ID: <24c10c5d2cf7fe4700c3a2235a69e1db.NginxMailingListEnglish@forum.nginx.org> Thank you for your response. Versions nginx versions tried nginx/1.2.5 and nginx/1.3.9 LuaJIT 2.0.0 What do I want to achieve? Set the value of $cookie_abc to "a"/"b" (some logic) if the cookie value is not coming in the request else use the value set. I am doing this in server level set $cookie_abc "$cookie_abc"; set $tmp_abc ""; set_by_lua $tmp_abc ' common.set_abc_cookie() '; I am using set_by_lua to make sure the cookie value is set before the rewrites are evaluated. Why I am doing this? I have used $cookie_abc variable in my config and I want to have "a"/"b" value depending on a logic if the cookie is not passed.
What is not working? inside common.set_abc_cookie() ngx.var.cookie_abc = "a" if the cookie is not passed. This is expected. That is why I am doing set $cookie_abc "$cookie_abc"; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236075,236122#msg-236122 From nginx-forum at nginx.us Tue Feb 12 07:01:55 2013 From: nginx-forum at nginx.us (n1xman) Date: Tue, 12 Feb 2013 02:01:55 -0500 Subject: Worker processes not shutting down In-Reply-To: References: Message-ID: <8b582656c938132d5f99e41e8aaa817d.NginxMailingListEnglish@forum.nginx.org> Hi, I have the same issue and would like to find a solution. This is in production so couple of 3rd party module compiled. This happens when we reload the nginx config. # ps -ef | grep nginx root 10163 1 0 Feb08 ? 00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 14434 10163 0 06:39 ? 00:00:00 nginx: worker process is shutting down nginx 15664 10163 0 06:40 ? 00:00:00 nginx: worker process nginx 15665 10163 0 06:40 ? 00:00:00 nginx: worker process nginx 15666 10163 0 06:40 ? 00:00:00 nginx: worker process nginx 15667 10163 0 06:40 ? 00:00:00 nginx: worker process root 17489 9311 0 06:43 pts/3 00:00:00 grep nginx nginx 23887 10163 0 Feb08 ? 00:00:12 nginx: worker process is shutting down nginx 23888 10163 0 Feb08 ? 00:00:08 nginx: worker process is shutting down nginx 23892 10163 0 Feb08 ? 00:00:20 nginx: worker process is shutting down nginx 32240 10163 0 Feb11 ? 00:00:15 nginx: worker process is shutting down nginx 32241 10163 0 Feb11 ? 00:00:16 nginx: worker process is shutting down nginx 32244 10163 0 Feb11 ? 00:00:13 nginx: worker process is shutting down nginx 32245 10163 0 Feb11 ? 
00:00:19 nginx: worker process is shutting down

# pstack 32245
#0 0x00000039dced4863 in __epoll_wait_nocancel () from /lib64/libc.so.6
#1 0x0000000000425e57 in ngx_epoll_process_events ()
#2 0x000000000041cc2e in ngx_process_events_and_timers ()
#3 0x0000000000423d7d in ngx_worker_process_cycle ()
#4 0x000000000042236d in ngx_spawn_process ()
#5 0x00000000004232cc in ngx_start_worker_processes ()
#6 0x0000000000424a9e in ngx_master_process_cycle ()
#7 0x00000000004069fd in main ()

# pstack 32244
#0 0x00000039dced4863 in __epoll_wait_nocancel () from /lib64/libc.so.6
#1 0x0000000000425e57 in ngx_epoll_process_events ()
#2 0x000000000041cc2e in ngx_process_events_and_timers ()
#3 0x0000000000423d7d in ngx_worker_process_cycle ()
#4 0x000000000042236d in ngx_spawn_process ()
#5 0x00000000004232cc in ngx_start_worker_processes ()
#6 0x0000000000424a9e in ngx_master_process_cycle ()
#7 0x00000000004069fd in main ()

# pstack 32241
#0 0x00000039dced4863 in __epoll_wait_nocancel () from /lib64/libc.so.6
#1 0x0000000000425e57 in ngx_epoll_process_events ()
#2 0x000000000041cc2e in ngx_process_events_and_timers ()
#3 0x0000000000423d7d in ngx_worker_process_cycle ()
#4 0x000000000042236d in ngx_spawn_process ()
#5 0x00000000004232cc in ngx_start_worker_processes ()
#6 0x0000000000424a9e in ngx_master_process_cycle ()
#7 0x00000000004069fd in main ()

# strace -p 32244
Process 32244 attached - interrupt to quit
epoll_wait(21,

# strace -p 32241
Process 32241 attached - interrupt to quit
epoll_wait(19,

# strace -p 32245
Process 32245 attached - interrupt to quit
epoll_wait(24, {{EPOLLIN|EPOLLOUT, {u32=412706096, u64=412706096}}}, 512, 9351728) = 1
recvfrom(447, "HTTP/1.1 200 OK\r\nDate: Tue, 12 F"..., 2048, 0, NULL, NULL) = 611
write(444, "\27\3\1\2\200\224T\204\354\353w\344+\267c9\314\270\30I\216\200\314\354\376\3\323\202\332(:\323"..., 645) = 645
recvfrom(447, 0x1a0e98f0, 2048, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(24, {{EPOLLIN|EPOLLOUT, {u32=412730864, u64=412730864}}}, 512, 9351656) = 1
read(443, "\27\3\1\0 \t\355\205\202\267>\274\240\324qR\305\334\36\223\355\0251\31[A\366\225~\217\220\252"..., 34821) = 698
read(443, 0x1a0d0600, 34821) = -1 EAGAIN (Resource temporarily unavailable)
sendto(446, "GET /CometServer/cometd/connect?"..., 621, 0, NULL, 0) = 621
epoll_wait(24, {{EPOLLIN|EPOLLOUT, {u32=412783472, u64=412783472}}}, 512, 9351648) = 1
recvfrom(446, "HTTP/1.1 200 OK\r\nDate: Tue, 12 F"..., 1024, 0, NULL, NULL) = 1024
write(443, "\27\3\1\4 +T@\376\260\306q\vrq\1\240T\244\377\227\23g\5\340\262FYs:]o"..., 1061) = 1061
recvfrom(446, "|TRT\\\"},\\\"DAT\\\":{\\\"DQ\\\":[\\\"PFX|1"..., 1024, 0, NULL, NULL) = 1024
write(443, "\27\3\1\4 \262\202\372V\326\251S\252?\353\266\307\272\257\240\306\247\205\345\307\320L\31W\223\337h"..., 1061) = 1061
recvfrom(446, "\":\"{\\\"RT\\\":\\\"4\\\",\\\"HED\\\":{\\\"DQ\\\""..., 1024, 0, NULL, NULL) = 1024
write(443, "\27\3\1\4 9(\241\235\377\350\220\4\320&\25\34|;#\215\212\220\"\263\vFz\350\360ks"..., 1061) = 1061
recvfrom(446, "a\":{\"scope\":\"public\",\"server\":\"P"..., 1024, 0, NULL, NULL) = 1024

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.7 (Tikanga)

# uname -rop
2.6.18-274.el5 x86_64 GNU/Linux

# nginx -V
nginx version: nginx/1.2.6
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-50)
TLS SNI support disabled
configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-file-aio --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-debug --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/ngx_devel_kit-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/echo-nginx-module-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/set-misc-nginx-module-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/srcache-nginx-module-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/nginx-sticky-module-1.1 --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/nginx_upstream_check_module-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/memc-nginx-module-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/nginx_cross_origin_module-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/nginx_tcp_proxy_module-master --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.6/contrib/naxsi-core-0.48/naxsi_src --with-cc-opt='-O2 -g -m64 -mtune=generic'

Thanks in advance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221591,236123#msg-236123

From nginx-forum at nginx.us Tue Feb 12 07:54:37 2013
From: nginx-forum at nginx.us (jeangouytch)
Date: Tue, 12 Feb 2013 02:54:37 -0500
Subject: Nginx seems to ignore proxy_set_header Host directive
In-Reply-To: <20130211214345.GD32392@craic.sysops.org>
References: <20130211214345.GD32392@craic.sysops.org>
Message-ID: <0c38dd2b121eaec2b4894b9e93c2ef78.NginxMailingListEnglish@forum.nginx.org>

Ok, thank you for making things clear. I'll try to go with the first solution then.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236099,236124#msg-236124

From appa at perusio.net Tue Feb 12 09:01:56 2013
From: appa at perusio.net (António P. P. Almeida)
Date: Tue, 12 Feb 2013 10:01:56 +0100
Subject: set $cookie_abc "$cookie_abc";
Message-ID:

Do you really need to use Lua? It seems that the required logic can be done using the map directive. Explain clearly what you want to achieve.

--appa

amodpandey wrote:
>Thank you for your response.
>
>Versions
>
>nginx versions tried nginx/1.2.5 and nginx/1.3.9
>LuaJIT 2.0.0
>
>What do I want to achieve?
>
>Set the value of $cookie_abc to "a"/"b" (some logic) if the cookie value is
>not coming in the request, else use the value set. I am doing this at the
>server level:
>
>set $cookie_abc "$cookie_abc";
>set $tmp_abc "";
>set_by_lua $tmp_abc '
>    common.set_abc_cookie()
>';
>
>I am using set_by_lua to make sure the cookie value is set before the
>rewrites are evaluated.
>
>Why am I doing this?
>
>I have used the $cookie_abc variable in my config and I want it to have the
>"a"/"b" value, depending on some logic, if the cookie is not passed.
>
>What is not working?
>
>inside common.set_abc_cookie()
>
>ngx.var.cookie_abc = "a" if the cookie is not passed.
>
>This is expected. That is why I am doing
>set $cookie_abc "$cookie_abc";
>
>Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236075,236122#msg-236122
>
>_______________________________________________
>nginx mailing list
>nginx at nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Tue Feb 12 13:57:27 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 12 Feb 2013 17:57:27 +0400
Subject: nginx-1.2.7
Message-ID: <20130212135727.GG20890@mdounin.ru>

Changes with nginx 1.2.7                                         12 Feb 2013

    *) Change: now if the "include" directive with mask is used on Unix systems, included files are sorted in alphabetical order.

    *) Change: the "add_header" directive adds headers to 201 responses.

    *) Feature: the "geo" directive now supports IPv6 addresses in CIDR notation.

    *) Feature: the "flush" and "gzip" parameters of the "access_log" directive.

    *) Feature: variables support in the "auth_basic" directive.

    *) Feature: the $pipe, $request_length, $time_iso8601, and $time_local variables can now be used not only in the "log_format" directive.
       Thanks to Kiril Kalchev.

    *) Feature: IPv6 support in the ngx_http_geoip_module.
       Thanks to Gregor Kališnik.

    *) Bugfix: nginx could not be built with the ngx_http_perl_module in some cases.

    *) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_xslt_module was used.

    *) Bugfix: nginx could not be built on MacOSX in some cases.
       Thanks to Piotr Sikora.

    *) Bugfix: the "limit_rate" directive with high rates might result in truncated responses on 32-bit platforms.
       Thanks to Alexey Antropov.

    *) Bugfix: a segmentation fault might occur in a worker process if the "if" directive was used.
       Thanks to Piotr Sikora.

    *) Bugfix: a "100 Continue" response was issued with "413 Request Entity Too Large" responses.

    *) Bugfix: the "image_filter", "image_filter_jpeg_quality" and "image_filter_sharpen" directives might be inherited incorrectly.
       Thanks to Ian Babrou.

    *) Bugfix: "crypt_r() failed" errors might appear if the "auth_basic" directive was used on Linux.

    *) Bugfix: in backup servers handling.
       Thanks to Thomas Chen.

    *) Bugfix: proxied HEAD requests might return incorrect response if the "gzip" directive was used.

    *) Bugfix: a segmentation fault occurred on start or during reconfiguration if the "keepalive" directive was specified more than once in a single upstream block.

    *) Bugfix: in the "proxy_method" directive.

    *) Bugfix: a segmentation fault might occur in a worker process if resolver was used with the poll method.

    *) Bugfix: nginx might hog CPU during SSL handshake with a backend if the select, poll, or /dev/poll methods were used.

    *) Bugfix: the "[crit] SSL_write() failed (SSL:)" error.
    *) Bugfix: in the "fastcgi_keep_conn" directive.

--
Maxim Dounin
http://nginx.com/support.html

From kworthington at gmail.com Tue Feb 12 16:11:52 2013
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 12 Feb 2013 11:11:52 -0500
Subject: [nginx-announce] nginx-1.2.7
In-Reply-To: <20130212135734.GH20890@mdounin.ru>
References: <20130212135734.GH20890@mdounin.ru>
Message-ID:

Hello Nginx Users,

Now available: Nginx 1.2.7 for Windows http://goo.gl/q2kaJ (32-bit and 64-bit versions). These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org.

Announcements are also available via my Twitter stream (http://twitter.com/kworthington), if you prefer to receive updates that way.

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington

Best regards,
Kevin
--
Kevin Worthington
kworthington at gmail.com
http://kevinworthington.com/
(516) 647-1992
http://twitter.com/kworthington

On Tue, Feb 12, 2013 at 8:57 AM, Maxim Dounin wrote:
> Changes with nginx 1.2.7                                         12 Feb 2013
>
> *) Change: now if the "include" directive with mask is used on Unix systems, included files are sorted in alphabetical order.
>
> *) Change: the "add_header" directive adds headers to 201 responses.
>
> *) Feature: the "geo" directive now supports IPv6 addresses in CIDR notation.
>
> *) Feature: the "flush" and "gzip" parameters of the "access_log" directive.
>
> *) Feature: variables support in the "auth_basic" directive.
>
> *) Feature: the $pipe, $request_length, $time_iso8601, and $time_local variables can now be used not only in the "log_format" directive.
>    Thanks to Kiril Kalchev.
>
> *) Feature: IPv6 support in the ngx_http_geoip_module.
>    Thanks to Gregor Kališnik.
>
> *) Bugfix: nginx could not be built with the ngx_http_perl_module in some cases.
>
> *) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_xslt_module was used.
>
> *) Bugfix: nginx could not be built on MacOSX in some cases.
>    Thanks to Piotr Sikora.
>
> *) Bugfix: the "limit_rate" directive with high rates might result in truncated responses on 32-bit platforms.
>    Thanks to Alexey Antropov.
>
> *) Bugfix: a segmentation fault might occur in a worker process if the "if" directive was used.
>    Thanks to Piotr Sikora.
>
> *) Bugfix: a "100 Continue" response was issued with "413 Request Entity Too Large" responses.
>
> *) Bugfix: the "image_filter", "image_filter_jpeg_quality" and "image_filter_sharpen" directives might be inherited incorrectly.
>    Thanks to Ian Babrou.
>
> *) Bugfix: "crypt_r() failed" errors might appear if the "auth_basic" directive was used on Linux.
>
> *) Bugfix: in backup servers handling.
>    Thanks to Thomas Chen.
>
> *) Bugfix: proxied HEAD requests might return incorrect response if the "gzip" directive was used.
>
> *) Bugfix: a segmentation fault occurred on start or during reconfiguration if the "keepalive" directive was specified more than once in a single upstream block.
>
> *) Bugfix: in the "proxy_method" directive.
>
> *) Bugfix: a segmentation fault might occur in a worker process if resolver was used with the poll method.
>
> *) Bugfix: nginx might hog CPU during SSL handshake with a backend if the select, poll, or /dev/poll methods were used.
>
> *) Bugfix: the "[crit] SSL_write() failed (SSL:)" error.
>
> *) Bugfix: in the "fastcgi_keep_conn" directive.
>
>
> --
> Maxim Dounin
> http://nginx.com/support.html
>
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From agentzh at gmail.com Tue Feb 12 19:57:30 2013
From: agentzh at gmail.com (agentzh)
Date: Tue, 12 Feb 2013 11:57:30 -0800
Subject: set $cookie_abc "$cookie_abc";
In-Reply-To: <24c10c5d2cf7fe4700c3a2235a69e1db.NginxMailingListEnglish@forum.nginx.org>
References: <24c10c5d2cf7fe4700c3a2235a69e1db.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hello!

On Mon, Feb 11, 2013 at 7:21 PM, amodpandey wrote:
>
> Set the value of $cookie_abc to "a"/"b" (some logic) if the cookie value is
> not coming in the request else use the value set. I am doing this in
>

If you want to set a cookie (i.e., add Set-Cookie response headers at the HTTP protocol level), then assigning to the Nginx variable $cookie_XXX will not do what you want. (This has nothing to do with Lua; this is how the Nginx core works right now.)

To achieve that, you need to add the Set-Cookie response headers explicitly. For example, in Lua you can do something like this:

    ngx.header['Set-Cookie'] = {'a=32; path=/', 'b=4; path=/'}

which will yield the HTTP response headers

    Set-Cookie: a=32; path=/
    Set-Cookie: b=4; path=/

Best regards,
-agentzh

From nginx-forum at nginx.us Tue Feb 12 20:01:39 2013
From: nginx-forum at nginx.us (piotr.dobrogost)
Date: Tue, 12 Feb 2013 15:01:39 -0500
Subject: Upgrading Executable on the Fly - wrong docs?
In-Reply-To: <20130211070722.GA45362@lo0.su>
References: <20130211070722.GA45362@lo0.su>
Message-ID:

Ruslan, thanks for the quick reply.

I have some trouble comparing the new wording with the previous one, as it looks like your change went live at http://nginx.org/en/docs/control.html, so I do not have the old one to compare any more :) Nevertheless, I have some more comments on the new (current) one.

I think an error sneaked into the new version. The first bullet is now "Send the HUP signal to the old master process. The old master process will start new worker processes without re-reading the configuration.
After that, all new processes can be shut down gracefully, by sending the QUIT signal to the old master process." I think it should have been "(...) by sending the QUIT signal to the new master process." instead.

What I don't understand is why the old master process does not re-read the configuration after receiving the HUP signal, since at the top of the page it's written: HUP (...), starting new worker processes with a new configuration, (...) If the reason is that it had received the USR2 signal at the beginning of the whole procedure and this changed its state (it "remembers" receiving the USR2 signal), this should be explained.

Also, maybe I'm missing something, but I think that the two bullets are not symmetrical without a reason. In the first bullet the QUIT signal is used, whereas in the second bullet the TERM signal is used. I believe either of them could be used, with the obvious difference of fast vs graceful shutdown. If it's true (either could be used), then using different signals between the first and the second bullet is misleading.

Additionally, I have a question regarding the following fragment: "In order to upgrade the server executable, the new executable file should be put in place of an old file first. After that USR2 signal should be sent to the master process. The master process first renames its file (...)" How can the master process rename its file if this file is already gone, i.e. it had been replaced by the new executable?

Regards,
Piotr

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236047,236164#msg-236164

From parasharragh at gmail.com Tue Feb 12 20:16:48 2013
From: parasharragh at gmail.com (Raghvendra Parashar)
Date: Wed, 13 Feb 2013 01:46:48 +0530
Subject: Need help : Getting 502 response status code for specific URL
Message-ID:

Hi nginx team,

I am getting a 502 Bad Gateway error for a particular URL.
Application details :
Rails 3.1.0
Ruby 1.9.2
unicorn 4.2.0
resque 1.20.0
nginx/1.0.14
redis 2.4.8

nginx error log :
2013/02/12 07:36:16 [error] 32401#0: *1948 upstream prematurely closed connection while reading response header from upstream

I have also posted my question on Stack Overflow, please find it here, but couldn't get any response.

Please guide me to solve this problem.

Thanks
Regards:
Raghvendra Kumar Parashar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nik.molnar at consbio.org Tue Feb 12 20:27:22 2013
From: nik.molnar at consbio.org (Nikolas Stevenson-Molnar)
Date: Tue, 12 Feb 2013 12:27:22 -0800
Subject: Need help : Getting 502 response status code for specific URL
In-Reply-To:
References:
Message-ID: <511AA5AA.9060806@consbio.org>

502 Bad Gateway usually indicates a problem with the application (or upstream) server. In this case, I think that would be Unicorn. Double check that Unicorn is configured and running correctly, and that the port or socket that you have in your nginx config is the same as the one Unicorn is listening on.

_Nik

On 2/12/2013 12:16 PM, Raghvendra Parashar wrote:
> Hi nginx team,
>
> I am getting 502 bad gateway error for particular URL.
>
> Application details :
> Rails 3.1.0
> Ruby 1.9.2
> unicorn 4.2.0
> resque 1.20.0
> nginx/1.0.14
> redis 2.4.8
>
> nginx error log :
> 2013/02/12 07:36:16 [error] 32401#0: *1948 upstream prematurely closed
> connection while reading response header from upstream
>
> I have also posted my question on stack-overflow, please find it here
> ,
> but couldn't got any response.
>
> Please guide me to solve this problem.
>
> Thanks
> Regards:
> Raghvendra Kumar Parashar
>
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Wed Feb 13 07:56:27 2013
From: nginx-forum at nginx.us (ConnorMcLaud)
Date: Wed, 13 Feb 2013 02:56:27 -0500
Subject: When 1.3 becomes stable?
Message-ID: <148e5b9252da68bd5b8d44d37fe291b8.NginxMailingListEnglish@forum.nginx.org>

I want to know when there will be a stable 1.3 version of nginx. Personally, I am interested in this feature:

*) Feature: support for chunked transfer encoding while reading client request body.

I need to know whether we should implement this in our own module or whether we can wait.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236175,236175#msg-236175

From maxim at nginx.com Wed Feb 13 10:33:00 2013
From: maxim at nginx.com (Maxim Konovalov)
Date: Wed, 13 Feb 2013 14:33:00 +0400
Subject: Upgrading Executable on the Fly - wrong docs?
In-Reply-To:
References: <20130211070722.GA45362@lo0.su>
Message-ID: <511B6BDC.30403@nginx.com>

On 2/13/13 12:01 AM, piotr.dobrogost wrote:
> Ruslan, thanks for quick reply.
>
> I have some trouble comparing the new wording with the previous one as it
> looks like your change went live at http://nginx.org/en/docs/control.html so
> I do not have the old one to compare any more :)
> [...]
You do:

http://trac.nginx.org/nginx/log/nginx_org/xml/en/docs/control.xml

--
Maxim Konovalov
+7 (910) 4293178
http://nginx.com/support.html

From mdounin at mdounin.ru Wed Feb 13 10:57:22 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 13 Feb 2013 14:57:22 +0400
Subject: When 1.3 becomes stable?
In-Reply-To: <148e5b9252da68bd5b8d44d37fe291b8.NginxMailingListEnglish@forum.nginx.org>
References: <148e5b9252da68bd5b8d44d37fe291b8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130213105721.GO20890@mdounin.ru>

Hello!

On Wed, Feb 13, 2013 at 02:56:27AM -0500, ConnorMcLaud wrote:
> I want to know when there will be stable 1.3 version of nginx.
> Personally I interested in this feature:
> *) Feature: support for chunked transfer encoding while reading client
> request body.
> I need to know should we implement this in our own module or we can wait.

There is no exact ETA, but the 1.4.x stable branch will likely appear sometime in Q2 2013. On the other hand, 1.3.x is usable for production anyway (though it might break some 3rd-party modules due to API changes introduced from time to time), and you might want to just use 1.3.x if you need chunked transfer encoding support.

--
Maxim Dounin
http://nginx.com/support.html

From nginx-forum at nginx.us Wed Feb 13 16:19:42 2013
From: nginx-forum at nginx.us (BrindleFly)
Date: Wed, 13 Feb 2013 11:19:42 -0500
Subject: Timeout serving large requests
Message-ID:

I'm struggling to find the source of a timeout. I am running Nginx (1.2.6), Passenger 3.0.18, and Rails 3.2.11. On serving one long-running request, I find the results (a CSV file) truncated after being returned to the client. The Rails application however continues to serve up the data until complete, at which time it reports:

Couldn't forward the HTTP response back to the HTTP client: It seems the user clicked on the 'Stop' button in his browser.

Nginx access.log however reports a 200 for the request.

Some things I have tried:
1) I modified the Nginx read/send timeouts in the Passenger gem (ext/nginx/Configuration.c) and recompiled, with no impact.
2) I ran a test of bypassing Nginx/Passenger by going direct to my app running in Unicorn, and it serves up the result fine. I then took Passenger out of the equation by configuring Nginx to pass the requests to Unicorn, and the truncated result is back again.
3) I've played with the Nginx keepalive_timeout, proxy_read_timeout, proxy_send_timeout and send_timeout - all with no impact.

Any thoughts/advice would be much appreciated.
Joe

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236201#msg-236201

From anatoly at sonru.com Wed Feb 13 16:30:15 2013
From: anatoly at sonru.com (Anatoly Mikhailov)
Date: Wed, 13 Feb 2013 16:30:15 +0000
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID:

> I'm struggling to find the source of a timeout. I am running Nginx (1.2.6),
> Passenger 3.0.18, and Rails 3.2.11. On serving one long running request, I
> find the results (CSV file) truncated after being returned from the client.
> The Rails application however continues to serve up the data until complete,
> at which time it reports:
>
> Couldn't forward the HTTP response back to the HTTP client: It seems the
> user clicked on the 'Stop' button in his browser.
>
> Nginx access.log however reports a 200 for the request.
>
> Some things I have tried:
> 1) I modified the Nginx read/send timeouts in the Passenger gem
> (ext/nginx/Configuration.c) and recompiled, with no impact.
> 2) I ran a test of bypassing Nginx/Passenger by going direct to my app
> running in Unicorn, and it serves up the result fine. I then took Passenger
> out of the equation by configuring Nginx to pass the requests to Unicorn,
> and the truncated result is back again.
> 3) I've played with the Nginx keepalive_timeout, proxy_read_timeout,
> proxy_send_timeout and send_timeout - all with no impact.
>
> Any thoughts/advice would be much appreciated.
>
> Joe
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236201#msg-236201
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

This has happened to us many times; there is only one option - an upstream. It can be Unicorn/Passenger Standalone/whatever.

We spent a lot of time debugging Nginx, and there is no real solution for the Passenger built-in module.

Anatoly
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From haroldsinclair at gmail.com Wed Feb 13 16:38:13 2013
From: haroldsinclair at gmail.com (Harold Sinclair)
Date: Wed, 13 Feb 2013 11:38:13 -0500
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID:

That was a bug in Rails, not the web server.

On Wed, Feb 13, 2013 at 11:30 AM, Anatoly Mikhailov wrote:
>
> I'm struggling to find the source of a timeout. I am running Nginx (1.2.6),
> Passenger 3.0.18, and Rails 3.2.11. On serving one long running request, I
> find the results (CSV file) truncated after being returned from the client.
> The Rails application however continues to serve up the data until
> complete,
> at which time it reports:
>
> Couldn't forward the HTTP response back to the HTTP client: It seems the
> user clicked on the 'Stop' button in his browser.
>
> Nginx access.log however reports a 200 for the request.
>
> Some things I have tried:
> 1) I modified the Nginx read/send timeouts in the Passenger gem
> (ext/nginx/Configuration.c) and recompiled, with no impact.
> 2) I ran a test of bypassing Nginx/Passenger by going direct to my app
> running in Unicorn, and it serves up the result fine. I then took Passenger
> out of the equation by configuring Nginx to pass the requests to Unicorn,
> and the truncated result is back again.
> 3) I've played with the Nginx keepalive_timeout, proxy_read_timeout,
> proxy_send_timeout and send_timeout - all with no impact.
>
> Any thoughts/advice would be much appreciated.
>
> Joe
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,236201,236201#msg-236201
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
> It happened for us many times, there is only one option - *upstream*.
> It can be Unicorn/PassengerStandalone/whatever.
>
> We spent a lot of time debugging Nginx, so there is no real solution
> for passenger built-in module.
>
> Anatoly
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From anatoly at sonru.com Wed Feb 13 16:39:59 2013
From: anatoly at sonru.com (Anatoly Mikhailov)
Date: Wed, 13 Feb 2013 16:39:59 +0000
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID:

>
>> I'm struggling to find the source of a timeout. I am running Nginx (1.2.6),
>> Passenger 3.0.18, and Rails 3.2.11. On serving one long running request, I
>> find the results (CSV file) truncated after being returned from the client.
>> The Rails application however continues to serve up the data until complete,
>> at which time it reports:
>>
>> Couldn't forward the HTTP response back to the HTTP client: It seems the
>> user clicked on the 'Stop' button in his browser.
>>
>> Nginx access.log however reports a 200 for the request.
>>
>> Some things I have tried:
>> 1) I modified the Nginx read/send timeouts in the Passenger gem
>> (ext/nginx/Configuration.c) and recompiled, with no impact.
>> 2) I ran a test of bypassing Nginx/Passenger by going direct to my app
>> running in Unicorn, and it serves up the result fine. I then took Passenger
>> out of the equation by configuring Nginx to pass the requests to Unicorn,
>> and the truncated result is back again.
>> 3) I've played with the Nginx keepalive_timeout, proxy_read_timeout,
>> proxy_send_timeout and send_timeout - all with no impact.
>>
>> Any thoughts/advice would be much appreciated.
>>
>> Joe
>>
>> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236201#msg-236201
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> It happened for us many times, there is only one option - upstream.
> It can be Unicorn/PassengerStandalone/whatever.
>
> We spent a lot of time debugging Nginx, so there is no real solution
> for passenger built-in module.
>
> Anatoly
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

You can find a lot of the same issues on the Passenger bug tracker; the bug is years old. Passenger did a great job for us, and it's good to continue using it the standalone way. But what if you want true zero-downtime deployment? Passenger provides it only in the commercial paid version.

Anatoly
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From anatoly at sonru.com Wed Feb 13 16:41:14 2013
From: anatoly at sonru.com (Anatoly Mikhailov)
Date: Wed, 13 Feb 2013 16:41:14 +0000
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID:

There is no bug in Rails, nor in the web server! It's a bug in the Passenger module implementation.

Anatoly

> That was a bug in rails not the web server.
>
>
> On Wed, Feb 13, 2013 at 11:30 AM, Anatoly Mikhailov wrote:
>
>> I'm struggling to find the source of a timeout. I am running Nginx (1.2.6),
>> Passenger 3.0.18, and Rails 3.2.11. On serving one long running request, I
>> find the results (CSV file) truncated after being returned from the client.
>> The Rails application however continues to serve up the data until complete,
>> at which time it reports:
>>
>> Couldn't forward the HTTP response back to the HTTP client: It seems the
>> user clicked on the 'Stop' button in his browser.
>>
>> Nginx access.log however reports a 200 for the request.
>>
>> Some things I have tried:
>> 1) I modified the Nginx read/send timeouts in the Passenger gem
>> (ext/nginx/Configuration.c) and recompiled, with no impact.
>> 2) I ran a test of bypassing Nginx/Passenger by going direct to my app
>> running in Unicorn, and it serves up the result fine. I then took Passenger
>> out of the equation by configuring Nginx to pass the requests to Unicorn,
>> and the truncated result is back again.
>> 3) I've played with the Nginx keepalive_timeout, proxy_read_timeout,
>> proxy_send_timeout and send_timeout - all with no impact.
>>
>> Any thoughts/advice would be much appreciated.
>>
>> Joe
>>
>> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236201#msg-236201
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> It happened for us many times, there is only one option - upstream.
> It can be Unicorn/PassengerStandalone/whatever.
>
> We spent a lot of time debugging Nginx, so there is no real solution
> for passenger built-in module.
>
>
> Anatoly
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Wed Feb 13 16:48:44 2013
From: nginx-forum at nginx.us (BrindleFly)
Date: Wed, 13 Feb 2013 11:48:44 -0500
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID: <67b0c3c43c65a3941756ff98c03a33f0.NginxMailingListEnglish@forum.nginx.org>

The Unicorn test I ran disabled the passenger module and passed on requests via upstream in Nginx - but still returned a partial request. The only test that returns a full request is to bypass Nginx entirely and go direct to Unicorn. So based on this, it seems specific to my Nginx configuration. I find it hard to believe this is a generic defect in Nginx. It must be configuration related, right?
Joe

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236206#msg-236206

From nginx-forum at nginx.us Wed Feb 13 16:50:16 2013
From: nginx-forum at nginx.us (BrindleFly)
Date: Wed, 13 Feb 2013 11:50:16 -0500
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID:

Anatoly,

See my other reply. I can take Passenger entirely out of the mix and still reproduce (using Nginx, Unicorn, configured via upstream). So this is not Passenger in this case (although usually I suspect it for most other issues ;) ).

Joe

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236207#msg-236207

From anatoly at sonru.com Wed Feb 13 16:58:19 2013
From: anatoly at sonru.com (Anatoly Mikhailov)
Date: Wed, 13 Feb 2013 16:58:19 +0000
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID:

Joe, OK - can I have the full configuration, please?

Anatoly

On Feb 13, 2013, at 4:50 PM, BrindleFly wrote:
> Anatoly,
>
> See my other reply. I can take Passenger entirely out of the mix and still
> reproduce (using Nginx, Unicorn, configured via upstream). So this is not
> Passenger in this case (although usually I suspect it for most other issues
> ;) ).
>
> Joe
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236207#msg-236207
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Wed Feb 13 17:11:16 2013
From: nginx-forum at nginx.us (BrindleFly)
Date: Wed, 13 Feb 2013 12:11:16 -0500
Subject: Timeout serving large requests
In-Reply-To:
References:
Message-ID: <740f49a0b61f4a8d947293a40a18f966.NginxMailingListEnglish@forum.nginx.org>

Here is the nginx version (note: although it is compiled with passenger, I have not turned on the passenger directive in nginx.conf):

nginx version: nginx/1.2.6
built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --with-http_ssl_module --with-http_gzip_static_module --with-cc-opt=-Wno-error --add-module=/usr/local/rvm/gems/ruby-1.9.3-p286/gems/passenger-3.0.18/ext/nginx

Here is nginx.conf:

user nobody;
worker_processes 1;
pid /etc/nginx/nginx.pid;

events {
    worker_connections 1024;
    accept_mutex off;
}

include conf.d/*.conf;

http {
    upstream app_server {
        server 127.0.0.1:8080 fail_timeout=0;
    }

    server {
        listen 80 default;
        server_name myapp.com;
        root /var/www/myapp/public;

        try_files $uri/index.html $uri.html $uri @app;

        location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
        }
    }

    # Try extreme timeouts to see if issue will reproduce
    client_header_timeout 600s;
    client_body_timeout 600s;
    keepalive_timeout 600s;
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;
    lingering_timeout 600s;
    lingering_time 600s;
    send_timeout 600s;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush off;
    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/html text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml;

    include sites.d/*.conf;

    include blockips.conf;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236210#msg-236210

From anatoly at sonru.com Wed Feb 13 17:17:00 2013
From: anatoly at sonru.com (Anatoly Mikhailov)
Date: Wed, 13 Feb 2013 17:17:00 +0000
Subject: Timeout serving large requests
In-Reply-To: <740f49a0b61f4a8d947293a40a18f966.NginxMailingListEnglish@forum.nginx.org>
References: <740f49a0b61f4a8d947293a40a18f966.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0FDAE934-545D-40ED-9521-41E225FC44A0@sonru.com>

Try to build it clean, without the Passenger module, then update the config with:

http {
    client_max_body_size 25m;
    client_body_buffer_size 128k;
    client_body_temp_path /tmp/client_body_temp;
    ...
}

Anatoly

On Feb 13, 2013, at 5:11 PM, "BrindleFly" wrote:
> Here is nginx version (note: although it is compiled with passenger, I have
> not turned on the passenger directive in nginx.conf):
>
> nginx version: nginx/1.2.6
> built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
> TLS SNI support enabled
> configure arguments: --prefix=/etc/nginx --with-http_ssl_module
> --with-http_gzip_static_module --with-cc-opt=-Wno-error
> --add-module=/usr/local/rvm/gems/ruby-1.9.3-p286/gems/passenger-3.0.18/ext/nginx
>
> Here is nginx.conf:
>
> user nobody;
> worker_processes 1;
> pid /etc/nginx/nginx.pid;
>
> events {
>     worker_connections 1024;
>     accept_mutex off;
> }
>
> include conf.d/*.conf;
>
> http {
>     upstream app_server {
>         server 127.0.0.1:8080 fail_timeout=0;
>     }
>     server {
>         listen 80 default;
>         server_name myapp.com;
>         root /var/www/myapp/public;
>
>         try_files $uri/index.html $uri.html $uri @app;
>
>         location @app {
>             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>             proxy_set_header Host $http_host;
>             proxy_redirect off;
>             proxy_pass http://app_server;
>         }
>     }
>
>     # Try extreme timeouts to see if issue will reproduce
>     client_header_timeout 600s;
>     client_body_timeout 600s;
>     keepalive_timeout 600s;
>     proxy_read_timeout 600s;
>     proxy_send_timeout 600s;
>     lingering_timeout 600s;
>     lingering_time 600s;
>     send_timeout 600s;
>
>     error_log /var/log/nginx/error.log;
>     access_log /var/log/nginx/access.log;
>
>     include mime.types;
>     default_type application/octet-stream;
>
>     sendfile on;
>     tcp_nopush off;
>     gzip on;
>     gzip_http_version 1.0;
>     gzip_proxied any;
>     gzip_min_length 500;
>     gzip_disable "MSIE [1-6]\.";
>     gzip_types text/plain text/html text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml;
>
>     include sites.d/*.conf;
>
>     include blockips.conf;
> }
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236210#msg-236210
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Wed Feb 13 17:40:37 2013
From: nginx-forum at nginx.us (BrindleFly)
Date: Wed, 13 Feb 2013 12:40:37 -0500
Subject: Timeout serving large requests
In-Reply-To: <0FDAE934-545D-40ED-9521-41E225FC44A0@sonru.com>
References: <0FDAE934-545D-40ED-9521-41E225FC44A0@sonru.com>
Message-ID: <6c6fa17c3f94b05445fdc41283c701d1.NginxMailingListEnglish@forum.nginx.org>

I recompiled without passenger (--with-http_ssl_module --with-http_gzip_static_module --with-cc-opt=-Wno-error) and, to my surprise, the interrupted request issue has disappeared. This means that having the passenger module compiled into the nginx binary changes some aspect of timeouts for all requests, independent of whether you have turned the passenger directive on or not. Wow... Now I understand why Phusion sells commercial versions of passenger that add support for timeout configuration parameters. Time to abandon Passenger. I sort of suspected I wasn't having an nginx issue.
;) Joe Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236212#msg-236212 From nginx-forum at nginx.us Wed Feb 13 18:54:08 2013 From: nginx-forum at nginx.us (tsaavik) Date: Wed, 13 Feb 2013 13:54:08 -0500 Subject: Content-Length ###s being served, but no actual content Message-ID: <0b642d26e676e34c80eef0b573689dde.NginxMailingListEnglish@forum.nginx.org> I've discovered what I believe to be a bug (yeah I know everyone is already rolling their eyes) :D I believe that my CGI is returning some data (perhaps Content-Type header) and then is occasionally stalling due to backend load. This is fine, and fastcgi_read_timeout can be adjusted to deal with the issue of course. However, it appears that when Nginx times out the connection it still serves a non-empty Content-Length! HTTP/1.1 200 OK^M Server: nginx/1.2.6^M Date: Tue, 05 Feb 2013 20:12:31 GMT^M Content-Type: text/x-json^M Content-Length: 397^M Connection: keep-alive^M X-WSL-Version: 1.0^M ^M HTTP/1.1 200 OK^M Server: nginx/1.2.6^M Date: Tue, 05 Feb 2013 20:12:38 GMT^M Content-Type: text/x-json^M Content-Length: 396^M Connection: keep-alive^M ^M (There is actual JSON data here, I clipped it out, trust me!) ---- See that 397 bytes of non-existent data? I tested on nginx/0.8.53 and nginx/1.2.6. I'm using netcat6 to run the tests with http-pipelining. I also tried various different kernels, and ran the tests on my local network to rule out weird internet stuff. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236213,236213#msg-236213 From nginx-forum at nginx.us Wed Feb 13 19:39:27 2013 From: nginx-forum at nginx.us (piotr.dobrogost) Date: Wed, 13 Feb 2013 14:39:27 -0500 Subject: Upgrading Executable on the Fly - wrong docs? In-Reply-To: <511B6BDC.30403@nginx.com> References: <511B6BDC.30403@nginx.com> Message-ID: Thanks for the link. 
If the statement "If new processes (...)" is supposed to mean "If the new master process and the new worker processes started by it" then I would use the latter form as it doesn't leave room for ambiguity. Still, all my questions from my previous post in this thread are valid. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236047,236214#msg-236214 From anatoly at sonru.com Wed Feb 13 19:47:53 2013 From: anatoly at sonru.com (Anatoly Mikhailov) Date: Wed, 13 Feb 2013 19:47:53 +0000 Subject: Timeout serving large requests In-Reply-To: <6c6fa17c3f94b05445fdc41283c701d1.NginxMailingListEnglish@forum.nginx.org> References: <0FDAE934-545D-40ED-9521-41E225FC44A0@sonru.com> <6c6fa17c3f94b05445fdc41283c701d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0F637E61-1458-4EA9-8402-3BFBBA36A47C@sonru.com> Actually you can still use passenger standalone to keep it independent from the nginx binary (no need to compile it in). In that case passenger should not affect request interruptions. A reason to use something else can be zero-downtime practice; if you go with Unicorn, try the ready-to-use config: https://gist.github.com/mikhailov/3052776 You know, it's the same approach as for any Unix signals-based binary (master/workers): a USR2+QUIT roll out. Anatoly On Feb 13, 2013, at 5:40 PM, BrindleFly wrote: > I recompiled without passenger (--with-http_ssl_module > --with-http_gzip_static_module --with-cc-opt=-Wno-error) and to my surprise, > the interrupted request issue has disappeared. This means that having the > passenger module compiled into an application changes some aspect of > timeouts for all requests, independent of whether you have turned the > passenger directive on or not. > > Wow... Now I understand why Phusion sells commercial versions of passenger > that add support for timeout configuration parameters. Time to abandon > Passenger. > > I sort of suspected I wasn't having an nginx issue.
;) > > Joe > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,236212#msg-236212 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Feb 14 10:55:47 2013 From: nginx-forum at nginx.us (mottwsc) Date: Thu, 14 Feb 2013 05:55:47 -0500 Subject: only allow traffic from specific servers Message-ID: BACKGROUND: I will be setting up two servers on a hosted platform. I plan to use NGINX as a front-end proxy for APACHE. Let's call the two servers App Server 1 and App Server 2. I also have a server on another hosting platform and that has my home page. Let's call that OUTSIDE Server. App Server 1 and App Server 2 will house the application. When the user clicks on the sign in button on the OUTSIDE Server, the user will be redirected to App Server 1. Basic round-robin load balancing through NGINX on App Server 1 will assign the user to sign in on either that server (App Server 1) or on App Server 2. QUESTIONS: (1) I want to only allow App Server 1 to accept traffic coming from either OUTSIDE Server or App Server 2. Even if the user had bookmarked App Server 1, they will need to go back thru OUTSIDE Server to sign in. How do I do this with NGINX? (2) Very similar issue with App Server 2, except that it can only have traffic coming from App Server 1. How do I do this with NGINX? Thanks for any suggestions! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236220,236220#msg-236220 From mdounin at mdounin.ru Thu Feb 14 11:07:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Feb 2013 15:07:29 +0400 Subject: Content-Length ###s being served, but no actual content In-Reply-To: <0b642d26e676e34c80eef0b573689dde.NginxMailingListEnglish@forum.nginx.org> References: <0b642d26e676e34c80eef0b573689dde.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130214110729.GF40751@mdounin.ru> Hello! 
On Wed, Feb 13, 2013 at 01:54:08PM -0500, tsaavik wrote: > I've discovered what I believe to be a bug (yeah I know everyone is already > rolling their eyes) :D > > I believe that my CGI is returning some data (perhaps Content-Type header) > and then is occasionally stalling due to backend load. > This is fine, and fastcgi_read_timeout can be adjusted to deal with the > issue of course. > However, it appears that when Nginx times out the connection it still serves > a non-empty Content-Length! > > HTTP/1.1 200 OK^M > Server: nginx/1.2.6^M > Date: Tue, 05 Feb 2013 20:12:31 GMT^M > Content-Type: text/x-json^M > Content-Length: 397^M > Connection: keep-alive^M > X-WSL-Version: 1.0^M > ^M > HTTP/1.1 200 OK^M > Server: nginx/1.2.6^M > Date: Tue, 05 Feb 2013 20:12:38 GMT^M > Content-Type: text/x-json^M > Content-Length: 396^M > Connection: keep-alive^M > ^M > (There is actual JSON data here, I clipped it out, trust me!) > > ---- > See that 397 bytes of non-existent data? > I tested on nginx/0.8.53 and nginx/1.2.6. I'm using netcat6 to run the tests > with http-pipelining. > I also tried various different kernels, and ran the tests on my local > network to rule out weird internet stuff. Here nginx just returns the Content-Length it got from the backend, and there is no way to withdraw headers if a problem with the backend happens after the headers are sent. What is wrong here is that the next pipelined request is answered if an error happens while reading the response body from the backend. Instead, the client connection should just be closed, as there is no way to recover. This is a somewhat known problem, and its resolution waits for the upstream module error handling audit. It's relatively low priority though, as it doesn't appear frequently: the problem has to happen after the headers are sent, during transmission of a response body.
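The truncation described above is straightforward to detect on the client side. The following Python sketch is illustrative only (the helper name and structure are not from this thread or from nginx): it checks whether a raw HTTP/1.1 response actually carries as many body bytes as its Content-Length header advertises.

```python
def body_is_complete(raw: bytes) -> bool:
    """Check whether an HTTP/1.1 response body matches its Content-Length.

    Hypothetical helper for illustration: it detects the truncation
    discussed above, where the backend fails after headers advertising
    a full-length body have already been sent to the client.
    """
    head, sep, body = raw.partition(b"\r\n\r\n")
    if not sep:
        return False  # the header block itself is truncated
    declared = None
    for line in head.split(b"\r\n")[1:]:  # skip the status line
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            declared = int(value.strip().decode("ascii"))
    if declared is None:
        return True  # no Content-Length: body is delimited by close
    return len(body) >= declared
```

A response like the 397-byte one quoted above, cut short when the connection is closed early, fails this check, while a response whose body length matches the header passes.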
-- Maxim Dounin http://nginx.com/support.html From francis at daoine.org Thu Feb 14 12:12:20 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 14 Feb 2013 12:12:20 +0000 Subject: only allow traffic from specific servers In-Reply-To: References: Message-ID: <20130214121220.GE32392@craic.sysops.org> On Thu, Feb 14, 2013 at 05:55:47AM -0500, mottwsc wrote: Hi there, > When the user clicks on the sign in button on > the OUTSIDE Server, the user will be redirected to App Server 1. > QUESTIONS: > (1) I want to only allow App Server 1 to accept traffic coming from either > OUTSIDE Server or App Server 2. Even if the user had bookmarked App Server > 1, they will need to go back thru OUTSIDE Server to sign in. How do I do > this with NGINX? If you use nginx to redirect to the app servers, what you seem to want can't be done. If you use nginx to proxy_pass to the app servers, what you seem to want can be done. But doing it is unrelated to nginx. Look at the configuration of the app servers and/or the network devices protecting them. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Feb 14 22:05:21 2013 From: nginx-forum at nginx.us (etrader) Date: Thu, 14 Feb 2013 17:05:21 -0500 Subject: Regex for rewrite of subdomains Message-ID: <0641b34dc36fc5bbbaab10b75a920dad.NginxMailingListEnglish@forum.nginx.org> I want to rewrite subdomains by adding a separate server for subdomains as server { server_name domain.com ... 
} server { server_name *.domain.com } but how can I add a rewrite rule in the second server to process subdomain requests as keyword.domain.com/query=some to domain.com/script.php?query=some&sub=keyword Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236231,236231#msg-236231 From steve at greengecko.co.nz Thu Feb 14 22:51:12 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 15 Feb 2013 11:51:12 +1300 Subject: Regex for rewrite of subdomains In-Reply-To: <0641b34dc36fc5bbbaab10b75a920dad.NginxMailingListEnglish@forum.nginx.org> References: <0641b34dc36fc5bbbaab10b75a920dad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1360882272.1261.390.camel@steve-new> I'd take a look at using pattern matching in a map ( http://wiki.nginx.org/HttpMapModule ) and redirecting if the default value isn't found? Maybe not the most effective, but simpler to maintain... Steve On Thu, 2013-02-14 at 17:05 -0500, etrader wrote: > I want to rewrite subdomains by adding a separate server for subdomains as > > > server { > server_name domain.com > ... > } > server { > server_name *.domain.com > } > > but how can I add a rewrite rule in the second server to process > subdomain requests as > > keyword.domain.com/query=some to > domain.com/script.php?query=some&sub=keyword > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236231,236231#msg-236231 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From list_nginx at bluerosetech.com Fri Feb 15 05:48:11 2013 From: list_nginx at bluerosetech.com (Darren Pilgrim) Date: Thu, 14 Feb 2013 21:48:11 -0800 Subject: Regex for rewrite of subdomains In-Reply-To: <0641b34dc36fc5bbbaab10b75a920dad.NginxMailingListEnglish@forum.nginx.org> References: <0641b34dc36fc5bbbaab10b75a920dad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <511DCC1B.7060302@bluerosetech.com> On 2013-02-14 14:05, etrader wrote: > I want to rewrite subdomains by adding a separate server for subdomains as [...] > keyword.domain.com/query=some to > domain.com/script.php?query=some&sub=keyword server { server_name *.domain.com; if ($http_host ~* (.+)\.domain.com$) { set $keyword $1; } if ($request_uri ~* ^/query=(.+)) { return 301 http://domain.com/script.php?query=$1&sub=$keyword; } } I'm not sure a map would be faster in this case because you don't need a multitude of patterns. From igor at sysoev.ru Fri Feb 15 06:21:48 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 15 Feb 2013 10:21:48 +0400 Subject: Regex for rewrite of subdomains In-Reply-To: <511DCC1B.7060302@bluerosetech.com> References: <0641b34dc36fc5bbbaab10b75a920dad.NginxMailingListEnglish@forum.nginx.org> <511DCC1B.7060302@bluerosetech.com> Message-ID: On Feb 15, 2013, at 9:48 , Darren Pilgrim wrote: > On 2013-02-14 14:05, etrader wrote: >> I want to rewrite subdomains by adding a separate server for subdomains as > [...] > > keyword.domain.com/query=some to > > domain.com/script.php?query=some&sub=keyword > > server { > server_name *.domain.com; > > if ($http_host ~* (.+)\.domain.com$) { > set $keyword $1; > } > > if ($request_uri ~* ^/query=(.+)) { > return 301 http://domain.com/script.php?query=$1&sub=$keyword; > } > } Oh, NO! 
server { server_name ~^(?<keyword>.+)\.domain\.com$; return 301 http://domain.com/script.php?query=$arg_query&sub=$keyword; } Or if the server may process something except "query": server { server_name ~^(?<keyword>.+)\.domain\.com$; if ($arg_query) { return 301 http://domain.com/script.php?query=$arg_query&sub=$keyword; } ... } -- Igor Sysoev http://nginx.com/support.html From nginx-forum at nginx.us Fri Feb 15 14:33:57 2013 From: nginx-forum at nginx.us (antoinebk) Date: Fri, 15 Feb 2013 09:33:57 -0500 Subject: nginx not reverse proxying correctly Message-ID: Hello, I come seeking your help because I have a problem that I have been unable to solve using the classic Internet resources. Currently, we have an Apache2 webserver acting as load balancer for our web architecture. The backends are Xen virtual machines accessible via IPv6 for the public Internet and via IPv4 for our VPN. The problem is that the Apache2 load balancer doesn't perform as well as we would like it to, so we're switching to nginx. The version of nginx installed is 1.3.10, which was compiled with standard Debian options. We had to go for this version because it was the only one that supports IPv6 backends, which is a requirement for these VMs. For the moment, nginx only has one "virtual host" or server block and it is the following.
upstream backend-cookissime-prod { server cookissime-prod.cookissime1.vm.cob:80 max_fails=5; server cookissime-prod.cookissime2.vm.cob:80 max_fails=5; } server { listen 37.59.6.220:80; # listen [::]:80; server_name www.cookissime.fr; access_log /var/log/nginx/cookissime-prod.log; error_log /var/log/nginx/cookissime-prod.log; ## send request back to apache1 ## location / { proxy_pass http://backend-cookissime-prod; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } There is also a second code block which takes the above information and replaces prod with dev. The domain names cookissime-prod.cookissime1.vm.cob and cookissime-prod.cookissime2.vm.cob resolve to an IPv6 on our internal DNS. The above configuration seems to be good syntax-wise. The problem is that most of the time, this configuration displays the default "Welcome to nginx" page but sporadically it will display the website for a few minutes then return to the default page. This very setup works correctly with Apache2 so the virtual machines are functional. What am I missing? What could cause these problems? Thank you in advance for your help, Antoine Benkemoun Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236230,236230#msg-236230 From nginx-forum at nginx.us Sat Feb 16 08:43:55 2013 From: nginx-forum at nginx.us (youreright) Date: Sat, 16 Feb 2013 03:43:55 -0500 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: References: Message-ID: I've tried to fix the "Too many connections problem" following the suggested sites: http://www.cyberciti.biz/tips/linux-procfs-file-descriptors.html http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/#comment-79592 I've run into other seemingly nonsensical errors.
I have a new unused server here where I'm trying to install/use nginx for php for the first time. Strange error for unused server? == Firstly, it seems strange to me that I would get "Too many open files" for a new unused server. ulimit -Hn/Sn showed 4096/1024 which seemed adequate while nginx was using only 9/10 according to: ls -l /proc//fd | wc -l Anyhow, I followed the instructions and now I get this error: == 2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough 2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /info.php HTTP/1.0", upstream: "http://127.0.0.1:80/info.php", host: "127.0.0.1" Tried: == I've tried increasing the worker_connections to large numbers e.g. 19999 to no avail. Any tips? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227162,236246#msg-236246 From nginx-forum at nginx.us Sat Feb 16 08:45:40 2013 From: nginx-forum at nginx.us (youreright) Date: Sat, 16 Feb 2013 03:45:40 -0500 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: References: Message-ID: I forgot to click "Follow Topic", so I'm posting again just to do that, as I don't see any way to alter my previous post to enable follow. So please reply after this post or in some other way by which I'll be notified of your reply. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227162,236247#msg-236247 From mdounin at mdounin.ru Sat Feb 16 09:31:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 16 Feb 2013 13:31:32 +0400 Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work In-Reply-To: References: Message-ID: <20130216093132.GX40751@mdounin.ru> Hello!
On Sat, Feb 16, 2013 at 03:43:55AM -0500, youreright wrote: > I've tried to fix the "Too many connections problem" following the suggested > sites: > http://www.cyberciti.biz/tips/linux-procfs-file-descriptors.html > http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/#comment-79592 > > I've ran into other seemingly nonsensical errors. > > I have a new unused server here where I?m trying to install/use nginx for > php for the first time. > > Strange error for unused server? > == > Firstly, it seems strange to me that I would get ?Too many open files? for a > new unused server. ulimit -Hn/Sn showed 4096/1024 which seemed adequate whie > nginx was using only 9/10 acccording to: ls -l /proc//fd | wc -l > > Anyhow, I followed the instructions and now I get this error: > == > 2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough > 2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection > reset by peer) while reading response header from upstream, client: > 127.0.0.1, server: localhost, request: ?GET /info.php HTTP/1.0?, upstream: > ?http://127.0.0.1:80/info.php?, host: ?127.0.0.1? > > Tried: > == > I?ve tried increasing the worker_connections to large numbers e.g. 19999 to > no avail. > > Any tips? Error message (in particular, "127.0.0.1:80" as an upstream, and "127.0.0.1" as a server) suggests you have proxy loop in your configuration. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Sat Feb 16 22:27:51 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sat, 16 Feb 2013 17:27:51 -0500 Subject: installing nginx on centos should be straightforward Message-ID: <7a9652cc69dd89f47a8d85eaefbdba1b.NginxMailingListEnglish@forum.nginx.org> I have set up a virtual server with a LAMP stack (Centos 6.3) and am now trying to install nginx to use as a proxy server. 
I have been told to download from sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm and have done so prior to trying to install nginx. However, something was wrong with this because when I tried to install nginx (sudo yum install nginx), there were a number of packages that were skipped due to dependency problems. When I tried to start nginx, it could not find the files. Given that nginx is a well-used server, this has to be easier than what I am encountering. Does anyone know the correct commands to set this up so that nginx can install? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236254#msg-236254 From steve at greengecko.co.nz Sun Feb 17 04:35:41 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sun, 17 Feb 2013 17:35:41 +1300 Subject: installing nginx on centos should be straightforward In-Reply-To: <7a9652cc69dd89f47a8d85eaefbdba1b.NginxMailingListEnglish@forum.nginx.org> References: <7a9652cc69dd89f47a8d85eaefbdba1b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1361075741.1261.489.camel@steve-new> On Sat, 2013-02-16 at 17:27 -0500, mottwsc wrote: > I have set up a virtual server with a LAMP stack (Centos 6.3) and am now > trying to install nginx to use as a proxy server. I have been told to > download from sudo rpm -Uvh > http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm and > have done so prior to trying to install nginx. However, something was wrong > with this because when I tried to install nginx (sudo yum install nginx), > there were a number of packages that were skipped due to dependency > problems. When I tried to start nginx, it could not find the files. > > Given that nginx is a well-used server, this has to be easier than what I am > encountering. Does anyone know the correct commands to set this up so that > nginx can install? > > Thanks. It is. If you're going to install nginx from a repo, why not use theirs? 
rpm -Uvh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm then yum install nginx ( next question... why only use it as a proxy server? ) Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From nginx-forum at nginx.us Sun Feb 17 11:52:30 2013 From: nginx-forum at nginx.us (fluffypony) Date: Sun, 17 Feb 2013 06:52:30 -0500 Subject: Debugging performance under high load Message-ID: <1b1f45e623d13ab7fa1566d20eb07197.NginxMailingListEnglish@forum.nginx.org> Hi all, I have a reasonably beefy VPS (16gb RAM, 4x vCores) running Ubuntu 12.04 LTS on a 1GigE line that is basically uncontested at the moment. Speed tests on the box show reasonably high bandwidth available up and down (VirtIO isn't on at the moment, but that doesn't seem to be affecting it). When doing a load test on a static object via HTTPS (apachebench on a 100kb image) with a concurrency of 1000 I'm seeing pretty poor performance - 450 requests per second, about 4.5mbps traffic, and an average of about 2.2s per request. Monitoring the server in htop I'm not seeing the memory even twitch above 570mb (out of 16gb) and an overall processor usage of like 25% per core, if that much. My config is fairly standard - this is a static file, after all, so it's not even touching php-fpm. I have my hard and soft ulimits raised to 100k for the www-data user. I have my worker_processes set to 4, worker_rlimit_nofile set to 100k, and worker_connections set to 2048. multi_accept is on and epoll is on. I have a keepalive timeout of 2. For the purposes of this test I have a self-signed cert on the server, the ssl_protocols are set to SSLv2 SSLv3 TLSv1; and the ssl_ciphers are set to RC4:HIGH:!aNULL:!MD5:!kEDH;. Suggestions? 
How do I debug the poor performance so I at least know what to fix? Is there a way to step through exactly what is happening in a request under load to see where it's being delayed? I'd like to get it up to at least 1k RPS if not more, and I believe the server and the bandwidth are up to the task. FP Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236260,236260#msg-236260 From nginx-forum at nginx.us Sun Feb 17 13:42:52 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sun, 17 Feb 2013 08:42:52 -0500 Subject: installing nginx on centos should be straightforward In-Reply-To: <1361075741.1261.489.camel@steve-new> References: <1361075741.1261.489.camel@steve-new> Message-ID: <547c4a1c9f51779ba5e57fc2640b7be9.NginxMailingListEnglish@forum.nginx.org> Thanks for the suggestion, Steve. I was working from that angle before based on advice from a person at my hosting company and had used the nginx repo. I am addressing three points in response. Any suggestions/thoughts from you and/or others are appreciated. (1) Reason for nginx and apache: The reason I am planning to use nginx on the front end and apache on the back end (instead of nginx for all of it) is that I've read in an article that apache's power and nginx's speed are well known. But apache is hard on server memory, and nginx (while great at static files) needs the help of php-fpm or similar modules for dynamic content. The article goes on to recommend that you combine the two web servers, with nginx as static web server front and apache processing the back end. My application has a lot of dynamic content including videos, and makes use of ajax and jquery. What are your and others' thoughts on the nginx / apache / both question?
(2) Your command to get the nginx repo: when I tried this again with your specific command, I got: [m at 01 ~]$ rpm -Uvh http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm Retrieving http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm curl: (6) Couldn't resolve host 'nqinx.org' error: skipping http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm - transfer failed (3) Past attempt at installing nginx in a similar way: I'm pasting the output from this past attempt in case anyone can see what might be missing or wrong... [m at 01 ~]$ wget http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm --2013-02-17 01:35:23-- http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm Resolving nginx.org... 206.251.255.63 Connecting to nginx.org|206.251.255.63|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 4311 (4.2K) [application/x-redhat-package-manager] Saving to: `nginx-release-centos-6-0.el6.ngx.noarch.rpm' 100%[======================================>] 4,311 --.-K/s in 0.07s 2013-02-17 01:35:23 (61.7 KB/s) - `nginx-release-centos-6-0.el6.ngx.noarch.rpm' saved [4311/4311] [m at 01 ~]$ rpm -ivh nginx-release-centos-6-0.el6.ngx.noarch.rpm warning: nginx-release-centos-6-0.el6.ngx.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 7bd9bf62: NOKEY error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Permission denied) [m at 01 ~]$ sudo yum install nginx [sudo] password for m: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirrors.lga7.us.voxel.net * epel: epel.mirror.constant.com * extras: mirror.symnds.com * updates: mirror.team-cymru.org Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) 
for package: nginx-0.8.55-2.el5.x86_64 --> Processing Dependency: libxslt.so.1()(64bit) for package: nginx-0.8.55-2.el5.x86_64 --> Processing Dependency: libssl.so.6()(64bit) for package: nginx-0.8.55-2.el5.x86_64 --> Processing Dependency: libgd.so.2()(64bit) for package: nginx-0.8.55-2.el5.x86_64 --> Processing Dependency: libexslt.so.0()(64bit) for package: nginx-0.8.55-2.el5.x86_64 --> Processing Dependency: libcrypto.so.6()(64bit) for package: nginx-0.8.55-2.el5.x86_64 --> Processing Dependency: libGeoIP.so.1()(64bit) for package: nginx-0.8.55-2.el5.x86_64 --> Running transaction check ---> Package GeoIP.x86_64 0:1.4.8-1.el5 will be installed ---> Package gd.x86_64 0:2.0.35-10.el6 will be installed --> Processing Dependency: libpng12.so.0(PNG12_0)(64bit) for package: gd-2.0.35-10.el6.x86_64 --> Processing Dependency: libpng12.so.0()(64bit) for package: gd-2.0.35-10.el6.x86_64 --> Processing Dependency: libjpeg.so.62()(64bit) for package: gd-2.0.35-10.el6.x86_64 --> Processing Dependency: libfreetype.so.6()(64bit) for package: gd-2.0.35-10.el6.x86_64 --> Processing Dependency: libfontconfig.so.1()(64bit) for package: gd-2.0.35-10.el6.x86_64 --> Processing Dependency: libXpm.so.4()(64bit) for package: gd-2.0.35-10.el6.x86_64 --> Processing Dependency: libX11.so.6()(64bit) for package: gd-2.0.35-10.el6.x86_64 ---> Package libxslt.x86_64 0:1.1.26-2.el6_3.1 will be installed ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: nginx-0.8.55-2.el5.x86_64 ---> Package openssl098e.x86_64 0:0.9.8e-17.el6.centos.2 will be installed --> Running transaction check ---> Package fontconfig.x86_64 0:2.8.0-3.el6 will be installed ---> Package freetype.x86_64 0:2.3.11-14.el6_3.1 will be installed ---> Package libX11.x86_64 0:1.3-2.el6 will be installed --> Processing Dependency: libX11-common = 1.3-2.el6 for package: libX11-1.3-2.el6.x86_64 --> Processing Dependency: libxcb.so.1()(64bit) for package: 
libX11-1.3-2.el6.x86_64 ---> Package libXpm.x86_64 0:3.5.8-2.el6 will be installed ---> Package libjpeg.x86_64 0:6b-46.el6 will be installed ---> Package libpng.x86_64 2:1.2.49-1.el6_2 will be installed ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: nginx-0.8.55-2.el5.x86_64 --> Running transaction check ---> Package libX11-common.noarch 0:1.3-2.el6 will be installed ---> Package libxcb.x86_64 0:1.5-1.el6 will be installed --> Processing Dependency: libXau.so.6()(64bit) for package: libxcb-1.5-1.el6.x86_64 ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: nginx-0.8.55-2.el5.x86_64 --> Running transaction check ---> Package libXau.x86_64 0:1.0.5-1.el6 will be installed ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: nginx-0.8.55-2.el5.x86_64 --> Finished Dependency Resolution Error: Package: nginx-0.8.55-2.el5.x86_64 (epel) Requires: perl(:MODULE_COMPAT_5.8.8) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest [m at 01 ~]$ sudo /etc/init.d/nginx start sudo: /etc/init.d/nginx: command not found Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236259#msg-236259 From nginx-forum at nginx.us Sun Feb 17 13:57:44 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sun, 17 Feb 2013 08:57:44 -0500 Subject: installing nginx on centos should be straightforward In-Reply-To: <547c4a1c9f51779ba5e57fc2640b7be9.NginxMailingListEnglish@forum.nginx.org> References: <1361075741.1261.489.camel@steve-new> <547c4a1c9f51779ba5e57fc2640b7be9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Installing as root allowed the method under (3) to work. Still have a problem starting nginx, but I'll see if I can work thru that. 
I would appreciate comments on item (1), though, from anyone's experience. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236263#msg-236263 From rkearsley at blueyonder.co.uk Sun Feb 17 14:11:09 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Sun, 17 Feb 2013 14:11:09 +0000 Subject: installing nginx on centos should be straightforward In-Reply-To: <547c4a1c9f51779ba5e57fc2640b7be9.NginxMailingListEnglish@forum.nginx.org> References: <1361075741.1261.489.camel@steve-new> <547c4a1c9f51779ba5e57fc2640b7be9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5120E4FD.9000506@blueyonder.co.uk> what was the article that you read? you should probably do your own tests to work out the fastest way to do it if you really need as many dynamic requests as possible My thoughts at this point (after using nginx for 3+ years) are that I would avoid using apache - KISS! On 17/02/13 13:42, mottwsc wrote: > Thanks for the suggestion, Steve. I was working from that angle before > based on advice from a person at my hosting company and had used the nginx > repo. I am addressing three points in response. Any suggestions/thoughts > from you and/or others are appreciated. > > (1) Reason for nginx and apache: > The reason I am planning to use nginx on the front end and apache on the > back end (instead of nginx for all of it) is that I've read in an article > that apache's power and nginx's speed are well known. But apache is hard on > server memory, and nginx (while great at static files) needs the help of > php-fpm or similar modules for dynamic content. The article goes on to > recommend that you combine the two web servers, with nginx as static web > server front and apache processing the back end. My application has a lot > of dynamic content including videos, and makes use of ajax and jquery. > > What are your and others' thoughts on the nginx / apache / both question?
> > > (2) Your command to get the nginx repo: > when I tried this again with your specific command, I got: > [m at 01 ~]$ rpm -Uvh > http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > Retrieving > http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > curl: (6) Couldn't resolve host 'nqinx.org' > error: skipping > http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > - transfer failed > > > (3) Past attempt at installing nginx in a similar way: > I'm pasting the output from this past attempt in case anyone can see what > might be missing or wrong... > [m at 01 ~]$ wget > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > --2013-02-17 01:35:23-- > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > Resolving nginx.org... 206.251.255.63 > Connecting to nginx.org|206.251.255.63|:80... connected. > HTTP request sent, awaiting response... 
200 OK > Length: 4311 (4.2K) [application/x-redhat-package-manager] > Saving to: `nginx-release-centos-6-0.el6.ngx.noarch.rpm' > > 100%[======================================>] 4,311 --.-K/s in 0.07s > > 2013-02-17 01:35:23 (61.7 KB/s) - > `nginx-release-centos-6-0.el6.ngx.noarch.rpm' saved [4311/4311] > > [m at 01 ~]$ rpm -ivh nginx-release-centos-6-0.el6.ngx.noarch.rpm > warning: nginx-release-centos-6-0.el6.ngx.noarch.rpm: Header V4 RSA/SHA1 > Signature, key ID 7bd9bf62: NOKEY > error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Permission > denied) > [m at 01 ~]$ sudo yum install nginx > [sudo] password for m: > Loaded plugins: fastestmirror > Loading mirror speeds from cached hostfile > * base: mirrors.lga7.us.voxel.net > * epel: epel.mirror.constant.com > * extras: mirror.symnds.com > * updates: mirror.team-cymru.org > Setting up Install Process > Resolving Dependencies > --> Running transaction check > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libxslt.so.1()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libssl.so.6()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libgd.so.2()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libexslt.so.0()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libcrypto.so.6()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libGeoIP.so.1()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Running transaction check > ---> Package GeoIP.x86_64 0:1.4.8-1.el5 will be installed > ---> Package gd.x86_64 0:2.0.35-10.el6 will be installed > --> Processing Dependency: libpng12.so.0(PNG12_0)(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libpng12.so.0()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> 
Processing Dependency: libjpeg.so.62()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libfreetype.so.6()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libfontconfig.so.1()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libXpm.so.4()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libX11.so.6()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > ---> Package libxslt.x86_64 0:1.1.26-2.el6_3.1 will be installed > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > ---> Package openssl098e.x86_64 0:0.9.8e-17.el6.centos.2 will be installed > --> Running transaction check > ---> Package fontconfig.x86_64 0:2.8.0-3.el6 will be installed > ---> Package freetype.x86_64 0:2.3.11-14.el6_3.1 will be installed > ---> Package libX11.x86_64 0:1.3-2.el6 will be installed > --> Processing Dependency: libX11-common = 1.3-2.el6 for package: > libX11-1.3-2.el6.x86_64 > --> Processing Dependency: libxcb.so.1()(64bit) for package: > libX11-1.3-2.el6.x86_64 > ---> Package libXpm.x86_64 0:3.5.8-2.el6 will be installed > ---> Package libjpeg.x86_64 0:6b-46.el6 will be installed > ---> Package libpng.x86_64 2:1.2.49-1.el6_2 will be installed > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Running transaction check > ---> Package libX11-common.noarch 0:1.3-2.el6 will be installed > ---> Package libxcb.x86_64 0:1.5-1.el6 will be installed > --> Processing Dependency: libXau.so.6()(64bit) for package: > libxcb-1.5-1.el6.x86_64 > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Running transaction check > ---> Package libXau.x86_64 0:1.0.5-1.el6 will be 
installed > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Finished Dependency Resolution > Error: Package: nginx-0.8.55-2.el5.x86_64 (epel) > Requires: perl(:MODULE_COMPAT_5.8.8) > You could try using --skip-broken to work around the problem > You could try running: rpm -Va --nofiles --nodigest > [m at 01 ~]$ sudo /etc/init.d/nginx start > sudo: /etc/init.d/nginx: command not found > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236259#msg-236259 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Feb 17 14:26:29 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sun, 17 Feb 2013 09:26:29 -0500 Subject: installing nginx on centos should be straightforward In-Reply-To: <5120E4FD.9000506@blueyonder.co.uk> References: <5120E4FD.9000506@blueyonder.co.uk> Message-ID: <2c2cb1542b9cf03bdf08d410d21b7a44.NginxMailingListEnglish@forum.nginx.org> Here's the article: https://www.digitalocean.com/community/articles/how-to-configure-nginx-as-a-front-end-proxy-for-apache I agree that if I could do it all with one web server, it would be simpler/cleaner. I'm just not sure based on what is in this article that nginx will be fine alone - "nginx (great at static files) needs the help of php-fpm or similar modules for dynamic content". 
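[For reference, the php-fpm setup the quoted article alludes to is just a FastCGI handoff inside nginx.conf. A minimal sketch (the server_name, root, and socket address below are placeholders, not values from this thread):

```nginx
server {
    listen 80;
    server_name example.com;        # placeholder
    root /var/www/html;             # placeholder

    # static files are served directly by nginx
    location / {
        try_files $uri $uri/ =404;
    }

    # dynamic .php requests are handed to php-fpm over FastCGI
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;  # or unix:/var/run/php-fpm.sock
    }
}
```

With this in place there is no need for an Apache backend for PHP content; php-fpm fills that role.]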
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236265#msg-236265 From rkearsley at blueyonder.co.uk Sun Feb 17 15:36:26 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Sun, 17 Feb 2013 15:36:26 +0000 Subject: installing nginx on centos should be straightforward In-Reply-To: <2c2cb1542b9cf03bdf08d410d21b7a44.NginxMailingListEnglish@forum.nginx.org> References: <5120E4FD.9000506@blueyonder.co.uk> <2c2cb1542b9cf03bdf08d410d21b7a44.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5120F8FA.5070004@blueyonder.co.uk> Hi. Many (MANY) people use php-fpm and it's fine. If you really need extra performance, you should test it yourself on your own application (not hard to do) and see if proxying to apache actually gives any benefit. From nginx-forum at nginx.us Sun Feb 17 16:33:00 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sun, 17 Feb 2013 11:33:00 -0500 Subject: installing nginx on centos should be straightforward In-Reply-To: <5120F8FA.5070004@blueyonder.co.uk> References: <5120F8FA.5070004@blueyonder.co.uk> Message-ID: <4cf71e9649f12fd1b1ab16a11c5f49f8.NginxMailingListEnglish@forum.nginx.org> It looks like PHP-FPM comes with PHP 5.3.3, so I should have it already for use with nginx. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236268#msg-236268 From steve at greengecko.co.nz Sun Feb 17 18:14:05 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 18 Feb 2013 07:14:05 +1300 Subject: installing nginx on centos should be straightforward In-Reply-To: <547c4a1c9f51779ba5e57fc2640b7be9.NginxMailingListEnglish@forum.nginx.org> References: <1361075741.1261.489.camel@steve-new> <547c4a1c9f51779ba5e57fc2640b7be9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2FEB67AC-5CAF-4FD5-93E7-284A83DD7961@greengecko.co.nz> Sent from my iPad On 18/02/2013, at 2:42 AM, "mottwsc" wrote: > Thanks for the suggestion, Steve.
I was working from that angle before > based on advice from a person at my hosting company and had used the nginx > repo. I am addressing three points in response. Any suggestions/thoughts > from you and/or others are appreciated. > > (1) Reason for nginx and apache: > The reason I am planning to use nginx on the front end and apache on the > back end (instead of nginx for all of it) is that I've read in an article > that apache's power and nginx's speed are well known. But apache is hard on > server memory, and nginx (while great at static files) needs the help of > php-fpm or similar modules for dynamic content. The article goes on to > recommend that you combine the two web servers, with nginx as static web > server front and apache processing the back end. My application has a lot > of dynamic content including videos, and makes use of ajax and jquery. > > What are your and others' thoughts on the nginx / apache / both question? I use nginx and PHP-fpm without any problems. I only use apache when forced. > > > (2) Your command to get the nginx repo: > when I tried this again with your specific command, I got: > [m at 01 ~]$ rpm -Uvh > http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > Retrieving > http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > curl: (6) Couldn't resolve host 'nqinx.org' > error: skipping > http://nqinx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > - transfer failed > Try nginx.org, not nqinx.org > > (3) Past attempt at installing nginx in a similar way: > I'm pasting the output from this past attempt in case anyone can see what > might be missing or wrong... > [m at 01 ~]$ wget > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > --2013-02-17 01:35:23-- > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > Resolving nginx.org...
206.251.255.63 > Connecting to nginx.org|206.251.255.63|:80... connected. > HTTP request sent, awaiting response... 200 OK > Length: 4311 (4.2K) [application/x-redhat-package-manager] > Saving to: `nginx-release-centos-6-0.el6.ngx.noarch.rpm' > > 100%[======================================>] 4,311 --.-K/s in 0.07s > > 2013-02-17 01:35:23 (61.7 KB/s) - > `nginx-release-centos-6-0.el6.ngx.noarch.rpm' saved [4311/4311] > > [m at 01 ~]$ rpm -ivh nginx-release-centos-6-0.el6.ngx.noarch.rpm > warning: nginx-release-centos-6-0.el6.ngx.noarch.rpm: Header V4 RSA/SHA1 > Signature, key ID 7bd9bf62: NOKEY > error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Permission > denied) This failed. You need to use sudo for this as well. After fixing this, you'll be installing 1.2.7, not 0.8. > [m at 01 ~]$ sudo yum install nginx > [sudo] password for m: > Loaded plugins: fastestmirror > Loading mirror speeds from cached hostfile > * base: mirrors.lga7.us.voxel.net > * epel: epel.mirror.constant.com > * extras: mirror.symnds.com > * updates: mirror.team-cymru.org > Setting up Install Process > Resolving Dependencies > --> Running transaction check > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libxslt.so.1()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libssl.so.6()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libgd.so.2()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libexslt.so.0()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libcrypto.so.6()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Processing Dependency: libGeoIP.so.1()(64bit) for package: > nginx-0.8.55-2.el5.x86_64 > --> Running transaction check > ---> Package GeoIP.x86_64 0:1.4.8-1.el5 will be installed > ---> Package gd.x86_64 
0:2.0.35-10.el6 will be installed > --> Processing Dependency: libpng12.so.0(PNG12_0)(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libpng12.so.0()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libjpeg.so.62()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libfreetype.so.6()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libfontconfig.so.1()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libXpm.so.4()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > --> Processing Dependency: libX11.so.6()(64bit) for package: > gd-2.0.35-10.el6.x86_64 > ---> Package libxslt.x86_64 0:1.1.26-2.el6_3.1 will be installed > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > ---> Package openssl098e.x86_64 0:0.9.8e-17.el6.centos.2 will be installed > --> Running transaction check > ---> Package fontconfig.x86_64 0:2.8.0-3.el6 will be installed > ---> Package freetype.x86_64 0:2.3.11-14.el6_3.1 will be installed > ---> Package libX11.x86_64 0:1.3-2.el6 will be installed > --> Processing Dependency: libX11-common = 1.3-2.el6 for package: > libX11-1.3-2.el6.x86_64 > --> Processing Dependency: libxcb.so.1()(64bit) for package: > libX11-1.3-2.el6.x86_64 > ---> Package libXpm.x86_64 0:3.5.8-2.el6 will be installed > ---> Package libjpeg.x86_64 0:6b-46.el6 will be installed > ---> Package libpng.x86_64 2:1.2.49-1.el6_2 will be installed > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Running transaction check > ---> Package libX11-common.noarch 0:1.3-2.el6 will be installed > ---> Package libxcb.x86_64 0:1.5-1.el6 will be installed > --> Processing Dependency: libXau.so.6()(64bit) for package: > libxcb-1.5-1.el6.x86_64 > ---> Package 
nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Running transaction check > ---> Package libXau.x86_64 0:1.0.5-1.el6 will be installed > ---> Package nginx.x86_64 0:0.8.55-2.el5 will be installed > --> Processing Dependency: perl(:MODULE_COMPAT_5.8.8) for package: > nginx-0.8.55-2.el5.x86_64 > --> Finished Dependency Resolution > Error: Package: nginx-0.8.55-2.el5.x86_64 (epel) > Requires: perl(:MODULE_COMPAT_5.8.8) > You could try using --skip-broken to work around the problem > You could try running: rpm -Va --nofiles --nodigest > [m at 01 ~]$ sudo /etc/init.d/nginx start > sudo: /etc/init.d/nginx: command not found > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236259#msg-236259 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Sun Feb 17 22:09:31 2013 From: nginx-forum at nginx.us (mottwsc) Date: Sun, 17 Feb 2013 17:09:31 -0500 Subject: installing nginx on centos should be straightforward In-Reply-To: <2FEB67AC-5CAF-4FD5-93E7-284A83DD7961@greengecko.co.nz> References: <2FEB67AC-5CAF-4FD5-93E7-284A83DD7961@greengecko.co.nz> Message-ID: Thanks for catching that typo, GreenGecko. I was able to get nginx installed, but at this point it won't start (bind, that is). Is this problem familiar to anyone? >>> the end of the installation... Installed: nginx.x86_64 0:1.2.7-1.el6.ngx Complete! >>> trying to start nginx...
[root at 01 ~]# /etc/init.d/nginx start Starting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] still could not bind() [FAILED] Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236273#msg-236273 From steve at greengecko.co.nz Sun Feb 17 22:16:27 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 18 Feb 2013 11:16:27 +1300 Subject: installing nginx on centos should be straightforward In-Reply-To: References: <2FEB67AC-5CAF-4FD5-93E7-284A83DD7961@greengecko.co.nz> Message-ID: <1361139387.1261.530.camel@steve-new> On Sun, 2013-02-17 at 17:09 -0500, mottwsc wrote: > Thanks for catching that type, GreenGecko. > > I was able to get nginx installed, but at this point it won't start (bind, > that is). > > Is this problem familiar to anyone? > > >>> the end of the installation... > Installed: > nginx.x86_64 0:1.2.7-1.el6.ngx > > Complete! > > >>> trying to start nginx... > [root at 01 ~]# /etc/init.d/nginx start > Starting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address > already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] still could not bind() > [FAILED] > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236273#msg-236273 Yes. You've already got something running on port 80... either your old nginx, or apache. Error messages are there to help you... 
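[To find out what is already holding the port before starting nginx, `netstat -tlnp` (or `fuser -v 80/tcp`) names the owning process. For a quick scripted busy/free check, bash's built-in /dev/tcp redirection also works; a sketch, assuming a bash built with network redirections (stock on Linux):

```shell
# Report whether anything is listening on a local TCP port.
# A successful /dev/tcp connect means the port is in use; a refused
# connection (or a shell without /dev/tcp support) reports "free".
port_in_use() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "busy"
  else
    echo "free"
  fi
}

port_in_use 80
```

If port 80 is busy, on CentOS it is usually the distribution's Apache (`service httpd stop; chkconfig httpd off`) or a previously built nginx still running.]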
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From ianevans at digitalhit.com Mon Feb 18 01:39:02 2013 From: ianevans at digitalhit.com (Ian M. Evans) Date: Sun, 17 Feb 2013 20:39:02 -0500 Subject: Rewriting jpg with specific google referer to 1 pixel gif Message-ID: <076ba091271f73f0edb8a3165182e9bd.squirrel@www.digitalhit.com> If I understand correctly, nginx doesn't do multiple conditions in an 'if' or nested if's. Based on some of the ideas being tossed around in this thread (http://www.webmasterworld.com/google/4537063-11-30.htm) I'd like to serve up a one-pixel gif whenever both of the following conditions are met: 1) The http referer is http://www.google.*/blank.html(*) and 2) the requested asset is a jpg. What's the best-performing way to handle that? * I suck at regex, so I just put the asterisk there to signify catching any of the google country domains. Thanks for any suggestions. From agentzh at gmail.com Mon Feb 18 05:01:04 2013 From: agentzh at gmail.com (agentzh) Date: Sun, 17 Feb 2013 21:01:04 -0800 Subject: [ANN] ngx_openresty stable version 1.2.6.6 released In-Reply-To: References: Message-ID: Hello, folks! I am delighted to announce that the new stable version of ngx_openresty, 1.2.6.6, is just out: http://openresty.org/download/ngx_openresty-1.2.6.6.tar.gz And the PGP signature file for this release tar ball is http://openresty.org/download/ngx_openresty-1.2.6.6.tar.gz.asc The PGP public key (with ID A0E98066) has been uploaded to the key servers pgp.mit.edu and keys.gnupg.net. Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (development) release, 1.2.6.5: * upgraded LuaNginxModule to 0.7.15.
* bugfix: the original Lua VM error messages might get lost in case of Lua code crashes when user coroutines were used. thanks Dirk Feytons for the report. * diagnose: added more info about "r->main->count" to the debugging logs. * style: massive coding style fixes according to the Nginx coding style. The following components are bundled: * LuaJIT-2.0.0 * array-var-nginx-module-0.03rc1 * auth-request-nginx-module-0.2 * drizzle-nginx-module-0.1.4 * echo-nginx-module-0.42 * encrypted-session-nginx-module-0.02 * form-input-nginx-module-0.07 * headers-more-nginx-module-0.19 * iconv-nginx-module-0.10rc7 * lua-5.1.5 * lua-cjson-1.0.3 * lua-rds-parser-0.05 * lua-redis-parser-0.10 * lua-resty-dns-0.09 * lua-resty-memcached-0.10 * lua-resty-mysql-0.12 * lua-resty-redis-0.15 * lua-resty-string-0.08 * lua-resty-upload-0.07 * memc-nginx-module-0.13rc3 * nginx-1.2.6 * ngx_coolkit-0.2rc1 * ngx_devel_kit-0.2.18 * ngx_lua-0.7.15 * ngx_postgres-1.0rc2 * rds-csv-nginx-module-0.05rc2 * rds-json-nginx-module-0.12rc10 * redis-nginx-module-0.3.6 * redis2-nginx-module-0.09 * set-misc-nginx-module-0.22rc8 * srcache-nginx-module-0.19 * xss-nginx-module-0.03rc9 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From cnst++ at FreeBSD.org Mon Feb 18 08:06:08 2013 From: cnst++ at FreeBSD.org (Constantine A. Murenin) Date: Mon, 18 Feb 2013 00:06:08 -0800 Subject: A dynamic web-site written wholly in nginx.conf? Introducing mdoc.su! Message-ID: <20130218080608.GA19965@Cns.Cns.SU> This is a multi-part message in MIME format. 
--------------030707040502000103070707 Content-Type: text/plain; charset=KOI8-R; format=flowed Content-Transfer-Encoding: 7bit Dear nginx@, I'm not sure if this has already been done before and to what extent, but I'd like to demonstrate that a whole web-service "portal" can be written exclusively in nginx.conf, without any php, perl, python, java or cgi, or even any files external to nginx.conf. Introducing http://mdoc.su/ . The service provides deterministic URL tinyfication / URI shortening for BSD manual pages. How does it work? It operates on 3 inputs: the operating system, the section and the manual page. For example, to see the kqueue(2) manual page from FreeBSD, you can point your browser to http://mdoc.su/f/kqueue , or mdoc.su/f/kqueue.2, or mdoc.su/f/2/kqueue, or you can even use "FreeBSD" or "freebsd" in place of "f", as in, http://mdoc.su/freebsd/kqueue etc. When nginx receives the request, it quickly gets re-written, and a redirect to FreeBSD.org/cgi/man.cgi is produced. Same for OpenBSD, NetBSD and DragonFly BSD, of course. Forgot how to specify timeouts for ssh(1)? http://mdoc.su/o/ssh The site even has a start page, also exclusively through nginx.conf, and supports Google Webmaster Tools site verification, through the "HTML file upload" option, through nginx.conf (of course!). (BTW, the format of those verification files has changed a couple of years back, where special and unique file content is now required, and the new file format itself is a very-very big secret, that Google will not share with anyone without a file-save-capable browser or an NDA! Reverse-engineered with nginx.conf, too!) Notice that the whole ordeal runs entirely out of nginx and is controlled by an nginx.conf file, which I think is pretty nifty. :-) The source code is available at https://github.com/cnst/mdoc.su , and might also be attached inline to this message. Comments, questions and suggestions are very welcome. Best regards, Constantine. 
--------------030707040502000103070707 Content-Type: text/plain; charset=KOI8-R; name="mdoc.su.nginx.conf" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="mdoc.su.nginx.conf" # cnst: mdoc.su.nginx.conf, 2013-02-14/17 # Deterministic URL shortener for BSD manual pages, written in nginx.conf # Copyright (c) 2013 Constantine A. Murenin # http://mdoc.su/ # https://github.com/cnst/mdoc.su server { listen *:80; listen [::]:80; server_name mdoc.su www.mdoc.su *.mdoc.su; if ($host != "mdoc.su") { rewrite ^ http://mdoc.su$request_uri? redirect; } location = / { default_type text/html; return 200 " mdoc.su — Manual Pages for FreeBSD, OpenBSD, NetBSD and DragonFly BSD!

mdoc.su

man pages for FreeBSD, NetBSD, OpenBSD and DragonFly


Usage:
	mdoc.su/b/p
		or
	mdoc.su/b/0/p
		or
	mdoc.su/b/p.0
	, where
		b is
			f|n|o|d, or 
			FreeBSD|NetBSD|OpenBSD|DragonFly, or 
			same lower case
	, and
		p is the name of the manual page
	, and
		0 is the section number
	.

Now, what's mdoc?
See:
	http://mdoc.su/f/mdoc — according to FreeBSD
	http://mdoc.su/n/mdoc — according to NetBSD
	http://mdoc.su/o/mdoc — according to OpenBSD
	http://mdoc.su/d/mdoc — according to DragonFly

Or, if you will,
	http://mdoc.su/f/mdoc.7
	http://mdoc.su/f/7/mdoc


© 2013 Constantine A. Murenin (cnst)


nginx/$nginx_version at $host

"; } location = /google2a7d1d40a6b37a23.html { rewrite ^/(.*) $1; return 200 "google-site-verification: $uri"; } location /FreeBSD { rewrite ^/FreeBSD(/.*)?$ /f$1; } location /f { set $fb "http://www.freebsd.org/cgi/man.cgi?query="; set $fs "&sektion="; rewrite ^/freebsd(/.*)?$ /.$1; rewrite ^/./([^/.]+)/([^/]+)$ $fb$2$fs$1 redirect; rewrite ^/./([^/]+)\.([1-9])$ $fb$1$fs$2 redirect; rewrite ^/./([^/]+)$ $fb$1$fs redirect; rewrite ^/./?$ / last; return 404; } location /NetBSD { rewrite ^/NetBSD(/.*)?$ /n$1; } location /n { set $nb "http://netbsd.gw.com/cgi-bin/man-cgi?"; rewrite ^/netbsd(/.*)?$ /.$1; rewrite ^/./([a-z]+[0-9]*[kx]?)/([^/]+)/([^/]+)$ $nb$3+$2.$1 redirect; rewrite ^/./([^/]+)/([^/]+)$ $nb$2+$1 redirect; rewrite ^/./([^/]+)\.([1-9]\.[a-z]+[0-9]*[kx]?)$ $nb$1+$2 redirect; rewrite ^/./([^/]+)\.([1-9])$ $nb$1+$2 redirect; rewrite ^/./([^/]+)$ $nb$1 redirect; rewrite ^/./?$ / last; return 404; } location /OpenBSD { rewrite ^/OpenBSD(/.*)?$ /o$1; } location /o { set $ob "http://www.openbsd.org/cgi-bin/man.cgi?query="; set $os "&sektion="; rewrite ^/openbsd(/.*)?$ /.$1; rewrite ^/./([a-z]+[0-9]*[k]?)/([1-9]|3p)/([^/]+)$ $ob$3$os$2&arch=$1 redirect; rewrite ^/./([^/.]+)/([^/]+)$ $ob$2$os$1 redirect; rewrite ^/./([^/]+)\.([1-9]|3p)\.([a-z]+[0-9]*[k]?)$ $ob$1$os$2&arch=$3 redirect; rewrite ^/./([^/]+)\.([1-9]|3p)$ $ob$1$os$2 redirect; rewrite ^/./([^/]+)$ $ob$1$os redirect; rewrite ^/./?$ / last; return 404; } location /DragonFly { rewrite ^/DragonFly(BSD)?(/.*)?$ /d$2; } location /d { set $db "http://leaf.dragonflybsd.org/cgi/web-man?command="; set $ds "&section="; rewrite ^/dragonfly(bsd)?(/.*)?$ /d$2; rewrite ^/d(ragon)?fly(/.*)?$ /d$2; rewrite ^/./([^/.]+)/([^/]+)$ $db$2$ds$1 redirect; rewrite ^/./([^/]+)\.([1-9])$ $db$1$ds$2 redirect; rewrite ^/./([^/]+)$ $db$1$ds redirect; rewrite ^/./?$ / last; return 404; } location / { return 403; } access_log logs/mdoc.su/mdoc.su.access.log combined; error_log logs/mdoc.su/mdoc.su.error.log warn; }
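[To trace how the FreeBSD branch of the config resolves a URL such as /f/2/kqueue: the rewrite `^/./([^/.]+)/([^/]+)$ $fb$2$fs$1` captures the section and page and assembles the man.cgi redirect. The same capture can be replayed with sed's ERE engine (an illustration only, not part of the original config):

```shell
# Replay the FreeBSD two-segment rewrite from the mdoc.su config:
# group 1 = section, group 2 = page name; \& is a literal ampersand.
url='/f/2/kqueue'
printf '%s\n' "$url" | \
  sed -E 's|^/./([^/.]+)/([^/]+)$|http://www.freebsd.org/cgi/man.cgi?query=\2\&sektion=\1|'
# -> http://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2
```

A URL without a section segment (e.g. /f/kqueue) does not match this pattern and falls through to the later one-segment rewrite.]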
--------------030707040502000103070707-- From cnst++ at FreeBSD.org Mon Feb 18 09:46:59 2013 From: cnst++ at FreeBSD.org (Constantine A. Murenin) Date: Mon, 18 Feb 2013 01:46:59 -0800 Subject: A dynamic web-site written wholly in nginx.conf? Introducing mdoc.su! Message-ID: <5121F893.7050100@FreeBSD.org> Dear nginx@, I'm not sure if this has already been done before and to what extent, but I'd like to demonstrate that a whole web-service "portal" can be written exclusively in nginx.conf, without any php, perl, python, java or cgi, or even any files external to nginx.conf. Introducing http://mdoc.su/ . The service provides deterministic URL tinyfication / URI shortening for BSD manual pages. How does it work? It operates on 3 inputs: the operating system, the section and the manual page. For example, to see the kqueue(2) manual page from FreeBSD, you can point your browser to http://mdoc.su/f/kqueue , or mdoc.su/f/kqueue.2, or mdoc.su/f/2/kqueue, or you can even use "FreeBSD" or "freebsd" in place of "f", as in, http://mdoc.su/freebsd/kqueue etc. When nginx receives the request, it quickly gets re-written, and a redirect to FreeBSD.org/cgi/man.cgi is produced. Same for OpenBSD, NetBSD and DragonFly BSD, of course. Forgot how to specify timeouts for ssh(1)? http://mdoc.su/o/ssh The site even has a start page, also exclusively through nginx.conf, and supports Google Webmaster Tools site verification, through the "HTML file upload" option, through nginx.conf (of course!). (BTW, the format of those verification files has changed a couple of years back, where special and unique file content is now required, and the new file format itself is a very-very big secret, that Google will not share with anyone without a file-save-capable browser or an NDA! Reverse-engineered with nginx.conf, too!) Notice that the whole ordeal runs entirely out of nginx and is controlled by an nginx.conf file, which I think is pretty nifty. 
:-) The source code is available at https://github.com/cnst/mdoc.su , and might also be attached to this message. Comments, questions and suggestions are very welcome. P.S. Prior multi-part message was a result of trying to hand-edit "Content-Disposition: attachment;" to "inline" through E / "edit-headers" in mutt, after pasting the message from SeaMonkey. :) But mail.content_disposition_type set to 0 should now work much better. Best regards, Constantine. -------------- next part -------------- # cnst: mdoc.su.nginx.conf, 2013-02-14/17 # Deterministic URL shortener for BSD manual pages, written in nginx.conf # Copyright (c) 2013 Constantine A. Murenin # http://mdoc.su/ # https://github.com/cnst/mdoc.su server { listen *:80; listen [::]:80; server_name mdoc.su www.mdoc.su *.mdoc.su; if ($host != "mdoc.su") { rewrite ^ http://mdoc.su$request_uri? redirect; } location = / { default_type text/html; return 200 " mdoc.su — Manual Pages for FreeBSD, OpenBSD, NetBSD and DragonFly BSD!

mdoc.su

man pages for FreeBSD, NetBSD, OpenBSD and DragonFly


Usage:
	mdoc.su/b/p
		or
	mdoc.su/b/0/p
		or
	mdoc.su/b/p.0
	, where
		b is
			f|n|o|d, or 
			FreeBSD|NetBSD|OpenBSD|DragonFly, or 
			same lower case
	, and
		p is the name of the manual page
	, and
		0 is the section number
	.

Now, what's mdoc?
See:
	http://mdoc.su/f/mdoc — according to FreeBSD
	http://mdoc.su/n/mdoc — according to NetBSD
	http://mdoc.su/o/mdoc — according to OpenBSD
	http://mdoc.su/d/mdoc — according to DragonFly

Or, if you will,
	http://mdoc.su/f/mdoc.7
	http://mdoc.su/f/7/mdoc


© 2013 Constantine A. Murenin (cnst)


nginx/$nginx_version at $host

"; } location = /google2a7d1d40a6b37a23.html { rewrite ^/(.*) $1; return 200 "google-site-verification: $uri"; } location /FreeBSD { rewrite ^/FreeBSD(/.*)?$ /f$1; } location /f { set $fb "http://www.freebsd.org/cgi/man.cgi?query="; set $fs "&sektion="; rewrite ^/freebsd(/.*)?$ /.$1; rewrite ^/./([^/.]+)/([^/]+)$ $fb$2$fs$1 redirect; rewrite ^/./([^/]+)\.([1-9])$ $fb$1$fs$2 redirect; rewrite ^/./([^/]+)$ $fb$1$fs redirect; rewrite ^/./?$ / last; return 404; } location /NetBSD { rewrite ^/NetBSD(/.*)?$ /n$1; } location /n { set $nb "http://netbsd.gw.com/cgi-bin/man-cgi?"; rewrite ^/netbsd(/.*)?$ /.$1; rewrite ^/./([a-z]+[0-9]*[kx]?)/([^/]+)/([^/]+)$ $nb$3+$2.$1 redirect; rewrite ^/./([^/]+)/([^/]+)$ $nb$2+$1 redirect; rewrite ^/./([^/]+)\.([1-9]\.[a-z]+[0-9]*[kx]?)$ $nb$1+$2 redirect; rewrite ^/./([^/]+)\.([1-9])$ $nb$1+$2 redirect; rewrite ^/./([^/]+)$ $nb$1 redirect; rewrite ^/./?$ / last; return 404; } location /OpenBSD { rewrite ^/OpenBSD(/.*)?$ /o$1; } location /o { set $ob "http://www.openbsd.org/cgi-bin/man.cgi?query="; set $os "&sektion="; rewrite ^/openbsd(/.*)?$ /.$1; rewrite ^/./([a-z]+[0-9]*[k]?)/([1-9]|3p)/([^/]+)$ $ob$3$os$2&arch=$1 redirect; rewrite ^/./([^/.]+)/([^/]+)$ $ob$2$os$1 redirect; rewrite ^/./([^/]+)\.([1-9]|3p)\.([a-z]+[0-9]*[k]?)$ $ob$1$os$2&arch=$3 redirect; rewrite ^/./([^/]+)\.([1-9]|3p)$ $ob$1$os$2 redirect; rewrite ^/./([^/]+)$ $ob$1$os redirect; rewrite ^/./?$ / last; return 404; } location /DragonFly { rewrite ^/DragonFly(BSD)?(/.*)?$ /d$2; } location /d { set $db "http://leaf.dragonflybsd.org/cgi/web-man?command="; set $ds "§ion="; rewrite ^/dragonfly(bsd)?(/.*)?$ /d$2; rewrite ^/d(ragon)?fly(/.*)?$ /d$2; rewrite ^/./([^/.]+)/([^/]+)$ $db$2$ds$1 redirect; rewrite ^/./([^/]+)\.([1-9])$ $db$1$ds$2 redirect; rewrite ^/./([^/]+)$ $db$1$ds redirect; rewrite ^/./?$ / last; return 404; } location / { return 403; } access_log logs/mdoc.su/mdoc.su.access.log combined; error_log logs/mdoc.su/mdoc.su.error.log warn; } From ingham.k at 
gmail.com Mon Feb 18 11:54:33 2013 From: ingham.k at gmail.com (Igor Karymov) Date: Mon, 18 Feb 2013 15:54:33 +0400 Subject: embedded variable $cookie_YYY and "-" symbol. Message-ID: Hi all. I observe unexpected behavior when trying to use the $cookie_YYY embedded variable when the cookie name includes a "-" symbol. set $uwaver $cookie_un-uwa-version; $uwaver is always equal to the "-uwa-version" string instead of the real cookie value. Maybe I should use some kind of escaping here? nginx -v nginx version: nginx/1.2.7 uname -a Linux cabal 3.2.0-37-generic #58-Ubuntu SMP Thu Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux -------------- next part -------------- An HTML attachment was scrubbed... URL: From Pekka.Panula at sofor.fi Mon Feb 18 12:00:51 2013 From: Pekka.Panula at sofor.fi (Pekka.Panula at sofor.fi) Date: Mon, 18 Feb 2013 14:00:51 +0200 Subject: Geo blocking, but allowing google index robot to pass-thru Message-ID: Hi I have a site where I want to geo-block all but one country, but perhaps allow Google to index the site, and perhaps some other index bots too. So what sort of configuration is needed so I can detect the Google bot and let it pass through? An example configuration would be nice. Is checking the user-agent the only good way? Pekka Panula | Jatkuvat Palvelut | Direct +358 10 235 9232 | Pekka.Panula at sofor.fi Sofor Oy | www.sofor.fi | Takakaarre 3 | PL 51 | FIN-62200 Kauhava tel. +358 10 235 90 | fax +358 10 235 9100 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From crirus at gmail.com Mon Feb 18 12:01:32 2013 From: crirus at gmail.com (Cristian Rusu) Date: Mon, 18 Feb 2013 14:01:32 +0200 Subject: Limit connections to mp4 files Message-ID: Hello I set this in my nginx config to prevent users playing multiple videos at the same time: http { limit_conn_zone $binary_remote_addr zone=addr:10m; server { location ~ \.mp4$ { limit_conn addr 2; limit_conn_log_level info; This doesn't seem to prevent anyone from playing three or more videos at once... What am I doing wrong here? --------------------------------------------------------------- Cristian Rusu Web Development & Electronic Publishing ====== Crilance.com Crilance.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Feb 18 12:03:46 2013 From: nginx-forum at nginx.us (perone) Date: Mon, 18 Feb 2013 07:03:46 -0500 Subject: Websocket proxy support Message-ID: Are there any updates on the WebSocket proxy support? Everything I found is only this: http://trac.nginx.org/nginx/milestone/1.3.13 Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236289,236289#msg-236289 From andrew at nginx.com Mon Feb 18 12:06:36 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Mon, 18 Feb 2013 16:06:36 +0400 Subject: Websocket proxy support In-Reply-To: References: Message-ID: On Feb 18, 2013, at 4:03 PM, perone wrote: > Are there any updates on the WebSocket proxy support? Everything I found is > only this: http://trac.nginx.org/nginx/milestone/1.3.13 Yes, there will be an update very soon. Be patient and stay tuned! From nginx-forum at nginx.us Mon Feb 18 12:32:43 2013 From: nginx-forum at nginx.us (zzhofict) Date: Mon, 18 Feb 2013 07:32:43 -0500 Subject: Proxy module for nginx Message-ID: <29f6b468725c0d0c4032048b5ee7c3ea.NginxMailingListEnglish@forum.nginx.org> Hi, all I want to make a module for my nginx server, which can block some IPs with 
My idea is to create a module like the proxy module in some way. Can any of you give me some suggestions? Thanks a lot... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236292,236292#msg-236292 From francis at daoine.org Mon Feb 18 12:35:18 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Feb 2013 12:35:18 +0000 Subject: embedded variable $cookie_YYY and "-" symbol. In-Reply-To: References: Message-ID: <20130218123518.GF32392@craic.sysops.org> On Mon, Feb 18, 2013 at 03:54:33PM +0400, Igor Karymov wrote: Hi there, > set $uwaver $cookie_un-uwa-version; > > $uwaver always equal to "-uwa-version" string instead of real cookie value. > > maybe i should use some kind of escaping here? I believe that the reason is that there are some characters which are valid in cookie names, but which are not valid in nginx variable names; and I believe that the only way to access them in nginx.conf is to parse $http_cookie yourself. There is a similar problem with the $arg_* variables. Both the $cookie_ and the $arg_ variables are convenience features, and they work well provided that you restrict your inputs appropriately. I'm not sure how much work it would be to create a patch to allow, for example, ${var.iab-le} as a way of accessing a variable named like that; but I guess that it has been "more work than just avoiding or working around those variable names" for everyone who has hit the issue so far. f -- Francis Daly francis at daoine.org From lists at ruby-forum.com Mon Feb 18 12:54:00 2013 From: lists at ruby-forum.com (Jonathan K.) Date: Mon, 18 Feb 2013 13:54:00 +0100 Subject: embedded variable $cookie_YYY and "-" symbol. 
In-Reply-To: <20130218123518.GF32392@craic.sysops.org> References: <20130218123518.GF32392@craic.sysops.org> Message-ID: Francis Daly wrote in post #1097580: > On Mon, Feb 18, 2013 at 03:54:33PM +0400, Igor Karymov wrote: > > Hi there, > >> set $uwaver $cookie_un-uwa-version; >> >> $uwaver always equal to "-uwa-version" string instead of real cookie value. >> >> maybe i should use some kind of escaping here? > > I believe that the reason is that there are some characters which are > valid in cookie names, but which are not valid in nginx variable names; > and I believe that the only way to access them in nginx.conf is to parse > $http_cookie yourself. > > There is a similar problem with the $arg_* variables. > > Both the $cookie_ and the $arg_ variables are convenience features, > and they work well provided that you restrict your inputs appropriately. > > I'm not sure how much work it would be to create a patch allow, for > example, ${var.iab-le} as a way of accessing a variable named like that; > but I guess that it has been "more work than just avoiding or working > around those variable names" for everyone who has hit the issue so far. > > f > -- > Francis Daly francis at daoine.org You can work around this for now with a quirk of the way the map module works. It treats any value beginning with $ as a variable name and skips some of the validation, so you can: # The first variable is irrelevant, $is_args just doesn't # do much processing. map $is_args $uwaver { default $cookie_un-uwa-version; } This workaround may become invalid if the map module is ever extended to accept complex values, but it just worked in a test for me. Jon -- Posted via http://www.ruby-forum.com/. From ingham.k at gmail.com Mon Feb 18 13:09:58 2013 From: ingham.k at gmail.com (Igor Karymov) Date: Mon, 18 Feb 2013 17:09:58 +0400 Subject: embedded variable $cookie_YYY and "-" symbol. 
In-Reply-To: References: <20130218123518.GF32392@craic.sysops.org> Message-ID: This workaround has solve my issues. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Feb 18 13:57:50 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 18 Feb 2013 08:57:50 -0500 Subject: A dynamic web-site written wholly in nginx.conf? Introducing mdoc.su! In-Reply-To: <5121F893.7050100@FreeBSD.org> References: <5121F893.7050100@FreeBSD.org> Message-ID: <082a3941644b1b957d7623d3d1aa4008.NginxMailingListEnglish@forum.nginx.org> nice catch! i'd suggest you create the same stuff for ReactOS too (or maybe linux?) so you could have > f | n | o | r | d or > f | n | o | l | d regards, mex ... please forgive me that blasphemous references :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236283,236298#msg-236298 From contact at jpluscplusm.com Mon Feb 18 14:49:51 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 18 Feb 2013 14:49:51 +0000 Subject: Proxy module for nginx In-Reply-To: <29f6b468725c0d0c4032048b5ee7c3ea.NginxMailingListEnglish@forum.nginx.org> References: <29f6b468725c0d0c4032048b5ee7c3ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 18 February 2013 12:32, zzhofict wrote: > Hi, all > I want to make a module for my nginx server, which can block some ip with > particular features. > My idea is to create a module like the proxy module in some way, can any > of you give me some suggestions? > Thanks a lot... I suggest you kick off a discussion of what you're *ultimately* trying to achieve, what you're already tried, and what hasn't worked. You'll find a much better thread emerges from that, rather than the much-too-broad-but-much-too-narrow question you've just asked. 
Just my 2 cents, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Mon Feb 18 15:06:17 2013 From: nginx-forum at nginx.us (jims) Date: Mon, 18 Feb 2013 10:06:17 -0500 Subject: Reverse proxy configuration help Message-ID: <47e386c849c6212c28a126091b9b3cdf.NginxMailingListEnglish@forum.nginx.org> I am new to nginx, it being recommended to solve a problem. The problem: I have a VPS hosting a website and an application server in my DMZ. I have a test and prod version of each. I want both DMZ'ed servers reverse-proxied such that requests where the referrer is the test web server always go to the test app server and requests where the referrer is anything but the test web server always go to the production app server. The app servers can only be accessed over https, and the proxy will eventually but not quite yet. Question: What is the best way to accomplish this? I am trying to use two different registered host names which are registered to the secondary IP on the VPS, as the proxied names for the app servers, but that's not working too well. I wonder if it would be better to have a single server name for the proxy with the two proxied servers selected based on referrer, rather than trying to redirect to another server name, with one server name servicing one proxied server and the other, the other proxied server. Regardless, I can't seem to get past the connection to the backend server. I keep getting a 110 connection failure. I have tried several configurations but none seem to work. The problem I'm running into may be related to use of the valid_referers directive. It doesn't seem to do what I need, which is to use one back-end for requests referred from one web server host but use the other for all other requests. If I have two server directives with the same IP but two different server names, it seems I can't have two location directives, one within each server name. 
If I could get that to work, it seems to me it should allow me to redirect to the default app server using the valid_referers directive within the referrer-specific app server's server directive, but that doesn't seem to work the way I expect, either. I don't have a config file to post because it has gone through a dozen iterations already, none of which have been saved. A generic example of one that doesn't work would be : server { listen 10.10.10.10:80; server_name devappxy.mydomain.com; valid_referers devweb.mydomain.com; if ($invalid_referer) { return 301 http://apppxy.mydomain.com$request_uri; } proxy_bind 10.10.10.10; access_log /var/log/nginx/devpxyaccess.log main; error_log /var/log/nginx/devpxyerror.log debug; location / { proxy_pass https://devapp.mydomain.com; proxy_redirect https://devapp.mydomain.com / ; } } server { listen 10.10.10.10:80 ; server_name apppxy.mydomain.com ; proxy_bind 10.10.10.10 ; access_log /var/log/nginx/pxyaccess.log main ; error_log /var/log/nginx/pxyerror.log debug ; location / { proxy_pass https://prodapp.mydomain.com ; proxy_redirect https://prodapp.mydomain.com / ; } } When I do that it says "location" directive isn't allowed here... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236278,236278#msg-236278 
From contact at jpluscplusm.com Mon Feb 18 17:03:04 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 18 Feb 2013 17:03:04 +0000 Subject: Reverse proxy configuration help In-Reply-To: <47e386c849c6212c28a126091b9b3cdf.NginxMailingListEnglish@forum.nginx.org> References: <47e386c849c6212c28a126091b9b3cdf.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 18 February 2013 15:06, jims wrote: > I am new to nginx, it being recommended to solve a problem. [ Having read your mail, this kind of reverse proxying is exactly what nginx is very good at; I think you're just trying to do too much, too quickly, and need to step back from the problem for a moment to identify what your first steps should be; then iterate from simple to complex behaviours, only moving forward once each behaviour works successfully. ] > The problem: I have a VPS hosting a website and an application server in my > DMZ. I have a test and prod version of each. 
I want both DMZ'ed servers > reverse-proxied such that requests where the referrer is the test web server > always go to the test app server and requests where the referrer is anything > but the test web server always go to the production app server. When you say "referrer", do you really mean the referrer as distinguished by client-originated HTTP headers? I wouldn't do that, personally ... > The app servers can only be accessed over https, and the proxy will > eventually but not quite yet. That last part may be more of an issue for you, as you'll discover you need an IP address per SSL site you want to host. > Question: What is the best way to accomplish this? I am trying to use two > different registered host names which are registered to the secondary IP on > the VPS, as the proxied names for the app servers, but that's not working > too well. I wonder if it would be better to have a single server name for > the proxy with the two proxied servers selected based on referrer, rather > than trying to redirect to another server name, with one server name > servicing one proxied server and the other, the other proxied server. Goodness, no. I wouldn't /touch/ referer headers for HTTP routing. So unreliable! > Regardless, I can't seem to get past the connection to the backend server. > I keep getting a 110 connection failure. I have tried several > configurations but none seem to work. What does a connection, via telnet/netcat, from the server, show you? > The problem I'm running into may be related to use of the valid_referers > directive. It doesn't seem to do what I need, which is to use one back-end > for requests referred from one web server host but use the other for all > other requests. I may be repeating a single tune here, but I would really force your business to re-examine your requirements if you think this is desirable behaviour. 
> If I have two server directives with the same IP but two different server > names, it seems I can't have two location directives, one within each server > name. Each server may have zero or more location directives. Each location belongs to exactly one server stanza. I don't understand exactly what you think doesn't work, but if it contradicts the above 2 lines, then it's not legal nginx config. > If I could get that to work, it seems to me it should allow me to > redirect to the default app server using the valid_referers directive within > the referrer-specific app server's server directive, but that doesn't seem > to work the way I expect, either. When you say "redirect" here, you really mean "reverse proxy", don't you? "Redirecting" is a very specific, unrelated thing in HTTP-server-speak ... > I don't have a config file to post because it has gone through a dozen > iterations already, none of which have been saved. apt-get install git-core :-P > A generic example of > one that doesn't work would be : > server { > listen 10.10.10.10:80; > server_name devappxy.mydomain.com; > valid_referers devweb.mydomain.com; > if ($invalid_referer) { > return 301 http://apppxy.mydomain.com$request_uri; > } > proxy_bind 10.10.10.10; > access_log /var/log/nginx/devpxyaccess.log main; > error_log /var/log/nginx/devpxyerror.log debug; > location / { > proxy_pass https://devapp.mydomain.com; > proxy_redirect https://devapp.mydomain.com / ; > } > } > server { > listen 10.10.10.10:80 ; > server_name apppxy.mydomain.com ; > proxy_bind 10.10.10.10 ; > access_log /var/log/nginx/pxyaccess.log main ; > error_log /var/log/nginx/pxyerror.log debug ; > location / { > proxy_pass https://prodapp.mydomain.com ; > proxy_redirect https://prodapp.mydomain.com / ; > } > } > The only real problem I can see is that you don't have a resolver specified, so nginx doesn't know how to resolve the app FQDNs. 
Irrespective of this, there are much nicer ways to achieve this, which might use: * Nginx maps to translate from client Host header to backend FQDN. * Access/error logs specified using variables, but DRY them out at a higher level than per-server (i.e. state them once, globally, at the http level. * A single server stanza, switching between backends. I could write a version that uses these concepts for you, but I'd be depriving you of the educational and life-affirming journey of Getting There Yourself if I did ;-) If you want to get the best possible help with this, reduce the clutter in your example/failing config (i.e. make the smallest possible config that doesn't do what you think it /should/ do), and re-engage with the list. > When I do that it says "location" directive isn't allowed here... When you do what? Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From agentzh at gmail.com Mon Feb 18 19:02:36 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 18 Feb 2013 11:02:36 -0800 Subject: Debugging performance under high load In-Reply-To: <1b1f45e623d13ab7fa1566d20eb07197.NginxMailingListEnglish@forum.nginx.org> References: <1b1f45e623d13ab7fa1566d20eb07197.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Sun, Feb 17, 2013 at 3:52 AM, fluffypony wrote: > How do I debug the poor > performance so I at least know what to fix? Is there a way to step through > exactly what is happening in a request under load to see where it's being > delayed? I'd like to get it up to at least 1k RPS if not more, and I believe > the server and the bandwidth are up to the task. > We've been using the Flame Graph tools to profile our online Nginx on Linux in production. It is a great tool to find out which part (be it a function or a code path) is hot and slow (on various levels like the kernelspace, the C level in userspace, or even high levels on scripting languages like Lua). 
See the ngx-sample-bt tool in my Nginx Systemtap Toolkit: https://github.com/agentzh/nginx-systemtap-toolkit#ngx-sample-bt There's no need to recompile or restart your Nginx for the live profiling. Just ensure that your Nginx executable is not stripped (the DWARF debug symbols should be enabled by Nginx by default). Another prerequisite to use tools in my Nginx Systemtap Toolkit is that you have a working systemtap installation in your Linux system, see the documentation for details: https://github.com/agentzh/nginx-systemtap-toolkit#prerequisites Best regards, -agentzh From steve at greengecko.co.nz Mon Feb 18 21:10:56 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 19 Feb 2013 10:10:56 +1300 Subject: Geo blocking, but allowing google index robot to pass-thru In-Reply-To: References: Message-ID: <1361221856.1261.583.camel@steve-new> On Mon, 2013-02-18 at 14:00 +0200, Pekka.Panula at sofor.fi wrote: > Hi > > I have a site where i want to geo block all but one country, but > perhaps allow Google to index site, perhaps some other index bot too. > > So what sort of configuration is needed so i can detect Google bot and > let it pass-thru? Would be nice if there is example configuration. Is > only good way to check user-agent? > > ______________________________________________________________________ > > Pekka Panula | Jatkuvat Palvelut | Direct +358 10 235 9232 | > Pekka.Panula at sofor.fi > Sofor Oy | www.sofor.fi| Takakaarre 3 | PL 51 | FIN-62200 Kauhava > tel. +358 10 235 90 | fax +358 10 235 9100 > Here's some code I use for a similar setup... map $geoip_country_code $external_redirects { default 'Block'; US http://www.example.com; } #Whitelist crawlers map $http_user_agent $crawler { default 0; ~*(AdsBot-Google|Googlebot-Mobile|Googlebot-Image|Mediapartners-Google| bingbot|Feedfetcher-Google|Googlebot|Yahoo\ !Slurp|msnbot|msnbot-media| YahooCacheSystem) 1; } # You'll also probably need to override for specific IP addresses too... 
geo $whitelisted { default $crawler; # localhost 127.0.0.0/8 1; # CloudFlare 204.93.240.0/24 1; 204.93.177.0/24 1; 199.27.128.0/21 1; 173.245.48.0/20 1; 103.21.244.0/22 1; 103.22.200.0/22 1; 103.31.4.0/22 1; 141.101.64.0/18 1; 108.162.192.0/18 1; 190.93.240.0/20 1; 188.114.96.0/20 1; 197.234.240.0/22 1; 198.41.128.0/17 1; } You can then use your own logic with the values of $external_redirects and $whitelisted to control redirections. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From nginx-forum at nginx.us Tue Feb 19 01:02:01 2013 From: nginx-forum at nginx.us (Dave Marchevsky) Date: Mon, 18 Feb 2013 20:02:01 -0500 Subject: Sending SSL Client Certificate upstream Message-ID: <1cad592bfd9800e6e2d472f8afa03681.NginxMailingListEnglish@forum.nginx.org> Hello, I am modifying my upstream servers so that they require client certificates from clients, which are other backend apps making service calls and nginx proxying for public requests. Public requests come to nginx without a client certificate and are proxied upstream, but since the upstream server requires a client cert they fail. I'd like to be able to provide a client cert to nginx (from local file) to send upstream for authentication. Is this possible? 
Thanks, Dave Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236314,236314#msg-236314 From martinloy.uy at gmail.com Tue Feb 19 02:19:09 2013 From: martinloy.uy at gmail.com (Martin Loy) Date: Tue, 19 Feb 2013 00:19:09 -0200 Subject: Websocket proxy support In-Reply-To: References: Message-ID: There is a commit from maxim about ~12hs ago > http://trac.nginx.org/nginx/changeset/5073/nginx :) On Mon, Feb 18, 2013 at 10:06 AM, Andrew Alexeev wrote: > On Feb 18, 2013, at 4:03 PM, perone wrote: > > > Is there any updates on the Websocket proxy support ? Everything I found > is > > only this: http://trac.nginx.org/nginx/milestone/1.3.13 > > Yes, there will be an update very soon. Be patient and stay tuned! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Nunca hubo un amigo que hiciese un favor a un enano, ni un enemigo que le hiciese un mal, que no se viese recompensado por entero.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 19 02:41:30 2013 From: nginx-forum at nginx.us (mottwsc) Date: Mon, 18 Feb 2013 21:41:30 -0500 Subject: 'no input file specified' on LEMP setup Message-ID: I followed the article "How to Install Linux, nginx, MySQL, PHP (LEMP) stack on CentOS 6" (https://www.digitalocean.com/community/articles/how-to-install-linux-nginx-mysql-php-lemp-stack-on-centos-6) and could see the nginx default page displayed. After I worked through the changes to configs, etc. and tried to display info.php (phpinfo), I get the message 'no input file specified'. I get that same thing on another php file that I loaded to the site in the same directory. However, I can display an html file from that directory. Any thoughts as to how to track this down? Thanks. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236318,236318#msg-236318 From lists at ruby-forum.com Tue Feb 19 04:03:46 2013 From: lists at ruby-forum.com (Stefanita Rares D.) Date: Tue, 19 Feb 2013 05:03:46 +0100 Subject: Bandwidth limiting per virtualhost Message-ID: <4e21c200334232a830bb059613adce28@ruby-forum.com> Hi guys, I am managing a high traffic website. I have some embedded images that get a lot of traffic, and i would like to separate the traffic for the embeds, and the images that are viewed from the site on 2 virtual hosts. I would like to limit the embeds let's say to 50 mb/s so they don't eat up all the available bandwidth. Any suggestions on how can i achieve this by any chance? Thanks in advance. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Tue Feb 19 04:50:03 2013 From: nginx-forum at nginx.us (Ensiferous) Date: Mon, 18 Feb 2013 23:50:03 -0500 Subject: 'no input file specified' on LEMP setup In-Reply-To: References: Message-ID: <0fce77a80a40be74549391dde558aa12.NginxMailingListEnglish@forum.nginx.org> I have documented most of the causes of this here: http://blog.martinfjordvald.com/2011/01/no-input-file-specified-with-php-and-nginx/ Chances are your issue is in there as well. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236318,236320#msg-236320 From patricia.dbooker at gmail.com Tue Feb 19 10:10:40 2013 From: patricia.dbooker at gmail.com (D-BookeR) Date: Tue, 19 Feb 2013 11:10:40 +0100 Subject: [PROPOSAL] Looking for contributors to write a modular book on Nginx in French Message-ID: Hello, I'm looking for French-writing contributors and authors for writting a modular book on Nginx (in French). All experiences may be interesting, since a part of the book will be dedicated to use cases. The book will be published by Les ?ditions D-Booker, that I set up recently. 
For more information, please contact me off-list to contact at d-booker dot fr - Patricia -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 19 10:13:09 2013 From: nginx-forum at nginx.us (leejaycoke) Date: Tue, 19 Feb 2013 05:13:09 -0500 Subject: nginx can't run php only 'PAGE NOT FOUND' Message-ID: I'm sorry for my English... nginx's default html directory is /usr/share/nginx/html, but I changed it to a custom path, /home/norrent/public_html. If I put PHP and HTML files in the original path, everything works well. But if I put PHP and HTML files in the custom path, only the HTML files work; the PHP files show 'Page Not Found' and nginx records this error message ************************ [error] 2478#0: *19 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: XX.XXX.XX.XXX, server: lo blah~blah~blah~~~~~~~ ************************ I think I set some wrong configuration. Please check this nginx.conf file: *************************************** /etc/nginx/nginx.conf *************************************** server { listen 80; server_name localhost; root /home/norrent/public_html; location / { index index.php index.html index.htm; } location ~ \.php$ { set $php_root /home/norrent/public_html; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name; include fastcgi_params; } } *************************************** /etc/nginx/fastcgi_params *************************************** fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https 
if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; Thank you for nginx and forum! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236323,236323#msg-236323 From steve at greengecko.co.nz Tue Feb 19 10:23:02 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 19 Feb 2013 23:23:02 +1300 Subject: nginx can't run php only 'PAGE NOT FOUND' In-Reply-To: References: Message-ID: <51235286.4050800@greengecko.co.nz> On 19/02/13 23:13, leejaycoke wrote: > I'm sorry for my English... > > nginx has default html page it's /usr/share/nginx/html. > But I changed custom new path it's /home/norrent/public_html. > Did you restart nginx to enable the new config? From ru at nginx.com Tue Feb 19 11:54:53 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 19 Feb 2013 15:54:53 +0400 Subject: Upgrading Executable on the Fly - wrong docs? In-Reply-To: References: <20130211070722.GA45362@lo0.su> Message-ID: <20130219115453.GD76522@lo0.su> On Tue, Feb 12, 2013 at 03:01:39PM -0500, piotr.dobrogost wrote: > Ruslan, thanks for quick reply. > > I have some trouble comparing the new wording with the previous one as it > looks like your change went live at http://nginx.org/en/docs/control.html so > I do not have the old one to compare any more :) Already answered. > Nevertheless I have some more comments on the new (current) one. > > I think an error sneaked into the new version. The first bullet is now > "Send the HUP signal to the old master process. The old master process will > start new worker processes without re-reading the configuration. 
After that, > all new processes can be shut down gracefully, by sending the QUIT signal to > the old master process." > I think it should have been "(...) by sending the QUIT signal to the new > master process." instead. Thanks for spotting this, the fixed version is already on site. > What I don't understand is why the old master process does not re-read the > configuration after receiving the HUP signal as at the top of the page it's > written > HUP (...), starting new worker processes with a new configuration, (...) > If the reason is because it had received the USR2 signal at the beginning of > the whole procedure and this changed its state (it "remembers" receiving the > USR2 signal) it should be explained. HUP after USR2 is handled differently, exactly as documented. When the master process knows it's "old" (i.e., an upgrade procedure is in progress), a request to start new worker processes is interpreted as a rollback request -- the master starts new worker and cache manager processes with an old configuration. > Also, maybe I'm missing something but I think that the two bullets are not > symmetrical without a reason. In the first bullet the QUIT signal is used > whereas in the second bullet the TERM signal is used. I believe either of > them could be used with the obvious difference of fast vs graceful shutdown. > If it's true (either could be used) then using different signals between the > first and the second bullet is misleading. These are two different procedures with different properties. In the first case, you restart old workers with an old configuration, but let requests that are currently in flight be fully processed (if you can tolerate this). There's no interruption in handling requests. In the second case, you want to stop new workers right away (e.g., something really odd happened such that you can't tolerate even letting in-flight requests finish), and it requires only a single action from you to roll back (or none at all if e.g. a new binary process segfaults). 
But there's a small window where connection attempts may be rejected. Of course one may imagine other procedures, like starting old workers and immediately stopping new processes, but how is this practically different from the first case? Or one can gracefully stop new workers (new requests will be rejected, but those in flight will be serviced, potentially indefinitely), and only after that old workers will be restarted and new requests will be handled (sorry, but such a procedure doesn't make any sense to me). > Additionally I have a question regarding the following fragment: > "In order to upgrade the server executable, the new executable file should > be put in place of an old file first. After that USR2 signal should be sent > to the master process. The master process first renames its file (...) > How can the master process rename its file if this file is already gone i.e. > it had been replaced by the new executable? Read further, it "renames its file with the process ID", see http://nginx.org/r/pid From nginx-forum at nginx.us Tue Feb 19 11:56:55 2013 From: nginx-forum at nginx.us (leejaycoke) Date: Tue, 19 Feb 2013 06:56:55 -0500 Subject: nginx can't run php only 'PAGE NOT FOUND' In-Reply-To: <51235286.4050800@greengecko.co.nz> References: <51235286.4050800@greengecko.co.nz> Message-ID: <5cf900b067010fdd0fff7a1acd2dbb73.NginxMailingListEnglish@forum.nginx.org> Of course; /usr/share/nginx/html works well, and I restart the nginx service every time after saving the config file. 
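A frequent cause of the "Primary script unknown" error in a setup like this is a SCRIPT_FILENAME that does not point at a file PHP-FPM can read. One way to rule out the nginx side, sketched from the paths given in this thread (not a verified drop-in fix), is to drop the extra $php_root variable and build the path from $document_root:

```nginx
server {
    listen 80;
    server_name localhost;
    root /home/norrent/public_html;

    location / {
        index index.php index.html index.htm;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/tmp/php-fpm.sock;
        fastcgi_index index.php;
        # $document_root inherits the server-level root above,
        # so no separate $php_root variable is needed
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

If the error persists, the next things to check are whether the PHP-FPM pool's user can traverse /home/norrent (execute permission on each directory in the path) and read the scripts, and whether the pool is chrooted to a different root.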
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236323,236328#msg-236328 From nginx-forum at nginx.us Tue Feb 19 12:26:34 2013 From: nginx-forum at nginx.us (disney2002) Date: Tue, 19 Feb 2013 07:26:34 -0500 Subject: *** glibc detected *** nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf: free(): invalid pointer: 0x00000000005b9980 *** Message-ID: <9dd7db4f50d37053642573d76d9bb7de.NginxMailingListEnglish@forum.nginx.org> I got a serious problem; here is the nginx config: nginx version: nginx/0.8.55 built by gcc 4.1.2 20080704 (Red Hat 4.1.2-52) TLS SNI support enabled configure arguments: --user=www --group=www --prefix=/data/nginx/ --with-google_perftools_module --with-http_stub_status_module --with-openssl=/usr/local/openssl-1.0.1c --with-http_gzip_static_module --with-http_ssl_module and here is the error message: *** glibc detected *** nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf: free(): invalid pointer: 0x00000000005b9980 *** ======= Backtrace: ========= /lib64/libc.so.6[0x3acc670d7f] /lib64/libc.so.6(cfree+0x4b)[0x3acc6711db] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x491bfd] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4c489d] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4a6168] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4a34f0] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4a3bc6] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4a3c95] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4a3416] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4ad950] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x4a365a] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x41fd57] nginx: master process 
/data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x41d203] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x41e98d] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf[0x40510a] /lib64/libc.so.6(__libc_start_main+0xf4)[0x3acc61d994] nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf(realloc+0x169)[0x4038e9] ======= Memory map: ======== 00400000-005fa000 r-xp 00000000 68:02 84738073 /data/nginx/sbin/nginx 007f9000-0081b000 rw-p 001f9000 68:02 84738073 /data/nginx/sbin/nginx 0081b000-0082d000 rw-p 0081b000 00:00 0 10c96000-10d42000 rw-p 10c96000 00:00 0 [heap] 3acc200000-3acc21c000 r-xp 00000000 68:03 25239458 /lib64/ld-2.5.so 3acc41c000-3acc41d000 r--p 0001c000 68:03 25239458 /lib64/ld-2.5.so 3acc41d000-3acc41e000 rw-p 0001d000 68:03 25239458 /lib64/ld-2.5.so 3acc600000-3acc74d000 r-xp 00000000 68:03 25239465 /lib64/libc-2.5.so 3acc74d000-3acc94d000 ---p 0014d000 68:03 25239465 /lib64/libc-2.5.so 3acc94d000-3acc951000 r--p 0014d000 68:03 25239465 /lib64/libc-2.5.so 3acc951000-3acc952000 rw-p 00151000 68:03 25239465 /lib64/libc-2.5.so 3acc952000-3acc957000 rw-p 3acc952000 00:00 0 3acca00000-3acca1e000 r-xp 00000000 68:03 25239625 /lib64/libpcre.so.0.0.1 3acca1e000-3accc1e000 ---p 0001e000 68:03 25239625 /lib64/libpcre.so.0.0.1 3accc1e000-3accc1f000 rw-p 0001e000 68:03 25239625 /lib64/libpcre.so.0.0.1 3acce00000-3acce14000 r-xp 00000000 68:03 25239471 /lib64/libz.so.1.2.3 3acce14000-3acd013000 ---p 00014000 68:03 25239471 /lib64/libz.so.1.2.3 3acd013000-3acd014000 rw-p 00013000 68:03 25239471 /lib64/libz.so.1.2.3 3acd200000-3acd202000 r-xp 00000000 68:03 25239489 /lib64/libdl-2.5.so 3acd202000-3acd402000 ---p 00002000 68:03 25239489 /lib64/libdl-2.5.so 3acd402000-3acd403000 r--p 00002000 68:03 25239489 /lib64/libdl-2.5.so 3acd403000-3acd404000 rw-p 00003000 68:03 25239489 /lib64/libdl-2.5.so 3ad7000000-3ad7011000 r-xp 00000000 68:03 25239643 /lib64/libresolv-2.5.so 3ad7011000-3ad7211000 
---p 00011000 68:03 25239643 /lib64/libresolv-2.5.so 3ad7211000-3ad7212000 r--p 00011000 68:03 25239643 /lib64/libresolv-2.5.so 3ad7212000-3ad7213000 rw-p 00012000 68:03 25239643 /lib64/libresolv-2.5.so 3ad7213000-3ad7215000 rw-p 3ad7213000 00:00 0 3ade400000-3ade40d000 r-xp 00000000 68:03 25239513 /lib64/libgcc_s-4.1.2-20080825.so.1 3ade40d000-3ade60d000 ---p 0000d000 68:03 25239513 /lib64/libgcc_s-4.1.2-20080825.so.1 3ade60d000-3ade60e000 rw-p 0000d000 68:03 25239513 /lib64/libgcc_s-4.1.2-20080825.so.1 3adec00000-3adec09000 r-xp 00000000 68:03 25239510 /lib64/libcrypt-2.5.so 3adec09000-3adee08000 ---p 00009000 68:03 25239510 /lib64/libcrypt-2.5.so 3adee08000-3adee09000 r--p 00008000 68:03 25239510 /lib64/libcrypt-2.5.so 3adee09000-3adee0a000 rw-p 00009000 68:03 25239510 /lib64/libcrypt-2.5.so 3adee0a000-3adee38000 rw-p 3adee0a000 00:00 0 3adf000000-3adf0e6000 r-xp 00000000 68:03 26360209 /usr/lib64/libstdc++.so.6.0.8 3adf0e6000-3adf2e5000 ---p 000e6000 68:03 26360209 /usr/lib64/libstdc++.so.6.0.8 3adf2e5000-3adf2eb000 r--p 000e5000 68:03 26360209 /usr/lib64/libstdc++.so.6.0.8 3adf2eb000-3adf2ee000 rw-p 000eb000 68:03 26360209 /usr/lib64/libstdc++.so.6.0.8 3adf2ee000-3adf300000 rw-p 3adf2ee000 00:00 0 2b879a005000-2b879a007000 rw-p 2b879a005000 00:00 0 2b879a007000-2b879a008000 rw-s 00000000 00:09 400593780 /dev/zero (deleted) 2b879a019000-2b879a01a000 rw-p 2b879a019000 00:00 0 2b879a01a000-2b879a028000 r-xp 00000000 68:03 16237094 /usr/local/lib/libprofiler.so.0.3.0 2b879a028000-2b879a227000 ---p 0000e000 68:03 16237094 /usr/local/lib/libprofiler.so.0.3.0 2b879a227000-2b879a228000 rw-p 0000d000 68:03 16237094 /usr/local/lib/libprofiler.so.0.3.0 2b879a228000-2b879a22d000 rw-p 2b879a228000 00:00 0 2b879a22d000-2b879a23c000 r-xp 00000000 68:03 16237057 /usr/local/lib/libunwind.so.7.0.0 2b879a23c000-2b879a43b000 ---p 0000f000 68:03 16237057 /usr/local/lib/libunwind.so.7.0.0 2b879a43b000-2b879a43c000 rw-p 0000e000 68:03 16237057 /usr/local/lib/libunwind.so.7.0.0 
2b879a43c000-2b879a44b000 rw-p 2b879a43c000 00:00 0 2b879a44b000-2b879a4cd000 r-xp 00000000 68:03 25239469 /lib64/libm-2.5.so 2b879a4cd000-2b879a6cc000 ---p 00082000 68:03 25239469 /lib64/libm-2.5.so 2b879a6cc000-2b879a6cd000 r--p 00081000 68:03 25239469 /lib64/libm-2.5.so 2b879a6cd000-2b879a6ce000 rw-p 00082000 68:03 25239469 /lib64/libm-2.5.so 2b879a6ce000-2b879a6d1000 rw-p 2b879a6ce000 00:00 0 2b879a6d1000-2b879a6db000 r-xp 00000000 68:03 25239480 /lib64/libnss_files-2.5.so 2b879a6db000-2b879a8da000 ---p 0000a000 68:03 25239480 /lib64/libnss_files-2.5.so 2b879a8da000-2b879a8db000 r--p 00009000 68:03 25239480 /lib64/libnss_files-2.5.so 2b879a8db000-2b879a8dc000 rw-p 0000a000 68:03 25239480 /lib64/libnss_files-2.5.so 2b879a8dc000-2b879a8e0000 r-xp 00000000 68:03 25239478 /lib64/libnss_dns-2.5.so 2b879a8e0000-2b879aadf000 ---p 00004000 68:03 25239478 /lib64/libnss_dns-2.5.so 2b879aadf000-2b879aae0000 r--p 00003000 68:03 25239478 /lib64/libnss_dns-2.5.so 2b879aae0000-2b879aae1000 rw-p 00004000 68:03 25239478 /lib64/libnss_dns-2.5.so 7fff9d24c000-7fff9d261000 rw-p 7ffffffe9000 00:00 0 [stack] 7fff9d3fd000-7fff9d400000 r-xp 7fff9d3fd000 00:00 0 [vdso] ffffffffff600000-ffffffffffe00000 ---p 00000000 00:00 0 [vsyscall] Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236321,236321#msg-236321 From nginx-forum at nginx.us Tue Feb 19 12:27:08 2013 From: nginx-forum at nginx.us (jims) Date: Tue, 19 Feb 2013 07:27:08 -0500 Subject: Reverse proxy configuration help In-Reply-To: References: Message-ID: <24f3c1f99f1ea2ee3a1236eae4899c22.NginxMailingListEnglish@forum.nginx.org> Jonathan Matthews Wrote: ------------------------------------------------------- > On 18 February 2013 15:06, jims wrote: > > I am new to nginx, it being recommended to solve a problem. 
> > [ Having read your mail, this kind of reverse proxying is exactly what > nginx is very good at; I think you're just trying to do too much, too > quickly, and need to step back from the problem for a moment to > identify what your first steps should be; then iterate from simple to > complex behaviours, only moving forward once each behaviour works > successfully. ] > Point taken. Going straight for the desired end result doesn't always save time... Thanks for your response, Jonathan. It has been helpful. Read on for responses to your comments... > > The problem: I have a VPS hosting a website and an application > server in my > > DMZ. I have a test and prod version of each. I want both DMZ'ed > servers > > reverse-proxied such that requests where the referrer is the test > web server > > always go to the test app server and requests where the referrer is > anything > > but the test web server always go to the production app server. > > When you say "referrer", do you really mean the referrer as > distinguished by client-originated HTTP headers? I wouldn't do that, > personally ... > When I say "referrer" I mean the site where the link is presented to the end user. If that is what is "distinguished by client-originated HTTP headers" then yes. The desired result is that if a person is in our pool of testers and is testing the development website, any app server link (although pointing putatively to the production app server) would be sent to the reverse-proxy that's front-ending the test app server. The idea is to minimize unauthorized traffic to the test server. By using only links that get to the production app server, if someone saves the link and tries again later, they will hit the production app server's reverse-proxy front-end. They would only hit our test app server if they are actively testing for us. 
Once testing is complete, the proven code can be promoted to the production website without having to deal with changing test links to prod links in the process. Those who will be maintaining the links ongoing should not be expected either to change links as part of a move-to-production or to have to learn how to put variables into all the links, and we would not have to modify the CMS to handle links with variables - they should be able to copy and paste to create links, and the resulting content should be able to be promoted to production without change, or it defeats the purpose of using a modern content-management system. > > The app servers can only be accessed over https, and the proxy will > > eventually but not quite yet. > > That last part may be more of an issue for you, as you'll discover you > need an IP address per SSL site you want to host. > Normally, yes, and each of the app server hostnames has its own registered IP address now, with trusted certs associated. We are working on obtaining a wildcard cert which we'd use for the proxy as well as the website, and will add IP addresses to the proxy if necessary. I would hope that, since we want the proxy to choose between two back-end app servers for the same front-end uri, depending on whether or not there is a referrer of the development website, one IP should be all that's needed on the front-end, correct? > > Question: What is the best way to accomplish this? I am trying to > use two > > different registered host names which are registered to the > secondary IP on > > the VPS, as the proxied names for the app servers, but that's not > working > > too well. I wonder if it would be better to have a single server > name for > > the proxy with the two proxied servers selected based on referrer, > rather > > than trying to redirect to another server name, with one server name > > servicing one proxied server and the other, the other proxied > server. > > Goodness, no. 
I wouldn't /touch/ referer headers for HTTP routing. So > unreliable! > OK. How would you recommend ensuring that if you click on a link on our dev site, it goes to the proxied test app server but if you access that same URL in any other way, whether by way of a link on the prod website, a bookmark, someone emailing you the link - the request goes to the proxied prod app server? As I said, I'm an nginx newb, so monosyllabic responses are appreciated... ;) > > Regardless, I can't seem to get past the connection to the backend > server. > > I keep getting a 110 connection failure. I have tried several > > configurations but none seem to work. > > What does a connection, via telnet/netcat, from the server, show you? > I get a connection. I haven't figured out the right HTTP command to send to get a valid response yet, but I get a response - not a timeout. > > The problem I'm running into may be related to use of the > valid_referers > > directive. It doesn't seem to do what I need, which is to use one > back-end > > for requests referred from one web server host but use the other for > all > > other requests. > > I may be repeating a single tune here, but I would really force your > business to re-examine your requirements if you think this is > desirable behaviour. > See my earlier response explaining the business requirement, to understand why this is a desirable behavior. > > If I have two server directives with the same IP but two different > server > > names, it seems I can't have two location directives, one within > each server > > name. > > Each server may have zero or more location directives. > Each location belongs to exactly one server stanza. > > I don't understand exactly what you think doesn't work, but if it > contradicts the above 2 lines, then it's not legal nginx config. > If you look at the example conf I posted, that is the configuration - two separate server stanzas, each with a location directive - and I get that message. 
I probably have something else misconfigured. Again, newb... > > If I could get that to work, it seems to me it should allow me to > > redirect to the default app server using the valid_referers > directive within > > the referrer-specific app server's server directive, but that > doesn't seem > > to work the way I expect, either. > > When you say "redirect" here, you really mean "reverse proxy", don't > you? > "Redirecting" is a very specific, unrelated thing in HTTP-server-speak > . The redirect is a redirect - telling nginx to use a different reverse-proxy "upstream" server from what it would normally use based on the URL in the request. However, if there is a better way to get the same result I am all for it. For example, a method whereby the same front-end url chooses an upstream server based on the valid_referer criterion, or whatever it is you would recommend other than the referrer,. > > > I don't have a config file to post because it has gone through a > dozen > > iterations already, none of which have been saved. > > apt-get install git-core :-P > I don't want to install apt on my centos server :/ How 'bout 'yum install git-core?' 
> > A generic example of > > one that doesn't work would be : > > server { > > listen 10.10.10.10:80; > > server_name devappxy.mydomain.com; > > valid_referers devweb.mydomain.com; > > if ($invalid_referer) { > > return 301 http://apppxy.mydomain.com$request_uri; > > } > > proxy_bind 10.10.10.10; > > access_log /var/log/nginx/devpxyaccess.log main; > > error_log /var/log/nginx/devpxyerror.log debug; > > location / { > > proxy_pass https://devapp.mydomain.com; > > proxy_redirect https://devapp.mydomain.com / ; > > } > > } > > server { > > listen 10.10.10.10:80 ; > > server_name apppxy.mydomain.com ; > > proxy_bind 10.10.10.10 ; > > access_log /var/log/nginx/pxyaccess.log main ; > > error_log /var/log/nginx/pxyerror.log debug ; > > location / { > > proxy_pass https://prodapp.mydomain.com ; > > proxy_redirect https://prodapp.mydomain.com / ; > > } > > } > > > > The only real problem I can see is that you don't have a resolver > specified, so nginx doesn't know how to resolve the app FQDNs. > Irrespective of this, there are much nicer ways to achieve this, which > might use: > > * Nginx maps to translate from client Host header to backend FQDN. Would that work if the goal is to direct traffic based on where you're coming from? I will explore... > * Access/error logs specified using variables, but DRY them out at a > higher level than per-server (i.e. state them once, globally, at the > http level. The logs are specified per-server to quickly identify where the failure lies. They will be only at the nginx.conf http level when I have a successful configuration. > * A single server stanza, switching between backends. > I like the idea - I'm just stuck on how to get it to switch based on where the client is coming from... 
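The "single server stanza, switching between backends" idea can be sketched with a map keyed on the Referer header (hostnames are taken from the example config above; this is only an illustration and inherits the reliability caveats about client-supplied Referer headers raised in this thread):

```nginx
# choose an upstream based on the (client-supplied) Referer header
map $http_referer $app_backend {
    default                           https://prodapp.mydomain.com;
    ~^https?://devweb\.mydomain\.com  https://devapp.mydomain.com;
}

server {
    listen 10.10.10.10:80;
    server_name apppxy.mydomain.com;

    location / {
        # a resolver is required because proxy_pass uses a variable;
        # 127.0.0.1 assumes a local caching resolver
        resolver 127.0.0.1;
        proxy_pass $app_backend;
    }
}
```

Note that anything in the Referer header is under the client's control, so this can only steer cooperative testers; it is not an access-control mechanism.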
> I could write a version that uses these concepts for you, but I'd be > depriving you of the educational and life-affirming journey of Getting > There Yourself if I did ;-) > > If you want to get the best possible help with this, reduce the > clutter in your example/failing config (i.e. make the smallest > possible config that doesn't do what you think it /should/ do), and > re-engage with the list. > > > When I do that it says "location" directive isn't allowed here... > > When you do what? > When I set up my included config file to use the two-server-stanza configuration I posted (with hostnames/addresses pointing to real-life stuff, of course) that's what I get when issuing the service restart. > Jonathan > -- > Jonathan Matthews // Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks again - you've been quite helpful. Jim. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236278,236312#msg-236312 From nginx-forum at nginx.us Tue Feb 19 12:27:40 2013 From: nginx-forum at nginx.us (mottwsc) Date: Tue, 19 Feb 2013 07:27:40 -0500 Subject: installing nginx on centos should be straightforward In-Reply-To: <1361139387.1261.530.camel@steve-new> References: <1361139387.1261.530.camel@steve-new> Message-ID: <3ebdeb692744527358a45c715ae77bed.NginxMailingListEnglish@forum.nginx.org> OK, Steve - thanks for your help. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236254,236317#msg-236317 From citrin at citrin.ru Tue Feb 19 12:33:09 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 19 Feb 2013 16:33:09 +0400 Subject: *** glibc detected *** nginx: master process /data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf: free(): invalid pointer: 0x00000000005b9980 *** In-Reply-To: <9dd7db4f50d37053642573d76d9bb7de.NginxMailingListEnglish@forum.nginx.org> References: <9dd7db4f50d37053642573d76d9bb7de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51237105.1020105@citrin.ru> On 02/19/13 16:26, disney2002 wrote: > I got a serious problem; here is the nginx config: > > nginx version: nginx/0.8.55 0.8.55 is very old; try upgrading to the latest stable, 1.2.7. -- Anton Yuzhaninov From sparvu at systemdatarecorder.org Tue Feb 19 12:47:23 2013 From: sparvu at systemdatarecorder.org (Stefan Parvu) Date: Tue, 19 Feb 2013 12:47:23 +0000 Subject: nginx serving R scripts via CGI/FastRWeb Message-ID: <20130219124723.GC14104@localhost> Hi, Anyone here testing, experimenting with R and nginx? 
I'm trying to set up nginx to serve R scripts via CGI using the Rserve and FastRWeb modules as described here: http://jayemerson.blogspot.fi/2011/10/setting-up-fastrwebrserve-on-ubuntu.html My nginx is configured like: location ~ ^/cgi-bin/.*\.cgi$ { gzip off; fastcgi_pass unix:/opt/sdr/report/ws/fastcgi_temp/nginx-fcgi.sock; fastcgi_read_timeout 5m; fastcgi_index index.cgi; #fastcgi_buffers 8 4k; # # You may copy and paste the lines under or use include directive # include /etc/nginx/nginx-fcgi.conf; # In this example all is in one file # fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT /opt/sdr/report/docroot; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; } and I'm using fcgiwrap from http://nginx.localdomain.pl/wiki/FcgiWrap. CGI scripts work fine but I'm not able to run any R scripts, probably because R is not being called correctly via nginx.conf ... I am trying something like: http://localhost/cgi-bin/R/foo.png?n=500 and R is a binary file under the cgi-bin directory which should call FastRWeb ... Any pointers or help? thanks, Stefan From dreamwerx at gmail.com Tue Feb 19 13:05:46 2013 From: dreamwerx at gmail.com (DreamWerx) Date: Tue, 19 Feb 2013 14:05:46 +0100 Subject: Limit request + whitelist = not using response code from backend? 0.8.54 Message-ID: Hi all, I'm hoping someone can help me with a small issue. 
I'm trying to implement rate limiting with a whitelist, and all in all it seems to be working, but the wrong response code is being sent back to the browser. For example if the apache backend sends a 302 redirect response, nginx still sends a 200 back? If I remove the mapping to code 200, it then sends a 418 back. Is there an easy fix for this? Here is my config. Thanks for any help. --------- http { recursive_error_pages on; proxy_buffering off; geo $limited { default 1; 10.0.0.0/8 0; xxx.xxx.xxx.xx 0; } limit_req_zone $binary_remote_addr zone=protect1:10m rate=5r/s; } location / { error_page 418 =200 @limitclient; #error_page 418 @limitclient; if ($limited) { return 418; } proxy_read_timeout 300; default_type text/html; charset utf-8; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; proxy_pass http://backend; } location @limitclient { error_page 503 @flooder; limit_req zone=protect1 burst=5 nodelay; proxy_read_timeout 300; default_type text/html; charset utf-8; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; proxy_pass http://backend; } location @flooder { rewrite ^(.*)$ /flooder.html break; } From mdounin at mdounin.ru Tue Feb 19 13:18:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Feb 2013 17:18:12 +0400 Subject: Limit request + whitelist = not using response code from backend? 0.8.54 In-Reply-To: References: Message-ID: <20130219131812.GI81985@mdounin.ru> Hello! On Tue, Feb 19, 2013 at 02:05:46PM +0100, DreamWerx wrote: > Hi all, > > I'm hoping someone can help me with a small issue. I'm trying to > implement rate limiting with a whitelist, and all in all it seems to > be working, but > the wrong response code is being sent back to the browser. 
> > For example if the apache backend sends a 302 redirect response, nginx > still sends a 200 back? If I remove the mapping to code 200, it then > sends a 418 back. > Is there an easy fix for this? Yes, - error_page 418 =200 @limitclient; + error_page 418 = @limitclient; See http://nginx.org/r/error_page. Alternatively, you may want to use something like geo $limited { ... } map $limited $address { 1 $binary_remote_addr; 0 ""; } limit_req_zone $address zone=...; to implement a whitelist (i.e., make sure the variable used in limit_req_zone is empty if you don't want the limit). -- Maxim Dounin http://nginx.com/support.html From dreamwerx at gmail.com Tue Feb 19 13:34:39 2013 From: dreamwerx at gmail.com (dreamwerx at gmail.com) Date: Tue, 19 Feb 2013 08:34:39 -0500 (EST) Subject: Limit request + whitelist = not using response code from backend? 0.8.54 In-Reply-To: <20130219131812.GI81985@mdounin.ru> References: <20130219131812.GI81985@mdounin.ru> Message-ID: Worked perfect! Thanks again. On Tue, 19 Feb 2013, Maxim Dounin wrote: > Hello! > > On Tue, Feb 19, 2013 at 02:05:46PM +0100, DreamWerx wrote: > >> Hi all, >> >> I'm hoping someone can help me with a small issue. I'm trying to >> implement rate limiting with a whitelist, and all in all it seems to >> be working, but >> the wrong response code is being sent back to the browser. >> >> For example if the apache backend sends a 302 redirect response, nginx >> still sends a 200 back? If I remove the mapping to code 200, it then >> sends a 418 back. >> Is there an easy fix for this? > > Yes, > > - error_page 418 =200 @limitclient; > + error_page 418 = @limitclient; > > See http://nginx.org/r/error_page. > > Alternatively, you may want to use something like > > geo $limited { ... } > > map $limited $address { > 1 $binary_remote_addr; > 0 ""; > } > > limit_req_zone $address zone=...; > > to implement a whitelist (i.e., make sure the variable used in > limit_req_zone is empty if you don't want the limit). 
> > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From haroldsinclair at gmail.com Tue Feb 19 13:50:02 2013 From: haroldsinclair at gmail.com (Harold Sinclair) Date: Tue, 19 Feb 2013 08:50:02 -0500 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130219124723.GC14104@localhost> References: <20130219124723.GC14104@localhost> Message-ID: Does the R cgi script filename end in .cgi ? That's what you specify, it appears. On Tue, Feb 19, 2013 at 7:47 AM, Stefan Parvu wrote: > Hi, > > Anyone here testing, experimenting with R and nginx ? > Im trying to setup nginx to serve R scripts via > CGI using Rserve, FastRWeb modules as described here: > > http://jayemerson.blogspot.fi/2011/10/setting-up-fastrwebrserve-on-ubuntu.html > > > My nginx is configured like: > > location ~ ^/cgi-bin/.*\.cgi$ { > gzip off; > fastcgi_pass > unix:/opt/sdr/report/ws/fastcgi_temp/nginx-fcgi.sock; > fastcgi_read_timeout 5m; > fastcgi_index index.cgi; > #fastcgi_buffers 8 4k; > # > # You may copy and paste the lines under or use include > directive > # include /etc/nginx/nginx-fcgi.conf; > # In this example all is in one file > # > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx; > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT /opt/sdr/report/docroot; > fastcgi_param SERVER_PROTOCOL $server_protocol; > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT 
$server_port; > fastcgi_param SERVER_NAME $server_name; > } > > and Im using fcgiwrap from http://nginx.localdomain.pl/wiki/FcgiWrap. > CGI scripts work fine but Im not able to any R scripts probable due > R being not correctly called via nginx.conf ... > > I am trying something like: http://localhost/cgi-bin/R/foo.png?n=500 > and R is a binary file under cgi-bin directory which should call > FastRWeb ... > > Any pointers, help ? > > thanks, > Stefan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparvu at systemdatarecorder.org Tue Feb 19 14:49:15 2013 From: sparvu at systemdatarecorder.org (Stefan Parvu) Date: Tue, 19 Feb 2013 14:49:15 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: References: <20130219124723.GC14104@localhost> Message-ID: <20130219144914.GE14104@localhost> On 08:50 Tue 19 Feb , Harold Sinclair wrote: > Does the R cgi script filename end in .cgi ? That's what you specify, it > appears. No, it does not end with cgi since the R scripts will not be called directly by nginx. As I understood I should call the R scripts like: http://localhost/cgi-bin/R/foo where there is a R binary file under cgi-bin which will proxy those to FastRWeb and Rserve R modules. So under cgi-bin there are no R scripts whatsoever, but they usually will go under: /opt/sdr/report/var/FastRWeb/web.R directory. Now my problem seems to be related how the R binary under cgi-bin will ever be called via CGI and further FastRWeb ... since my nginx.conf knows nada about R being called. Currently my nginx.conf is configured to run anything ending .cgi as cgi scripts ... 
/opt/sdr/report/docroot/cgi-bin $ ls -lrt total 60 -rwxr-x--- 1 sdr sdr 3715 Feb 19 12:00 initial.cgi -rwxr-xr-x 1 sdr sdr 55932 Feb 19 13:53 R $ file R R: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0x0351c94aaf487ee3559a722446865b7ae0f3b7cc, not stripped Here the R scripts: /opt/sdr/report/var/FastRWeb $ ls -lrt total 16 drwxr-xr-x 2 sdr sdr 4096 Feb 18 00:08 web drwxr-xr-x 2 sdr sdr 4096 Feb 18 00:08 tmp drwxr-xr-x 2 sdr sdr 4096 Feb 19 14:27 code srw-rw-rw- 1 sdr sdr 0 Feb 19 14:27 socket =>> drwxr-xr-x 2 sdr sdr 4096 Feb 19 14:28 web.R all R scripts will go here web.R directory. stefan From mdounin at mdounin.ru Tue Feb 19 15:29:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Feb 2013 19:29:03 +0400 Subject: nginx-1.3.13 Message-ID: <20130219152902.GO81985@mdounin.ru> Changes with nginx 1.3.13 19 Feb 2013 *) Change: a compiler with name "cc" is now used by default. *) Feature: support for proxying of WebSocket connections. Thanks to Apcera and CloudBees for sponsoring this work. *) Feature: the "auth_basic_user_file" directive supports "{SHA}" password encryption method. Thanks to Louis Opter. -- Maxim Dounin http://nginx.com/support.html From dewanggaba at gmail.com Tue Feb 19 15:46:26 2013 From: dewanggaba at gmail.com (antituhan) Date: Tue, 19 Feb 2013 07:46:26 -0800 (PST) Subject: How about an nginx repository mirror list? Message-ID: <1361288786000-7583852.post@n2.nabble.com> I see here http://wiki.nginx.org/Install for the Nginx Repository, especially RHEL/CentOS don't have any mirror likes IP Based to Location. Or, is there a plan to build it? ----- [daemon at antituhan.com ~]# -- View this message in context: http://nginx.2469901.n2.nabble.com/How-about-an-nginx-repository-mirror-list-tp7583852.html Sent from the nginx mailing list archive at Nabble.com. 
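[The WebSocket proxying feature announced in the 1.3.13 changelog above is not automatic: nginx must be told to speak HTTP/1.1 to the backend and to pass the Upgrade handshake through. A minimal sketch; the location path and upstream name are assumptions:]

```nginx
location /ws/ {
    proxy_pass http://backend;              # assumed upstream
    proxy_http_version 1.1;                 # Upgrade requires HTTP/1.1 to the backend
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```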
From contact at jpluscplusm.com Tue Feb 19 16:21:25 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 19 Feb 2013 16:21:25 +0000 Subject: How about an nginx repository mirror list? In-Reply-To: <1361288786000-7583852.post@n2.nabble.com> References: <1361288786000-7583852.post@n2.nabble.com> Message-ID: On 19 February 2013 15:46, antituhan wrote: > I see here http://wiki.nginx.org/Install for the Nginx Repository, especially > RHEL/CentOS don't have any mirror likes IP Based to Location. Or, is there > a plan to build it? I can't parse what you wrote. Please rewrite it more clearly. -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From haroldsinclair at gmail.com Tue Feb 19 16:22:05 2013 From: haroldsinclair at gmail.com (Harold Sinclair) Date: Tue, 19 Feb 2013 11:22:05 -0500 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130219144914.GE14104@localhost> References: <20130219124723.GC14104@localhost> <20130219144914.GE14104@localhost> Message-ID: Sounds like you ought to try writing a wrapper script ending in .cgi in the cgi dir that grabs the query string and hands the job off to the R executable. Not sure if R is required in the cgi-bin directory. You might have to enable symlinking out for it to work. On Tue, Feb 19, 2013 at 9:49 AM, Stefan Parvu wrote: > On 08:50 Tue 19 Feb , Harold Sinclair wrote: > > Does the R cgi script filename end in .cgi ? That's what you specify, it > > appears. > > No, it does not end with cgi since the R scripts will not be called > directly > by nginx. > > As I understood I should call the R scripts like: > http://localhost/cgi-bin/R/foo where > there is a R binary file under cgi-bin which will proxy those to FastRWeb > and Rserve R modules. So under cgi-bin there are no R scripts whatsoever, > but they usually will go under: /opt/sdr/report/var/FastRWeb/web.R > directory. 
> > Now my problem seems to be related how the R binary under cgi-bin will > ever be called via CGI and further FastRWeb ... since my nginx.conf knows > nada > about R being called. Currently my nginx.conf is configured to run > anything ending .cgi as cgi scripts ... > > /opt/sdr/report/docroot/cgi-bin > $ ls -lrt > total 60 > -rwxr-x--- 1 sdr sdr 3715 Feb 19 12:00 initial.cgi > -rwxr-xr-x 1 sdr sdr 55932 Feb 19 13:53 R > > $ file R > R: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked > (uses shared libs), for GNU/Linux 2.6.24, > BuildID[sha1]=0x0351c94aaf487ee3559a722446865b7ae0f3b7cc, not stripped > > Here the R scripts: > > /opt/sdr/report/var/FastRWeb > $ ls -lrt > total 16 > drwxr-xr-x 2 sdr sdr 4096 Feb 18 00:08 web > drwxr-xr-x 2 sdr sdr 4096 Feb 18 00:08 tmp > drwxr-xr-x 2 sdr sdr 4096 Feb 19 14:27 code > srw-rw-rw- 1 sdr sdr 0 Feb 19 14:27 socket > =>> drwxr-xr-x 2 sdr sdr 4096 Feb 19 14:28 web.R > > all R scripts will go here web.R directory. > > > stefan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparvu at systemdatarecorder.org Tue Feb 19 16:38:48 2013 From: sparvu at systemdatarecorder.org (Stefan Parvu) Date: Tue, 19 Feb 2013 16:38:48 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: References: <20130219124723.GC14104@localhost> <20130219144914.GE14104@localhost> Message-ID: <20130219163848.GA16580@localhost> On 11:22 Tue 19 Feb , Harold Sinclair wrote: > Sounds like you ought to try writing a wrapper script ending in .cgi in the > cgi dir that grabs the query string and hands the job off to the R > executable. Not sure if R is required in the cgi-bin directory. You might > have to enable symlinking out for it to work. yeah not sure how to do it. 
FastRWeb tell us: " FastRWeb consists of several parts: Webserver-to-R pipeline, consisting of either a thin CGI or a PHP client connecting to Rserve which sources and runs the R script. The CGI client is called Rcgi and is compiled as part of the package. The PHP client is part of Rserve in the clients section. " So Im on option 1: CGI. The client will be this binary which somehow needs to be executed for each R script. I placed the Rcgi under cgi.bin directory and rename it as R.cgi. If I call directly: http://localhost:9001/cgi-bin/R.cgi the browser returns: Error: no function or path specified. http://localhost:9001/cgi-bin/R.cgi/foo.png?n=100 I see under error log: 2013/02/19 18:39:53 [error] 2847#0: *56 open() "/opt/sdr/report/docroot/cgi-bin/R.cgi/foo.png" failed (20: Not a directory), client: 127.0.0.1, server: sdrrep, request: "GET /cgi-bin/R.cgi/foo.png?n=100 HTTP/1.1", host: "localhost:9001" stefan From nginx-forum at nginx.us Tue Feb 19 16:43:16 2013 From: nginx-forum at nginx.us (mottwsc) Date: Tue, 19 Feb 2013 11:43:16 -0500 Subject: 'no input file specified' on LEMP setup In-Reply-To: <0fce77a80a40be74549391dde558aa12.NginxMailingListEnglish@forum.nginx.org> References: <0fce77a80a40be74549391dde558aa12.NginxMailingListEnglish@forum.nginx.org> Message-ID: <459e9d61426eb9873eef4c88d38a9db5.NginxMailingListEnglish@forum.nginx.org> I was missing an update in the config file, so your guidance helped. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236318,236349#msg-236349 From sparvu at systemdatarecorder.org Tue Feb 19 16:41:35 2013 From: sparvu at systemdatarecorder.org (Stefan Parvu) Date: Tue, 19 Feb 2013 16:41:35 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130219163848.GA16580@localhost> References: <20130219124723.GC14104@localhost> <20130219144914.GE14104@localhost> <20130219163848.GA16580@localhost> Message-ID: <20130219164135.GB16580@localhost> > So Im on option 1: CGI. 
The client will be this binary which somehow needs to be > executed for each R script. I placed the Rcgi under cgi.bin directory and rename it > as R.cgi. > and I forgot to mention for CGI Im using FcgiWrap: http://nginx.localdomain.pl/wiki/FcgiWrap stefan From nginx-forum at nginx.us Tue Feb 19 17:02:01 2013 From: nginx-forum at nginx.us (manacit) Date: Tue, 19 Feb 2013 12:02:01 -0500 Subject: SPDY Patch fails to compile Message-ID: <0ea57184825408f71519dc5ef071d1ca.NginxMailingListEnglish@forum.nginx.org> Attempting to compile the SPDY patch with the new 1.3.13 release gives: $ make make -f objs/Makefile make[1]: Entering directory `/home/nick/src/nginx-1.3.13' cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/ngx_http_spdy.o \ src/http/ngx_http_spdy.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/ngx_http_spdy_filter_module.o \ src/http/ngx_http_spdy_filter_module.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/modules/ngx_http_autoindex_module.o \ src/http/modules/ngx_http_autoindex_module.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \ -o objs/src/http/modules/ngx_http_auth_basic_module.o \ src/http/modules/ngx_http_auth_basic_module.c src/http/ngx_http_spdy.c: In function ?ngx_http_spdy_run_request?: src/http/ngx_http_spdy.c:1747:33: error: variable ?fc? 
set but not used [-Werror=unused-but-set-variable] cc1: all warnings being treated as errors make[1]: *** [objs/src/http/ngx_http_spdy.o] Error 1 make[1]: *** Waiting for unfinished jobs.... make[1]: Leaving directory `/home/nick/src/nginx-1.3.13' make: *** [build] Error 2 The patch was applied exactly as specified in the README, with no additional addons. 1.3.12 built with the SPDY patch on the machine without error Note that I can build 1.3.13 on other machines Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236351,236351#msg-236351 From vbart at nginx.com Tue Feb 19 17:22:34 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 19 Feb 2013 21:22:34 +0400 Subject: SPDY Patch fails to compile In-Reply-To: <0ea57184825408f71519dc5ef071d1ca.NginxMailingListEnglish@forum.nginx.org> References: <0ea57184825408f71519dc5ef071d1ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201302192122.34307.vbart@nginx.com> On Tuesday 19 February 2013 21:02:01 manacit wrote: > Attempting to compile the SPDY patch with the new 1.3.13 release gives: > > $ make > make -f objs/Makefile > make[1]: Entering directory `/home/nick/src/nginx-1.3.13' > cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g > -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I > src/http -I src/http/modules \ > -o objs/src/http/ngx_http_spdy.o \ > src/http/ngx_http_spdy.c > cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g > -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I > src/http -I src/http/modules \ > -o objs/src/http/ngx_http_spdy_filter_module.o \ > src/http/ngx_http_spdy_filter_module.c > cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g > -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I > src/http -I src/http/modules \ > -o objs/src/http/modules/ngx_http_autoindex_module.o \ > src/http/modules/ngx_http_autoindex_module.c > cc -c -pipe -O -W -Wall 
-Wpointer-arith -Wno-unused-parameter -Werror -g > -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I > src/http -I src/http/modules \ > -o objs/src/http/modules/ngx_http_auth_basic_module.o \ > src/http/modules/ngx_http_auth_basic_module.c > src/http/ngx_http_spdy.c: In function ?ngx_http_spdy_run_request?: > src/http/ngx_http_spdy.c:1747:33: error: variable ?fc? set but not used > [-Werror=unused-but-set-variable] > cc1: all warnings being treated as errors > make[1]: *** [objs/src/http/ngx_http_spdy.o] Error 1 > make[1]: *** Waiting for unfinished jobs.... > make[1]: Leaving directory `/home/nick/src/nginx-1.3.13' > make: *** [build] Error 2 > > > The patch was applied exactly as specified in the README, with no > additional addons. 1.3.12 built with the SPDY patch on the machine without > error > > Note that I can build 1.3.13 on other machines > Hmm.. it seems that I broke building without debug. Fixed now: http://nginx.org/patches/spdy/patch.spdy-65_1.3.13.txt wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From francis at daoine.org Tue Feb 19 17:24:16 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Feb 2013 17:24:16 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130219124723.GC14104@localhost> References: <20130219124723.GC14104@localhost> Message-ID: <20130219172416.GG32392@craic.sysops.org> On Tue, Feb 19, 2013 at 12:47:23PM +0000, Stefan Parvu wrote: Hi there, This location: > location ~ ^/cgi-bin/.*\.cgi$ { will only match some requests that end in ".cgi" (before the ?, if that applies). What you possibly want is a separate location just for your "R" requests. Something like (untested by me!) location ^~ /cgi-bin/R/ { } in which you have your "fastcgi_pass" directive, plus whatever "fastcgi_param" directives you need to make it work. 
That is *probably* fastcgi_param REQUEST_URI $request_uri; fastcgi_param QUERY_STRING $query_string; fastcgi_param SCRIPT_FILENAME $document_root/cgi-bin/R; but the exact details depend on what your fastcgi server wants. (You may want to set PATH_INFO, if your application uses that.) See http://nginx.org/r/location for the details of which one location{} is chosen for each request, and then see the fastcgi documentation to decide exactly what params you need. Usually, SCRIPT_FILENAME is "the file on the filesystem that the fastcgi server should execute"; and other things are "stuff that the fastcgi server or the application can use to decide what to do". > and Im using fcgiwrap from http://nginx.localdomain.pl/wiki/FcgiWrap. > CGI scripts work fine but Im not able to any R scripts probable due > R being not correctly called via nginx.conf ... In this case, yes. Your nginx configuration was such that your requests for "R" were not being sent to the fastcgi server. > I am trying something like: http://localhost/cgi-bin/R/foo.png?n=500 > and R is a binary file under cgi-bin directory which should call > FastRWeb ... Provided that the fastcgi server is able to run whatever file you name in SCRIPT_FILENAME, and has access to whatever other params it cares about, something like the above configuration has a chance of working. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Feb 19 21:54:46 2013 From: nginx-forum at nginx.us (la_chouette) Date: Tue, 19 Feb 2013 16:54:46 -0500 Subject: Nginx redirect domain and sub-domain without www Message-ID: <081a9519c7628095223a8b10fa6b370d.NginxMailingListEnglish@forum.nginx.org> Hello, for saas web application project I need to redirect subdomains to a specific directory (app-saas /) without the www. The domain name must also be without redirecting to the www root. Ex. 
www.domain.tdl to domain.tdl www.sub.domain.tdl to sub.domain.tdl www |-- index.php (domain.tdl, without www) `-- app-saas/ (sub.domain.tdl, without www) So I did this but if it does not work with: www.sub.domain.tdl server { listen 80; server_name ~^www\.(\w+)\.domain\.com$ root /var/www/app-saas; location / { index index.php index.html index.htm; try_files $uri $uri/ /index.php?$args; } } server { listen 80; server_name domain.com; root /var/www; location / { index index.php index.html index.htm; try_files $uri $uri/ /index.php?$args; } } f possible I would like to assemble her two redirect rules in a single parenthesis "server {...}" to apply to all common filters (header expire, deny, Hotlink, etc.) thank you for your help, cordially. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236359,236359#msg-236359 From francis at daoine.org Tue Feb 19 23:21:29 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 19 Feb 2013 23:21:29 +0000 Subject: Nginx redirect domain and sub-domain without www In-Reply-To: <081a9519c7628095223a8b10fa6b370d.NginxMailingListEnglish@forum.nginx.org> References: <081a9519c7628095223a8b10fa6b370d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130219232129.GI32392@craic.sysops.org> On Tue, Feb 19, 2013 at 04:54:46PM -0500, la_chouette wrote: Hi there, > for saas web application project I need to redirect subdomains to a specific > directory (app-saas /) without the www. > > The domain name must also be without redirecting to the www root. I'm afraid that I don't understand what exactly you are asking for. When someone asks for http://www.example.com/file, what do you want to send back? The contents of /usr/local/nginx/html/file? Or tell them to ask for http://example.com/file instead? Or something else? Same question for http://www.sub.example.com/file -- contents of /usr/local/nginx/html/app-saas/file; redirect to http://sub.example.com/file; something else? 
And is that "for any subdomain they use", or "for one specific one"? > f possible I would like to assemble her two redirect rules in a single > parenthesis "server {...}" to apply to all common filters (header expire, > deny, Hotlink, etc.) If the configuration really is common, then using "include" might be an option. But the earlier questions are probably more useful to answer first. f -- Francis Daly francis at daoine.org From kworthington at gmail.com Wed Feb 20 03:21:14 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 19 Feb 2013 22:21:14 -0500 Subject: [nginx-announce] nginx-1.3.13 In-Reply-To: <20130219152912.GP81985@mdounin.ru> References: <20130219152912.GP81985@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.13 For Windows http://goo.gl/zbUIy (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Feb 19, 2013 at 10:29 AM, Maxim Dounin wrote: > Changes with nginx 1.3.13 19 Feb > 2013 > > *) Change: a compiler with name "cc" is now used by default. > > *) Feature: support for proxying of WebSocket connections. > Thanks to Apcera and CloudBees for sponsoring this work. > > *) Feature: the "auth_basic_user_file" directive supports "{SHA}" > password encryption method. > Thanks to Louis Opter. > > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sparvu at systemdatarecorder.org Wed Feb 20 10:00:04 2013 From: sparvu at systemdatarecorder.org (Stefan Parvu) Date: Wed, 20 Feb 2013 10:00:04 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130219172416.GG32392@craic.sysops.org> References: <20130219124723.GC14104@localhost> <20130219172416.GG32392@craic.sysops.org> Message-ID: <20130220100004.GC22940@localhost> > > Something like (untested by me!) > location ^~ /cgi-bin/R/ { } > yep correct. I do have now on cgi-bin directory a binary file called: R. I did change my nginx.conf to have a new location definition for the R calls, like here: location ~ ^/cgi-bin/R$ { gzip off; fastcgi_pass unix:/opt/sdr/report/ws/fastcgi_temp/nginx-fcgi.sock; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param SCRIPT_FILENAME /opt/sdr/report/docroot$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT /opt/sdr/report/docroot; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; } This works if I put in my browser: http://localhost:9001/cgi-bin/R I get some error in the browser but it does seem to work since I did not input any parameters to R call. But when I try: http://localhost:9001/cgi-bin/R/foo?n=50 this returns with same error that there is no such directory, seems somehow the request it is not passed via CGI to FastWebR. 2013/02/20 11:52:01 [error] 3637#0: *9 open() "/opt/sdr/report/docroot/cgi-bin/R/foo" failed (20: Not a directory), client: 127.0.0.1, server: sdrrep, request: "GET /cgi-bin/R/foo?n=50 HTTP/1.1", host: "localhost:9001" > In this case, yes. Your nginx configuration was such that your requests > for "R" were not being sent to the fastcgi server. 
And still they are not passed correctly. something still messed up with my nginx.conf you can see entire nginx.conf here: http://www.systemdatarecorder.org/nginx.conf Stefan From francis at daoine.org Wed Feb 20 13:25:04 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Feb 2013 13:25:04 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130220100004.GC22940@localhost> References: <20130219124723.GC14104@localhost> <20130219172416.GG32392@craic.sysops.org> <20130220100004.GC22940@localhost> Message-ID: <20130220132504.GJ32392@craic.sysops.org> On Wed, Feb 20, 2013 at 10:00:04AM +0000, Stefan Parvu wrote: > > Something like (untested by me!) > > location ^~ /cgi-bin/R/ { } > yep correct. I do have now on cgi-bin directory a binary file called: R. > I did change my nginx.conf to have a new location definition for the > R calls, like here: > > location ~ ^/cgi-bin/R$ { "~" means "regex". "^" means "start of string". "$" means "end of string". This location will only match requests that are /cgi-bin/R or /cgi-bin/R?something and not /cgi-bin/R/something > But when I try: http://localhost:9001/cgi-bin/R/foo?n=50 > this returns with same error that there is no such directory, seems somehow > the request it is not passed via CGI to FastWebR. Yes, that's what you configured. What happens when you test exactly what was suggested? f -- Francis Daly francis at daoine.org From roberto at unbit.it Wed Feb 20 13:30:38 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 20 Feb 2013 14:30:38 +0100 Subject: [PATCH] websockets support for uwsgi protocol Message-ID: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> Hi, the (tiny) attached patch enable support for new websockets handling when the uwsgi protocol is used instead of HTTP. I have tested it with various websocket libraries and with the api available in uWSGI 1.9. 
>From 1.9 sources (with nginx pointing to uwsgi port 3031): ./uwsgi -s :3031 -w tests.websockets_echo --gevent 10 no additional configuration is needed for nginx -- Roberto De Ioris http://unbit.it -------------- next part -------------- A non-text attachment was scrubbed... Name: uwsgi_websocket_nginx.patch Type: application/octet-stream Size: 695 bytes Desc: not available URL: From sb at waeme.net Wed Feb 20 13:42:28 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Wed, 20 Feb 2013 17:42:28 +0400 Subject: How about an nginx repository mirror list? In-Reply-To: <1361288786000-7583852.post@n2.nabble.com> References: <1361288786000-7583852.post@n2.nabble.com> Message-ID: <29BCACA5-3478-4F50-91FA-BE8E80AB5C81@waeme.net> On 19 Feb2013, at 19:46 , antituhan wrote: > I see here http://wiki.nginx.org/Install for the Nginx Repository, especially > RHEL/CentOS don't have any mirror likes IP Based to Location. Or, is there > a plan to build it? There are no plans to add mirrors to main package repository. From mdounin at mdounin.ru Wed Feb 20 13:50:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Feb 2013 17:50:54 +0400 Subject: [PATCH] websockets support for uwsgi protocol In-Reply-To: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> References: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> Message-ID: <20130220135054.GW81985@mdounin.ru> Hello! (Cc'd nginx-devel@ as this is better list for this discussion.) On Wed, Feb 20, 2013 at 02:30:38PM +0100, Roberto De Ioris wrote: > Hi, the (tiny) attached patch enable support for new websockets handling > when the uwsgi protocol is used instead of HTTP. > > I have tested it with various websocket libraries and with the api > available in uWSGI 1.9. > > From 1.9 sources (with nginx pointing to uwsgi port 3031): > > ./uwsgi -s :3031 -w tests.websockets_echo --gevent 10 > > > no additional configuration is needed for nginx Should we also do the same for SCGI? 
I personally think it is more or less the same for all CGI-based protocols, and the only problematic one is FastCGI, which probably still needs wrapping within FCGI_STDIN/FCGI_STDOUT records instead of real connection upgrade. -- Maxim Dounin http://nginx.com/support.html From roberto at unbit.it Wed Feb 20 13:53:07 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 20 Feb 2013 14:53:07 +0100 Subject: [PATCH] websockets support for uwsgi protocol In-Reply-To: <20130220135054.GW81985@mdounin.ru> References: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> <20130220135054.GW81985@mdounin.ru> Message-ID: <8b097697f2ec8602e609a78209cdab58.squirrel@manage.unbit.it> > Hello! > > (Cc'd nginx-devel@ as this is better list for this discussion.) > > On Wed, Feb 20, 2013 at 02:30:38PM +0100, Roberto De Ioris wrote: > >> Hi, the (tiny) attached patch enable support for new websockets handling >> when the uwsgi protocol is used instead of HTTP. >> >> I have tested it with various websocket libraries and with the api >> available in uWSGI 1.9. >> >> From 1.9 sources (with nginx pointing to uwsgi port 3031): >> >> ./uwsgi -s :3031 -w tests.websockets_echo --gevent 10 >> >> >> no additional configuration is needed for nginx > > Should we also do the same for SCGI? > > I personally think it is more or less the same for all CGI-based > protocols, and the only problematic one is FastCGI, which probably > still needs wrapping within FCGI_STDIN/FCGI_STDOUT records instead > of real connection upgrade. 
> > AFAIK scgi body management works in the same way as the uwsgi one, so the > patch should be usable there too -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Wed Feb 20 13:59:23 2013 From: nginx-forum at nginx.us (BrentNewland) Date: Wed, 20 Feb 2013 08:59:23 -0500 Subject: PHP as FastCGI on Windows: Simple how-to setup PHP as a service Message-ID: The official guide on setting up PHP as FastCGI on Windows http://wiki.nginx.org/PHPFastCGIOnWindows makes use of batch files. After extensive research and some new information, I've managed to make a fairly simple guide on setting up PHP as a service on Windows, with full start/stop/restart/status support. The first thing that's needed is the WinSW binary from http://maven.jenkins-ci.org/content/repositories/releases/com/sun/winsw/winsw/ The "winsw-{VERSION}-bin.exe" needs to be saved to the folder containing php-cgi.exe, and needs to be renamed "winsw.exe" In the same folder as php-cgi.exe and winsw.exe, create an xml file "winsw.xml" with the following content:

<service>
  <id>PHP</id>
  <name>PHP</name>
  <description>PHP</description>
  <executable>C:\PATH\TO\php\php-cgi.exe</executable>
  <stopexecutable>C:\PATH\TO\php\php-stop.cmd</stopexecutable>
  <logpath>C:\PATH\FOR\WINSW\LOGFILES</logpath>
  <logmode>roll</logmode>
  <arguments>-bPORT -cc:\PATH\TO\php.ini</arguments>
</service>

As an example, I keep PHP in C:\SERVER\php, the PHP ini file in C:\SERVER\config, and the logs in C:\SERVER\logs\php. PHP runs on port 9123.

<service>
  <id>PHP</id>
  <name>PHP</name>
  <description>PHP</description>
  <executable>C:\SERVER\php\php-cgi.exe</executable>
  <stopexecutable>C:\SERVER\php\php-stop.cmd</stopexecutable>
  <logpath>C:\SERVER\logs\php</logpath>
  <logmode>roll</logmode>
  <arguments>-b9123 -cc:\server\config\php.ini</arguments>
</service>

In your PHP folder, alongside your php-cgi.exe, you need to make a file called "php-stop.cmd" with the following contents: taskkill /f /IM php-cgi.exe Once this has all been accomplished, open the command prompt, switch to the folder containing php-cgi.exe, and execute the following command: winsw install At this point, you can open the Services (win+run>services.msc) and start PHP, or type "net start PHP".
You can also type "winsw start", "winsw stop", "winsw status", and "winsw restart" If having issues starting the service, verify it starts on its own by opening the command prompt to the folder containing php-cgi.exe and running: php-cgi -b(PORT) -cc:\PATH\TO\php.ini Also, make sure the log folders referenced in the xml file exist. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236376,236376#msg-236376 From roberto at unbit.it Wed Feb 20 14:07:53 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 20 Feb 2013 15:07:53 +0100 Subject: [PATCH] websockets support for uwsgi protocol In-Reply-To: <8b097697f2ec8602e609a78209cdab58.squirrel@manage.unbit.it> References: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> <20130220135054.GW81985@mdounin.ru> <8b097697f2ec8602e609a78209cdab58.squirrel@manage.unbit.it> Message-ID: > >> Hello! >> >> (Cc'd nginx-devel@ as this is better list for this discussion.) >> >> On Wed, Feb 20, 2013 at 02:30:38PM +0100, Roberto De Ioris wrote: >> >>> Hi, the (tiny) attached patch enable support for new websockets >>> handling >>> when the uwsgi protocol is used instead of HTTP. >>> >>> I have tested it with various websocket libraries and with the api >>> available in uWSGI 1.9. >>> >>> From 1.9 sources (with nginx pointing to uwsgi port 3031): >>> >>> ./uwsgi -s :3031 -w tests.websockets_echo --gevent 10 >>> >>> >>> no additional configuration is needed for nginx >> >> Should we also do the same for SCGI? >> >> I personally think it is more or less the same for all CGI-based >> protocols, and the only problematic one is FastCGI, which probably >> still needs wrapping within FCGI_STDIN/FCGI_STDOUT records instead >> of real connection upgrade. 
>> >> > > AFAIK scgi body management works in the same way as the uwsgi one, so the > patch should be usable there too Ok, i can confirm the same patch applied to the SCGI module in the same position works (at least with uWSGI 1.9 in SCGI mode) ./uwsgi --scgi-nph-socket :3031 -w tests.websockets_echo --gevent 10 remember to add scgi_param PATH_INFO $document_uri; to the nginx config to make it work -- Roberto De Ioris http://unbit.it From mdounin at mdounin.ru Wed Feb 20 15:11:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Feb 2013 19:11:49 +0400 Subject: [PATCH] websockets support for uwsgi protocol In-Reply-To: References: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> <20130220135054.GW81985@mdounin.ru> <8b097697f2ec8602e609a78209cdab58.squirrel@manage.unbit.it> Message-ID: <20130220151149.GX81985@mdounin.ru> Hello! On Wed, Feb 20, 2013 at 03:07:53PM +0100, Roberto De Ioris wrote: > > > > >> Hello! > >> > >> (Cc'd nginx-devel@ as this is better list for this discussion.) > >> > >> On Wed, Feb 20, 2013 at 02:30:38PM +0100, Roberto De Ioris wrote: > >> > >>> Hi, the (tiny) attached patch enable support for new websockets > >>> handling > >>> when the uwsgi protocol is used instead of HTTP. > >>> > >>> I have tested it with various websocket libraries and with the api > >>> available in uWSGI 1.9. > >>> > >>> From 1.9 sources (with nginx pointing to uwsgi port 3031): > >>> > >>> ./uwsgi -s :3031 -w tests.websockets_echo --gevent 10 > >>> > >>> > >>> no additional configuration is needed for nginx > >> > >> Should we also do the same for SCGI? > >> > >> I personally think it is more or less the same for all CGI-based > >> protocols, and the only problematic one is FastCGI, which probably > >> still needs wrapping within FCGI_STDIN/FCGI_STDOUT records instead > >> of real connection upgrade. 
> >> > >> > > > > AFAIK scgi body management works in the same way as the uwsgi one, so the > > patch should be usable there too > > > Ok, i can confirm the same patch applied to the SCGI module in the same > position works (at least with uWSGI 1.9 in SCGI mode) > > ./uwsgi --scgi-nph-socket :3031 -w tests.websockets_echo --gevent 10 > > remember to add > > scgi_param PATH_INFO $document_uri; > > to the nginx config to make it work Ok, so the next question is: any specific reason to exclude normal CGI responses with "Status" as in your patch? I in fact don't like the idea of supporting http-like answers with status like from CGI-like protocols, correct way is to use "Status" header. Not sure why Manlio introduced it at all, probably due to some compatibility concerns (and due to the fact that SCGI specification explicitly refuses to specify response format). Something like this should be better, IMHO: diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -984,7 +984,7 @@ ngx_http_scgi_process_header(ngx_http_re u = r->upstream; if (u->headers_in.status_n) { - return NGX_OK; + goto done; } if (u->headers_in.status) { @@ -1015,6 +1015,14 @@ ngx_http_scgi_process_header(ngx_http_re u->state->status = u->headers_in.status_n; } + done: + + if (u->headers_in.status_n == NGX_HTTP_SWITCHING_PROTOCOLS + && r->headers_in.upgrade) + { + u->upgrade = 1; + } + return NGX_OK; } diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -1018,7 +1018,7 @@ ngx_http_uwsgi_process_header(ngx_http_r u = r->upstream; if (u->headers_in.status_n) { - return NGX_OK; + goto done; } if (u->headers_in.status) { @@ -1049,6 +1049,14 @@ ngx_http_uwsgi_process_header(ngx_http_r u->state->status = u->headers_in.status_n; } + 
done: + + if (u->headers_in.status_n == NGX_HTTP_SWITCHING_PROTOCOLS + && r->headers_in.upgrade) + { + u->upgrade = 1; + } + return NGX_OK; } -- Maxim Dounin http://nginx.com/support.html From roberto at unbit.it Wed Feb 20 15:20:29 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 20 Feb 2013 16:20:29 +0100 Subject: [PATCH] websockets support for uwsgi protocol In-Reply-To: <20130220151149.GX81985@mdounin.ru> References: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> <20130220135054.GW81985@mdounin.ru> <8b097697f2ec8602e609a78209cdab58.squirrel@manage.unbit.it> <20130220151149.GX81985@mdounin.ru> Message-ID: > > Ok, so the next question is: any specific reason to exclude normal > CGI responses with "Status" as in your patch? > > I in fact don't like the idea of supporting http-like answers with > status like from CGI-like protocols, correct way is to use > "Status" header. Not sure why Manlio introduced it at all, > probably due to some compatibility concerns (and due to the fact > that SCGI specification explicitly refuses to specify response > format). Honestly i do not remember why Manlio added support for nph (but i have added it to uWSGI SCGI parser too, so in my subconsciuous there should be a good reason :P) regarding your updated patch is better for sure -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Wed Feb 20 15:57:40 2013 From: nginx-forum at nginx.us (WGH) Date: Wed, 20 Feb 2013 10:57:40 -0500 Subject: WebSockets - connection keeps closing after 1 minute Message-ID: <35e4d4c4dbd9773476fb7b44a38b6865.NginxMailingListEnglish@forum.nginx.org> Hello! My WebSocket connections keep closing after precisely one minute. The configuration is the same as provided in that commit comment. http://trac.nginx.org/nginx/changeset/5073/nginx Direct connections to the server (without intermediate nginx) happily live for many hours. What option should I change to fix that? 
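The one-minute figure matches the 60-second default of proxy_read_timeout, which Maxim's reply below points at. For reference, the upgrade configuration from that commit combined with a raised timeout looks roughly like this sketch (the location, the "backend" upstream name, and the one-hour value are illustrative assumptions, not taken from the original report):

```nginx
# Sketch only: "backend", the location path, and the 1h value are
# illustrative assumptions, not part of the original report.
location /ws/ {
    proxy_pass http://backend;

    # Connection upgrade handling as introduced in changeset 5073
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Defaults to 60s; an idle WebSocket is closed when it expires.
    # Raise it, or have the backend send ping frames periodically.
    proxy_read_timeout 1h;
}
```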
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236382,236382#msg-236382 From mdounin at mdounin.ru Wed Feb 20 16:06:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Feb 2013 20:06:18 +0400 Subject: WebSockets - connection keeps closing after 1 minute In-Reply-To: <35e4d4c4dbd9773476fb7b44a38b6865.NginxMailingListEnglish@forum.nginx.org> References: <35e4d4c4dbd9773476fb7b44a38b6865.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130220160618.GY81985@mdounin.ru> Hello! On Wed, Feb 20, 2013 at 10:57:40AM -0500, WGH wrote: > Hello! > > My WebSocket connections keep closing after precisely one minute. > The configuration is the same as provided in that commit comment. > http://trac.nginx.org/nginx/changeset/5073/nginx > > Direct connections to the server (without intermediate nginx) happily live > for many hours. > > What option should I change to fix that? There is proxy_read_timeout (http://nginx.org/r/proxy_read_timeout) which as well applies to WebSocket connections. You have to bump it if your backend do not send anything for a long time. Alternatively, you may configure your backend to send websocket ping frames periodically to reset the timeout (and check if the connection is still alive). -- Maxim Dounin http://nginx.com/support.html From dewanggaba at gmail.com Wed Feb 20 16:15:08 2013 From: dewanggaba at gmail.com (antituhan) Date: Wed, 20 Feb 2013 08:15:08 -0800 (PST) Subject: How about an nginx repository mirror list? In-Reply-To: References: <1361288786000-7583852.post@n2.nabble.com> Message-ID: <1361376908814-7583876.post@n2.nabble.com> Jonathan Matthews wrote > On 19 February 2013 15:46, antituhan < > dewanggaba@ > > wrote: >> I see here http://wiki.nginx.org/Install for the Nginx Repository, >> especially >> RHEL/CentOS don't have any mirror likes IP Based to Location. Or, is >> there >> a plan to build it? > > I can't parse what you wrote. Please rewrite it more clearly. 
> > -- > Jonathan Matthews // Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx@ > http://mailman.nginx.org/mailman/listinfo/nginx Sorry :D I mean, why doesn't nginx have any mirrors of the main package repository (e.g. like other Linux distributions' repositories), spread around the world? :) ----- [daemon at antituhan.com ~]# -- View this message in context: http://nginx.2469901.n2.nabble.com/How-about-an-nginx-repository-mirror-list-tp7583852p7583876.html Sent from the nginx mailing list archive at Nabble.com. From dewanggaba at gmail.com Wed Feb 20 16:15:57 2013 From: dewanggaba at gmail.com (antituhan) Date: Wed, 20 Feb 2013 08:15:57 -0800 (PST) Subject: How about an nginx repository mirror list? In-Reply-To: <29BCACA5-3478-4F50-91FA-BE8E80AB5C81@waeme.net> References: <1361288786000-7583852.post@n2.nabble.com> <29BCACA5-3478-4F50-91FA-BE8E80AB5C81@waeme.net> Message-ID: <1361376957366-7583877.post@n2.nabble.com> Sergey Budnevitch wrote > On 19 Feb2013, at 19:46 , antituhan < > dewanggaba@ > > wrote: > >> I see here http://wiki.nginx.org/Install for the Nginx Repository, >> especially >> RHEL/CentOS don't have any mirror likes IP Based to Location. Or, is >> there >> a plan to build it? > > There are no plans to add mirrors to main package repository. > > _______________________________________________ > nginx mailing list > nginx@ > http://mailman.nginx.org/mailman/listinfo/nginx Thanks Sergey for your answer, but would you mind telling us why? ----- [daemon at antituhan.com ~]# -- View this message in context: http://nginx.2469901.n2.nabble.com/How-about-an-nginx-repository-mirror-list-tp7583852p7583877.html Sent from the nginx mailing list archive at Nabble.com.
From nginx-forum at nginx.us Wed Feb 20 16:17:11 2013 From: nginx-forum at nginx.us (WGH) Date: Wed, 20 Feb 2013 11:17:11 -0500 Subject: WebSockets - connection keeps closing after 1 minute In-Reply-To: <20130220160618.GY81985@mdounin.ru> References: <20130220160618.GY81985@mdounin.ru> Message-ID: <1cc90bc0223012cc59e806d5353ba4ac.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > There is proxy_read_timeout (http://nginx.org/r/proxy_read_timeout) > which as well applies to WebSocket connections. You have to bump > it if your backend do not send anything for a long time. > Alternatively, you may configure your backend to send websocket > ping frames periodically to reset the timeout (and check if the > connection is still alive). Thanks, raising the timeout resolved the problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236382,236386#msg-236386 From sb at waeme.net Wed Feb 20 16:26:55 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Wed, 20 Feb 2013 20:26:55 +0400 Subject: How about an nginx repository mirror list? In-Reply-To: <1361376957366-7583877.post@n2.nabble.com> References: <1361288786000-7583852.post@n2.nabble.com> <29BCACA5-3478-4F50-91FA-BE8E80AB5C81@waeme.net> <1361376957366-7583877.post@n2.nabble.com> Message-ID: <1B88F559-DBD4-4E4B-BDE9-D4CEC7CC009D@waeme.net> On 20 Feb2013, at 20:15 , antituhan wrote: > Sergey Budnevitch wrote >> On 19 Feb2013, at 19:46 , antituhan < > >> dewanggaba@ > >> > wrote: >> >>> I see here http://wiki.nginx.org/Install for the Nginx Repository, >>> especially >>> RHEL/CentOS don't have any mirror likes IP Based to Location. Or, is >>> there >>> a plan to build it? >> >> There are no plans to add mirrors to main package repository. > > Thanks sergey for your answer, > > But would you mind to tell us why? We have no capacity problems with current repository, since nginx packages are small, so mirrors are useless in our case. 
From mdounin at mdounin.ru Wed Feb 20 16:41:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Feb 2013 20:41:37 +0400 Subject: [PATCH] websockets support for uwsgi protocol In-Reply-To: References: <7e944b12d0024dd29d89ffec3e743a09.squirrel@manage.unbit.it> <20130220135054.GW81985@mdounin.ru> <8b097697f2ec8602e609a78209cdab58.squirrel@manage.unbit.it> <20130220151149.GX81985@mdounin.ru> Message-ID: <20130220164137.GA81985@mdounin.ru> Hello! On Wed, Feb 20, 2013 at 04:20:29PM +0100, Roberto De Ioris wrote: > > > > > Ok, so the next question is: any specific reason to exclude normal > > CGI responses with "Status" as in your patch? > > > > I in fact don't like the idea of supporting http-like answers with > > status like from CGI-like protocols, correct way is to use > > "Status" header. Not sure why Manlio introduced it at all, > > probably due to some compatibility concerns (and due to the fact > > that SCGI specification explicitly refuses to specify response > > format). > > Honestly i do not remember why Manlio added support for nph (but i have > added it to uWSGI SCGI parser too, so in my subconsciuous there should be > a good reason :P) > > regarding your updated patch is better for sure Committed, thnx. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Feb 20 17:02:04 2013 From: nginx-forum at nginx.us (huynq) Date: Wed, 20 Feb 2013 12:02:04 -0500 Subject: Proxying non-ssl SMTP/POP to ssl SMTP/POP Message-ID: <1d8e7ff6af46473c9c4f4cf9ee3d1f0c.NginxMailingListEnglish@forum.nginx.org> Hi everyone, I'm now using nginx in setting up a proxying mail system in my company. The model is as below: Mail client <=========> Nginx proxy <===========> Mail server The stream between mail client and nginx is non-ssl connection (includes smtp and pop3 stream), whereas the stream between nginx and mail server uses ssl. 
The reason I use this model is that I want to create a neutral node that can modify the emails before being delivered to mail server or client. However, I'm still stuck on the configuration of nginx to implement this model. So if you have any idea about how to configure this system, as well as its feasibility with nginx, please help me. I'm still a newbie to this field, so if this is a dumb question, please forgive my awkwardness. Thank you very much! Btw: I also referred to the question from kgoj at: http://forum.nginx.org/read.php?2,126528,126528#msg-126528. However, his model is the reverse of mine and I haven't found any ideas yet! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236390,236390#msg-236390 From nginx-forum at nginx.us Wed Feb 20 17:17:10 2013 From: nginx-forum at nginx.us (jims) Date: Wed, 20 Feb 2013 12:17:10 -0500 Subject: Reverse proxy configuration help In-Reply-To: <24f3c1f99f1ea2ee3a1236eae4899c22.NginxMailingListEnglish@forum.nginx.org> References: <24f3c1f99f1ea2ee3a1236eae4899c22.NginxMailingListEnglish@forum.nginx.org> Message-ID: Jonathan, I just want to thank you for helping a newbie out, once again. Your advice helped me to get things going. Now I get to fine-tune it... jims Wrote: ------------------------------------------------------- > Jonathan Matthews Wrote: > ------------------------------------------------------- > > On 18 February 2013 15:06, jims wrote: > > > I am new to nginx, it being recommended to solve a problem. > > > > [ Having read your mail, this kind of reverse proxying is exactly > what > > nginx is very good at; I think you're just trying to do too much, > too > > quickly, and need to step back from the problem for a moment to > > identify what your first steps should be; then iterate from simple > to > > complex behaviours, only moving forward once each behaviour works > > successfully. ] > > > Point taken. Going straight for the desired end result doesn't always > save time...
> > Thanks for your response, Jonathan. It has been helpful. Read on for > responses to your comments... > > > > The problem: I have a VPS hosting a website and an application > > server in my > > > DMZ. I have a test and prod version of each. I want both DMZ'ed > > servers > > > reverse-proxied such that requests where the referrer is the test > > web server > > > always go to the test app server and requests where the referrer > is > > anything > > > but the test web server always go to the production app server. > > > > When you say "referrer", do you really mean the referrer as > > distinguished by client-originated HTTP headers? I wouldn't do > that, > > personally ... > > > When I say "referrer" I mean the site where the link is presented to > the end user. If that is what is "distinguished by client-originated > HTTP headers" then yes. > The desired result is that if a person is in our pool of testers and > is testing the development website, any app server link (although > pointing putatively to the production app server) would be sent to the > reverse-proxy that's front-ending the test app server. The idea is to > minimize unauthorized traffic to the test server. By using only links > that get to the production app server, if someone saves the link and > tries again later, they will hit the production app server's > reverse-proxy front-end. They would only hit our test app server if > they are actively testing for us. 
Once testing is complete, the > proven code can be promoted to te production webste without having to > deal with changing test links to prod links in the process Those who > will be maintaining the links ongoing should not be expected either to > change links as part of a move-to-production or to have to learn how > to put variables into all the links, and we would not have to modify > the CMS to handle links with variables - they should be able to copy > and paste to create links, which resulting content should be able to > be promoted to production without change, or it defeats the purpose of > using a modern content-management system. > > > The app servers can only be accessed over https, and the proxy > will > > > eventually but not quite yet. > > > > That last part may be more of an issue for you, as you'll discover > you > > need an IP address per SSL site you want to host. > > > Normally, yes, and each of the app server hostnames has its own > registered IP address now, with trusted certs associated. We are > working on obtaining a wildcard cert which we'd use for the proxy as > well as the website, and will add IP addresses to the proxy if > necessary. I would hope that, since we want the proxy to choose > between two back-end app servers for the same front-end uri, depending > on whether or not there is a referrer of the development website, one > IP should be all that's needed on the front-end, correct? > > > Question: What is the best way to accomplish this? I am trying > to > > use two > > > different registered host names which are registered to the > > secondary IP on > > > the VPS, as the proxied names for the app servers, but that's not > > working > > > too well. 
I wonder if it would be better to have a single server > > name for > > > the proxy with the two proxied servers selected based on > referrer, > > rather > > > than trying to redirect to another server name, with one server > name > > > servicing one proxied server and the other, the other proxied > > server. > > > > Goodness, no. I wouldn't /touch/ referer headers for HTTP routing. > So > > unreliable! > > > OK. How would you recommend ensuring that if you click on a link on > our dev site, it goes to the proxied test app server but if you access > that same URL in any other way, whether by way of a link on the prod > website, a bookmark, someone emailing you the link - the request goes > to the proxied prod app server? As I said, I'm an nginx newb, so > monosyllabic responses are appreciated... ;) > > > Regardless, I can't seem to get past the connection to the > backend > > server. > > > I keep getting a 110 connection failure. I have tried several > > > configurations but none seem to work. > > > > What does a connection, via telnet/netcat, from the server, show > you? > > > I get a connection. I haven't figured out the right HTTP command to > send to get a valid response yet, but I get a response - not a > timeout. > > > The problem I'm running into may be related to use of the > > valid_referers > > > directive. It doesn't seem to do what I need, which is to use > one > > back-end > > > for requests referred from one web server host but use the other > for > > all > > > other requests. > > > > I may be repeating a single tune here, but I would really force > your > > business to re-examine your requirements if you think this is > > desirable behaviour. > > > See my earlier response explaining the business requirement, to > understand why this is a desireable behavior. > > > If I have two server directives with the same IP but two > different > > server > > > names, it seems I can't have two location directives, one within > > each server > > > name. 
> > > > Each server may have zero or more location directives. > > Each location belongs to exactly one server stanza. > > > > I don't understand exactly what you think doesn't work, but if it > > contradicts the above 2 lines, then it's not legal nginx config. > > > If you look at the example conf I posted, that configuration - two > separate server stanzas, each with a location directive, and I get > that message. I probably have something else misconfigured. Again, > newb... > > > If I could get that to work, it seems to me it should allow me > to > > > redirect to the default app server using the valid_referers > > directive within > > > the referrer-specific app server's server directive, but that > > doesn't seem > > > to work the way I expect, either. > > > > When you say "redirect" here, you really mean "reverse proxy", > don't > > you? > > "Redirecting" is a very specific, unrelated thing in > HTTP-server-speak > > . > The redirect is a redirect - telling nginx to use a different > reverse-proxy "upstream" server from what it would normally use based > on the URL in the request. However, if there is a better way to get > the same result I am all for it. For example, a method whereby the > same front-end url chooses an upstream server based on the > valid_referer criterion, or whatever it is you would recommend other > than the referrer,. > > > > > I don't have a config file to post because it has gone through a > > dozen > > > iterations already, none of which have been saved. > > > > apt-get install git-core :-P > > > I don't want to install apt on my centos server :/ How 'bout 'yum > install git-core?' 
> > > A generic example of > > > one that doesn't work would be : > > > server { > > > listen 10.10.10.10:80; > > > server_name devappxy.mydomain.com; > > > valid_referers devweb.mydomain.com; > > > if ($invalid_referer) { > > > return 301 http://apppxy.mydomain.com$request_uri; > > > } > > > proxy_bind 10.10.10.10; > > > access_log /var/log/nginx/devpxyaccess.log main; > > > error_log /var/log/nginx/devpxyerror.log debug; > > > location / { > > > proxy_pass https://devapp.mydomain.com; > > > proxy_redirect https://devapp.mydomain.com / ; > > > } > > > } > > > server { > > > listen 10.10.10.10:80 ; > > > server_name apppxy.mydomain.com ; > > > proxy_bind 10.10.10.10 ; > > > access_log /var/log/nginx/pxyaccess.log main ; > > > error_log /var/log/nginx/pxyerror.log debug ; > > > location / { > > > proxy_pass https://prodapp.mydomain.com ; > > > proxy_redirect https://prodapp.mydomain.com / ; > > > } > > > } > > > > > > > The only real problem I can see is that you don't have a resolver > > specified, so nginx doesn't know how to resolve the app FQDNs. > > Irrespective of this, there are much nicer ways to achieve this, > which > > might use: > > > > * Nginx maps to translate from client Host header to backend FQDN. > Would that work if the goal is to direct traffic based on where you're > coming from? I will explore... > > * Access/error logs specified using variables, but DRY them out at > a > > higher level than per-server (i.e. state them once, globally, at > the > > http level. > The logs are specified at per-server to quickly identify where the > failure lies. They will be only at the nginx.conf http level when I > have a suceessful configuration. > > * A single server stanza, switching between backends. > > > I like the idea - I'm just stuck on how to get it to switch based on > where the client is coming from... 
> > I could write a version that uses these concepts for you, but I'd > be > > depriving you of the educational and life-affirming journey of > Getting > > There Yourself if I did ;-) > > > > If you want to get the best possible help with this, reduce the > > clutter in your example/failing config (i.e. make the smallest > > possible config that doesn't do what you think it /should/ do), and > > re-engage with the list. > > > > > When I do that it says "location" directive isn't allowed here... > > > > When you do what? > > > When I set up my included config file to use the two-server-stanza > configuration I posted (with hostnames/addresses pointing to real-life > stuff, of course) that's what I get when issuing the service restart. > > Jonathan > > -- > > Jonathan Matthews // Oxford, London, UK > > http://www.jpluscplusm.com/contact.html > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Thanks again - you've been quite helpful. > > Jim. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236278,236361#msg-236361 From rakan.alhneiti at gmail.com Wed Feb 20 20:13:18 2013 From: rakan.alhneiti at gmail.com (Rakan Alhneiti) Date: Wed, 20 Feb 2013 23:13:18 +0300 Subject: Fwd: nginx performance on Amazon EC2 In-Reply-To: References: Message-ID: Hello, I am running a django app with nginx & uwsgi on an amazon ec2 instance and a vmware machine almost the same size as the ec2 one. 
Here's how i run uwsgi: sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499 & nginx configurations: server { listen 80; server_name test.com root /www/python/apps/pyapp/; access_log /var/log/nginx/test.com.access.log; error_log /var/log/nginx/test.com.error.log; # https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production location /static/ { alias /www/python/apps/pyapp/static/; expires 30d; } location /media/ { alias /www/python/apps/pyapp/media/; expires 30d; } location / { uwsgi_pass unix:///tmp/pyapp.socket; include uwsgi_params; proxy_read_timeout 120; } # what to serve if upstream is not available or crashes #error_page 500 502 503 504 /media/50x.html;} Here comes the problem. When doing "ab" (ApacheBenchmark) on both machines i get the following results: (vmware machine being almost the same size as the ec2 small instance)

*Amazon EC2:* nginx version: nginx/1.2.6, uwsgi version: 1.4.5

Concurrency Level: 500
Time taken for tests: 21.954 seconds
Complete requests: 5000
Failed requests: 126 (Connect: 0, Receive: 0, Length: 126, Exceptions: 0)
Write errors: 0
Non-2xx responses: 4874
Total transferred: 4142182 bytes
HTML transferred: 3384914 bytes
Requests per second: 227.75 [#/sec] (mean)
Time per request: 2195.384 [ms] (mean)
Time per request: 4.391 [ms] (mean, across all concurrent requests)
Transfer rate: 184.25 [Kbytes/sec] received

*Vmware machine (CentOS 6):* nginx version: nginx/1.0.15, uwsgi version: 1.4.5

Concurrency Level: 1000
Time taken for tests: 1.094 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 30190000 bytes
HTML transferred: 28930000 bytes
Requests per second: 4568.73 [#/sec] (mean)
Time per request: 218.879 [ms] (mean)
Time per request: 0.219 [ms] (mean, across all concurrent requests)
Transfer rate: 26939.42 [Kbytes/sec] received

As you can see... all requests on the ec2 instance fail with either timeout errors or "Client prematurely disconnected". However, on my vmware machine all requests go through with no problems. The other thing is the difference in reqs / second i am doing on both machines. What am i doing wrong on ec2? From sparvu at systemdatarecorder.org Wed Feb 20 20:28:27 2013 From: sparvu at systemdatarecorder.org (Stefan Parvu) Date: Wed, 20 Feb 2013 20:28:27 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130220132504.GJ32392@craic.sysops.org> References: <20130219124723.GC14104@localhost> <20130219172416.GG32392@craic.sysops.org> <20130220100004.GC22940@localhost> <20130220132504.GJ32392@craic.sysops.org> Message-ID: <20130220202827.GB27026@localhost> > > "~" means "regex". > "^" means "start of string". > "$" means "end of string". > This location will only match requests that are > /cgi-bin/R > or > > /cgi-bin/R?something > and not > > /cgi-bin/R/something > yep.
silly me, I do have a location for all /cgi-bin/R/ calls, like: location ~ ^/cgi-bin/R/ { gzip off; fastcgi_pass unix:/opt/sdr/report/ws/fastcgi_temp/nginx-fcgi.sock; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME /opt/sdr/report/docroot/$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT /opt/sdr/report/docroot; } and another one for anything *.cgi: location ~ ^/cgi-bin/.*\.cgi$ { gzip off; fastcgi_pass unix:/opt/sdr/report/ws/fastcgi_temp/nginx-fcgi.sock; fastcgi_read_timeout 5m; fastcgi_index index.cgi; #fastcgi_buffers 8 4k; # # You may copy and paste the lines under or use include directive # include /etc/nginx/nginx-fcgi.conf; # In this example all is in one file ... } since under my /cgi-bin/ I can have lots of cgi scripts and the executable R cgi script. Testing any calls for /cgi-bin/R/foo returns: Cannot get script name, are DOCUMENT_ROOT and SCRIPT_NAME (or SCRIPT_FILENAME) set and is the script executable? Cannot get script name, are DOCUMENT_ROOT and SCRIPT_NAME (or SCRIPT_FILENAME) set and is the script executable? Im trying to see if fastcgi_split_path_info might help anything. 
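For reference, a sketch of how fastcgi_split_path_info could slot into the location above; the split regex here is an assumption based on the /cgi-bin/R/foo URLs being tested, and the paths are copied from the configuration shown earlier:

```nginx
# Sketch only: the split regex is an assumed fit for /cgi-bin/R/... URLs.
location ~ ^/cgi-bin/R/ {
    # Two captures: script part and trailing path-info part.
    fastcgi_split_path_info ^(/cgi-bin/R)(/.*)$;

    # For a request to /cgi-bin/R/foo this yields:
    #   $fastcgi_script_name = /cgi-bin/R   (the executable)
    #   $fastcgi_path_info   = /foo         (handed to the application)
    fastcgi_param SCRIPT_FILENAME /opt/sdr/report/docroot$fastcgi_script_name;
    fastcgi_param PATH_INFO       $fastcgi_path_info;

    fastcgi_pass unix:/opt/sdr/report/ws/fastcgi_temp/nginx-fcgi.sock;
}
```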
stefan From francis at daoine.org Wed Feb 20 21:44:05 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Feb 2013 21:44:05 +0000 Subject: nginx serving R scripts via CGI/FastRWeb In-Reply-To: <20130220202827.GB27026@localhost> References: <20130219124723.GC14104@localhost> <20130219172416.GG32392@craic.sysops.org> <20130220100004.GC22940@localhost> <20130220132504.GJ32392@craic.sysops.org> <20130220202827.GB27026@localhost> Message-ID: <20130220214405.GK32392@craic.sysops.org> On Wed, Feb 20, 2013 at 08:28:27PM +0000, Stefan Parvu wrote: > location ~ ^/cgi-bin/R/ { That will match anything starting with /cgi-bin/R/, which is (most of) what you want. These requests will... > fastcgi_pass unix:/opt/sdr/report/ws/fastcgi_temp/nginx-fcgi.sock; be sent to the fastcgi server. Now, for all of these requests, you want to tell the fastcgi server to use the file /opt/sdr/report/docroot/cgi-bin/R. So: > fastcgi_index index.cgi; you don't need that, since it will never apply; and > fastcgi_param SCRIPT_FILENAME /opt/sdr/report/docroot/$fastcgi_script_name; that should be something that expands to /opt/sdr/report/docroot/cgi-bin/R. Which is probably $document_root/cgi-bin/R; you can use whatever matches exactly that filename. $fastcgi_script_name does not have the correct value by default, in this case. > fastcgi_param PATH_INFO $fastcgi_path_info; If your application uses PATH_INFO, then you'll want to set it something like that. I don't think that $fastcgi_path_info actually has a value by default, though. > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT /opt/sdr/report/docroot; They are all fairly standard; probably only the first and last matter, but it depends on your fastcgi server and your application. 
The last one would usually be written to use $document_root, but anything that ends up with the correct value is good. Note that your fastcgi server, from the logs you provide, either wants SCRIPT_FILENAME to be correct, or DOCUMENT_ROOT and SCRIPT_NAME to be correct; and you don't provide a SCRIPT_NAME. That may not matter when SCRIPT_FILENAME points to the R binary. > Testing any calls for /cgi-bin/R/foo returns: > Cannot get script name, are DOCUMENT_ROOT and SCRIPT_NAME (or SCRIPT_FILENAME) set and is the script executable? > Cannot get script name, are DOCUMENT_ROOT and SCRIPT_NAME (or SCRIPT_FILENAME) set and is the script executable? >From the above config, SCRIPT_FILENAME was probably /opt/sdr/report/docroot/cgi-bin/R/foo, which is not an executable file. > Im trying to see if fastcgi_split_path_info might help anything. Yes. http://nginx.org/r/fastcgi_split_path_info It will change the values of $fastcgi_script_name and $fastcgi_path_info. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Feb 20 22:10:26 2013 From: nginx-forum at nginx.us (mrtn) Date: Wed, 20 Feb 2013 17:10:26 -0500 Subject: How to check the existence of a http-only secure cookie Message-ID: I have a http-only and secure (ssl) cookie, and I want nginx to check whether this cookie exists in a request, if not, reject it by serving a 404 page. This is just a preliminary check, so I don't care about the actual value in the cookie. So far I've tried this: if ($http_cookie !~* "cookie_name=[.]+") { return 404; } in a location directive, but despite the cookie is contained in the requests, 404 is returned. What should be corrected here? Thanks! 
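The culprit is the character class: inside `[]` a dot matches a literal dot, so `cookie_name=[.]+` only matches cookie values made entirely of dots (as the reply below explains). A quick illustration in Python, whose `re` module behaves the same way as nginx's PCRE on this pattern:

```python
import re

# "[.]+" means one or more LITERAL dots, not "one or more of any character"
assert re.search(r"cookie_name=[.]+", "cookie_name=abc123") is None
assert re.search(r"cookie_name=[.]+", "cookie_name=...") is not None

# Dropping the brackets gives the intended "any non-empty value" test
assert re.search(r"cookie_name=.+", "cookie_name=abc123") is not None
assert re.search(r"cookie_name=.+", "cookie_name=") is None
```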
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236394,236394#msg-236394 From francis at daoine.org Wed Feb 20 22:22:18 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Feb 2013 22:22:18 +0000 Subject: How to check the existence of a http-only secure cookie In-Reply-To: References: Message-ID: <20130220222218.GL32392@craic.sysops.org> On Wed, Feb 20, 2013 at 05:10:26PM -0500, mrtn wrote: > I have a http-only and secure (ssl) cookie, and I want nginx to check > whether this cookie exists in a request, if not, reject it by serving a 404 > page. This is just a preliminary check, so I don't care about the actual > value in the cookie. > > So far I've tried this: if ($http_cookie !~* "cookie_name=[.]+") { return > 404; } in a location directive, but despite the cookie is contained in the > requests, 404 is returned. What should be corrected here? Thanks! Does it pass if the cookie value starts with a dot? Every character in the regex means something. "." probably doesn't mean what you think it means here. Omit the [] and it might work for you. Or you could just test $cookie_cookie_name directly -- does it equal the empty string? If not, it has a value. (This doesn't actually check for http-only or secure, but you probably know that already.) f -- Francis Daly francis at daoine.org From list-reader at koshie.fr Wed Feb 20 22:27:04 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_K=C3=A9vin?=) Date: Wed, 20 Feb 2013 23:27:04 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 Message-ID: Hello, First, I'm new to this Mailing List. As say my signature I'm non-English and I'm trying the best to be understandable. If something isn't clear or noisy, please tell me. I'm using Nginx 1.2.1 on Debian Wheezy 64 bits, I want to host some personals website on my dedicated server. I've installed php5-fpm, looked at the configuration file which is /etc/php5/fpm/pool.d/www.conf and the "listen" parameters is '/var/run/php5-fpm.sock'. 
Some people on #nginx at freenode.org told me I need to put 'fastcgi_pass unix:/var/run/php5-fpm.sock;' into the virtual host configuration files in the /etc/nginx/conf.d/ directory. I did that and then checked one of my websites, which had been giving me a "502 Bad Gateway"; it worked. But when I try to test whether PHP is working with phpinfo(), I get a 502, and WordPress doesn't work either, so it's clear: my configuration is messed up. This is my configuration: /etc/nginx/nginx.conf: user www-data; worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; index index.html index.php; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/fastcgi_params; } Example of a vhost configuration file at /etc/nginx/conf.d/: server { listen 80; listen 443 ssl; # server_name ***.**.***.**; server_name subdomain.koshie.fr www.subdomain.koshie.fr; root /var/www/koshie.fr/subdomain; msie_padding on; # ssl_certificate /etc/nginx/certs/auction-web.crt; # ssl_certificate_key /etc/nginx/certs/auction-web.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; index index.php; fastcgi_index index.php; client_max_body_size 8M; client_body_buffer_size 256K; location ~ \.php$ { include fastcgi_params; # Assuming php-fastcgi running on localhost port 9000 # fastcgi_pass 127.0.0.1:9000; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180;
fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; } } /etc/php5/fpm/pool.d/www.conf (unchanged): ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /usr) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. listen = /var/run/php5-fpm.sock ; Set listen(2) backlog. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = 128 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. ; Default Values: user and group are set as the running user ; mode is set to 0666 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. 
If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives. With this process management, there will be ; always at least 1 children. ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; ondemand - no children are created at startup. Children will be forked when ; new requests will connect. The following parameter are used: ; pm.max_children - the maximum number of children that ; can be alive at the same time. ; pm.process_idle_timeout - The number of seconds after which ; an idle process will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. The below defaults are based on a server without much resources. Don't ; forget to tweak pm.* to fit your needs. ; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand' ; Note: This value is mandatory. 
pm.max_children = 5 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 2 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 1 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 3 ; The number of seconds after which an idle process will be killed. ; Note: Used only when pm is set to 'ondemand' ; Default Value: 10s ;pm.process_idle_timeout = 10s; ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 ;pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. 
It shows the following informations: ; pool - the name of the pool; ; process manager - static, dynamic or ondemand; ; start time - the date and time FPM has started; ; start since - number of seconds since FPM has started; ; accepted conn - the number of request accepted by the pool; ; listen queue - the number of request in the queue of pending ; connections (see backlog in listen(2)); ; max listen queue - the maximum number of requests in the queue ; of pending connections since FPM has started; ; listen queue len - the size of the socket queue of pending connections; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes; ; max active processes - the maximum number of active processes since FPM ; has started; ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic' and 'ondemand'); ; Value are updated in real time. ; Example output: ; pool: www ; process manager: static ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 62636 ; accepted conn: 190460 ; listen queue: 0 ; max listen queue: 1 ; listen queue len: 42 ; idle processes: 4 ; active processes: 11 ; total processes: 15 ; max active processes: 12 ; max children reached: 0 ; ; By default the status page output is formatted as text/plain. Passing either ; 'html', 'xml' or 'json' in the query string will return the corresponding ; output syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; http://www.foo.bar/status?xml ; ; By default the status page only outputs short status. Passing 'full' in the ; query string will also return status for each pool process. 
; Example: ; http://www.foo.bar/status?full ; http://www.foo.bar/status?json&full ; http://www.foo.bar/status?html&full ; http://www.foo.bar/status?xml&full ; The Full status returns for each process: ; pid - the PID of the process; ; state - the state of the process (Idle, Running, ...); ; start time - the date and time the process has started; ; start since - the number of seconds since the process has started; ; requests - the number of requests the process has served; ; request duration - the duration in ?s of the requests; ; request method - the request method (GET, POST, ...); ; request URI - the request URI with the query string; ; content length - the content length of the request (only with POST); ; user - the user (PHP_AUTH_USER) (or '-' if not set); ; script - the main script called (or '-' if not set); ; last request cpu - the %cpu the last request consumed ; it's always 0 if the process is not in Idle state ; because CPU calculation is done when the request ; processing has terminated; ; last request memory - the max amount of memory the last request consumed ; it's always 0 if the process is not in Idle state ; because memory calculation is done when the request ; processing has terminated; ; If the process is in Idle state, then informations are related to the ; last request the process has served. Otherwise informations are related to ; the current request being served. ; Example output: ; ************************ ; pid: 31330 ; state: Running ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 63087 ; requests: 12808 ; request duration: 1250261 ; request method: GET ; request URI: /test_mem.php?N=10000 ; content length: 0 ; user: - ; script: /home/fat/web/docs/php/test_mem.php ; last request cpu: 0.00 ; last request memory: 0 ; ; Note: There is a real-time FPM status monitoring sample web page available ; It's available in: ${prefix}/share/fpm/status.html ; ; Note: The value must start with a leading slash (/). 
The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ;ping.response = pong ; The access log file ; Default: not set ;access.log = log/$pool.access.log ; The access log format. ; The following syntax is allowed ; %%: the '%' character ; %C: %CPU used by the request ; it can accept the following format: ; - %{user}C for user CPU only ; - %{system}C for system CPU only ; - %{total}C for user + system CPU (default) ; %d: time taken to serve the request ; it can accept the following format: ; - %{seconds}d (default) ; - %{miliseconds}d ; - %{mili}d ; - %{microseconds}d ; - %{micro}d ; %e: an environment variable (same as $_ENV or $_SERVER) ; it must be associated with embraces to specify the name of the env ; variable. 
Some exemples: ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e ; %f: script filename ; %l: content-length of the request (for POST request only) ; %m: request method ; %M: peak of memory allocated by PHP ; it can accept the following format: ; - %{bytes}M (default) ; - %{kilobytes}M ; - %{kilo}M ; - %{megabytes}M ; - %{mega}M ; %n: pool name ; %o: ouput header ; it must be associated with embraces to specify the name of the header: ; - %{Content-Type}o ; - %{X-Powered-By}o ; - %{Transfert-Encoding}o ; - .... ; %p: PID of the child that serviced the request ; %P: PID of the parent of the child that serviced the request ; %q: the query string ; %Q: the '?' character if query string exists ; %r: the request URI (without the query string, see %q and %Q) ; %R: remote IP address ; %s: status (response code) ; %t: server time the request was received ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; %T: time the log has been written (the request has finished) ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; %u: remote user ; ; Default: "%R - %u %t \"%m %r\" %s" ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%" ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. 
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot chdir = / ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Limits the extensions of the main script FPM will allow to parse. This can ; prevent configuration mistakes on the web server side. You should only limit ; FPM to .php extensions to prevent malicious users to use other extensions to ; exectute php code. ; Note: set an empty value to allow all extensions. ; Default Value: .php ;security.limit_extensions = .php .php3 .php4 .php5 ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. 
; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /usr) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www at my.domain.com ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M This is the kind of log (/var/log/nginx/error.log, debug enabled) I have when I'm trying to connect on a page which give 502, exactly the same vhost configuration as above except for domain and document root: 2013/02/20 23:19:25 [debug] 17211#0: epoll: fd:6 ev:0001 d:00000000007D1AF0 2013/02/20 23:19:25 [debug] 17211#0: timer delta: 14959 2013/02/20 23:19:25 [debug] 17211#0: posted events 0000000000000000 2013/02/20 23:19:25 [debug] 17211#0: worker cycle 2013/02/20 23:19:25 [debug] 17211#0: epoll timer: 60000 2013/02/20 23:19:25 [debug] 17211#0: epoll: fd:3 ev:0001 d:00000000007D1F70 2013/02/20 23:19:25 [debug] 17211#0: timer delta: 0 2013/02/20 23:19:25 [debug] 17211#0: 
posted events 0000000000000000 2013/02/20 23:19:25 [debug] 17211#0: worker cycle 2013/02/20 23:19:25 [debug] 17211#0: epoll timer: 60000 2013/02/20 23:19:25 [debug] 17211#0: epoll: fd:3 ev:0004 d:00000000007D1F70 2013/02/20 23:19:25 [debug] 17211#0: epoll: fd:10 ev:001D d:00000000007D1DF1 2013/02/20 23:19:25 [debug] 17211#0: epoll_wait() error on fd:10 ev:001D 2013/02/20 23:19:25 [error] 17211#0: *207 connect() failed (111: Connection refused) while connecting to upstream, client: 80.239.242.190, server: subdomain.koshie.fr, request: "GET /wp-admin/info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr" 2013/02/20 23:19:25 [debug] 17211#0: timer delta: 0 2013/02/20 23:19:25 [debug] 17211#0: posted events 0000000000801D70 2013/02/20 23:19:25 [debug] 17211#0: posted event 0000000000801D70 2013/02/20 23:19:25 [debug] 17211#0: posted event 0000000000000000 2013/02/20 23:19:25 [debug] 17211#0: worker cycle 2013/02/20 23:19:25 [debug] 17211#0: epoll timer: 65000 2013/02/20 23:19:25 [debug] 17211#0: epoll: fd:3 ev:0005 d:00000000007D1F70 2013/02/20 23:19:25 [debug] 17211#0: timer delta: 32 2013/02/20 23:19:25 [debug] 17211#0: posted events 0000000000000000 2013/02/20 23:19:25 [debug] 17211#0: worker cycle 2013/02/20 23:19:25 [debug] 17211#0: epoll timer: -1 If you need more logs or paste, please ask. Cordially, Koshie -- Sorry for my English, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. 
This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From francis at daoine.org Wed Feb 20 22:40:05 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Feb 2013 22:40:05 +0000 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: Message-ID: <20130220224005.GM32392@craic.sysops.org> On Wed, Feb 20, 2013 at 11:27:04PM +0100, GASPARD Kévin wrote: Hi there, > I'm using Nginx 1.2.1 on Debian Wheezy 64 bits, and I want to host some > personal websites on my dedicated server. I've installed php5-fpm and looked > at the configuration file, which is /etc/php5/fpm/pool.d/www.conf, and the > "listen" parameter is '/var/run/php5-fpm.sock'. So your fastcgi server is expected to be at /var/run/php5-fpm.sock. ls -l /var/run/php5-fpm.sock should show you that it really is there. > # fastcgi_pass 127.0.0.1:9000; > fastcgi_pass unix:/var/run/php5-fpm.sock; This looks like it *used to* try to access 127.0.0.1:9000, but now it accesses the expected socket. > 2013/02/20 23:19:25 [error] 17211#0: *207 connect() failed (111: > Connection refused) while connecting to upstream, client: 80.239.242.190, > server: subdomain.koshie.fr, request: "GET /wp-admin/info.php HTTP/1.1", > upstream: "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr" This says it *actually* tried to access 127.0.0.1:9000, and failed. The nginx config you showed is not the one that the running nginx is using when it created this log file. Does the appropriate "nginx -t" show any problems? Does "nginx -s reload" give any indication that it did not reload correctly?
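Francis's checks can be complemented by searching the whole config tree for every fastcgi_pass, since the log shows the running server still pointing at 127.0.0.1:9000 somewhere. A sketch (the /etc/nginx path comes from the thread; the throwaway directory below only keeps the commands self-contained and runnable):

```shell
# Build a throwaway config tree, then search it exactly as you would
# search the real one: grep reports every fastcgi_pass with file:line.
conf=$(mktemp -d)
mkdir -p "$conf/conf.d"
printf 'server {\n    fastcgi_pass unix:/var/run/php5-fpm.sock;\n}\n' \
    > "$conf/conf.d/site.conf"
grep -Rn 'fastcgi_pass' "$conf"

# Against the real tree:  grep -Rn 'fastcgi_pass' /etc/nginx/
```

Any hit still reading `fastcgi_pass 127.0.0.1:9000;` would explain the error-log line above.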
f -- Francis Daly francis at daoine.org From list-reader at koshie.fr Wed Feb 20 22:54:08 2013 From: list-reader at koshie.fr (GASPARD Kévin) Date: Wed, 20 Feb 2013 23:54:08 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <20130220224005.GM32392@craic.sysops.org> References: <20130220224005.GM32392@craic.sysops.org> Message-ID: > On Wed, Feb 20, 2013 at 11:27:04PM +0100, GASPARD Kévin wrote: > > Hi there, > >> I'm using Nginx 1.2.1 on Debian Wheezy 64 bits, and I want to host some >> personal websites on my dedicated server. I've installed php5-fpm and >> looked >> at the configuration file, which is /etc/php5/fpm/pool.d/www.conf, and the >> "listen" parameter is '/var/run/php5-fpm.sock'. > > So your fastcgi server is expected to be at /var/run/php5-fpm.sock. > > ls -l /var/run/php5-fpm.sock srw-rw-rw- 1 root root 0 Feb 20 14:08 /var/run/php5-fpm.sock > > should show you that it really is there. I forgot to say, but yes, it's here, and I've run the php-fpm daemon. > >> # fastcgi_pass 127.0.0.1:9000; >> fastcgi_pass unix:/var/run/php5-fpm.sock; > > This looks like it *used to* try to access 127.0.0.1:9000, but now it > accesses the expected socket. Yeah, I also forgot to say, but this configuration was first used on CentOS 6 and it worked well. > >> 2013/02/20 23:19:25 [error] 17211#0: *207 connect() failed (111: >> Connection refused) while connecting to upstream, client: >> 80.239.242.190, >> server: subdomain.koshie.fr, request: "GET /wp-admin/info.php HTTP/1.1", >> upstream: "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr" > > This says it *actually* tried to access 127.0.0.1:9000, and failed. Seems logical, because I asked it to use /var/run/php5-fpm.sock instead of the commented value in /etc/nginx/nginx.conf, right? > > The nginx config you showed is not the one that the running nginx is > using when it created this log file.
I've pasted a working vhost configuration file and pasted an error on another configuration file, but they are identical except for document root and domain. To create this file I've used 'cp' and modified it. > > Does the appropriate "nginx -t" show any problems? Does "nginx -s reload" > give any indication that it did not reload correctly? > > f nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful And no output for 'nginx -s reload'. Thanks for your answer. Cordially, Koshie -- Sorry for my English, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From francis at daoine.org Wed Feb 20 23:13:10 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Feb 2013 23:13:10 +0000 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: <20130220224005.GM32392@craic.sysops.org> Message-ID: <20130220231310.GN32392@craic.sysops.org> On Wed, Feb 20, 2013 at 11:54:08PM +0100, GASPARD Kévin wrote: > >On Wed, Feb 20, 2013 at 11:27:04PM +0100, GASPARD Kévin wrote: > >So your fastcgi server is expected to be at /var/run/php5-fpm.sock. > > > > ls -l /var/run/php5-fpm.sock > > srw-rw-rw- 1 root root 0 Feb 20 14:08 /var/run/php5-fpm.sock > > > > >should show you that it really is there. > > I forgot to say, but yes, it's here, and I've run the php-fpm daemon. That's fine - the fastcgi server is listening on the unix socket. > >># fastcgi_pass 127.0.0.1:9000; > >> fastcgi_pass unix:/var/run/php5-fpm.sock; > > > >This looks like it *used to* try to access 127.0.0.1:9000, but now it > >accesses the expected socket. > > Yeah, I also forgot to say, but this configuration was first used on > CentOS 6 and it worked well. That's also fine.
The issue is that the nginx that you are testing is *not* trying to access the unix socket -- it is instead trying to access the network port. So either this is not the running config; or this is not the server{} block used in your test request. > >>2013/02/20 23:19:25 [error] 17211#0: *207 connect() failed (111: > >>Connection refused) while connecting to upstream, client: > >>80.239.242.190, > >>server: subdomain.koshie.fr, request: "GET /wp-admin/info.php HTTP/1.1", > >>upstream: "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr" > > > >This says it *actually* tried to access 127.0.0.1:9000, and failed. > > Seems logical, because I asked it to use /var/run/php5-fpm.sock instead of the > commented value in /etc/nginx/nginx.conf, right? You want nginx to access the unix socket. It is trying to access the network port. > >The nginx config you showed is not the one that the running nginx is > >using when it created this log file. > > I've pasted a working vhost configuration file and pasted an error on > another configuration file, but they are identical except for document root > and domain. To create this file I've used 'cp' and modified it. When a request comes in to nginx, it chooses exactly one server{} block to handle it. That server{} block is chosen based firstly on the incoming ip:port; and then secondly on the incoming host name used in the request. Looking at your config file, plus every file include'd in it, can you see which one server{} block is used for this request? (You'll need to look at all of the "listen" directives first, and then all of the "server_name" directives in the server{}s with the "listen" that best matches the incoming ip:port.) What fastcgi_pass line is used in that one server{} block? > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful > > And no output for 'nginx -s reload'. Because of that, I guess that the problem is in identifying the correct server{}.
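Not something suggested in the thread, but one hypothetical way to make the server{} selection visible is a catch-all block that answers with a status code nothing else in the config uses; any request that receives that code was matched by no intended server_name:

```nginx
# Hypothetical diagnostic only: requests whose Host header matches no
# other server_name fall through to this default and get a 410 back.
server {
    listen 80 default_server;
    server_name _;
    return 410;
}
```

If the 502 turns into a 410, the test request is being handled by a different server{} block than the one being edited.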
f -- Francis Daly francis at daoine.org From list-reader at koshie.fr Wed Feb 20 23:36:27 2013 From: list-reader at koshie.fr (GASPARD Kévin) Date: Thu, 21 Feb 2013 00:36:27 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <20130220231310.GN32392@craic.sysops.org> References: <20130220224005.GM32392@craic.sysops.org> <20130220231310.GN32392@craic.sysops.org> Message-ID: >> >The nginx config you showed is not the one that the running nginx is >> >using when it created this log file. >> >> I've pasted a working vhost configuration file and pasted an error on >> another configuration file, but they are identical except for document root >> and domain. To create this file I've used 'cp' and modified it. > > When a request comes in to nginx, it chooses exactly one server{} block > to handle it. That server{} block is chosen based firstly on the incoming > ip:port; and then secondly on the incoming host name used in the request. > > Looking at your config file, plus every file include'd in it, can you > see which one server{} block is used for this request? (You'll need > to look at all of the "listen" directives first, and then all of the > "server_name" directives in the server{}s with the "listen" that best > matches the incoming ip:port.) > > What fastcgi_pass line is used in that one server{} block? I've done a grep -R 'listen' /etc/nginx/conf.d/ and every vhost configuration file has two lines, exactly the same: listen 80; listen 443 ssl; In the past, to be sure not to make a mistake, I copied an old vhost configuration file and modified it each time for a new website. So you'll see this kind of line: server_name subdomain.koshie.fr www.subdomain.koshie.fr; Some of these vhosts are different, but that's because they need regexes for some stuff.
Finally, for fastcgi_pass, I've only tested my configuration and pasted here data with the vhosts containing the value pasted in the first mail: fastcgi_pass unix:/var/run/php5-fpm.sock; For now I have a lot of domains with my old configuration value, listening on the network, if I'm not talking rubbish (and as you said before). I'm sorry, but I'm not sure I understand what you are asking me to do, and I've given you what I can. If I've missed something, can you point me in the right direction, please? Good night, Koshie -- Sorry for my English, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From francis at daoine.org Wed Feb 20 23:54:10 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Feb 2013 23:54:10 +0000 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: <20130220224005.GM32392@craic.sysops.org> <20130220231310.GN32392@craic.sysops.org> Message-ID: <20130220235410.GO32392@craic.sysops.org> On Thu, Feb 21, 2013 at 12:36:27AM +0100, GASPARD Kévin wrote: Hi there, > >Looking at your config file, plus every file include'd in it, can you > >see which one server{} block is used for this request? (You'll need > >to look at all of the "listen" directives first, and then all of the > >"server_name" directives in the server{}s with the "listen" that best > >matches the incoming ip:port.) > > > >What fastcgi_pass line is used in that one server{} block? > > I've done a grep -R 'listen' /etc/nginx/conf.d/ and every vhost > configuration file has two lines, exactly the same: > > listen 80; > listen 443 ssl; Ok, since all of the "listen" lines are the same, then the server{} that is chosen depends on the host name used in the test request. So: what is the hostname in the url that you try to get, when you see the 502 error? And which vhost configuration file has the matching server_name directive?
If there is no exact match, then the first regex match is used. If there is none, then the default server{} is used. Which exactly is "the first regex" may not be immediately obvious if there are some in different files. > I'm sorry, but I'm not sure I understand what you are asking me to do, and I've > given you what I can. If I've missed something, can you point me in the right > direction, please? The problem you report is consistent with the log output you showed. But the configuration you showed is not consistent with that log output. So something is unexpected. Maybe it is simplest if you rename the conf.d directory, then create a new conf.d directory with just one vhost file. Then reload nginx and re-do your test of a php request and see what it says. If it still fails, then you have a simpler test case to work from. f -- Francis Daly francis at daoine.org From anoopalias01 at gmail.com Thu Feb 21 04:05:12 2013 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 21 Feb 2013 09:35:12 +0530 Subject: nginx performance on Amazon EC2 In-Reply-To: References: Message-ID: On Thu, Feb 21, 2013 at 1:43 AM, Rakan Alhneiti wrote: > Hello, > > I am running a django app with nginx & uwsgi on an amazon ec2 instance and > a vmware machine almost the same size as the ec2 one.
Here's how i run > uwsgi: > > sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499 > > & nginx configurations: > > server { > listen 80; > server_name test.com > > root /www/python/apps/pyapp/; > > access_log /var/log/nginx/test.com.access.log; > error_log /var/log/nginx/test.com.error.log; > > # https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production > location /static/ { > alias /www/python/apps/pyapp/static/; > expires 30d; > } > > location /media/ { > alias /www/python/apps/pyapp/media/; > expires 30d; > } > > location / { > uwsgi_pass unix:///tmp/pyapp.socket; > include uwsgi_params; > proxy_read_timeout 120; > } > > # what to serve if upstream is not available or crashes > #error_page 500 502 503 504 /media/50x.html;} > > Here comes the problem. 
When doing "ab" (ApacheBenchmark) on both machines > i get the following results: (vmware machine being almost the same size as > the ec2 small instance) > > *Amazon EC2:* > > nginx version: nginx version: nginx/1.2.6 > > uwsgi version:1.4.5 > > Concurrency Level: 500Time taken for tests: 21.954 secondsComplete requests: 5000Failed requests: 126 > (Connect: 0, Receive: 0, Length: 126, Exceptions: 0)Write errors: 0Non-2xx responses: 4874Total transferred: 4142182 bytes > HTML transferred: 3384914 bytesRequests per second: 227.75 [#/sec] (mean)Time per request: 2195.384 [ms] (mean)Time per request: 4.391 [ms] (mean, across all concurrent requests)Transfer rate: 184.25 [Kbytes/sec] received > > *Vmware machine (CentOS 6):* > > nginx version: nnginx version: nginx/1.0.15 > > uwsgi version: 1.4.5 > > Concurrency Level: 1000Time taken for tests: 1.094 secondsComplete requests: 5000Failed requests: 0Write errors: 0Total transferred: 30190000 bytes > HTML transferred: 28930000 bytesRequests per second: 4568.73 [#/sec] (mean)Time per request: 218.879 [ms] (mean)Time per request: 0.219 [ms] (mean, across all concurrent requests)Transfer rate: 26939.42 [Kbytes/sec] received > > As you can see... all requests on the ec2 instance fail with either > timeout errors or "Client prematurely disconnected". However, on my vmware > machine all requests go through with no problems. The other thing is the > difference in reqs / second i am doing on both machines. > > What am i doing wrong on ec2? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Is there any data in the nginX error logs? -- *Anoop P Alias* GNUSYS -------------- next part -------------- An HTML attachment was scrubbed... 
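As a quick sanity check on the ab output quoted in this thread, the summary lines can be re-derived from the raw totals it prints; a small sketch (plain Python, figures copied from the EC2 run above):

```python
# Re-derive ab's summary metrics from the raw totals it reports.
# Figures are the ones from the EC2 run quoted in this thread.
complete_requests = 5000
time_taken = 21.954            # "Time taken for tests", seconds
concurrency = 500
total_transferred = 4142182    # bytes

requests_per_second = complete_requests / time_taken
time_per_request_ms = time_taken * 1000 * concurrency / complete_requests
transfer_rate_kbps = total_transferred / 1024 / time_taken

print(round(requests_per_second, 2))   # ab reported 227.75
print(round(time_per_request_ms, 1))   # ab reported 2195.384 (ab uses a more precise elapsed time)
print(round(transfer_rate_kbps, 2))    # ab reported 184.25
```

The numbers are internally consistent, so the failures are real backend stalls rather than a reporting artifact.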
URL: 

From crazygosh88 at yahoo.com Thu Feb 21 04:05:59 2013 From: crazygosh88 at yahoo.com (Nguyen Huy) Date: Wed, 20 Feb 2013 20:05:59 -0800 (PST) Subject: Proxying non-ssl SMTP/POP to ssl SMTP/POP Message-ID: <1361419559.18876.YahooMailNeo@web162005.mail.bf1.yahoo.com>

Hi everyone,

I'm now using nginx to set up a proxying mail system in my company. The model is as below:

Mail client <=========> Nginx proxy <===========> Mail server

The stream between the mail client and nginx is a non-ssl connection (both smtp and pop3 streams), whereas the stream between nginx and the mail server uses ssl. The reason I use this model is that I want to create a neutral node that can modify the emails before they are delivered to the mail server or client. However, I'm still stuck on the configuration of nginx to implement this model. So if you have any idea about how to configure this system, as well as whether it is possible with nginx, please help me. I'm still a newbie to this field, so if this is a dumb question, please forgive my awkwardness. Thank you very much!

Btw: I also referred to kgoj's question at http://forum.nginx.org/read.php?2,126528,126528#msg-126528. However, his model is the reverse of mine and I haven't found any idea yet!

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From david at styleflare.com Thu Feb 21 04:33:41 2013 From: david at styleflare.com (David | StyleFlare) Date: Wed, 20 Feb 2013 23:33:41 -0500 Subject: nginx performance on Amazon EC2 In-Reply-To: References: Message-ID: <5125A3A5.3090201@styleflare.com>

Really you would have to do this test on 2 amazon servers and then see if one was more performant. Then you can assume something is wrong.

Based on the configs everything looks right.

The fact that your vmware server is better performing is really not saying much. It's really hard to directly compare. I would presume so many other factors.

Is the vmware box on a local network?
On 2/20/13 11:05 PM, Anoop Alias wrote:
> On Thu, Feb 21, 2013 at 1:43 AM, Rakan Alhneiti wrote:
>
> Hello,
>
> I am running a django app with nginx & uwsgi on an amazon ec2
> instance and a vmware machine almost the same size as the ec2 one.
> Here's how i run uwsgi:
>
> sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499
>
> & nginx configurations:
>
> server {
>     listen 80;
>     server_name test.com
>
>     root /www/python/apps/pyapp/;
>
>     access_log /var/log/nginx/test.com.access.log;
>     error_log /var/log/nginx/test.com.error.log;
>
>     # https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production
>     location /static/ {
>         alias /www/python/apps/pyapp/static/;
>         expires 30d;
>     }
>
>     location /media/ {
>         alias /www/python/apps/pyapp/media/;
>         expires 30d;
>     }
>
>     location / {
>         uwsgi_pass unix:///tmp/pyapp.socket;
>         include uwsgi_params;
>         proxy_read_timeout 120;
>     }
>
>     # what to serve if upstream is not available or crashes
>     #error_page 500 502 503 504 /media/50x.html;
> }
>
> Here comes the problem.
When doing "ab" (ApacheBenchmark) on both machines
> i get the following results: (vmware machine being almost the same
> size as the ec2 small instance)
>
> *Amazon EC2:*
>
> nginx version: nginx/1.2.6
> uwsgi version: 1.4.5
>
> Concurrency Level:      500
> Time taken for tests:   21.954 seconds
> Complete requests:      5000
> Failed requests:        126
>    (Connect: 0, Receive: 0, Length: 126, Exceptions: 0)
> Write errors:           0
> Non-2xx responses:      4874
> Total transferred:      4142182 bytes
> HTML transferred:       3384914 bytes
> Requests per second:    227.75 [#/sec] (mean)
> Time per request:       2195.384 [ms] (mean)
> Time per request:       4.391 [ms] (mean, across all concurrent requests)
> Transfer rate:          184.25 [Kbytes/sec] received
>
> *Vmware machine (CentOS 6):*
>
> nginx version: nginx/1.0.15
> uwsgi version: 1.4.5
>
> Concurrency Level:      1000
> Time taken for tests:   1.094 seconds
> Complete requests:      5000
> Failed requests:        0
> Write errors:           0
> Total transferred:      30190000 bytes
> HTML transferred:       28930000 bytes
> Requests per second:    4568.73 [#/sec] (mean)
> Time per request:       218.879 [ms] (mean)
> Time per request:       0.219 [ms] (mean, across all concurrent requests)
> Transfer rate:          26939.42 [Kbytes/sec] received
>
> As you can see... all requests on the ec2 instance fail with either
> timeout errors or "Client prematurely disconnected". However, on my
> vmware machine all requests go through with no problems. The other
> thing is the difference in reqs / second i am doing on both machines.
>
> What am i doing wrong on ec2?
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Is there any data in the nginX error logs?
> > > --
> *Anoop P Alias*
> GNUSYS
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From rakan.alhneiti at gmail.com Thu Feb 21 05:04:52 2013 From: rakan.alhneiti at gmail.com (Rakan Alhneiti) Date: Thu, 21 Feb 2013 08:04:52 +0300 Subject: nginx performance on Amazon EC2 In-Reply-To: <5125A3A5.3090201@styleflare.com> References: <5125A3A5.3090201@styleflare.com> Message-ID:

Hello,

Yes, my vm machine is running on my local network. I am referring to it rather to show that it performs better and no issues appear there. I tried both an Amazon EC2 small instance and a linode 2048 instance, and both give the exact same result as well.

When doing the apache benchmark, I can see the following in my nginx error log:

[error] 4167#0: *27229 connect() to unix:///tmp/pyapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 127.0.0.1, server: mysite.com

and

upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: mysite.com

Other than that, there's nothing in the django error log, but here's what I can see in uwsgi's daemon log:

Wed Feb 20 21:59:51 2013 - writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json (127.0.0.1)
[pid: 4112|app: 0|req: 34/644] 127.0.0.1 () {30 vars in 415 bytes} [Wed Feb 20 21:59:42 2013] GET /api/nodes/mostviewed/9/?format=json => generated 0 bytes in 8904 msecs (HTTP/1.0 200) 3 headers in 0 bytes (0 switches on core 0)
Wed Feb 20 21:59:51 2013 - writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json (127.0.0.1)
[pid: 4117|app: 0|req: 1/645] 127.0.0.1 () {30 vars in 415 bytes} [Wed Feb 20 21:59:46 2013] GET /api/nodes/mostviewed/9/?format=json => generated 0 bytes in 5021 msecs (HTTP/1.0 200) 3 headers in 0 bytes (0 switches
on core 0) and stuff like: Wed Feb 20 20:01:01 2013 - uWSGI worker 1 screams: UAAAAAAH my master disconnected: i will kill myself !!! What do you guys think? Thanks alot Best Regards, *Rakan AlHneiti* Find me on the internet: Rakan Alhneiti | @rakanalh | Rakan Alhneiti | alhneiti ----- GTalk rakan.alhneiti at gmail.com ----- Mobile: +962-798-910 990 On Thu, Feb 21, 2013 at 7:33 AM, David | StyleFlare wrote: > Really you would have to do this test on 2 amazon servers and then see > if one was more performant. > Then you can assume something is wrong. > > Based on the configs everything looks right. > > The fact that your vmware server is better performing, is really not > saying much. Its really hard to directly compare. > > I would presume so many other factors. > > Is the vmware box on a local network? > > > > > > On 2/20/13 11:05 PM, Anoop Alias wrote: > > > > On Thu, Feb 21, 2013 at 1:43 AM, Rakan Alhneiti wrote: > >> Hello, >> >> I am running a django app with nginx & uwsgi on an amazon ec2 instance >> and a vmware machine almost the same size as the ec2 one. 
Here's how i run >> uwsgi: >> >> sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp >> .socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499 >> >> & nginx configurations: >> >> server { >> listen 80; >> server_name test.com >> >> root /www/python/apps/pyapp/; >> >> access_log /var/log/nginx/test.com.access.log; >> error_log /var/log/nginx/test.com.error.log; >> >> # https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production >> location /static/ { >> alias /www/python/apps/pyapp/static/; >> expires 30d; >> } >> >> location /media/ { >> alias /www/python/apps/pyapp/media/; >> expires 30d; >> } >> >> location / { >> uwsgi_pass unix:///tmp/pyapp.socket; >> include uwsgi_params; >> proxy_read_timeout 120; >> } >> >> # what to serve if upstream is not available or crashes >> #error_page 500 502 503 504 /media/50x.html;} >> >> Here comes the problem. 
When doing "ab" (ApacheBenchmark) on both >> machines i get the following results: (vmware machine being almost the same >> size as the ec2 small instance) >> >> *Amazon EC2:* >> >> nginx version: nginx version: nginx/1.2.6 >> >> uwsgi version:1.4.5 >> >> Concurrency Level: 500Time taken for tests: 21.954 secondsComplete requests: 5000Failed requests: 126 >> (Connect: 0, Receive: 0, Length: 12 >> 6, Exceptions: 0)Write errors: 0Non-2xx responses: 4874Total transferred: 4142182 bytes >> HTML transferred: 3384914 bytesRequests per second: 227.75 [#/sec] (mean)Time per request: 2195.384 [ms] (mean)Time per request: 4.391 [ms] (mean, across all concurrent requests)Transfer rate: 184.25 [Kbytes/sec] received >> >> *Vmware machine (CentOS 6):* >> >> nginx version: nnginx version: nginx/1.0.15 >> >> uwsgi version: 1.4.5 >> >> Concurrency Level: 1000Time taken for tests: 1.094 secondsComplete requests: 5000Failed requests: 0Write errors: 0Total transferred: 30190000 bytes >> HTML transferred: 28930000 bytesRequests per second: 4568.73 [#/sec] (mean)Time per request: 218.879 [ms] (mean)Time per request: 0.219 [ms] (mean, across all concurrent requests)Transfer rate: 26939.42 [Kbytes/sec] received >> >> As you can see... all requests on the ec2 instance fail with either >> timeout errors or "Client prematurely disconnected". However, on my vmware >> machine all requests go through with no problems. The other thing is the >> difference in reqs / second i am doing on both machines. >> >> What am i doing wrong on ec2? >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > Is there any data in the nginX error logs? 
> > > --
> *Anoop P Alias*
> GNUSYS
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From nginx-forum at nginx.us Thu Feb 21 07:29:43 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 21 Feb 2013 02:29:43 -0500 Subject: nginx performance on Amazon EC2 In-Reply-To: References: Message-ID:

hello,

is the setup of your vmware similar to your ec2 instance? I'm talking especially about RAM/CPU power here. do you have monitoring on your instances, checking for load, ram usage, iowait etc.? maybe you should start your ec2 test with fewer than 500 concurrent connections and work up to the point where that instance starts to fail.

> [error] 4167#0: *27229 connect() to unix:///tmp/pyapp.socket failed
> (11: Resource temporarily unavailable) while connecting to upstream,
> client: 127.0.0.1, server: mysite.com

looks like your django-app shuts down or isn't capable of handling that amount of connections. oh, and you shouldn't expect a distant instance to have the same performance as a machine in your local net.

Rakan Alhneiti Wrote:
-------------------------------------------------------
> Hello,
>
> Yes my vm machine is working on my local network. I am referring to it
> rather to show that it performs better & no issues appear there.
> I tried both Amazon EC2 small instance & a linode 2048 instance and both
> give the exact same result as well.
> > When doing apache benchmark, i can see the following in my nginx error > log: > > [error] 4167#0: *27229 connect() to unix:///tmp/pyapp.socket failed > (11: > Resource temporarily unavailable) while connecting to upstream, > client: > 127.0.0.1, server: mysite.com > > and > > upstream prematurely closed connection while reading response header > from > upstream, client: 127.0.0.1, server: mysite.com > > Other than that, there's nothing in django error log but here's what i > can > see in uwsgi's daemon log: > > Wed Feb 20 21:59:51 2013 - writev(): Broken pipe [proto/uwsgi.c line > 124] > during GET /api/nodes/mostviewed/9/?format=json (127.0.0.1) > [pid: 4112|app: 0|req: 34/644] 127.0.0.1 () {30 vars in 415 bytes} > [Wed Feb > 20 21:59:42 2013] GET /api/nodes/mostviewed/9/?format=json => > generated 0 > bytes in 8904 msecs (HTTP/1.0 200) 3 headers in 0 bytes (0 switches on > core > 0) > Wed Feb 20 21:59:51 2013 - writev(): Broken pipe [proto/uwsgi.c line > 124] > during GET /api/nodes/mostviewed/9/?format=json (127.0.0.1) > [pid: 4117|app: 0|req: 1/645] 127.0.0.1 () {30 vars in 415 bytes} [Wed > Feb > 20 21:59:46 2013] GET /api/nodes/mostviewed/9/?format=json => > generated 0 > bytes in 5021 msecs (HTTP/1.0 200) 3 headers in 0 bytes (0 switches on > core > 0) > > and stuff like: > Wed Feb 20 20:01:01 2013 - uWSGI worker 1 screams: UAAAAAAH my master > disconnected: i will kill myself !!! > > What do you guys think? > > Thanks alot > > > > Best Regards, > > *Rakan AlHneiti* > Find me on the internet: > Rakan Alhneiti | > @rakanalh > | Rakan Alhneiti | > alhneiti > > ----- GTalk rakan.alhneiti at gmail.com > ----- Mobile: +962-798-910 990 > > > > On Thu, Feb 21, 2013 at 7:33 AM, David | StyleFlare > wrote: > > > Really you would have to do this test on 2 amazon servers and then > see > > if one was more performant. > > Then you can assume something is wrong. > > > > Based on the configs everything looks right. 
> > > > The fact that your vmware server is better performing, is really not > > saying much. Its really hard to directly compare. > > > > I would presume so many other factors. > > > > Is the vmware box on a local network? > > > > > > > > > > > > On 2/20/13 11:05 PM, Anoop Alias wrote: > > > > > > > > On Thu, Feb 21, 2013 at 1:43 AM, Rakan Alhneiti > wrote: > > > >> Hello, > >> > >> I am running a django app with nginx & uwsgi on an amazon ec2 > instance > >> and a vmware machine almost the same size as the ec2 one. Here's > how i run > >> uwsgi: > >> > >> sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp > --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings > --socket=/tmp/pyapp > >> .socket --cheaper=8 --processes=16 --harakiri=10 > --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid > --uid=220 --gid=499 > >> > >> & nginx configurations: > >> > >> server { > >> listen 80; > >> server_name test.com > >> > >> root /www/python/apps/pyapp/; > >> > >> access_log /var/log/nginx/test.com.access.log; > >> error_log /var/log/nginx/test.com.error.log; > >> > >> # > https://docs.djangoproject.com/en/dev/howto/static-files/#serving-stat > ic-files-in-production > >> location /static/ { > >> alias /www/python/apps/pyapp/static/; > >> expires 30d; > >> } > >> > >> location /media/ { > >> alias /www/python/apps/pyapp/media/; > >> expires 30d; > >> } > >> > >> location / { > >> uwsgi_pass unix:///tmp/pyapp.socket; > >> include uwsgi_params; > >> proxy_read_timeout 120; > >> } > >> > >> # what to serve if upstream is not available or crashes > >> #error_page 500 502 503 504 /media/50x.html;} > >> > >> Here comes the problem. 
When doing "ab" (ApacheBenchmark) on both > >> machines i get the following results: (vmware machine being almost > the same > >> size as the ec2 small instance) > >> > >> *Amazon EC2:* > >> > >> nginx version: nginx version: nginx/1.2.6 > >> > >> uwsgi version:1.4.5 > >> > >> Concurrency Level: 500Time taken for tests: 21.954 > secondsComplete requests: 5000Failed requests: 126 > >> (Connect: 0, Receive: 0, Length: 12 > >> 6, Exceptions: 0)Write errors: 0Non-2xx responses: > 4874Total transferred: 4142182 bytes > >> HTML transferred: 3384914 bytesRequests per second: 227.75 > [#/sec] (mean)Time per request: 2195.384 [ms] (mean)Time per > request: 4.391 [ms] (mean, across all concurrent > requests)Transfer rate: 184.25 [Kbytes/sec] received > >> > >> *Vmware machine (CentOS 6):* > >> > >> nginx version: nnginx version: nginx/1.0.15 > >> > >> uwsgi version: 1.4.5 > >> > >> Concurrency Level: 1000Time taken for tests: 1.094 > secondsComplete requests: 5000Failed requests: 0Write > errors: 0Total transferred: 30190000 bytes > >> HTML transferred: 28930000 bytesRequests per second: > 4568.73 [#/sec] (mean)Time per request: 218.879 [ms] (mean)Time > per request: 0.219 [ms] (mean, across all concurrent > requests)Transfer rate: 26939.42 [Kbytes/sec] received > >> > >> As you can see... all requests on the ec2 instance fail with either > >> timeout errors or "Client prematurely disconnected". However, on my > vmware > >> machine all requests go through with no problems. The other thing > is the > >> difference in reqs / second i am doing on both machines. > >> > >> What am i doing wrong on ec2? > >> > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > >> > > > > > > Is there any data in the nginX error logs? 
> > > > > > -- > > *Anoop P Alias* > > GNUSYS > > > > > > _______________________________________________ > > nginx mailing > listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236391,236408#msg-236408 From nginx-forum at nginx.us Thu Feb 21 07:39:04 2013 From: nginx-forum at nginx.us (digitalpoint) Date: Thu, 21 Feb 2013 02:39:04 -0500 Subject: 1.3.12 occasional segfaults Message-ID: <5bc94d0bfe9296985cd6bdf1ed0cc12d.NginxMailingListEnglish@forum.nginx.org> It's a fairly vanilla install of nginx (with the exception of the SPDY patch). We are seeing roughly 1 segfault every hour or so on each web server... I didn't generate a debugging log, because I wasn't sure how big something like that would be for an entire hour, but I can if needed. 
------- nginx -V ------- nginx version: nginx/1.3.12 built by gcc 4.5.1 20101208 [gcc-4_5-branch revision 167585] (SUSE Linux) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/usr/log/ngnix/error.log --http-log-path=/usr/log/ngnix/access.log --with-openssl=/home/software_source/openssl-1.0.1c --with-cc-opt='-I /usr/local/ssl/include' --with-ld-opt='-L /usr/local/ssl/lib' --without-http_proxy_module --without-http_ssi_module --with-http_ssl_module --with-http_stub_status_module --with-http_spdy_module ------- backtrace from core dump: ------- #0 0x00000000004514ea in ngx_http_spdy_send_output_queue (sc=0xd77610) at src/http/ngx_http_spdy.c:713 cl = c = 0x7fe9a5436f90 clcf = out = frame = 0xa250d0 fn = #1 0x00000000004520b2 in ngx_http_spdy_write_handler (wev=) at src/http/ngx_http_spdy.c:644 rc = c = fc = ctx = r = stream = s = sn = sc = 0xd77610 #2 0x000000000041a5ba in ngx_event_process_posted (cycle=, posted=0x854a88) at src/event/ngx_event_posted.c:40 ev = #3 0x000000000041a1d5 in ngx_process_events_and_timers (cycle=0x885a90) at src/event/ngx_event.c:276 flags = 3 timer = delta = 81 #4 0x0000000000420038 in ngx_worker_process_cycle (cycle=0x885a90, data=) at src/os/unix/ngx_process_cycle.c:807 worker = i = c = #5 0x000000000041e9a3 in ngx_spawn_process (cycle=0x885a90, proc=0x41ff6c , data=0x2, name=0x59b51a "worker process", respawn=-3) at src/os/unix/ngx_process.c:198 on = 1 pid = 0 s = 2 #6 0x000000000041f5f6 in ngx_start_worker_processes (cycle=0x885a90, n=4, type=-3) at src/os/unix/ngx_process_cycle.c:362 i = ch = {command = 1, pid = 6348, slot = 1, fd = 19} #7 0x00000000004206d1 in ngx_master_process_cycle (cycle=0x885a90) at src/os/unix/ngx_process_cycle.c:136 title = 0x973f74 "master process /usr/sbin/nginx" p = size = 31 i = 1 n = sigio = set = {__val = {0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0}} itv = {it_interval = {tv_sec = 0, tv_usec = 0}, it_value = 
{tv_sec = 0, tv_usec = 0}} live = delay = ls = ccf = 0x0 #8 0x000000000040546c in main (argc=, argv=) at src/core/nginx.c:412 i = log = 0x84d7a0 cycle = 0x0 init_cycle = {conf_ctx = 0x0, pool = 0x884fa0, log = 0x84d7a0, new_log = {log_level = 0, file = 0x0, connection = 0, handler = 0, data = 0x0, action = 0x0}, files = 0x1, free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = { prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {elts = 0x0, nelts = 0, size = 0, nalloc = 1, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 1, nalloc = 0, pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = { len = 0, data = 0x0}, conf_param = {len = 0, data = 0x0}, conf_prefix = {len = 1, data = 0x0}, prefix = {len = 0, data = 0x0}, lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}} ccf = Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236409,236409#msg-236409 From jgehrcke at googlemail.com Thu Feb 21 08:00:07 2013 From: jgehrcke at googlemail.com (Jan-Philip Gehrcke) Date: Thu, 21 Feb 2013 09:00:07 +0100 Subject: Fwd: nginx performance on Amazon EC2 In-Reply-To: References: Message-ID: <5125D407.3060602@googlemail.com> Do you run the benchmark program on the same virtual machine as the web stack?? For yielding conclusive results, you certainly don't want to make ab, nginx, and all other entities involved compete for the same CPU. If yes, try running ab from a different machine in the same network (make sure your network is not the bottle neck here) and compare your results again. 
Cheers,

Jan-Philip

On 02/20/2013 09:13 PM, Rakan Alhneiti wrote:
> Hello,
>
> I am running a django app with nginx & uwsgi on an amazon ec2 instance
> and a vmware machine almost the same size as the ec2 one. Here's how i
> run uwsgi:
>
> sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499
>
> & nginx configurations:
>
> server {
>     listen 80;
>     server_name test.com
>
>     root /www/python/apps/pyapp/;
>
>     access_log /var/log/nginx/test.com.access.log;
>     error_log /var/log/nginx/test.com.error.log;
>
>     # https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production
>     location /static/ {
>         alias /www/python/apps/pyapp/static/;
>         expires 30d;
>     }
>
>     location /media/ {
>         alias /www/python/apps/pyapp/media/;
>         expires 30d;
>     }
>
>     location / {
>         uwsgi_pass unix:///tmp/pyapp.socket;
>         include uwsgi_params;
>         proxy_read_timeout 120;
>     }
>
>     # what to serve if upstream is not available or crashes
>     #error_page 500 502 503 504 /media/50x.html;
> }
>
> Here comes the problem.
When doing "ab" (ApacheBenchmark) on both > machines i get the following results: (vmware machine being almost the > same size as the ec2 small instance) > > *Amazon EC2:* > > nginx version: nginx version: nginx/1.2.6 > > uwsgi version:1.4.5 > > > |Concurrency Level: 500 > Time takenfor tests: 21.954 seconds > Complete requests: 5000 > Failed requests: 126 > (Connect: 0, Receive: 0, Length: 126, Exceptions: 0) > Write errors: 0 > Non-2xx responses: 4874 > Total transferred: 4142182 bytes > HTML transferred: 3384914 bytes > Requests per second: 227.75 [#/sec] (mean) > Time per request: 2195.384 [ms] (mean) > Time per request: 4.391 [ms] (mean, across all concurrent requests) > Transfer rate: 184.25 [Kbytes/sec] received| > > *Vmware machine (CentOS 6):* > > nginx version: nnginx version: nginx/1.0.15 > > uwsgi version: 1.4.5 > > > |Concurrency Level: 1000 > Time takenfor tests: 1.094 seconds > Complete requests: 5000 > Failed requests: 0 > Write errors: 0 > Total transferred: 30190000 bytes > HTML transferred: 28930000 bytes > Requests per second: 4568.73 [#/sec] (mean) > Time per request: 218.879 [ms] (mean) > Time per request: 0.219 [ms] (mean, across all concurrent requests) > Transfer rate: 26939.42 [Kbytes/sec] received| > > As you can see... all requests on the ec2 instance fail with either > timeout errors or "Client prematurely disconnected". However, on my > vmware machine all requests go through with no problems. The other thing > is the difference in reqs / second i am doing on both machines. > > What am i doing wrong on ec2? 
> > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From list-reader at koshie.fr Thu Feb 21 09:26:22 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_K=C3=A9vin?=) Date: Thu, 21 Feb 2013 10:26:22 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <20130220235410.GO32392@craic.sysops.org> References: <20130220224005.GM32392@craic.sysops.org> <20130220231310.GN32392@craic.sysops.org> <20130220235410.GO32392@craic.sysops.org> Message-ID: > Hi there, > >> >Looking at your config file, plus every file include'd in it, can you >> >see which one server{} block is used for this request? (You'll need >> >to look at all of the "listen" directives first, and then all of the >> >"server_name" directives in the server{}s with the "listen" that best >> >matches the incoming ip:port.) >> > >> >What fastcgi_pass line is used in that one server{} block? >> >> I've do a grep -R 'listen' /etc/nginx/conf.d/ and every vhost >> configuration file have two lines, exactly the same: >> >> listen 80; >> listen 443 ssl; > > Ok, since all of the "listen" lines are the same, then the server{} > that is chosen depends on the host name used in the test request. > > So: what is the hostname in the url that you try to get, when you see > the 502 error? Trying to install a Wordpress, used a info.php page here: http://blog.koshie.fr/wp-admin/info.php As you can see, there is a 502 Bad Gateway error. > And which vhost configuration file has the matching server_name > directive? If there is no exact match, then the first regex match is > used. If there is none, then the default server{} is used. Which exactly > is "the first regex" may not be immediately obvious if there are some > in different files. 
Logically, this is the vhost configuration file for http://blog.koshie.fr/wp-admin/info.php:

server {
    listen 80;
    listen 443 ssl;

    # server_name 176.31.122.26;
    server_name blog.koshie.fr www.blog.koshie.fr;
    root /var/www/koshie.fr/blog/wordpress;

    msie_padding on;

    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    # index index.php;
    # fastcgi_index index.php;

    client_max_body_size 8M;
    client_body_buffer_size 256K;

    location ~ \.php$ {
        include fastcgi_params;
        # Assuming php-fastcgi running on localhost port 9000
        # fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
    }
}

>> I'm sorry but I'm not sure I understand what you are asking me to do
>> and I've given you what I can. If I've missed something can you point
>> me in the right direction please?
>
> The problem you report is consistent with the log output you showed.
>
> But the configuration you showed is not consistent with that log output.
>
> So something is unexpected.
>
> Maybe it is simplest if you rename the conf.d directory, then create
> a new conf.d directory with just one vhost file. Then reload nginx and
> re-do your test of a php request and see what it says.
So, above you have the configuration file related to this log error:

2013/02/21 09:37:35 [error] 3631#0: *1476 connect() failed (111: Connection refused) while connecting to upstream, client: 46.218.152.242, server: island.koshie.fr, request: "GET /wp-admin/info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr"
2013/02/21 09:37:35 [debug] 3631#0: timer delta: 1
2013/02/21 09:37:35 [debug] 3631#0: posted events 0000000000801CA0
2013/02/21 09:37:35 [debug] 3631#0: posted event 0000000000801CA0
2013/02/21 09:37:35 [debug] 3631#0: posted event 0000000000000000
2013/02/21 09:37:35 [debug] 3631#0: worker cycle
2013/02/21 09:37:35 [debug] 3631#0: epoll timer: 65000

I've copied /etc/nginx/conf.d/ to /etc/nginx/conf.d.backup/, removed the files from /etc/nginx/conf.d/, and then moved koshie.fr.conf and blog.koshie.fr.conf back into /etc/nginx/conf.d/ (because without both confs I get an error... from Squid! I don't get it, I have no Squid on my desktop or my server; anyway, I think it's maybe related to the fact that koshie.fr is the main domain?), restarted nginx, and koshie.fr works, but blog.koshie.fr/wordpress/info.php gives me a 502 again. This is the log for this request:

2013/02/21 10:21:22 [error] 1097#0: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 46.218.152.242, server: koshie.fr, request: "GET /wordpress/info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr"
2013/02/21 10:21:22 [debug] 1097#0: timer delta: 0
2013/02/21 10:21:22 [debug] 1097#0: posted events 00000000007299F0
2013/02/21 10:21:22 [debug] 1097#0: posted event 00000000007299F0
2013/02/21 10:21:22 [debug] 1097#0: posted event 0000000000000000
2013/02/21 10:21:22 [debug] 1097#0: worker cycle
2013/02/21 10:21:22 [debug] 1097#0: epoll timer: 65000

> If it still fails, then you have a simpler test case to work from.

What is this test case, please?
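One more check that may help here: the error log shows nginx connecting to fastcgi://127.0.0.1:9000, while the vhost pasted above says "fastcgi_pass unix:/var/run/php5-fpm.sock", so it is worth probing which backend php-fpm is actually listening on. A minimal self-contained sketch (the helper is hypothetical; the socket path and port are just the ones from this thread, adjust to your setup):

```python
import socket

def backend_reachable(target):
    """Try a plain connect() to a FastCGI backend.

    target is ('tcp', host, port) or ('unix', path).  Returns True if
    the backend accepts the connection, i.e. something is actually
    listening there.
    """
    try:
        if target[0] == 'tcp':
            s = socket.create_connection((target[1], target[2]), timeout=2)
        else:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.settimeout(2)
            s.connect(target[1])
        s.close()
        return True
    except OSError:
        return False

if __name__ == '__main__':
    # Paths/ports taken from the thread; both "Connection refused" logs
    # above suggest the TCP probe will come back False on this box.
    print(backend_reachable(('unix', '/var/run/php5-fpm.sock')))
    print(backend_reachable(('tcp', '127.0.0.1', 9000)))
```

If the unix socket probe succeeds while the TCP probe is refused, the 502 is coming from a server{} block that still carries the commented-out 127.0.0.1:9000 backend, not from the vhost quoted above.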
Cordially, Koshie From vbart at nginx.com Thu Feb 21 10:00:10 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 21 Feb 2013 14:00:10 +0400 Subject: 1.3.12 occasional segfaults In-Reply-To: <5bc94d0bfe9296985cd6bdf1ed0cc12d.NginxMailingListEnglish@forum.nginx.org> References: <5bc94d0bfe9296985cd6bdf1ed0cc12d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201302211400.10853.vbart@nginx.com> On Thursday 21 February 2013 11:39:04 digitalpoint wrote: > It's a fairly vanilla install of nginx (with the exception of the SPDY > patch). We are seeing roughly 1 segfault every hour or so on each web > server... I didn't generate a debugging log, because I wasn't sure how big > something like that would be for an entire hour, but I can if needed. > First of all, please update nginx and spdy-patch to the latest versions: http://nginx.org/download/nginx-1.3.13.tar.gz http://nginx.org/patches/spdy/patch.spdy-65_1.3.13.txt It is possible that the problem is already solved. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html > ------- > nginx -V > ------- > > nginx version: nginx/1.3.12 > built by gcc 4.5.1 20101208 [gcc-4_5-branch revision 167585] (SUSE Linux) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --pid-path=/var/run/nginx.pid --error-log-path=/usr/log/ngnix/error.log > --http-log-path=/usr/log/ngnix/access.log > --with-openssl=/home/software_source/openssl-1.0.1c --with-cc-opt='-I > /usr/local/ssl/include' --with-ld-opt='-L /usr/local/ssl/lib' > --without-http_proxy_module --without-http_ssi_module > --with-http_ssl_module --with-http_stub_status_module > --with-http_spdy_module > [...] 
From nginx-forum at nginx.us Thu Feb 21 10:17:21 2013 From: nginx-forum at nginx.us (Wolfsrudel) Date: Thu, 21 Feb 2013 05:17:21 -0500 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: Message-ID: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> Dumb question, but why did you put the vhost files into "conf.d"? Normally they are stored in "sites-available" and symlinked in "sites-enabled"; nginx (like Apache) uses this directory to read all information about the vhosts. Are there any templates in "sites-enabled"? What do they look like? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236396,236413#msg-236413 From nginx-forum at nginx.us Thu Feb 21 10:26:57 2013 From: nginx-forum at nginx.us (Wolfsrudel) Date: Thu, 21 Feb 2013 05:26:57 -0500 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> References: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5ff1c9eb9054a2904fcc03e561887aea.NginxMailingListEnglish@forum.nginx.org> Can you please do a `grep 'fastcgi_pass ' /etc/nginx/*/*` and post the output? Maybe there are other configuration files with a default 'fastcgi_pass' that overrides your vhost. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236396,236415#msg-236415 From list-reader at koshie.fr Thu Feb 21 11:07:41 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_K=C3=A9vin?=) Date: Thu, 21 Feb 2013 12:07:41 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> References: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Le Thu, 21 Feb 2013 11:17:21 +0100, Wolfsrudel a écrit: > Dumb question, but why did you put the vhost files into "conf.d"? 
> Normally > they are stored in "sites-available" and symlinked in "sites-enabled"; > nginx > (like Apache) uses this directory to read all information about the vhosts. > Are there any templates in "sites-enabled"? What do they look like? To be honest, I don't know. When I set up this configuration (more than a year ago, I think) I probably spent 2 or 3 days on the #nginx IRC channel, and once it was working I never modified the configuration. This is the only file in /etc/nginx/sites-enabled/, the file "default": # You may add here your # server { # ... # } # statements for each of your virtual hosts to this file ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. ## server { #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. 
try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri $uri/ =404; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri $uri/ =404; # } #} -- Sorry for my english, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. 
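[Editor's note] For reference, the PHP block that ships commented out in the default file above looks like this once enabled for the php5-fpm socket case. This is a sketch only; whether the TCP line or the socket line is right depends on the "listen" setting in the php5-fpm pool configuration, and the socket path is the one used elsewhere in this thread:

```nginx
# pass PHP scripts to php5-fpm (uncommented variant of the block shown above)
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: you should have "cgi.fix_pathinfo = 0;" in php.ini
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
```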
This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From list-reader at koshie.fr Thu Feb 21 11:12:15 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_K=C3=A9vin?=) Date: Thu, 21 Feb 2013 12:12:15 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <5ff1c9eb9054a2904fcc03e561887aea.NginxMailingListEnglish@forum.nginx.org> References: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> <5ff1c9eb9054a2904fcc03e561887aea.NginxMailingListEnglish@forum.nginx.org> Message-ID: Le Thu, 21 Feb 2013 11:26:57 +0100, Wolfsrudel a écrit: > Can you please do a `grep 'fastcgi_pass ' /etc/nginx/*/*` and post the > output? Maybe there are other configuration files with a default > 'fastcgi_pass' that overrides your vhost. With a sudo grep -R 'fastcgi_pass ' /etc/nginx/*/* I get exactly what you describe, and I know that, but I don't know if it can create a conflict. I'm now using the new fastcgi_pass value for 3 of my vhosts: fastcgi_pass unix:/var/run/php5-fpm.sock; But the older ones (which worked on CentOS) still have fastcgi_pass 127.0.0.1:9000; I've seen this too: /etc/nginx/sites-available/default: # fastcgi_pass 127.0.0.1:9000; /etc/nginx/sites-available/default: # fastcgi_pass unix:/var/run/php5-fpm.sock; /etc/nginx/sites-enabled/default: # fastcgi_pass 127.0.0.1:9000; /etc/nginx/sites-enabled/default: # fastcgi_pass unix:/var/run/php5-fpm.sock; I never touched them in my whole Nginx life. Cordially, Koshie -- Sorry for my english, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough. 
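[Editor's note] Since the older vhosts above still carry fastcgi_pass 127.0.0.1:9000 while the working ones use the php5-fpm socket, the switch could be done in one pass with sed. The sketch below rehearses the substitution on a throwaway file under /tmp (a purely hypothetical path) rather than running it directly against /etc/nginx/conf.d:

```shell
#!/bin/bash
# Rehearse the fastcgi_pass migration on a scratch copy before touching
# /etc/nginx/conf.d (the /tmp path below exists only for this dry run).
mkdir -p /tmp/confd-demo
printf '        fastcgi_pass 127.0.0.1:9000;\n' > /tmp/confd-demo/old.conf

# The same substitution could later be applied to /etc/nginx/conf.d/*.conf,
# after backing up the directory and verifying the result with "nginx -t".
sed 's|fastcgi_pass 127.0.0.1:9000;|fastcgi_pass unix:/var/run/php5-fpm.sock;|' \
    /tmp/confd-demo/old.conf > /tmp/confd-demo/new.conf

cat /tmp/confd-demo/new.conf
```

Only the uncommented fastcgi_pass lines matter; the commented-out copies in sites-available/default and sites-enabled/default are inert and cannot conflict with the vhosts.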
This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr From m6rkalan at gmail.com Thu Feb 21 12:46:35 2013 From: m6rkalan at gmail.com (Mark Alan) Date: Thu, 21 Feb 2013 12:46:35 +0000 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5126172d.a567b40a.3f31.6d36@mx.google.com> On Thu, 21 Feb 2013 12:07:41 +0100, GASPARD Kévin wrote: > To be honest, I don't know. When I set up this configuration (more > than a year ago, I think) It seems that you are trying to force a non-Debian directory structure into a Debian one. Show us the result of: nginx -V 2>&1|sed 's,--,\n--,g' find /etc/nginx/ -name *.conf|xargs -r grep -v '^\s*\(#\|$\)' find /etc/nginx/sites-*/*|xargs -r grep -v '^\s*\(#\|$\)' M. From list-reader at koshie.fr Thu Feb 21 13:07:45 2013 From: list-reader at koshie.fr (=?utf-8?Q?GASPARD_K=C3=A9vin?=) Date: Thu, 21 Feb 2013 14:07:45 +0100 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: <5126172d.a567b40a.3f31.6d36@mx.google.com> References: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> <5126172d.a567b40a.3f31.6d36@mx.google.com> Message-ID: Le Thu, 21 Feb 2013 13:46:35 +0100, Mark Alan a écrit: > On Thu, 21 Feb 2013 12:07:41 +0100, GASPARD Kévin > wrote: >> To be honest, I don't know. When I set up this configuration (more >> than a year ago, I think) > It seems that you are trying to force a non-Debian directory > structure into a Debian one. 
> > Show us the result of: > > nginx -V 2>&1|sed 's,--,\n--,g' nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-auth-pam --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-echo --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-upstream-fair --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-dav-ext-module > find /etc/nginx/ -name *.conf|xargs -r grep -v '^\s*\(#\|$\)' /etc/nginx/conf.d/koshie-island.koshie.fr.conf:server { /etc/nginx/conf.d/koshie-island.koshie.fr.conf: listen 80; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: server_name island.koshie.fr www.island.koshie.fr; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: root /var/www/koshie.fr/island; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; 
/etc/nginx/conf.d/koshie-island.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/koshie-island.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/koshie-island.koshie.fr.conf: } /etc/nginx/conf.d/koshie-island.koshie.fr.conf:} /etc/nginx/conf.d/documents.koshie.fr.conf:server { /etc/nginx/conf.d/documents.koshie.fr.conf: listen 80; /etc/nginx/conf.d/documents.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/documents.koshie.fr.conf: server_name documents.koshie.fr www.documents.koshie.fr; /etc/nginx/conf.d/documents.koshie.fr.conf: root /var/www/koshie.fr/documents; /etc/nginx/conf.d/documents.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/documents.koshie.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/documents.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; 
/etc/nginx/conf.d/documents.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/documents.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/documents.koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/documents.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/documents.koshie.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/documents.koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/documents.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/documents.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/documents.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/documents.koshie.fr.conf: } /etc/nginx/conf.d/documents.koshie.fr.conf:} /etc/nginx/conf.d/jmuller.koshie.fr.conf:server { /etc/nginx/conf.d/jmuller.koshie.fr.conf: listen 80; /etc/nginx/conf.d/jmuller.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/jmuller.koshie.fr.conf: server_name 176.31.122.26; /etc/nginx/conf.d/jmuller.koshie.fr.conf: server_name jmuller.koshie.fr www.jmuller.koshie.fr; /etc/nginx/conf.d/jmuller.koshie.fr.conf: root /var/www/koshie.fr/jmuller; /etc/nginx/conf.d/jmuller.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/jmuller.koshie.fr.conf: ssl_session_timeout 5m; 
/etc/nginx/conf.d/jmuller.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/jmuller.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/jmuller.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/jmuller.koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/jmuller.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/jmuller.koshie.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/jmuller.koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/jmuller.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/jmuller.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/jmuller.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/jmuller.koshie.fr.conf: } /etc/nginx/conf.d/jmuller.koshie.fr.conf:location / { /etc/nginx/conf.d/jmuller.koshie.fr.conf: rewrite ^/([a-z_]+)\.html$ /index.php?page=$1; /etc/nginx/conf.d/jmuller.koshie.fr.conf: } /etc/nginx/conf.d/jmuller.koshie.fr.conf:} /etc/nginx/conf.d/images.koshie.fr.conf:server { /etc/nginx/conf.d/images.koshie.fr.conf: listen 80; /etc/nginx/conf.d/images.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/images.koshie.fr.conf: server_name images.koshie.fr www.images.koshie.fr; /etc/nginx/conf.d/images.koshie.fr.conf: root /var/www/koshie.fr/images; 
/etc/nginx/conf.d/images.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/images.koshie.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/images.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/images.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/images.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/images.koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/images.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/images.koshie.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/images.koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/images.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/images.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/images.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/images.koshie.fr.conf: } /etc/nginx/conf.d/images.koshie.fr.conf:} /etc/nginx/conf.d/wiki.koshie.fr.conf:server { /etc/nginx/conf.d/wiki.koshie.fr.conf: listen 80; /etc/nginx/conf.d/wiki.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/wiki.koshie.fr.conf: server_name wiki.koshie.fr www.wiki.koshie.fr; /etc/nginx/conf.d/wiki.koshie.fr.conf: root /var/www/koshie.fr/wiki; /etc/nginx/conf.d/wiki.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/wiki.koshie.fr.conf: ssl_session_timeout 5m; 
/etc/nginx/conf.d/wiki.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/wiki.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/wiki.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/wiki.koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/wiki.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/wiki.koshie.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/wiki.koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/wiki.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/wiki.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/wiki.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/wiki.koshie.fr.conf: } /etc/nginx/conf.d/wiki.koshie.fr.conf:} /etc/nginx/conf.d/enka.koshie.fr.conf:server { /etc/nginx/conf.d/enka.koshie.fr.conf: listen 80; /etc/nginx/conf.d/enka.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/enka.koshie.fr.conf: server_name enka.koshie.fr www.enka.koshie.fr; /etc/nginx/conf.d/enka.koshie.fr.conf: root /var/www/koshie.fr/enka; /etc/nginx/conf.d/enka.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/enka.koshie.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/enka.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/enka.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; 
/etc/nginx/conf.d/enka.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/enka.koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/enka.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/enka.koshie.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/enka.koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/enka.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/enka.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/enka.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/enka.koshie.fr.conf: } /etc/nginx/conf.d/enka.koshie.fr.conf:} grep: /etc/nginx/conf.d/doinalefort.com.conf: Permission denied /etc/nginx/conf.d/geekdujour.koshie.fr.conf:server { /etc/nginx/conf.d/geekdujour.koshie.fr.conf: listen 80; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: server_name geekdujour.koshie.fr www.geekdujour.koshie.fr; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: root /var/www/koshie.fr/geekdujour; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; 
/etc/nginx/conf.d/geekdujour.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: index index.php; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_index index.php; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/geekdujour.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/geekdujour.koshie.fr.conf: } /etc/nginx/conf.d/geekdujour.koshie.fr.conf:} /etc/nginx/conf.d/vaporideas.fr.conf:server { /etc/nginx/conf.d/vaporideas.fr.conf: listen 80; /etc/nginx/conf.d/vaporideas.fr.conf: listen 443 ssl; /etc/nginx/conf.d/vaporideas.fr.conf: server_name vaporideas.fr www.vaporideas.fr; /etc/nginx/conf.d/vaporideas.fr.conf: root /var/www/koshie.fr/vaporideas; /etc/nginx/conf.d/vaporideas.fr.conf: msie_padding on; /etc/nginx/conf.d/vaporideas.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/vaporideas.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/vaporideas.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/vaporideas.fr.conf: ssl_prefer_server_ciphers on; 
/etc/nginx/conf.d/vaporideas.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/vaporideas.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/vaporideas.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/vaporideas.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/vaporideas.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/vaporideas.fr.conf: include fastcgi_params; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/vaporideas.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/vaporideas.fr.conf: } /etc/nginx/conf.d/vaporideas.fr.conf:} /etc/nginx/conf.d/le-numero.koshie.fr.conf:server { /etc/nginx/conf.d/le-numero.koshie.fr.conf: listen 80; /etc/nginx/conf.d/le-numero.koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/le-numero.koshie.fr.conf: server_name le-numero.koshie.fr www.le-numero.koshie.fr; /etc/nginx/conf.d/le-numero.koshie.fr.conf: root /var/www/koshie.fr/le-numero; /etc/nginx/conf.d/le-numero.koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/le-numero.koshie.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/le-numero.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/le-numero.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/le-numero.koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/le-numero.koshie.fr.conf: error_log /var/log/nginx/error.log; 
/etc/nginx/conf.d/le-numero.koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/le-numero.koshie.fr.conf: client_max_body_size 8M; /etc/nginx/conf.d/le-numero.koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/le-numero.koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/le-numero.koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/le-numero.koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/le-numero.koshie.fr.conf: } /etc/nginx/conf.d/le-numero.koshie.fr.conf:} /etc/nginx/conf.d/koshie.fr.conf:server { /etc/nginx/conf.d/koshie.fr.conf: listen 80; /etc/nginx/conf.d/koshie.fr.conf: listen 443 ssl; /etc/nginx/conf.d/koshie.fr.conf: server_name koshie.fr www.koshie.fr; /etc/nginx/conf.d/koshie.fr.conf: root /var/www/koshie.fr/; /etc/nginx/conf.d/koshie.fr.conf: msie_padding on; /etc/nginx/conf.d/koshie.fr.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/koshie.fr.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/koshie.fr.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/koshie.fr.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/koshie.fr.conf: client_max_body_size 8M; 
/etc/nginx/conf.d/koshie.fr.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/koshie.fr.conf: location ~ \.php$ { /etc/nginx/conf.d/koshie.fr.conf: include fastcgi_params; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_pass 127.0.0.1:9000; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_connect_timeout 60; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_send_timeout 180; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_read_timeout 180; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_buffer_size 128k; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_buffers 4 256k; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_busy_buffers_size 256k; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_temp_file_write_size 256k; /etc/nginx/conf.d/koshie.fr.conf: fastcgi_intercept_errors on; /etc/nginx/conf.d/koshie.fr.conf: } /etc/nginx/conf.d/koshie.fr.conf:} /etc/nginx/conf.d/bacqup.com.conf:server { /etc/nginx/conf.d/bacqup.com.conf: listen 80; /etc/nginx/conf.d/bacqup.com.conf: listen 443 ssl; /etc/nginx/conf.d/bacqup.com.conf: server_name bacqup.com www.bacqup.com; /etc/nginx/conf.d/bacqup.com.conf: root /var/www/bacqup.com/; /etc/nginx/conf.d/bacqup.com.conf: msie_padding on; /etc/nginx/conf.d/bacqup.com.conf: ssl_session_timeout 5m; /etc/nginx/conf.d/bacqup.com.conf: ssl_protocols SSLv2 SSLv3 TLSv1; /etc/nginx/conf.d/bacqup.com.conf: ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/bacqup.com.conf: ssl_prefer_server_ciphers on; /etc/nginx/conf.d/bacqup.com.conf: error_log /var/log/nginx/error.log; /etc/nginx/conf.d/bacqup.com.conf: access_log /var/log/nginx/access.log; /etc/nginx/conf.d/bacqup.com.conf: index index.php; /etc/nginx/conf.d/bacqup.com.conf: fastcgi_index index.php; /etc/nginx/conf.d/bacqup.com.conf: client_max_body_size 8M; /etc/nginx/conf.d/bacqup.com.conf: client_body_buffer_size 256K; /etc/nginx/conf.d/bacqup.com.conf: location ~ \.php$ { /etc/nginx/conf.d/bacqup.com.conf: include fastcgi_params; 
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_pass 127.0.0.1:9000;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_connect_timeout 60;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_send_timeout 180;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_read_timeout 180;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_buffer_size 128k;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_buffers 4 256k;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_busy_buffers_size 256k;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_temp_file_write_size 256k;
/etc/nginx/conf.d/bacqup.com.conf: fastcgi_intercept_errors on;
/etc/nginx/conf.d/bacqup.com.conf: }
/etc/nginx/conf.d/bacqup.com.conf:location / {
/etc/nginx/conf.d/bacqup.com.conf: rewrite ^/([a-z_]+)\.html$ /index.php?page=$1;
/etc/nginx/conf.d/bacqup.com.conf: }
/etc/nginx/conf.d/bacqup.com.conf:}
grep: /etc/nginx/conf.d/doinalefort.fr.conf: Permission denied
/etc/nginx/conf.d/for-help.koshie.fr.conf:server {
/etc/nginx/conf.d/for-help.koshie.fr.conf: listen 80;
/etc/nginx/conf.d/for-help.koshie.fr.conf: listen 443 ssl;
/etc/nginx/conf.d/for-help.koshie.fr.conf: server_name for-help.koshie.fr www.for-help.koshie.fr;
/etc/nginx/conf.d/for-help.koshie.fr.conf: root /var/www/koshie.fr/for-help;
/etc/nginx/conf.d/for-help.koshie.fr.conf: msie_padding on;
/etc/nginx/conf.d/for-help.koshie.fr.conf: ssl_session_timeout 5m;
/etc/nginx/conf.d/for-help.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1;
/etc/nginx/conf.d/for-help.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5;
/etc/nginx/conf.d/for-help.koshie.fr.conf: ssl_prefer_server_ciphers on;
/etc/nginx/conf.d/for-help.koshie.fr.conf: error_log /var/log/nginx/error.log;
/etc/nginx/conf.d/for-help.koshie.fr.conf: access_log /var/log/nginx/access.log;
/etc/nginx/conf.d/for-help.koshie.fr.conf: index index.php;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_index index.php;
/etc/nginx/conf.d/for-help.koshie.fr.conf: client_max_body_size 8M;
/etc/nginx/conf.d/for-help.koshie.fr.conf: client_body_buffer_size 256K;
/etc/nginx/conf.d/for-help.koshie.fr.conf: location ~ \.php$ {
/etc/nginx/conf.d/for-help.koshie.fr.conf: include fastcgi_params;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_pass unix:/var/run/php5-fpm.sock;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_connect_timeout 60;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_send_timeout 180;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_read_timeout 180;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_buffer_size 128k;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_buffers 4 256k;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_busy_buffers_size 256k;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_temp_file_write_size 256k;
/etc/nginx/conf.d/for-help.koshie.fr.conf: fastcgi_intercept_errors on;
/etc/nginx/conf.d/for-help.koshie.fr.conf: }
/etc/nginx/conf.d/for-help.koshie.fr.conf:}
/etc/nginx/conf.d/neko.koshie.fr.conf:server {
/etc/nginx/conf.d/neko.koshie.fr.conf: listen 80;
/etc/nginx/conf.d/neko.koshie.fr.conf: listen 443 ssl;
/etc/nginx/conf.d/neko.koshie.fr.conf: server_name neko.koshie.fr www.neko.koshie.fr;
/etc/nginx/conf.d/neko.koshie.fr.conf: root /var/www/koshie.fr/neko;
/etc/nginx/conf.d/neko.koshie.fr.conf: msie_padding on;
/etc/nginx/conf.d/neko.koshie.fr.conf: ssl_session_timeout 5m;
/etc/nginx/conf.d/neko.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1;
/etc/nginx/conf.d/neko.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5;
/etc/nginx/conf.d/neko.koshie.fr.conf: ssl_prefer_server_ciphers on;
/etc/nginx/conf.d/neko.koshie.fr.conf: error_log /var/log/nginx/error.log;
/etc/nginx/conf.d/neko.koshie.fr.conf: access_log /var/log/nginx/access.log;
/etc/nginx/conf.d/neko.koshie.fr.conf: client_max_body_size 8M;
/etc/nginx/conf.d/neko.koshie.fr.conf: client_body_buffer_size 256K;
/etc/nginx/conf.d/neko.koshie.fr.conf: location ~ \.php$ {
/etc/nginx/conf.d/neko.koshie.fr.conf: include fastcgi_params;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_connect_timeout 60;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_send_timeout 180;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_read_timeout 180;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_buffer_size 128k;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_buffers 4 256k;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_busy_buffers_size 256k;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_temp_file_write_size 256k;
/etc/nginx/conf.d/neko.koshie.fr.conf: fastcgi_intercept_errors on;
/etc/nginx/conf.d/neko.koshie.fr.conf: }
/etc/nginx/conf.d/neko.koshie.fr.conf:}
/etc/nginx/conf.d/eo.koshie.fr.conf:server {
/etc/nginx/conf.d/eo.koshie.fr.conf: listen 80;
/etc/nginx/conf.d/eo.koshie.fr.conf: listen 443 ssl;
/etc/nginx/conf.d/eo.koshie.fr.conf: server_name eo.koshie.fr www.eo.koshie.fr;
/etc/nginx/conf.d/eo.koshie.fr.conf: root /var/www/koshie.fr/eo;
/etc/nginx/conf.d/eo.koshie.fr.conf: msie_padding on;
/etc/nginx/conf.d/eo.koshie.fr.conf: ssl_session_timeout 5m;
/etc/nginx/conf.d/eo.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1;
/etc/nginx/conf.d/eo.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5;
/etc/nginx/conf.d/eo.koshie.fr.conf: ssl_prefer_server_ciphers on;
/etc/nginx/conf.d/eo.koshie.fr.conf: error_log /var/log/nginx/error.log;
/etc/nginx/conf.d/eo.koshie.fr.conf: access_log /var/log/nginx/access.log;
/etc/nginx/conf.d/eo.koshie.fr.conf: client_max_body_size 8M;
/etc/nginx/conf.d/eo.koshie.fr.conf: client_body_buffer_size 256K;
/etc/nginx/conf.d/eo.koshie.fr.conf: location ~ \.php$ {
/etc/nginx/conf.d/eo.koshie.fr.conf: include fastcgi_params;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_connect_timeout 60;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_send_timeout 180;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_read_timeout 180;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_buffer_size 128k;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_buffers 4 256k;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_busy_buffers_size 256k;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_temp_file_write_size 256k;
/etc/nginx/conf.d/eo.koshie.fr.conf: fastcgi_intercept_errors on;
/etc/nginx/conf.d/eo.koshie.fr.conf: }
/etc/nginx/conf.d/eo.koshie.fr.conf:}
grep: /etc/nginx/conf.d/doinalefort.koshie.fr.conf: Permission denied
/etc/nginx/conf.d/metal.koshie.fr.conf:server {
/etc/nginx/conf.d/metal.koshie.fr.conf: listen 80;
/etc/nginx/conf.d/metal.koshie.fr.conf: listen 443 ssl;
/etc/nginx/conf.d/metal.koshie.fr.conf: server_name metal.koshie.fr www.metal.koshie.fr;
/etc/nginx/conf.d/metal.koshie.fr.conf: root /var/www/koshie.fr/metal;
/etc/nginx/conf.d/metal.koshie.fr.conf: msie_padding on;
/etc/nginx/conf.d/metal.koshie.fr.conf: ssl_session_timeout 5m;
/etc/nginx/conf.d/metal.koshie.fr.conf: ssl_protocols SSLv2 SSLv3 TLSv1;
/etc/nginx/conf.d/metal.koshie.fr.conf: ssl_ciphers HIGH:!aNULL:!MD5;
/etc/nginx/conf.d/metal.koshie.fr.conf: ssl_prefer_server_ciphers on;
/etc/nginx/conf.d/metal.koshie.fr.conf: error_log /var/log/nginx/error.log;
/etc/nginx/conf.d/metal.koshie.fr.conf: access_log /var/log/nginx/access.log;
/etc/nginx/conf.d/metal.koshie.fr.conf: client_max_body_size 8M;
/etc/nginx/conf.d/metal.koshie.fr.conf: client_body_buffer_size 256K;
/etc/nginx/conf.d/metal.koshie.fr.conf: location ~ \.php$ {
/etc/nginx/conf.d/metal.koshie.fr.conf: include fastcgi_params;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_pass 127.0.0.1:9000;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_connect_timeout 60;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_send_timeout 180;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_read_timeout 180;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_buffer_size 128k;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_buffers 4 256k;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_busy_buffers_size 256k;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_temp_file_write_size 256k;
/etc/nginx/conf.d/metal.koshie.fr.conf: fastcgi_intercept_errors on;
/etc/nginx/conf.d/metal.koshie.fr.conf: }
/etc/nginx/conf.d/metal.koshie.fr.conf:}
/etc/nginx/conf.d/default.conf:server {
/etc/nginx/conf.d/default.conf: listen 80;
/etc/nginx/conf.d/default.conf: server_name localhost;
/etc/nginx/conf.d/default.conf: location / {
/etc/nginx/conf.d/default.conf: root /usr/share/nginx/html;
/etc/nginx/conf.d/default.conf: index index.html index.htm;
/etc/nginx/conf.d/default.conf: }
/etc/nginx/conf.d/default.conf: error_page 500 502 503 504 /50x.html;
/etc/nginx/conf.d/default.conf: location = /50x.html {
/etc/nginx/conf.d/default.conf: root /usr/share/nginx/html;
/etc/nginx/conf.d/default.conf: }
/etc/nginx/conf.d/default.conf:}
/etc/nginx/nginx.conf:user www-data;
/etc/nginx/nginx.conf:worker_processes 1;
/etc/nginx/nginx.conf:error_log /var/log/nginx/error.log debug;
/etc/nginx/nginx.conf:pid /var/run/nginx.pid;
/etc/nginx/nginx.conf:events {
/etc/nginx/nginx.conf: worker_connections 1024;
/etc/nginx/nginx.conf:}
/etc/nginx/nginx.conf:http {
/etc/nginx/nginx.conf: include /etc/nginx/mime.types;
/etc/nginx/nginx.conf: default_type application/octet-stream;
/etc/nginx/nginx.conf: index index.html index.php;
/etc/nginx/nginx.conf: log_format main '$remote_addr - $remote_user [$time_local] "$request" '
/etc/nginx/nginx.conf: '$status $body_bytes_sent "$http_referer" '
/etc/nginx/nginx.conf: '"$http_user_agent" "$http_x_forwarded_for"';
/etc/nginx/nginx.conf: access_log /var/log/nginx/access.log main;
/etc/nginx/nginx.conf: sendfile on;
/etc/nginx/nginx.conf: keepalive_timeout 65;
/etc/nginx/nginx.conf: gzip on;
/etc/nginx/nginx.conf: include /etc/nginx/conf.d/*.conf;
/etc/nginx/nginx.conf: include /etc/nginx/fastcgi_params;
/etc/nginx/nginx.conf:}

> find /etc/nginx/sites-*/*|xargs -r grep -v '^\s*\(#\|$\)'

/etc/nginx/sites-available/default:server {
/etc/nginx/sites-available/default: root /usr/share/nginx/www;
/etc/nginx/sites-available/default: index index.html index.htm;
/etc/nginx/sites-available/default: server_name localhost;
/etc/nginx/sites-available/default: location / {
/etc/nginx/sites-available/default: try_files $uri $uri/ /index.html;
/etc/nginx/sites-available/default: }
/etc/nginx/sites-available/default: location /doc/ {
/etc/nginx/sites-available/default: alias /usr/share/doc/;
/etc/nginx/sites-available/default: autoindex on;
/etc/nginx/sites-available/default: allow 127.0.0.1;
/etc/nginx/sites-available/default: allow ::1;
/etc/nginx/sites-available/default: deny all;
/etc/nginx/sites-available/default: }
/etc/nginx/sites-available/default:}
/etc/nginx/sites-enabled/default:server {
/etc/nginx/sites-enabled/default: root /usr/share/nginx/www;
/etc/nginx/sites-enabled/default: index index.html index.htm;
/etc/nginx/sites-enabled/default: server_name localhost;
/etc/nginx/sites-enabled/default: location / {
/etc/nginx/sites-enabled/default: try_files $uri $uri/ /index.html;
/etc/nginx/sites-enabled/default: }
/etc/nginx/sites-enabled/default: location /doc/ {
/etc/nginx/sites-enabled/default: alias /usr/share/doc/;
/etc/nginx/sites-enabled/default: autoindex on;
/etc/nginx/sites-enabled/default: allow 127.0.0.1;
/etc/nginx/sites-enabled/default: allow ::1;
/etc/nginx/sites-enabled/default: deny all;
/etc/nginx/sites-enabled/default: }
/etc/nginx/sites-enabled/default:}

Cordially, koshie -- Sorry for my english, I'm trying the best in each e-mail writing. Tell me if I'm not clear enough.
This mail account is only for list reading, to contact me send an e-mail at kevingaspard at koshie.fr

From nginx-forum at nginx.us Thu Feb 21 13:27:58 2013 From: nginx-forum at nginx.us (mrtn) Date: Thu, 21 Feb 2013 08:27:58 -0500 Subject: How to check the existence of a http-only secure cookie In-Reply-To: <20130220222218.GL32392@craic.sysops.org> References: <20130220222218.GL32392@craic.sysops.org> Message-ID: <0193c3514991fc04d3c84ca72c976dc4.NginxMailingListEnglish@forum.nginx.org>

I see. Since you mentioned it, is there any way to check for the http-only and secure properties of a cookie using nginx? In other words, combined with the original question above, I want to check if a given cookie is present and is http-only and secure; otherwise, reject the request with a 404.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236394,236423#msg-236423

From vbart at nginx.com Thu Feb 21 13:40:56 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 21 Feb 2013 17:40:56 +0400 Subject: How to check the existence of a http-only secure cookie In-Reply-To: <0193c3514991fc04d3c84ca72c976dc4.NginxMailingListEnglish@forum.nginx.org> References: <20130220222218.GL32392@craic.sysops.org> <0193c3514991fc04d3c84ca72c976dc4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201302211740.56058.vbart@nginx.com>

On Thursday 21 February 2013 17:27:58 mrtn wrote:
> I see. Since you mentioned it, is there any way to check for the http-only and
> secure properties of a cookie using nginx?

There are no such properties in the Cookie request header.

wbr, Valentin V. Bartenev

-- http://nginx.com/support.html http://nginx.org/en/donation.html

> In other words, combined with
> the original question above, I want to check if a given cookie is
> present and is http-only and secure; otherwise, reject the request with
> a 404.
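[Editor's note: the presence check alone can be expressed as below. This is only a sketch; the cookie name "sessid" and the paths are placeholders, and, as the answer above explains, the HttpOnly and Secure attributes are never echoed back in the Cookie request header, so only existence of the cookie can be tested.]

```nginx
# Return 404 unless the request carries a cookie named "sessid"
# (placeholder name). $cookie_NAME expands to the cookie's value,
# or to an empty string when the browser sent no such cookie.
server {
    listen 80;
    server_name example.com;

    location /private/ {
        if ($cookie_sessid = "") {
            return 404;
        }
        root /var/www/example.com;
    }
}
```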
From francis at daoine.org Thu Feb 21 13:50:41 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 21 Feb 2013 13:50:41 +0000 Subject: 502 bad gateway error with php5-fpm on Debian 7 In-Reply-To: References: <20130220224005.GM32392@craic.sysops.org> <20130220231310.GN32392@craic.sysops.org> <20130220235410.GO32392@craic.sysops.org> Message-ID: <20130221135041.GP32392@craic.sysops.org>

On Thu, Feb 21, 2013 at 10:26:22AM +0100, GASPARD Kévin wrote:

Hi there,

> >So: what is the hostname in the url that you try to get, when you see
> >the 502 error?
>
> Trying to install a Wordpress, used a info.php page here:
> http://blog.koshie.fr/wp-admin/info.php

Ok - so the one server{} block that is used is either the one that has server_name blog.koshie.fr, or is the default one.

> As you can see, there is a 502 Bad Gateway error.

Yes, and that error log shows that:

> 2013/02/21 10:21:22 [error] 1097#0: *5 connect() failed (111: Connection
> refused) while connecting to upstream, client: 46.218.152.242, server:
> koshie.fr, request: "GET /wordpress/info.php HTTP/1.1", upstream:
> "fastcgi://127.0.0.1:9000", host: "blog.koshie.fr"

it is using the server "koshie.fr", not the server "blog.koshie.fr". Presumably the server "koshie.fr" is the default, and the server "blog.koshie.fr" does not exist. So the configuration that is running is *not* the configuration that you are showing here.

> Logically, this is the vhost configuration file for
> http://blog.koshie.fr/wp-admin/info.php:

But based on your later mail, this configuration file does not exist.

If you want to get this configured correctly, your best bet is probably to simplify the configuration significantly. Leave /etc/nginx/nginx.conf as it is. Let /etc/nginx/conf.d have exactly one file in it, this one. Then run your test and see if it works or fails.

> >Maybe it is simplest if you rename the conf.d directory, then create
> >a new conf.d directory with just one vhost file. Then reload nginx and
> >re-do your test of a php request and see what it says.
>
> So, above you've the configuration file related to this log error:

No. That configuration file does not result in this error.

> >If it still fails, then you have a simpler test case to work from.
>
> What is this test case please?

Your test case is:

* you run "curl -i http://blog.koshie.fr/wordpress/info.php"
* you expect to see some useful content
* you actually see a 502 error.

Then do whatever it takes to get the expected output.

I think that one part of the problem is that you have only half-changed from an old system to a new system. Your new system has nothing listening on 127.0.0.1:9000, so any configuration that mentions that ip:port is broken. It should be removed, or replaced with the unix socket. And your new system does not actually include all of the files that you want it to.

When your nginx starts, it reads exactly one configuration file: /etc/nginx/nginx.conf. That file then uses "include" to read some other files. Those other files do not seem to be the ones you want, for some reason.

I suggest: stop nginx. Make sure it is stopped, and not running, and has nothing listening on port 80 or port 443. Then look at the files in /etc/nginx/conf.d, and make sure that they are exactly the ones that you want. Then start nginx, access the info.php url, and see if it works.

Good luck,

f -- Francis Daly francis at daoine.org

From n-a-zhubr at yandex.ru Thu Feb 21 14:35:14 2013 From: n-a-zhubr at yandex.ru (Nikolai Zhubr) Date: Thu, 21 Feb 2013 18:35:14 +0400 Subject: Proxying websocket (to e.g. tomcat) Message-ID: <512630A2.6030403@yandex.ru>

Hello all!

I've installed the latest version 1.3.13 (supposedly supporting websocket) and was trying to push websocket connections through nginx to tomcat while leaving all other (static) content to nginx.
To do this, I've added the following "location" for tomcat:

location /examples/websocket {
    proxy_pass http://127.0.0.1:8080/examples/websocket;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

I know that tomcat itself accepts some test websocket connection just fine. Also, getting static pages from tomcat through nginx works well too. However, proxying websocket doesn't seem to work.

So I've stopped tomcat and used netcat to see what gets through. Here it goes:

Direct request (firefox -> tomcat):

GET /examples/websocket/echoStream HTTP/1.1
Host: 192.168.0.91:8080
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20100101 Firefox/17.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive, Upgrade
Sec-WebSocket-Version: 13
Origin: http://192.168.0.91:8080
Sec-WebSocket-Key: ZffbHEsoryDw1gcX51lt8g==
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket

Request passed through nginx (firefox -> nginx -> tomcat):

GET /examples/websocket/echoStream HTTP/1.0
Host: 192.168.0.91
X-Real-IP: 192.168.0.98
Connection: close
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20100101 Firefox/17.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Sec-WebSocket-Version: 13
Origin: http://192.168.0.91
Sec-WebSocket-Key: 9YkdANPMSHDxb8axUbeKwQ==
Pragma: no-cache
Cache-Control: no-cache

One can clearly see that there is a problem. At least, "HTTP/1.1" is lost, "Connection: keep-alive, Upgrade" is lost, and "Upgrade: websocket" is lost. Generally, it does not look like websocket is supported at all (Essentially, apache does this same damage to websocket connections).
Honestly I'm not much familiar with nginx, just had to dismiss apache because apparently they refused to even consider support for websocket. So before trying to dig deep into sources I thought I should ask here. Is it possible to get websocket through nginx really? Maybe I need to configure something additionally?

Thank you.
Nikolai

From vbart at nginx.com Thu Feb 21 14:30:49 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 21 Feb 2013 18:30:49 +0400 Subject: Proxying websocket (to e.g. tomcat) In-Reply-To: <512630A2.6030403@yandex.ru> References: <512630A2.6030403@yandex.ru> Message-ID: <201302211830.49838.vbart@nginx.com>

On Thursday 21 February 2013 18:35:14 Nikolai Zhubr wrote:
> Hello all!
>
> I've installed the latest version 1.3.13 (supposedly supporting
> websocket) and was trying to push websocket connections through nginx to
> tomcat while leaving all other (static) content to nginx. To do this,
> I've added the following "location" for tomcat:
>
> location /examples/websocket {
>     proxy_pass http://127.0.0.1:8080/examples/websocket;
>     proxy_redirect off;
>     proxy_set_header Host $host;
>     proxy_set_header X-Real-IP $remote_addr;
> }
>
> I know that tomcat itself accepts some test websocket connection just
> fine. Also, getting static pages from tomcat through nginx works well
> too. However, proxying websocket doesn't seem to work.
>
> So I've stopped tomcat and used netcat to see what gets through. Here it
> goes:
>
> Direct request (firefox -> tomcat):
>
> GET /examples/websocket/echoStream HTTP/1.1
> Host: 192.168.0.91:8080
> User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20100101 Firefox/17.0
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
> Accept-Encoding: gzip, deflate
> Connection: keep-alive, Upgrade
> Sec-WebSocket-Version: 13
> Origin: http://192.168.0.91:8080
> Sec-WebSocket-Key: ZffbHEsoryDw1gcX51lt8g==
> Pragma: no-cache
> Cache-Control: no-cache
> Upgrade: websocket
>
> Request passed through nginx (firefox -> nginx -> tomcat):
>
> GET /examples/websocket/echoStream HTTP/1.0
> Host: 192.168.0.91
> X-Real-IP: 192.168.0.98
> Connection: close
> User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20100101 Firefox/17.0
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
> Accept-Encoding: gzip, deflate
> Sec-WebSocket-Version: 13
> Origin: http://192.168.0.91
> Sec-WebSocket-Key: 9YkdANPMSHDxb8axUbeKwQ==
> Pragma: no-cache
> Cache-Control: no-cache
>
> One can clearly see that there is a problem. At least, "HTTP/1.1" is
> lost, "Connection: keep-alive, Upgrade" is lost, and "Upgrade:
> websocket" is lost. Generally, it does not look like websocket is
> supported at all (Essentially, apache does this same damage to websocket
> connections).
>
> Honestly I'm not much familiar with nginx, just had to dismiss apache
> because apparently they refused to even consider support for websocket.
> So before trying to dig deep into sources I thought I should ask here. Is
> it possible to get websocket through nginx really? Maybe I need to
> configure something additionally?

Yes, it's possible with 1.3.13. And yes, you need some additional configuration.
Example:

location /examples/websocket {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

Docs:

http://nginx.org/r/proxy_http_version
http://nginx.org/r/proxy_set_header

wbr, Valentin V. Bartenev

-- http://nginx.com/support.html http://nginx.org/en/donation.html

From n-a-zhubr at yandex.ru Thu Feb 21 15:55:45 2013 From: n-a-zhubr at yandex.ru (Nikolai Zhubr) Date: Thu, 21 Feb 2013 19:55:45 +0400 Subject: Proxying websocket (to e.g. tomcat) In-Reply-To: <201302211830.49838.vbart@nginx.com> References: <512630A2.6030403@yandex.ru> <201302211830.49838.vbart@nginx.com> Message-ID: <51264381.1050305@yandex.ru>

21.02.2013 18:30, Valentin V. Bartenev wrote:
[...]
> Yes, it's possible with 1.3.13. And yes, you need some additional configuration.
>
> Example:
>
> location /examples/websocket {
>     proxy_pass http://127.0.0.1:8080;
>     proxy_http_version 1.1;
>     proxy_set_header Upgrade $http_upgrade;
>     proxy_set_header Connection "upgrade";
> }

Ah, this indeed helped! Now it works. Thank you very much.

Apparently such configuration implies that different kinds of connections (standard and websocket) can not be mixed in one "location" section? (As far as I understood it, magic headers do not get through directly, but essentially get reintroduced by these configuration settings?)

Thank you.
Nikolai

> Docs:
>
> http://nginx.org/r/proxy_http_version
> http://nginx.org/r/proxy_set_header
>
> wbr, Valentin V.
Bartenev
>
> -- http://nginx.com/support.html http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From tema.gurtovoy at gmail.com Thu Feb 21 15:59:03 2013 From: tema.gurtovoy at gmail.com (=?KOI8-R?B?4dLUxc0g59XS1M/Xz8o=?=) Date: Thu, 21 Feb 2013 19:59:03 +0400 Subject: Invalidation After Updates or Deletions Message-ID:

Hello everyone,

Is there any way to configure nginx proxy cache invalidation on unsafe requests, as it is supposed to happen per the RFC? http://www.ietf.org/rfc/rfc2616.txt (section 9.1.1, section 13.10)

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From vbart at nginx.com Thu Feb 21 16:03:52 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 21 Feb 2013 20:03:52 +0400 Subject: Proxying websocket (to e.g. tomcat) In-Reply-To: <51264381.1050305@yandex.ru> References: <512630A2.6030403@yandex.ru> <201302211830.49838.vbart@nginx.com> <51264381.1050305@yandex.ru> Message-ID: <201302212003.52403.vbart@nginx.com>

On Thursday 21 February 2013 19:55:45 Nikolai Zhubr wrote:
> 21.02.2013 18:30, Valentin V. Bartenev wrote:
> [...]
> > Yes, it's possible with 1.3.13. And yes, you need some additional
> > configuration.
> >
> > Example:
> >
> > location /examples/websocket {
> >     proxy_pass http://127.0.0.1:8080;
> >     proxy_http_version 1.1;
> >     proxy_set_header Upgrade $http_upgrade;
> >     proxy_set_header Connection "upgrade";
> > }
>
> Ah, this indeed helped! Now it works. Thank you very much.
>
> Apparently such configuration implies that different kinds of
> connections (standard and websocket) can not be mixed in one "location"
> section? (As far as I understood it, magic headers do not get through
> directly, but essentially get reintroduced by these configuration
> settings?)

Not quite so. Actually, they can be mixed. That's why the $http_upgrade variable is used.
If there's no such header in the request, then the variable is empty and the header won't be set. You can also set the Connection header to different values depending on the existence of the Upgrade header in a request.

Example:

http {
    map $http_upgrade $conn_header {
        default upgrade;
        ''      close;
    }

    server {
        ...

        location /examples/websocket {
            proxy_pass http://127.0.0.1:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $conn_header;
        }
    }
}

http://nginx.org/r/map

wbr, Valentin V. Bartenev

-- http://nginx.com/support.html http://nginx.org/en/donation.html

From lists at wildgooses.com Thu Feb 21 17:48:21 2013 From: lists at wildgooses.com (Ed W) Date: Thu, 21 Feb 2013 17:48:21 +0000 Subject: How to remove the "IF" in this fcgi config Message-ID: <51265DE5.4070209@wildgooses.com>

Hi, I'm trying to setup a php app using fpm (owncloud).

I am trying to match urls which can be all over the filesystem and of the form: something.php/some/path?params

So far I have something like this:

location / {
    try_files $uri $uri/ index.php;
}

location ~ ^(?P<script_name>.+\.php)(/|$) {
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    if (!-f $script_name) {
        #return 404;
        break;
    }
    include fastcgi2.conf;
    fastcgi_pass 127.0.0.1:9000;
}

where: fastcgi2.conf is a copy of fastcgi.conf with one change:

fastcgi_param REQUEST_URI $uri$is_args$args;

How do I avoid using an IF here to check that the php file really exists? Also, why does uncommenting the return 404 cause some kind of breakage (not even sure I understand exactly what happens? Seems like the paths get broken?)

Thanks for any help?
Ed W

From igor at sysoev.ru Thu Feb 21 17:54:45 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 21 Feb 2013 21:54:45 +0400 Subject: How to remove the "IF" in this fcgi config In-Reply-To: <51265DE5.4070209@wildgooses.com> References: <51265DE5.4070209@wildgooses.com> Message-ID: <93E666FD-1A14-496A-BE1D-33DEB9EBB7F8@sysoev.ru>

On Feb 21, 2013, at 21:48 , Ed W wrote:

> Hi, I'm trying to setup a php app using fpm (owncloud).
>
> I am trying to match urls which can be all over the filesystem and of the form: something.php/some/path?params
>
> So far I have something like this:
>
> location / {
>     try_files $uri $uri/ index.php;
> }
>
> location ~ ^(?P<script_name>.+\.php)(/|$) {
>     fastcgi_split_path_info ^(.+\.php)(/.*)$;
>     if (!-f $script_name) {
>         #return 404;
>         break;
>     }
>     include fastcgi2.conf;
>     fastcgi_pass 127.0.0.1:9000;
> }
>
> where:
> fastcgi2.conf is a copy of fastcgi.conf with one change:
> fastcgi_param REQUEST_URI $uri$is_args$args;
>
> How do I avoid using an IF here to check that the php file really exists? Also, why does uncommenting the return 404 cause some kind of breakage (not even sure I understand exactly what happens? Seems like the paths get broken?)

location ~ ^(?<script_name>.+\.php)(?<path_info>/|$) {
    try_files $script_name =404;

    include fastcgi2.conf;
    fastcgi_param PATH_INFO $path_info;
    fastcgi_pass 127.0.0.1:9000;
}

-- Igor Sysoev http://nginx.com/support.html

From lists at wildgooses.com Thu Feb 21 19:08:56 2013 From: lists at wildgooses.com (Ed W) Date: Thu, 21 Feb 2013 19:08:56 +0000 Subject: How to remove the "IF" in this fcgi config In-Reply-To: <93E666FD-1A14-496A-BE1D-33DEB9EBB7F8@sysoev.ru> References: <51265DE5.4070209@wildgooses.com> <93E666FD-1A14-496A-BE1D-33DEB9EBB7F8@sysoev.ru> Message-ID: <512670C8.40208@wildgooses.com>

On 21/02/2013 17:54, Igor Sysoev wrote:
> location ~ ^(?<script_name>.+\.php)(?<path_info>/|$) {
>     try_files $script_name =404;
>
>     include fastcgi2.conf;
>     fastcgi_param PATH_INFO $path_info;
>     fastcgi_pass 127.0.0.1:9000;
> }

Thanks!!
Can I ask you to confirm the correction of a typo in your answer. Do I want this:

....(?<path_info>.*) {

ie is this amended version correct in the face of a URL such as: blah.php/some/path?param=2

Thanks

Ed W

From m6rkalan at gmail.com Thu Feb 21 20:06:18 2013 From: m6rkalan at gmail.com (Mark Alan) Date: Thu, 21 Feb 2013 20:06:18 +0000 Subject: nginx + php5-fpm on Debian In-Reply-To: References: <376c0daec1cc52c1eeee17d88e83c8d8.NginxMailingListEnglish@forum.nginx.org> <5126172d.a567b40a.3f31.6d36@mx.google.com> Message-ID: <51267e40.a862b40a.3f0b.192a@mx.google.com>

On Thu, 21 Feb 2013 14:07:45 +0100, GASPARD Kévin wrote:

> > nginx -V 2>&1|sed 's,--,\n--,g'
> nginx version: nginx/1.2.1

Ok, this seems pretty standard for Debian.

> > find /etc/nginx/ -name *.conf|xargs -r grep -v '^\s*\(#\|$\)'
> /etc/nginx/conf.d/koshie-island.koshie.fr.conf:server {
> /etc/nginx/conf.d/koshie-island.koshie.fr.conf: listen

To get out of a hole, first you must stop digging. So, in order to regain control of your Nginx under Debian:

1. Clean /etc/nginx/conf.d/

sudo mkdir /etc/nginx/conf.d-backup
sudo mv /etc/nginx/conf.d/* /etc/nginx/conf.d-backup/

2. Simplify your /etc/nginx/sites-available/default

server {
    listen 80 default_server;
    server_name_in_redirect off;
    return 444;
}
server {
    listen 443 default_server ssl;
    server_name_in_redirect off;
    ssl_certificate /etc/ssl/certs/dummy-web.crt;
    ssl_certificate_key /etc/ssl/private/dummy-web.key;
    return 444;
}

3. Create simpler domain config files, and put them inside /etc/nginx/sites-available/:

# /etc/nginx/sites-available/koshiefr
# for http only
server {
    listen 80;
    server_name www.koshie.fr; # may also add IP here
    return 301 $scheme://koshie.fr$request_uri; # 301/perm 302/temp
}
server {
    listen 80;
    server_name koshie.fr;
    root /var/www/koshiefr; # avoid non alfanum here & rm last /
    #client_max_body_size 8M;
    #client_body_buffer_size 256K;
    index index.php /index.php;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
    }
}

# /etc/nginx/sites-available/koshiefrs
# for https only
server {
    listen 443; # ssl not needed here
    server_name www.koshie.fr; # may also add IP here
    return 301 $scheme://koshie.fr$request_uri; # 301=perm, 302=temp
}
server {
    listen 443 ssl;
    server_name koshie.fr;
    root /var/www/koshiefr; # avoid non alfanum here
    #client_max_body_size 8M;
    #client_body_buffer_size 256K;
    ssl_certificate /etc/ssl/certs/dummy-web.crt;
    ssl_certificate_key /etc/ssl/private/dummy-web.key;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
    }
}

4. link files into place:

sudo ln -svf /etc/nginx/sites-available/default \
    /etc/nginx/sites-enabled/
sudo ln -svf /etc/nginx/sites-available/koshiefr \
    /etc/nginx/sites-enabled/
sudo ln -svf /etc/nginx/sites-available/koshiefrs \
    /etc/nginx/sites-enabled/

5. restart nginx:

a) again keep it simple (I don't trust Debian's nginx restart)

sudo /etc/init.d/nginx stop
sudo /etc/init.d/nginx start
sudo /etc/init.d/nginx status

b) OR, if the server is 'in production', use an alternative 'restart' trying to not disturb the established connections:

pgrep nginx && sudo kill -s USR2 $(cat /var/run/nginx.pid)
pgrep nginx >/dev/null && sudo kill -s QUIT \
    $(cat /var/run/nginx.pid.oldbin)
sleep .5
pgrep nginx || sudo /etc/init.d/nginx start
# check status
sudo /usr/sbin/nginx -t && /etc/init.d/nginx status

6. regarding PHP-FPM:

a) DO install at least:

sudo apt-get install php5-fpm php5-suhosin php-apc

and, if needed:

# sudo apt-get install php5-mysql php5-mcrypt php5-gd

A common simple PHP config could include:

grep -v '^\s*\(;\|$\)' /etc/php5/fpm/*.conf
[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
include=/etc/php5/fpm/pool.d/*.conf

grep -v '^\s*\(;\|$\)' /etc/php5/fpm/pool.d/*.conf
[www]
user = www-data
group = www-data
listen = 127.0.0.1:9000
pm = dynamic
pm.max_children = 10
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 384
request_terminate_timeout = 30s
chdir = /var/www

# restart it
pgrep php5-fpm && sudo /etc/init.d/php5-fpm restart
sleep .5
pgrep php5-fpm || sudo /etc/init.d/php5-fpm start

Because of the above 'chdir = /var/www' and 'group = www-data', files inside /var/www/ (like, for instance, those inside /var/www/koshiefr/) should be owned (and readable, or read/writeable) by group www-data.

REMEMBER:
- keep it simple,
- do trust nginx defaults as they usually work rather well,
- test each config file well and restart/reload its parent app (nginx or php) before doing another config change.

And, if you can live with a lighter Nginx, you can try my own extra-light nginx builds from: https://launchpad.net/~malan/+archive/dev

sudo dpkg -i nginx-common*.deb
sudo dpkg -i nginx-light*.deb

Regards,

M.

From pasik at iki.fi Thu Feb 21 20:08:05 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Thu, 21 Feb 2013 22:08:05 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <20130118083821.GA8912@reaktio.net> References: <20130116151511.GS8912@reaktio.net> <20130118083821.GA8912@reaktio.net> Message-ID: <20130221200805.GT8912@reaktio.net>

On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> On Thu, Jan 17, 2013 at 11:15:58AM +0800, ?????? wrote:
> > Yes. It should work for any request method.
>
> Great, thanks, I'll let you know how it works for me. Probably in two
> weeks or so.
>

Hi,

Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:

cc1: warnings being treated as errors
src/http/ngx_http_request_body.c: In function
'ngx_http_read_non_buffered_client_request_body':
src/http/ngx_http_request_body.c:506: error: implicit declaration of
function 'ngx_http_top_input_body_filter'
make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
make: *** [build] Error 2

ngx_http_top_input_body_filter() cannot be found in any .c/.h files.
Which other patches should I apply?

Perhaps this?
https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch

Thanks,

-- Pasi

> 2013/1/16 Pasi Kärkkäinen <pasik at iki.fi>
>
> > On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> > > This patch should work between nginx-1.2.6 and nginx-1.3.8.
> > > The documentation is here:
> > >
> > > ## client_body_postpone_sending ##
> > > Syntax: **client_body_postpone_sending** `size`
> > > Default: 64k
> > > Context: `http, server, location`
> > > If you specify `proxy_request_buffering` or
> > > `fastcgi_request_buffering` to be off, nginx will send the body to
> > > the backend when it receives more than `size` data or the whole
> > > request body has been received. It could save the connection and
> > > reduce the IO number with the backend.
> > >
> > > ## proxy_request_buffering ##
> > > Syntax: **proxy_request_buffering** `on | off`
> > > Default: `on`
> > > Context: `http, server, location`
> > > Specify whether the request body will be buffered to disk or not.
> > > If it's off, the request body will be stored in memory and sent to
> > > the backend after nginx receives more than
> > > `client_body_postpone_sending` data. It could save the disk IO with
> > > a large request body.
> > >
> > > Note that, if you specify it to be off, the nginx retry mechanism
> > > on an unsuccessful response will be broken after you have sent part
> > > of the request to the backend. It will just return 500 when it
> > > encounters such an unsuccessful response. This directive also
> > > breaks the $request_body and $request_body_file variables. You
> > > should not use these variables any more, as their values are
> > > undefined.
>
> Hello,
>
> This patch sounds exactly like what I need as well!
> I assume it works for both POST and PUT requests?
>
> Thanks,
>
> -- Pasi
>
> > > Hello!
> > > @yaoweibin
> > >
> > > > If you are eager for this feature, you could try my patch:
> > > > https://github.com/taobao/tengine/pull/91. This patch has been
> > > > running in our production servers.
> > >
> > > what's the nginx version your patch based on?
> > > Thanks!
> > >
> > > On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao <yaoweibin at gmail.com> wrote:
> > >
> > > > I know the nginx team are working on it. You can wait for it.
> > > > If you are eager for this feature, you could try my patch:
> > > > https://github.com/taobao/tengine/pull/91. This patch has been
> > > > running in our production servers.
> > > >
> > > > 2013/1/11 li zJay <zjay1987 at gmail.com>
> > > >
> > > > > Hello!
> > > > > is it possible that nginx will not buffer the client body before
> > > > > handing the request to the upstream?
> > > > > we want to use nginx as a reverse proxy to upload very very big
> > > > > files to the upstream, but the default behavior of nginx is to
> > > > > save the whole request to the local disk first before handing it
> > > > > to the upstream, which makes it impossible for the upstream to
> > > > > process the file on the fly while the file is uploading,
> > > > > resulting in much higher request latency and server-side
> > > > > resource consumption.
> > > > > Thanks!
> > > > >
> > > > > _______________________________________________
> > > > > nginx mailing list
> > > > > nginx at nginx.org
> > > > > http://mailman.nginx.org/mailman/listinfo/nginx
> > > >
> > > > --
> > > > Weibin Yao
> > > > Developer @ Server Platform Team of Taobao
>
> --
> Weibin Yao
> Developer @ Server Platform Team of Taobao
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From n-a-zhubr at yandex.ru Thu Feb 21 22:12:57 2013
From: n-a-zhubr at yandex.ru (Nikolai Zhubr)
Date: Fri, 22 Feb 2013 02:12:57 +0400
Subject: Proxying websocket (to e.g. tomcat)
In-Reply-To: <201302212003.52403.vbart@nginx.com>
References: <512630A2.6030403@yandex.ru> <201302211830.49838.vbart@nginx.com> <51264381.1050305@yandex.ru> <201302212003.52403.vbart@nginx.com>
Message-ID: <51269BE9.8090300@yandex.ru>

21.02.2013 20:03, Valentin V. Bartenev wrote:
[...]
>> Apparently such a configuration implies that different kinds of
>> connections (standard and websocket) cannot be mixed in one "location"
>> section? (As far as I understood it, the magic headers do not get
>> through directly, but essentially get reintroduced by these
>> configuration settings?)
>
> Not quite so. Actually, they can be mixed. That's why the $http_upgrade
> variable is used. If there's no such header in the request, then the
> variable is empty and the header won't be set.

You are right. Now I see. I've even actually made some tests to be
completely sure, and they all worked correctly. Thank you for the precise
explanation and useful examples!

Nikolai

> You can also set the Connection header to different values depending on
> the existence of the Upgrade header in a request.
>
> Example:
>
> http {
>     map $http_upgrade $conn_header {
>         default upgrade;
>         ''      close;
>     }
>
>     server {
>         ...
>
>         location / {
>             proxy_pass http://127.0.0.1:8080;
>             proxy_http_version 1.1;
>             proxy_set_header Upgrade $http_upgrade;
>             proxy_set_header Connection $conn_header;
>         }
>     }
> }
>
> http://nginx.org/r/map
>
> wbr, Valentin V. Bartenev
>
> --
> http://nginx.com/support.html
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From rakan.alhneiti at gmail.com Thu Feb 21 22:39:13 2013
From: rakan.alhneiti at gmail.com (Rakan Alhneiti)
Date: Fri, 22 Feb 2013 01:39:13 +0300
Subject: Fwd: nginx performance on Amazon EC2
In-Reply-To: <5125D407.3060602@googlemail.com>
References: <5125D407.3060602@googlemail.com>
Message-ID:

Hello,

Thank you all for your support.

Mex:
1) is the setup of your vmware similar to your ec2 instance? i talk esp.
about RAM/CPU-power here.
Yes, I've set up the vmware machine to have 1.7 G of RAM and 1 core, just
like the small EC2 instance.

2) do you have monitoring on your instances, checking for load/ram-usage,
iowait etc?
What goes on here is that my CPU usage per uwsgi process is around 13%
and the server load starts to get much higher. MySQL operations usually
take 6-8 ms, are optimized, and are not slowing the app down. Once the
load starts rising, the app slows down and more connections start to fail.

Jan-Philip:
Do you run the benchmark program on the same virtual machine as the web
stack? For yielding conclusive results, you certainly don't want to make
ab, nginx, and all other entities involved compete for the same CPU.
Yes, on both machines I try to run ab on the same machine, so that I am
profiling the app from within, taking away any network latency that could
affect the response rate. I tried running the test from a Linode machine
to an EC2 instance... same results.

Again, thank you for your support; I am persevering on this until I find
out what the issue is.
Best Regards,

Rakan AlHneiti
Find me on the internet: Rakan Alhneiti | @rakanalh | Rakan Alhneiti | alhneiti
GTalk: rakan.alhneiti at gmail.com
Mobile: +962-798-910 990

On Thu, Feb 21, 2013 at 11:00 AM, Jan-Philip Gehrcke
<jgehrcke at googlemail.com> wrote:

> Do you run the benchmark program on the same virtual machine as the web
> stack?? For yielding conclusive results, you certainly don't want to
> make ab, nginx, and all other entities involved compete for the same CPU.
>
> If yes, try running ab from a different machine in the same network
> (make sure your network is not the bottleneck here) and compare your
> results again.
>
> Cheers,
>
> Jan-Philip
>
> On 02/20/2013 09:13 PM, Rakan Alhneiti wrote:
>
>> Hello,
>>
>> I am running a django app with nginx & uwsgi on an amazon ec2 instance
>> and a vmware machine almost the same size as the ec2 one. Here's how i
>> run uwsgi:
>>
>> sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp \
>>     --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings \
>>     --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 \
>>     --harakiri=10 --max-requests=5000 --vacuum --master \
>>     --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499
>>
>> & nginx configurations:
>>
>> server {
>>     listen 80;
>>     server_name test.com;
>>
>>     root /www/python/apps/pyapp/;
>>
>>     access_log /var/log/nginx/test.com.access.log;
>>     error_log /var/log/nginx/test.com.error.log;
>>
>>     # https://docs.djangoproject.com/en/dev/howto/static-files/
>>     #   #serving-static-files-in-production
>>     location /static/ {
>>         alias /www/python/apps/pyapp/static/;
>>         expires 30d;
>>     }
>>
>>     location /media/ {
>>         alias /www/python/apps/pyapp/media/;
>>         expires 30d;
>>     }
>>
>>     location / {
>>         uwsgi_pass unix:///tmp/pyapp.socket;
>>         include uwsgi_params;
>>         proxy_read_timeout 120;
>>     }
>>
>>     # what to serve if upstream is not available or crashes
>>     #error_page 500 502 503 504 /media/50x.html;
>> }
>>
>> Here comes the problem.
>> When doing "ab" (ApacheBenchmark) on both machines i get the following
>> results (the vmware machine being almost the same size as the ec2
>> small instance):
>>
>> Amazon EC2:
>>
>> nginx version: nginx/1.2.6
>> uwsgi version: 1.4.5
>>
>> Concurrency Level:      500
>> Time taken for tests:   21.954 seconds
>> Complete requests:      5000
>> Failed requests:        126
>>    (Connect: 0, Receive: 0, Length: 126, Exceptions: 0)
>> Write errors:           0
>> Non-2xx responses:      4874
>> Total transferred:      4142182 bytes
>> HTML transferred:       3384914 bytes
>> Requests per second:    227.75 [#/sec] (mean)
>> Time per request:       2195.384 [ms] (mean)
>> Time per request:       4.391 [ms] (mean, across all concurrent requests)
>> Transfer rate:          184.25 [Kbytes/sec] received
>>
>> Vmware machine (CentOS 6):
>>
>> nginx version: nginx/1.0.15
>> uwsgi version: 1.4.5
>>
>> Concurrency Level:      1000
>> Time taken for tests:   1.094 seconds
>> Complete requests:      5000
>> Failed requests:        0
>> Write errors:           0
>> Total transferred:      30190000 bytes
>> HTML transferred:       28930000 bytes
>> Requests per second:    4568.73 [#/sec] (mean)
>> Time per request:       218.879 [ms] (mean)
>> Time per request:       0.219 [ms] (mean, across all concurrent requests)
>> Transfer rate:          26939.42 [Kbytes/sec] received
>>
>> As you can see... all requests on the ec2 instance fail with either
>> timeout errors or "Client prematurely disconnected". However, on my
>> vmware machine all requests go through with no problems. The other
>> thing is the difference in reqs/second i am doing on both machines.
>>
>> What am i doing wrong on ec2?
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Fri Feb 22 00:51:50 2013
From: nginx-forum at nginx.us (digitalpoint)
Date: Thu, 21 Feb 2013 19:51:50 -0500
Subject: 1.3.12 occasional segfaults
In-Reply-To: <201302211400.10853.vbart@nginx.com>
References: <201302211400.10853.vbart@nginx.com>
Message-ID: <4e94f18a662cc3052b1b6dbf00602781.NginxMailingListEnglish@forum.nginx.org>

Yeah, sorry... I didn't see that 1.3.13 came out the day prior. Either
way, glad whatever it was seems to have been fixed before I asked...
after upgrading to 1.3.13, the occasional segfaults stopped.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236409,236454#msg-236454

From yaoweibin at gmail.com Fri Feb 22 02:06:11 2013
From: yaoweibin at gmail.com (Weibin Yao)
Date: Fri, 22 Feb 2013 10:06:11 +0800
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130221200805.GT8912@reaktio.net>
References: <20130116151511.GS8912@reaktio.net> <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net>
Message-ID:

Use the patch I attached in this mail thread instead; don't use the pull
request patch, which is for tengine.

Thanks.

2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>

> On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > Yes. It should work for any request method.
> > >
> >
> > Great, thanks, I'll let you know how it works for me. Probably in two
> > weeks or so.
>
> Hi,
>
> Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:
>
> cc1: warnings being treated as errors
> src/http/ngx_http_request_body.c: In function
> 'ngx_http_read_non_buffered_client_request_body':
> src/http/ngx_http_request_body.c:506: error: implicit declaration of
> function 'ngx_http_top_input_body_filter'
> make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> make: *** [build] Error 2
>
> ngx_http_top_input_body_filter() cannot be found in any .c/.h files.
> Which other patches should I apply?
>
> Perhaps this?
> https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
>
> Thanks,
>
> -- Pasi
>
> [...]
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

--
Weibin Yao
Developer @ Server Platform Team of Taobao

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Fri Feb 22 06:54:54 2013
From: nginx-forum at nginx.us (mex)
Date: Fri, 22 Feb 2013 01:54:54 -0500
Subject: Fwd: nginx performance on Amazon EC2
In-Reply-To:
References:
Message-ID: <9c93e5f9ca9a557c105799ba2d58e737.NginxMailingListEnglish@forum.nginx.org>

now you *only* need to find the bottleneck :) maybe it's better to run
ab from a machine in the same network instead of running it on the same
machine, especially with one core.

Rakan Alhneiti Wrote:
-------------------------------------------------------
> Hello,
>
> Thank you all for your support.
>
> Mex:
> 1) is the setup of your vmware similar to your ec2 instance? i talk
> esp. about RAM/CPU-power here.
> Yes, i've set up the vmware machine to be 1.7 G in ram and 1 core just
> like the small ec2 instance.
>
> 2) do you have monitoring on your instances, checking for
> load/ram-usage, iowait etc?
> What goes on here is that my CPU usage per uwsgi process is around 13%
> and server load starts to get much higher. MySQL operations usually
> take 6-8 ms, which are optimized and not slowing the app down. once the
> load starts rising, the app slows down and more connections start to
> fail.
>
> Jan-Philip:
> Do you run the benchmark program on the same virtual machine as the
> web stack??
> For yielding conclusive results, you certainly don't want to make ab,
> nginx, and all other entities involved compete for the same CPU.
> Yes, on both machines i try to run ab on the same machine, so that i am
> profiling the app from within, taking away any network latency that can
> affect the response rate. I tried running the test from a linode
> machine to an EC2 instance... same results.
>
> Again, thank you for your support, i am persevering on this until i
> find out what the issue is.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236391,236459#msg-236459

From alexandernst at gmail.com Fri Feb 22 08:32:33 2013
From: alexandernst at gmail.com (Alexander Nestorov)
Date: Fri, 22 Feb 2013 09:32:33 +0100
Subject: Default error_page for multiple vhosts
Message-ID:

I'm trying to set a default error_page for my entire nginx server (as in,
for all vhosts). I'm using the following settings:

http {
    error_page 404 /var/www/default/404.html;

    server {
        root /var/www/mydomain.com/;
    }
}

But I'm getting this error:

[error] 16842#0: *1 open() "/var/www/mydomain.com/var/www/default/404.html"
failed (2: No such file or directory)

While I do understand the error, and I do understand why it's happening,
I can't understand why I can't tell nginx to use the default error_page
as an absolute path instead of appending it to the root of my vhost.

What I'm trying to achieve is being able to use the
/var/www/default/404.html error page from all my vhosts without having
to add it to every server{}. I think adding it for my 300 domains is
overkill.

I asked on Server Fault and I did get a semi-solution using includes, but
that solution still requires me to edit every single server{}, which I
really don't want to do. Is there any other way I could achieve what I'm
trying?

Original serverfault question:
http://serverfault.com/questions/481140/nginx-default-error-page

Regards!
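[Editor's note: for context on the error above, `error_page` takes an internal URI, not a filesystem path, so nginx resolves it against each vhost's own `root` (hence the concatenated path in the log). A minimal sketch of the include-based pattern mentioned in the post follows; the `/errors/` URI and the snippet's file name are illustrative assumptions, not taken from this thread.]

```nginx
# nginx.conf (http context): the error URI is inherited by every server
# that does not define its own error_page.
http {
    error_page 404 /errors/404.html;    # a URI, *not* /var/www/default/404.html

    server {
        root /var/www/mydomain.com/;
        include /etc/nginx/error-pages.conf;   # still one line per vhost
    }
}

# /etc/nginx/error-pages.conf: map the error URI onto the shared directory.
location /errors/ {
    root /var/www/default;   # serves /var/www/default/errors/404.html
    internal;                # not directly requestable by clients
}
```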
--
alexandernst

From mdounin at mdounin.ru Fri Feb 22 08:33:53 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 22 Feb 2013 12:33:53 +0400
Subject: Proxying non-ssl SMTP/POP to ssl SMTP/POP
In-Reply-To: <1d8e7ff6af46473c9c4f4cf9ee3d1f0c.NginxMailingListEnglish@forum.nginx.org>
References: <1d8e7ff6af46473c9c4f4cf9ee3d1f0c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130222083352.GC81985@mdounin.ru>

Hello!

On Wed, Feb 20, 2013 at 12:02:04PM -0500, huynq wrote:

> Hi everyone,
> I'm now using nginx to set up a proxying mail system in my company. The
> model is as below:
>
> Mail client <=========> Nginx proxy <===========> Mail server
>
> The stream between the mail client and nginx is a non-ssl connection
> (including the smtp and pop3 streams), whereas the stream between nginx
> and the mail server uses ssl. The reason I use this model is that I
> want to create a neutral node that can modify the emails before they
> are delivered to the mail server or client.
> However, I'm still stuck on the configuration of nginx to implement
> this model. So if you have any idea about how to configure this system,
> as well as its possibility with nginx, please help me.

nginx doesn't support talking to mail backends via ssl.

--
Maxim Dounin
http://nginx.com/support.html

From pasik at iki.fi Fri Feb 22 09:25:24 2013
From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=)
Date: Fri, 22 Feb 2013 11:25:24 +0200
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To:
References: <20130116151511.GS8912@reaktio.net> <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net>
Message-ID: <20130222092524.GV8912@reaktio.net>

On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Yao wrote:
> Use the patch I attached in this mail thread instead, don't use the
> pull request patch which is for tengine.
> Thanks.

Oh sorry, I missed that attachment. It seems to apply and build OK.
I'll start testing it.

Thanks!
-- Pasi

> 2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>
>
> > On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > > Yes. It should work for any request method.
> > >
> > > Great, thanks, I'll let you know how it works for me. Probably in
> > > two weeks or so.
> >
> > Hi,
> >
> > Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:
> >
> > cc1: warnings being treated as errors
> > src/http/ngx_http_request_body.c: In function
> > 'ngx_http_read_non_buffered_client_request_body':
> > src/http/ngx_http_request_body.c:506: error: implicit declaration of
> > function 'ngx_http_top_input_body_filter'
> > make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> > make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> > make: *** [build] Error 2
> >
> > ngx_http_top_input_body_filter() cannot be found from any .c/.h files..
> > Which other patches should I apply?
> >
> > Perhaps this?
> > https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
> >
> > Thanks,
> > -- Pasi
> >
> > > 2013/1/16 Pasi Kärkkäinen <pasik at iki.fi>
> > >
> > > > On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> > > > > This patch should work between nginx-1.2.6 and nginx-1.3.8.
> > > > > The documentation is here:
> > > > >
> > > > > ## client_body_postpone_sending ##
> > > > > Syntax: **client_body_postpone_sending** `size`
> > > > > Default: 64k
> > > > > Context: `http, server, location`
> > > > > If you specify the `proxy_request_buffering` or
> > > > > `fastcgi_request_buffering` to be off, Nginx will send the body
> > > > > to backend when it receives more than `size` data or the whole
> > > > > request body has been received. It could save the connection
> > > > > and reduce the IO number with backend.
> > > > >
> > > > > ## proxy_request_buffering ##
> > > > > Syntax: **proxy_request_buffering** `on | off`
> > > > > Default: `on`
> > > > > Context: `http, server, location`
> > > > > Specify the request body will be buffered to the disk or not.
> > > > > If it's off, the request body will be stored in memory and sent
> > > > > to backend after Nginx receives more than
> > > > > `client_body_postpone_sending` data. It could save the disk IO
> > > > > with large request body.
> > > > >
> > > > > Note that, if you specify it to be off, the nginx retry
> > > > > mechanism with unsuccessful response will be broken after you
> > > > > sent part of the request to backend. It will just return 500
> > > > > when it encounters such unsuccessful response. This directive
> > > > > also breaks these variables: $request_body,
> > > > > $request_body_file. You should not use these variables any
> > > > > more while their values are undefined.
> > > >
> > > > Hello,
> > > >
> > > > This patch sounds exactly like what I need as well!
> > > > I assume it works for both POST and PUT requests?
> > > >
> > > > Thanks,
> > > >
> > > > -- Pasi
> > > >
> > > > > Hello!
> > > > > @yaoweibin
> > > > >
> > > > > > If you are eager for this feature, you could try my patch:
> > > > > > https://github.com/taobao/tengine/pull/91. This patch has
> > > > > > been running in our production servers.
> > > > >
> > > > > what's the nginx version your patch based on?
> > > > > Thanks!
> > > > >
> > > > > On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao
> > > > > <yaoweibin at gmail.com> wrote:
> > > > >
> > > > > > I know nginx team are working on it. You can wait for it.
> > > > > > If you are eager for this feature, you could try my patch:
> > > > > > https://github.com/taobao/tengine/pull/91. This patch has
> > > > > > been running in our production servers.
> > > > > >
> > > > > > 2013/1/11 li zJay <zjay1987 at gmail.com>
> > > > > >
> > > > > > > Hello!
> > > > > > > is it possible that nginx will not buffer the client body
> > > > > > > before handle the request to upstream?
> > > > > > > we want to use nginx as a reverse proxy to upload very very
> > > > > > > big file to the upstream, but the default behavior of nginx
> > > > > > > is to save the whole request to the local disk first before
> > > > > > > handle it to the upstream, which make the upstream
> > > > > > > impossible to process the file on the fly when the file is
> > > > > > > uploading, results in much high request latency and
> > > > > > > server-side resource consumption.
> > > > > > > Thanks!
> > > > > >
> > > > > > --
> > > > > > Weibin Yao
> > > > > > Developer @ Server Platform Team of Taobao
> > >
> > > --
> > > Weibin Yao
> > > Developer @ Server Platform Team of Taobao
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> --
> Weibin Yao
> Developer @ Server Platform Team of Taobao
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org  Fri Feb 22 10:12:04 2013
From: francis at daoine.org (Francis Daly)
Date: Fri, 22 Feb 2013 10:12:04 +0000
Subject: Default error_page for multiple vhosts
In-Reply-To: 
References: 
Message-ID: <20130222101204.GQ32392@craic.sysops.org>

On Fri, Feb 22, 2013 at 09:32:33AM +0100, Alexander Nestorov wrote:

Hi there,

> I'm trying to set a default error_page for my entire nginx server (as
> in for all vhosts)

Based on the way that nginx configuration inheritance works, that is
unlikely to be usefully possible at http{} level when you also want to
add an extra "error_page 502" to a few domains.
(That's not "it can't work"; it's "even when you add it at http{} level, be aware that some later configuration changes may undo it in some cases".) > While I do understand the error, and I do understand why it's > happening, I can't understand why can't I tell NGINX to > user the default error_page as an absolute path instead of appending > it to the root of my vhost. What's not to understand? error_page takes a uri, not a filename. I guess that, up to now, no-one who wanted error_page to be able to take a filename provided a patch with a convincing explanation of why it would be useful to include. > What I'm trying to achieve is being able to user > /var/www/default/404.html error_page from all my vhosts without > having to add it to all server{}. I think adding it for my 300 domains > is an overkill. I think that if you have 300 domains, then you should already have a way to auto-generate the configuration. Just put the suggested configuration into the template and regenerate. (And if you *don't* have such a way, then you can probably do a one-off thing with some text processing language; and you should probably consider creating such a way, in order to make future "global" changes easier for you.) > I asked in serverfault and I did got a semi-solution using includes, > but this solution still requires me to edit every > single server{}, which I really don't want to. Find each line "server {". Add one line with "include", or add a few lines with the "error_page" and "location = /404 {}" configuration. Not a lot of sed. Run "nginx -t" to check for syntax errors. If you're going to use nginx, it is likely to be more comfortable for you if you adapt to its configuration method. What you see as "overkill" and "semi-solution" and "don't want to", I see as elegant and simple-to-read. > Is there any other way I could achieve what I'm trying? error_page which takes a filename -- not without patching. 
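The sed approach suggested a few paragraphs up can be sketched as follows. This is a rough illustration on a scratch directory, not a ready-made migration script: the vhost layout (one file per server) and the include path /etc/nginx/shared_error_pages.conf are hypothetical names, and on a real system you would work on a backup copy first.

```shell
# Rough sketch of the "not a lot of sed" approach: inject a shared
# include into every server{} block of every vhost file.
# Paths are hypothetical; try this on a scratch copy, never in place.
confdir=$(mktemp -d)
cat > "$confdir/example.conf" <<'EOF'
server {
    listen 80;
    server_name example.com;
}
EOF

# Add an include line right after each "server {" line (GNU sed).
sed -i 's|^\(server {\)|\1\n    include /etc/nginx/shared_error_pages.conf;|' \
    "$confdir"/*.conf

cat "$confdir/example.conf"
# On the real configuration, finish with "nginx -t" before reloading.
```

The shared file would then carry the common `error_page` and its matching `location`, so a later global change means editing one file and reloading.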
Given that you want a common error_page which uses the same uri in each
server (which is ok), but you don't want to use per-server configuration
to map that uri to a common filename -- not that I can see.

	f
-- 
Francis Daly        francis at daoine.org

From m6rkalan at gmail.com  Fri Feb 22 10:45:27 2013
From: m6rkalan at gmail.com (Mark Alan)
Date: Fri, 22 Feb 2013 10:45:27 +0000
Subject: Default error_page for multiple vhosts
In-Reply-To: 
References: 
Message-ID: <51274c4a.6164b40a.7d86.ffffa5ad@mx.google.com>

On Fri, 22 Feb 2013 09:32:33 +0100, Alexander Nestorov wrote:

> I'm trying to set a default error_page for my entire nginx server (as
> http {
>     error_page 404 /var/www/default/404.html;
>     server {
>         root /var/www/mydomain.com/;
>     }
> }
> Is there any other way I could achieve what I'm trying?

What about soft linking it into wherever you want it?

# in the OS
ln -s /var/www/default/404.html /var/www/mydomain.com/

# in Nginx config
error_page 404 /404.html;

Regards,

M.

From pasik at iki.fi  Fri Feb 22 10:50:52 2013
From: pasik at iki.fi (Pasi Kärkkäinen)
Date: Fri, 22 Feb 2013 12:50:52 +0200
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130222092524.GV8912@reaktio.net>
References: <20130116151511.GS8912@reaktio.net>
 <20130118083821.GA8912@reaktio.net>
 <20130221200805.GT8912@reaktio.net>
 <20130222092524.GV8912@reaktio.net>
Message-ID: <20130222105052.GW8912@reaktio.net>

On Fri, Feb 22, 2013 at 11:25:24AM +0200, Pasi Kärkkäinen wrote:
> On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Yao wrote:
> > Use the patch I attached in this mail thread instead, don't use the
> > pull request patch which is for tengine.
>
> Thanks.
>
> Oh sorry I missed that attachment. It seems to apply and build OK.
> I'll start testing it.
>

I added the patch on top of nginx 1.2.7 and enabled the following
options:

client_body_postpone_sending 64k;
proxy_request_buffering off;

after that connections through the nginx reverse proxy started failing
with errors like this:

[error] 29087#0: *49 upstream prematurely closed connection while
reading response header from upstream
[error] 29087#0: *60 upstream sent invalid header while reading
response header from upstream

And the services are unusable. Commenting out the two config options
above makes nginx happy again.

Any idea what causes that? Any tips how to troubleshoot it?

Thanks!

-- Pasi

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From lists at ruby-forum.com  Fri Feb 22 10:51:28 2013
From: lists at ruby-forum.com (Namson Mon)
Date: Fri, 22 Feb 2013 11:51:28 +0100
Subject: Updated hotlink protection with new Google Image Search
In-Reply-To: <5110889A.9050209@digitalhit.com>
References: <5110889A.9050209@digitalhit.com>
Message-ID: <0c2cf7c2ac7efc81643cd472e1c7d6f1@ruby-forum.com>

Hi Ian,

we've just published an extensive blog post about how to achieve this
kind of hotlinking protection against Google Images:
http://pixabay.com/en/blog/posts/hotlinking-protection-and-watermarking-for-google-32/

-- 
Posted via http://www.ruby-forum.com/.

From nginx-forum at nginx.us  Fri Feb 22 11:27:27 2013
From: nginx-forum at nginx.us (shumisha)
Date: Fri, 22 Feb 2013 06:27:27 -0500
Subject: Issue with auto subdomain and trailing slash
Message-ID: 

Hi All,

I am running into a weird issue, after having configured nginx 0.7.67
(comes with debian 6) for serving PHP pages, with automatic subdomains.
It works fine for the most part, but when requesting a url that
corresponds to an actual folder, but without a trailing slash, the
subdomain recognition fails.
Here is the start of the config:

server {
    listen 80;

    # base domain
    set $root_domain XXXX.net;
    server_name XXXX.net *.XXXX.net;

    if ($host ~* ^www\.XXXX\.net$) {
        rewrite ^(.*) $scheme://XXXX.net$1 permanent;
        break;
    }

    set $sub_domain "";
    if ($host ~* ^(.*)\.XXXX\.net$) {
        set $sub_domain $1;
    }

    if ($sub_domain) {
        # debugging instruction: figure out what's the value of $host
        rewrite ^ http://google.com?q=$host permanent;
        break;
        set $root_path /var/www/$root_domain/$sub_domain/public;
    }

    if ($sub_domain = "") {
        # debugging instruction: figure out what's the value of $host
        rewrite ^ http://google.com?q=$host permanent;
        break;
        set $root_path /var/www/$root_domain/www/public;
    }

    # if the directory doesn't exist, prevent access
    if (!-d $root_path) {
        return 403;
    }

    # if we have made it here, set the root to the computed directory
    root $root_path;

    # logging
    access_log /var/log/nginx/$sub_domain.$root_domain.access.log;
    error_log /var/log/nginx/$sub_domain.$root_domain.error.log;

    location / {
        index index.html index.php;

        # serve static files that exist without running other rewrite tests
        if (-f $request_filename) {
            expires 30d;
            break;
        }

        # send all non-existing file or directory requests to index.php
        if (!-e $request_filename) {
            rewrite ^/(.*) /index.php last;
        }
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    try_files $uri =404;
    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $root_path$fastcgi_script_name;
    }

    # optional authentication
    # would require to be put in its own server block
    # if only valid for some sub-domains
    #location / {
    #    auth_basic "Authentication Required";
    #    auth_basic_user_file $root_path/private/authfile;
    #}

    location ~ /\.ht {
        deny all;
    }
}

With this setup, you can access XXXX.net and any subdomain is served
based on the corresponding folder under web root.
All of that works very fine except in one case:

If you request http://subD1.XXXX.net/admin/, and /admin/ is a folder
under the subD1 web site, everything goes well, the index file
(index.php) is served.

Now if one requests http://subD1.XXXX.net/admin (without trailing
slash), the sub domain is not recognized anymore, and the root path is
set to the path to XXXX.net instead of that associated with subD1.

Now the real weird thing is that it is the $host variable that's wrong.
In the first case (request for a folder with trailing slash), $host
contains subD1.XXXX.net, and the regexp matches, the sub domain is
properly calculated. In the second case (no trailing slash), $host is
wrong: it contains XXXX.net instead of subD1.XXXX.net, and so the
regexp doesn't match and the sub domain is not recognized.

Please note that my question is not about trying to redirect urls with
or without trailing slash to its counterpart. My issue is with the
subdomain regexp not matching because the $host value is wrong.

Any pointer as to what's going on? Am I doing something wrong?

Thanks for any pointer and best regards

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236469,236469#msg-236469

From alexandernst at gmail.com  Fri Feb 22 11:42:17 2013
From: alexandernst at gmail.com (Alexander Nestorov)
Date: Fri, 22 Feb 2013 12:42:17 +0100
Subject: Default error_page for multiple vhosts
In-Reply-To: <51274c4a.6164b40a.7d86.ffffa5ad@mx.google.com>
References: <51274c4a.6164b40a.7d86.ffffa5ad@mx.google.com>
Message-ID: 

Thank you both for the replies :)

I already thought about soft links and some hack with grep+sed, but
that's not what I really wanted to ask. What I really meant to ask/say
is: it would be really useful to set some default rules that could be
overridden later on each server{}.

Example: It would be really useful (in my particular case) to set
error_page to some absolute path so that all server{} get error_page
automatically.
Then, if my domain 42 needs a custom error_page I could just add the error_page to that server{} and it would get overridden. This is (the way I see it) more elegant than re-generating server{} templates or doing ln -s. Think about http{} as a class and server{} as a derived class in any OOP language. -- alexandernst From guardian at planet-d.net Fri Feb 22 12:00:45 2013 From: guardian at planet-d.net (Gregory Pakosz) Date: Fri, 22 Feb 2013 13:00:45 +0100 Subject: Default error_page for multiple vhosts In-Reply-To: References: <51274c4a.6164b40a.7d86.ffffa5ad@mx.google.com> Message-ID: <51275DED.5030707@planet-d.net> On 2/22/13 12:42 PM, Alexander Nestorov wrote:
> Example:
>
> It would be really useful (in my particular case) to set error_page
> to some absolute path so that all server{}
> get error_page automatically. Then, if my domain 42 needs a custom
> error_page I could just add the error_page
> to that server{} and it will get overridden.

Alexander, I think I achieved what you're after:

- default 403, 404 and 503 error pages in /usr/local/share/www/
- custom 403, 404 and 503 error pages for each vhost in $document_root/

Typically, a vhost in /etc/nginx/sites-available/some.domain.org includes vhost.conf which includes errors.conf

Tell me if that helps. Feedback from others appreciated!
Gregory

errors.conf:

error_page 403 @403;
error_page 404 @404;
error_page 503 @503;

location @403 {
    try_files /403.html @403_fallback;
}
location = /403.html {
    if (-f $document_root/offline) {
        error_page 503 @offline;
        return 503;
    }
    return 404;
}
location @403_fallback {
    root /usr/local/share/www;
    try_files /403.html =403;
}

location @404 {
    try_files /404.html @404_fallback;
}
location = /404.html {
    if (-f $document_root/offline) {
        error_page 503 @offline;
        return 503;
    }
    return 404;
}
location @404_fallback {
    root /usr/local/share/www;
    try_files /404.html =404;
}

location @503 {
    try_files /503.html @503_fallback;
}
location = /503.html {
    if (-f $document_root/offline) {
        error_page 503 @offline;
        return 503;
    }
    return 404;
}
location @503_fallback {
    root /usr/local/share/www;
    try_files /50x.html =503;
}

--- offline.conf

location @offline {
    try_files /offline.html @503;
}
location = /offline {
    if (-f $document_root/offline) {
        error_page 503 @offline;
        return 503;
    }
    return 404;
}
location = /offline.html {
    if (-f $document_root/offline) {
        error_page 503 @offline;
        return 503;
    }
    return 404;
}

---- vhost.conf

include errors.conf;
include offline.conf;

location / {
    if (-f $document_root/offline) {
        error_page 503 @offline;
        return 503;
    }
    try_files $uri $uri/ /index.php;
}

-- ___________________________________________________________________ IRCNet /msg guardian | guardian at planet-d.net | Twitter @planetdnews From alexandernst at gmail.com Fri Feb 22 12:10:06 2013 From: alexandernst at gmail.com (Alexander Nestorov) Date: Fri, 22 Feb 2013 13:10:06 +0100 Subject: Default error_page for multiple vhosts In-Reply-To: <51275DED.5030707@planet-d.net> References: <51274c4a.6164b40a.7d86.ffffa5ad@mx.google.com> <51275DED.5030707@planet-d.net> Message-ID: Thank you for the example config Gregory! :) I'll try it (not sure if I'll be able to do it until monday) and I'll say if everything works as expected.
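[Editor's sketch, not part of the original thread: the "http{} as base class" behaviour Alexander asks for is, for error_page at least, how nginx already behaves — inheritance by replacement. Hostnames here are hypothetical.]

```nginx
# error_page set at http{} level is inherited by every server{} that does
# not set its own; a server{} that does set one replaces it wholesale.
http {
    error_page 404 /errors/404.html;          # the shared default

    server {
        server_name one.example.com;          # hypothetical
        # no error_page here: inherits /errors/404.html
    }

    server {
        server_name fortytwo.example.com;     # "domain 42"
        error_page 404 /custom/404.html;      # overrides the default
    }
}
```

The catch is that error_page takes a URI, resolved against each server's root, not an absolute filesystem path — which is why a shared on-disk page still needs a fallback root such as Gregory's /usr/local/share/www trick.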
Regards :) -- alexandernst From francis at daoine.org Fri Feb 22 12:56:51 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 22 Feb 2013 12:56:51 +0000 Subject: Issue with auto subdomain nd trailing slash In-Reply-To: References: Message-ID: <20130222125651.GR32392@craic.sysops.org> On Fri, Feb 22, 2013 at 06:27:27AM -0500, shumisha wrote: > I am running into a weird issue, after having configured nginx 0.7.67 (comes > with debian 6) for serving PHP pages, with automatic subdomains. > It works fine for the most part, but when requesting a url that correspond > to an actual folder, but without a trailing slash, the subdomain recognition > fails. What is probably happening is that when you ask for /dir, you get a http redirect to /dir/. That redirect must include the full url, including the hostname. And nginx has to choose which hostname to use. And it doesn't choose the one that you want it to choose. What is the output of curl -i http://subD1.XXXX.net/admin ? I guess it will show http://XXXX.net/admin/. You then separately request http://XXXX.net/admin/, and therefore shouldn't be surprised that the subdomain recognition fails, because in this request there is no subdomain. Possibly setting server_name_in_redirect (http://nginx.org/r/server_name_in_redirect) to "off" will work for you? The docs say that default is off, but I think that may have been different in the 0.7 series. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Feb 22 13:06:15 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 22 Feb 2013 13:06:15 +0000 Subject: Default error_page for multiple vhosts In-Reply-To: References: <51274c4a.6164b40a.7d86.ffffa5ad@mx.google.com> Message-ID: <20130222130615.GS32392@craic.sysops.org> On Fri, Feb 22, 2013 at 12:42:17PM +0100, Alexander Nestorov wrote: Hi there, > What I really ment to ask/say is: it will be really usefull to set > some default rules that could be overriden later > on each server{}. 
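[Editor's sketch, not part of the original thread: Francis's server_name_in_redirect suggestion above, in config form, using the thread's placeholder domain. As he notes, the default for this directive differed in the 0.7 branch.]

```nginx
server {
    listen 80;
    server_name XXXX.net *.XXXX.net;

    # build directory redirects (/admin -> /admin/) from the Host header
    # the client actually sent, instead of from the first server_name
    server_name_in_redirect off;

    root /var/www/site;
}
```

With this in effect, a request for http://subD1.XXXX.net/admin (an existing directory) redirects to http://subD1.XXXX.net/admin/ rather than dropping the subdomain.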
That is already the case, within the limits of the nginx configuration inheritance model (which is approximately: inheritance is by replacement, or not at all) and valid directive contexts.

> Example:
>
> It would be really useful (in my particular case) to set error_page
> to some absolute path so that all server{}
> get error_page automatically. Then, if my domain 42 needs a custom
> error_page I could just add the error_page
> to that server{} and it will get overridden.

From the nginx configuration inheritance point of view, that already happens. Except that error_page takes a URI, not a filename.

> This is (the way I see it) more elegant than re-generating server{}
> templates or doing ln -s.
> Think about http{} as a class and server{} as a derived class in any
> OOP language.

Yes, that is how it works already. What you want is an alternative directive which is like error_page but takes a filename. Nothing to do with changing the inheritance model. And therefore conceptually much simpler to get into the code base, if someone cares enough to write it. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Feb 22 13:39:46 2013 From: nginx-forum at nginx.us (shumisha) Date: Fri, 22 Feb 2013 08:39:46 -0500 Subject: Issue with auto subdomain nd trailing slash In-Reply-To: <20130222125651.GR32392@craic.sysops.org> References: <20130222125651.GR32392@craic.sysops.org> Message-ID: <3c758ca1daa86c195d5f3d6e1a1cb8e6.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Thanks a lot for your message, I think I got things under control thanks to you! You're right, there's indeed a 301 thrown by nginx from http://subdomain.xxxx.net/admin to http://xxxx.net/admin/ (as shown by the curl output). I had tried setting server_name_in_redirect to off before, to no avail, but I tried again, just in case.
I looked up the documentation again, and I realized:

- it's probably not active on my version, as nginx still uses the first server in the server_name directive regardless of the directive value. However, it's a valid config value
- but that got me thinking and I simply switched the order of the names, so that it reads: server_name *.xxxx.net xxxx.net;

So now the automatic redirect still happens (I think it should be configurable, maybe it is) but by placing the *.xxxx.net name first in the server_name directive, the sub domain is preserved and the redirect now happens with the correct host. Thanks again for your help :) Rgds

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236469,236477#msg-236477 From rakan.alhneiti at gmail.com Fri Feb 22 15:10:49 2013 From: rakan.alhneiti at gmail.com (Rakan Alhneiti) Date: Fri, 22 Feb 2013 18:10:49 +0300 Subject: Fwd: nginx performance on Amazon EC2 In-Reply-To: <9c93e5f9ca9a557c105799ba2d58e737.NginxMailingListEnglish@forum.nginx.org> References: <9c93e5f9ca9a557c105799ba2d58e737.NginxMailingListEnglish@forum.nginx.org> Message-ID: My requests, according to my profiling, take around 0.065 seconds to execute. I am really unsure of what happens after the response leaves the django side and uwsgi starts handling it with nginx. Is there a way to do some profiling at the uwsgi or nginx level? Best Regards, *Rakan AlHneiti* Find me on the internet: Rakan Alhneiti | @rakanalh | Rakan Alhneiti | alhneiti ----- GTalk rakan.alhneiti at gmail.com ----- Mobile: +962-798-910 990 On Fri, Feb 22, 2013 at 9:54 AM, mex wrote: > now you **only** need to find the bottleneck :) > > maybe it's better to run ab from a machine in the same network > instead of running on the same machine, esp. with one core. > > Rakan Alhneiti Wrote: > ------------------------------------------------------- > > Hello, > > > > Thank you all for your support. > > > > *Mex: * > > *1) is the setup of your vmware similar to your ec2 - instance?
i talk > > esp. > > about RAM/CPU-power here.* > > Yes, i've setup the vmware machine to be 1.7 G in ram and 1 core just > > like > > the small ec2 instance. > > > > *2) do you have a monitoring on your instances, checking for > > load/ram-zusage, iowait etc?* > > What goes on here is that my CPU usage per uwsgi process is around 13% > > and > > server load starts to get much higher. MySQL operations usually take > > 6-8 ms > > which are optimized and not slowing the app down. once the load starts > > rising, the app slows down and more connections start to fail. > > > > *Jan-Philip:* > > *Do you run the benchmark program on the same virtual machine as the > > web > > stack?? For yielding conclusive results, you certainly don't want to > > make > > ab, nginx, and all other entities involved compete for the same CPU. > > *Yes, on both machines i try to run ab on the same machine. So that i > > am > > profiling the app from within taking away any network latency that can > > affect the response rate. I tried running the test from a linode > > machine to > > an EC2 instance... same results. > > > > Again, thank you for your support, i am persevering on this until i > > find > > out what the issue is. > > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,236391,236459#msg-236459 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Fri Feb 22 20:11:26 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 22 Feb 2013 20:11:26 +0000 Subject: How to remove the "IF" in this fcgi config In-Reply-To: <512670C8.40208@wildgooses.com> References: <51265DE5.4070209@wildgooses.com> <93E666FD-1A14-496A-BE1D-33DEB9EBB7F8@sysoev.ru> <512670C8.40208@wildgooses.com> Message-ID: <20130222201126.GT32392@craic.sysops.org> On Thu, Feb 21, 2013 at 07:08:56PM +0000, Ed W wrote:
> On 21/02/2013 17:54, Igor Sysoev wrote:

Hi there,

> > location ~ ^(?<script_name>.+\.php)(?<path_info>/|$) {
> Can I ask you to confirm the correction of a typo in your answer. Do I
> want this:
>
> ....(?<path_info>.*) {

You probably want

location ~ ^(?<script_name>.+\.php)(?<path_info>/.*|$) {

because you want to match /X.php or /X.php/X but not /X.phpX. f -- Francis Daly francis at daoine.org From nouiz at nouiz.org Fri Feb 22 20:21:31 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Fri, 22 Feb 2013 15:21:31 -0500 Subject: Serve different static page depending of the server load Message-ID: Hi, We are planning to release a new web site shortly and we expect to get slash-dotted. Our website is hosted with nginx, but I currently set the images of the main page on Amazon S3 as it won't be able to serve them if we get slash-dotted as planned. To lower our cost, is there a way to tell nginx to serve, for index.html, another file like index_local.html when the load of the server is low, and to serve index_aws.html when the load is high? index_local.html would contain URLs for images and an applet on the same nginx file server, but index_aws.html would set their URLs to S3. If that is not possible, is there another way to do that? I don't want to do the switch manually as we never know when the crowd comes. thanks Frédéric -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Fri Feb 22 21:16:03 2013 From: nginx-forum at nginx.us (jims) Date: Fri, 22 Feb 2013 16:16:03 -0500 Subject: Reverse proxy and rewrite Message-ID: nginx 1.2.7 on CentOS 6.3. Reverse-proxy with two upstream servers. One server should be default, the other should only be used when called from a link on a specific web server. I have the reverse proxy working fine for the default upstream, but not quite right on the second, site-specific upstream. Ignoring that I am also rewriting for http to force https (yeah, I know, it's discouraged to go encrypted between the nginx proxy and its upstreams, but it's a corporate security thing) I am using a two server block configuration, one server name for the default and another for the site-specific server. The site-specific server will link to the default server URL but should rewrite to the site-specific upstream's proxy's URL. It seems to work - not sure if it gets stuck in the proxy or on the upstream for the site-specific setup -, but it was suggested that a single server, selecting between upstream servers, would be preferable. I would like to know how that would be configured - just a quick overview or link to a documentation page would help a lot. If I can throw together a quick test of a single-server-block reverse-proxy config with multiple upstreams which are selected based on calling server, I could test to see if it's my multi-server-block setup or a problem on the upstream. Thanks! Jim. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236487,236487#msg-236487 From vbart at nginx.com Fri Feb 22 22:05:56 2013 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Sat, 23 Feb 2013 02:05:56 +0400 Subject: How to remove the "IF" in this fcgi config In-Reply-To: <20130222201126.GT32392@craic.sysops.org> References: <51265DE5.4070209@wildgooses.com> <512670C8.40208@wildgooses.com> <20130222201126.GT32392@craic.sysops.org> Message-ID: <201302230205.56443.vbart@nginx.com> On Saturday 23 February 2013 00:11:26 Francis Daly wrote:
> On Thu, Feb 21, 2013 at 07:08:56PM +0000, Ed W wrote:
> > On 21/02/2013 17:54, Igor Sysoev wrote:
> Hi there,
>
> > > location ~ ^(?<script_name>.+\.php)(?<path_info>/|$) {
> >
> > Can I ask you to confirm the correction of a typo in your answer. Do I
> > want this:
> > ....(?<path_info>.*) {
>
> You probably want
>
> location ~ ^(?<script_name>.+\.php)(?<path_info>/.*|$) {
>
> because you want to match /X.php or /X.php/X but not /X.phpX.

IMHO,

location ~ ^(?<script_name>.+\.php)(?<path_info>/.*)?$ {

looks better.

% pcretest -b -d -m -s+
PCRE version 8.30 2012-02-04

re> !^(?<script_name>.+\.php)(?<path_info>/.*|$)!
Memory allocation (code space): 42
Memory allocation (JIT code): 1739
------------------------------------------------------------------
  0  38 Bra
  3     ^
  4  15 CBra 1
  9     Any+
 11     .php
 19  15 Ket
 22   9 CBra 2
 27     /
 29     Any*
 31   4 Alt
 34     $
 35  13 Ket
 38  38 Ket
 41     End
------------------------------------------------------------------
Capturing subpattern count = 2
Named capturing subpatterns:
  path_info     2
  script_name   1
Options: anchored
No first char
Need char = 'p'
Subject length lower bound = 5
No set of starting bytes
data>
re> !^(?<script_name>.+\.php)(?<path_info>/.*)?$!
Memory allocation (code space): 40
Memory allocation (JIT code): 1732
------------------------------------------------------------------
  0  36 Bra
  3     ^
  4  15 CBra 1
  9     Any+
 11     .php
 19  15 Ket
 22     Brazero
 23   9 CBra 2
 28     /
 30     Any*
 32   9 Ket
 35     $
 36  36 Ket
 39     End
------------------------------------------------------------------
Capturing subpattern count = 2
Named capturing subpatterns:
  path_info     2
  script_name   1
Options: anchored
No first char
Need char = 'p'
Subject length lower bound = 5
No set of starting bytes

wbr, Valentin V.
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Feb 23 00:14:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 23 Feb 2013 04:14:31 +0400 Subject: Reverse proxy and rewrite In-Reply-To: References: Message-ID: <20130223001431.GH81985@mdounin.ru> Hello! On Fri, Feb 22, 2013 at 04:16:03PM -0500, jims wrote: > nginx 1.2.7 on CentOS 6.3. Reverse-proxy with two upstream servers. One > server should be default, the other should only be used when called from a > link on a specific web server. > > I have the reverse proxy working fine for the default upstream, but not > quite right on the second, site-specific upstream. Ignoring that I am also > rewriting for http to force https (yeah, I know, it's discouraged to go > encrypted between the nginx proxy and its upstreams, but it's a corporate > security thing) I am using a two server block configuration, one server name > for the default and another for the site-specific server. The site-specific > server will link to the default server URL but should rewrite to the > site-specific upstream's proxy's URL. > > It seems to work - not sure if it gets stuck in the proxy or on the upstream > for the site-specific setup -, but it was suggested that a single server, > selecting between upstream servers, would be preferable. I would like to > know how that would be configured - just a quick overview or link to a > documentation page would help a lot. > > If I can throw together a quick test of a single-server-block reverse-proxy > config with multiple upstreams which are selected based on calling server, I > could test to see if it's my multi-server-block setup or a problem on the > upstream. Normal nginx aproach is to use distinct server blocks for servers with different configuration. 
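[Editor's sketch, not part of the original thread: Maxim's "distinct server blocks" approach for the reverse-proxy question, with all names and addresses hypothetical and TLS directives omitted for brevity.]

```nginx
upstream default_backend  { server 192.0.2.10:8080; }
upstream specific_backend { server 192.0.2.11:8080; }

# the default site
server {
    listen 80;
    server_name app.example.com;
    location / { proxy_pass http://default_backend; }
}

# the site-specific entry point gets its own server block;
# nginx selects it per request via the Host header
server {
    listen 80;
    server_name site-a.example.com;
    location / { proxy_pass http://specific_backend; }
}
```

Keeping one server block per configuration avoids rewrite tricks that inspect the Referer or calling server inside a single block.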
-- Maxim Dounin http://nginx.com/support.html From agentzh at gmail.com Sat Feb 23 07:12:30 2013 From: agentzh at gmail.com (agentzh) Date: Fri, 22 Feb 2013 23:12:30 -0800 Subject: [ANN] ngx_openresty devel version 1.2.7.1 released In-Reply-To: References: Message-ID: Hello! I am happy to announce that the new development version of ngx_openresty, 1.2.7.1, is now released: http://openresty.org/#Download Below is the complete change log for this release, as compared to the last (stable) release, 1.2.6.6:

* upgraded the Nginx core to 1.2.7.
    * see for changes.
* upgraded LuaJIT 2.0 to 2.0.1.
    * see for changes.
* upgraded LuaNginxModule to 0.7.16.
    * optimize: removed the unused "size" field and related computations from the script engine for the "ngx.re" API.
    * optimize: saved a little memory in the script engine for the "ngx.re" API.

OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From francis at daoine.org Sat Feb 23 13:41:14 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Feb 2013 13:41:14 +0000 Subject: How to remove the "IF" in this fcgi config In-Reply-To: <201302230205.56443.vbart@nginx.com> References: <51265DE5.4070209@wildgooses.com> <512670C8.40208@wildgooses.com> <20130222201126.GT32392@craic.sysops.org> <201302230205.56443.vbart@nginx.com> Message-ID: <20130223134114.GU32392@craic.sysops.org> On Sat, Feb 23, 2013 at 02:05:56AM +0400, Valentin V.
Bartenev wrote:
> On Saturday 23 February 2013 00:11:26 Francis Daly wrote:
> > location ~ ^(?<script_name>.+\.php)(?<path_info>/.*|$) {
> location ~ ^(?<script_name>.+\.php)(?<path_info>/.*)?$ {
>
> looks better.

Agreed. And it means I don't have to think about whether AB|C groups as (AB)|(C) or (A)(B|C) in this regex implementation :-)

> % pcretest -b -d -m -s+

Using smaller Memory allocation numbers is an extra bonus. Cheers, f -- Francis Daly francis at daoine.org From pablo.platt at gmail.com Sat Feb 23 15:41:15 2013 From: pablo.platt at gmail.com (pablo platt) Date: Sat, 23 Feb 2013 17:41:15 +0200 Subject: deb package with websocket support Message-ID: Hi, Is there a 1.3.13 deb package for ubuntu with websocket support? The ppa has 1.3.12 https://launchpad.net/~nginx/+archive/development Is this ppa official and can it be trusted? Is it safe to use 1.3.13 in production as a replacement for the package from the ubuntu official repository, or is it considered still experimental? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From m6rkalan at gmail.com Sat Feb 23 21:11:06 2013 From: m6rkalan at gmail.com (Mark Alan) Date: Sat, 23 Feb 2013 21:11:06 +0000 Subject: Problem with auth_basic + proxy_pass + transmission-daemon Message-ID: <5129306c.a567b40a.3db1.701a@mx.google.com> Hello list, While using nginx 1.3.12 + transmission-daemon 2.77 + Ubuntu 12.04:

# /etc/transmission-daemon/settings.json
...
"rpc-bind-address": "127.0.0.1",
"rpc-port": 9091,
"rpc-url": "/transmission/",
...

ls -l /etc/nginx/.htpasswdtrans
-rw-r----- 1 root www-data 64 ...
/etc/nginx/.htpasswdtrans Trying to browse to: https://example.localdomain/transmission WORKS IF: location /transmission { proxy_pass http://127.0.0.1:9091/transmission; } DOES NOT WORK IF: location /transmission { auth_basic "Restricted Area"; auth_basic_user_file .htpasswdtrans; proxy_pass http://127.0.0.1:9091/transmission; } AND GIVES THESE ERRORS: ==> /var/log/nginx/access.log <== 192.168.0.70 - - [23/Feb/2013:20:38:19 +0000] "POST /transmission/rpc HTTP/1.1" 302 158 "https://example.localdomain/transmission/web/" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:19.0) Gecko/20100101 Firefox/19.0" 192.168.0.70 - - [23/Feb/2013:20:38:19 +0000] "{\x22method\x22:\x22session-get\x22}" 400 170 "-" "-" [error] 6012#0: *799 no user/password was provided for basic authentication, client: 192.168.0.70, server: example.localdomain, request: "GET /transmission/web/style/transmission/images/logo.png HTTP/1.1", host: "example.localdomain", referrer: "https://example.localdomain/transmission/web/style/transmission/common.css" ==> /var/log/nginx/error.log <== 2013/02/23 20:38:19 [error] 6012#0: *799 no user/password was provided for basic authentication, client: 192.168.0.70, server: example.localdomain, request: "GET /transmission/web/style/transmission/images/logo.png HTTP/1.1", host: "example.localdomain", referrer: "https://example.localdomain/transmission/web/style/transmission/common.css" Note: Adding the following to the 'location /transmission', or to the parent server {} did not help: proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; Any ideas on how to make 'auth_basic' work? Thank you. M. 
From ianevans at digitalhit.com Sat Feb 23 22:07:33 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Sat, 23 Feb 2013 17:07:33 -0500 Subject: Updated hotlink protection with new Google Image Search In-Reply-To: <0c2cf7c2ac7efc81643cd472e1c7d6f1@ruby-forum.com> References: <5110889A.9050209@digitalhit.com> <0c2cf7c2ac7efc81643cd472e1c7d6f1@ruby-forum.com> Message-ID: <51293DA5.4020404@digitalhit.com> On 22/02/2013 5:51 AM, Namson Mon wrote: > Hi Ian, we've just published an extensive blog post about how to achieve > this kind of hotlinking protection against Google Images: > > http://pixabay.com/en/blog/posts/hotlinking-protection-and-watermarking-for-google-32/ > Thanks for the link. I like the ideas in the post. Will have to see how I could integrate them with PHP file we use to locate the img in our database for the redirect. Great to see the traffic bounce back. From lists at ruby-forum.com Sat Feb 23 23:02:22 2013 From: lists at ruby-forum.com (Brian H.) Date: Sun, 24 Feb 2013 00:02:22 +0100 Subject: shell environment variables in "include"-directive not working In-Reply-To: <20100513002929.GP76989@mdounin.ru> References: <4BEB3071.9030108@koppzu.de> <20100513002929.GP76989@mdounin.ru> Message-ID: <2cdd70ee75de103d1d2a2518fa0c71e5@ruby-forum.com> Just wanted to respond that I had the similar need, and hopefully a better solution. We too have dev, staging, prod, and the configurations are similar, but not identical. We have hundreds of servers, and the variances in files means we need to have a much longer runway for new sysadm recruits to get up to speed, and increases the opportunity for errors caused by divergence - especially in active/active or active/passive configurations in production. Next - the link to sed which is a HORRIBLE idea to my OCD brain. sed is a line parser, it fundamentally doesn't understand the hierarchy of an nginx conf file(s). 
The opportunity for something to go horribly horribly wrong and be very difficult to troubleshoot (especially with high degrees of automation/scripting in other areas of our platform) scares the crap out of me. Using sed to parse old style unix files where everything is on a single line and all lines are identical = great, using it to parse a nested configuration file where each line is different is a horrid idea (even with excellent naming conventions). I am actually not aware of any valid NGINX hierarchy parser, and since it uses a very non standard, highly complex syntax I think it would be reasonably challenging to write one which could read, modify, write back the same file with comments, etc. (I bet a lot of people do) AND -- something as a simple __HOSTNAME__ or #IFHOSTNAME would be *amazingly* useful for a variety of situations. I also wanted to share to others who may also have the same issues why user defined environment variables are a horrible idea too and why the devs are so resistant to them. When trying to hot-swap executables since the environment won't carry across pids -- this is a cool feature nginx has, and using environment variables would break that. So stop asking for environment variables -- instead we should be thinking about a few well known variables that can be interpolated at config parsing time would be useful for those of us with hundreds of servers to manage the 1-2% variances between each servers without needing an elaborate custom build script. one config file to bind them all and KISS. **SO** I may write a plugin to do that in the future, but alas, no time. So I submit the slightly ghetto work around I devised. First use host file trickery (this is IMHO pretty common for dev/prod/staging) to alias well known role based names "ex: api, www, etc" Use symlinking with hostname to include the proper files based on hostname. Modify your init.d script so it links, or touches empty files when a corresponding file doesn't exist. 
I'm not going to provide examples here because inevitably your environment will be different than mine, but I can tell you that nginx will load an empty (zero byte) include file with no issues. Of course getting environment variables like hostname in a shell script is trivial. So when you bake it all together, in the nginx.conf file:

include "some-role.conf";
include "lotsofcustom-roles/*.conf";
include "yetanother-role.conf";

** those includes point at symlinked or zero byte files. In the /etc/init.d/nginx script do something like:

/bin/rm -f "$NGINXROOT/conf/some-role.conf"
/bin/rm -f "$NGINXROOT/conf/lotsofcustom-roles"
/bin/rm -f "$NGINXROOT/conf/yetanother-role.conf"

if [ -f "$NGINXROOT/conf/some-role-$HOSTNAME.conf" ] ; then
    ln -s "$NGINXROOT/conf/some-role-$HOSTNAME.conf" "$NGINXROOT/conf/some-role.conf"
else
    touch "$NGINXROOT/conf/some-role.conf"
fi

if [ -d "$NGINXROOT/conf/lotsofcustom-roles-$HOSTNAME" ] ; then
    ln -s "$NGINXROOT/conf/lotsofcustom-roles-$HOSTNAME" "$NGINXROOT/conf/lotsofcustom-roles"
else
    mkdir "$NGINXROOT/conf/lotsofcustom-roles"
    touch "$NGINXROOT/conf/lotsofcustom-roles/nothing-to-see-here.conf"
fi

if [ -f "$NGINXROOT/conf/yetanother-role-$HOSTNAME.conf" ] ; then
    ln -s "$NGINXROOT/conf/yetanother-role-$HOSTNAME.conf" "$NGINXROOT/conf/yetanother-role.conf"
else
    touch "$NGINXROOT/conf/yetanother-role.conf"
fi

Obviously this is highly dependent on exactly what you want to accomplish, but I think it strikes a much nicer balance than using a sed machete to hack through a config file with regular expressions it doesn't understand and accidentally clobbering something you didn't intend to. Regards, -Brian Horakh Chief Technical Guy anyCommerce Maxim Dounin wrote in post #911875:
> Hello!
>
> On Thu, May 13, 2010 at 12:49:21AM +0200, Markus Grobelin wrote:
>
>> Hy everybody,
>> i'm doing my first steps with nginx/0.8.36 and trying to get *NIX
>> shell environment variables working inside the configuration files.
>> Sadly, it's seems they aren't working inside the >> "include"-directive! :( > > nginx doesn't have syntax for expanding environment variables in > configuration file. Syntax $var used for runtime per-request > variables (supported by some directives, support explicitly noted > in directive descriptions). > > [...] > >> The [emerg] indicates, that the $INSTANCE environment variable isn't >> expanded, whereas the "user" and "pid" directive doesn't raise an >> exception?? > > Because there is no syntax error in user "$USER" and config parser > has nothing against it. As you aren't running as root nginx > just prints warning about being non-root and forgets about it. > Under root you should see: > > [emerg]: getpwnam("$USER") failed in /path/to/nginx.conf:line > > (unless you actually have "$USER" in your /etc/passwd) > > Similar thing with pid. It's syntactically correct and will only > produce error when nginx will try to create pid file. If you > happen to cleanup other critical config errors you should see > something like this on startup: > > [emerg]: open() "/nginx/$INSTANCE/run/nginx.pid" failed (2: No such file > or directory) > > (again, unless you actually have "/nginx/$INSTANCE/run/" directory) > > Maxim Dounin -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Sat Feb 23 23:32:45 2013 From: nginx-forum at nginx.us (refghbn) Date: Sat, 23 Feb 2013 18:32:45 -0500 Subject: mod_auth_request + php5-fpm gives error 504 on POST requests, GET requests are okay Message-ID: <17611a485460ae3b870c75662b6680be.NginxMailingListEnglish@forum.nginx.org> Setup: php5-fpm listening on socket + nginx 1.2.6 Wanted to use mod_auth_request for www authentication. Works fine for GET requests, POST requests time out with error 504. 
Error log: 2013/02/24 00:13:03 [error] 24004#0: *13 auth request unexpected status: 504 while sending to client, client: 80.187.106.XXX, server: xxx.net, request: "POST /test.php HTTP/1.1", host: "xxx.net", referrer: "https://xxx.net/test.php" With GET, as stated, and when testing the subrequest script directly, everything works fine. Gives 201/401 header, empty body as it should. location ~ \.php$ { root /var/www; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_intercept_errors on; error_page 404 /index.html; fastcgi_param DOCUMENT_ROOT /var/www; fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name; fastcgi_param PATH_TRANSLATED /var/www$fastcgi_script_name; include /etc/nginx/fastcgi.conf; auth_request /authenticator/www.php; } location = /authenitcator/www.php { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name; fastcgi_param PATH_TRANSLATED /var/www$fastcgi_script_name; } Does anyone have an idea? /refghbn Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236524,236524#msg-236524 From lists at ruby-forum.com Sun Feb 24 02:42:46 2013 From: lists at ruby-forum.com (Emilian Robert Vicol) Date: Sun, 24 Feb 2013 03:42:46 +0100 Subject: Updated hotlink protection with new Google Image Search In-Reply-To: <5110889A.9050209@digitalhit.com> References: <5110889A.9050209@digitalhit.com> Message-ID: <202245706e53b4c5ff3f580f26ca9fa6@ruby-forum.com> for those using wordpress, WP-PICShield can be helpful, do almost everything you want or not :) - Pass-Through Images Request - Caching Support, - Custom image transprency - Anti-IFRAME Protection, - Custom PNG watermark - HostName over images as url and/or in QR-BarCode !!! 
- Protection against unauthorized requests
- Redirect direct-link to: attachment, single/gallery, or home
- Allow Online Translators
- Avoid memory errors for big files
- Allow share buttons for social sites: Facebook, Pinterest, Tumblr, Twitter, Google Plus
- Allow WordPress via RPC and Twitter via OAuth
- Allow remote IP list
- Manual Clear Cache script to avoid the PHP execution limit
- CDN tools and helps !!!

Unfortunately only .htaccess rules, not nginx.

Attachments: http://www.ruby-forum.com/attachment/8169/banner-772x250.jpg -- Posted via http://www.ruby-forum.com/.

From ianevans at digitalhit.com Sun Feb 24 06:37:41 2013 From: ianevans at digitalhit.com (Ian M. Evans) Date: Sun, 24 Feb 2013 01:37:41 -0500 Subject: Updated hotlink protection with new Google Image Search In-Reply-To: <0c2cf7c2ac7efc81643cd472e1c7d6f1@ruby-forum.com> References: <5110889A.9050209@digitalhit.com> <0c2cf7c2ac7efc81643cd472e1c7d6f1@ruby-forum.com> Message-ID: On Fri, February 22, 2013 5:51 am, Namson Mon wrote:

> Hi Ian, we've just published an extensive blog post about how to achieve
> this kind of hotlinking protection against Google Images:

Just curious... unless I missed it in the article... how are you adding the GET param for visitors, but not for bots?

From nginx-forum at nginx.us Sun Feb 24 14:41:38 2013 From: nginx-forum at nginx.us (jstrybis) Date: Sun, 24 Feb 2013 09:41:38 -0500 Subject: Problem with proxy_set_header $ssl_client_cert Message-ID: <6f06b644cc2d48619e33804c75063586.NginxMailingListEnglish@forum.nginx.org> Hello, I am having an issue while verifying client SSL certificates. Everything works fine until I attempt to forward the cert to the upstream. Once I add a line similar to the following in my location block, all requests return a 400 Bad Request error.
> proxy_set_header X-SSL-Client_Cert $ssl_client_cert;

(I've also tried $ssl_client_raw_cert, but the docs say "[$ssl_client_cert] is intended for the use in the proxy_set_header directive".)

Here is my entire location block:

location @unicorn {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-SSL-Client-Cert $ssl_client_cert;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
}

Originally I was using add_header X-SSL-Client-Cert in the server block, which did not throw a 400, but my upstream app was not seeing the header. Once I remove the proxy_set_header line, the server works as expected: requests with a valid cert get passed through, while unauthenticated requests get a 403 (this is done by checking $ssl_client_verify). Am I missing something obvious? Any help would be very appreciated. Thank you.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236546,236546#msg-236546

From mdounin at mdounin.ru Sun Feb 24 17:53:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 24 Feb 2013 21:53:47 +0400 Subject: mod_auth_request + php5-fpm gives error 504 on POST requests, GET requests are okay In-Reply-To: <17611a485460ae3b870c75662b6680be.NginxMailingListEnglish@forum.nginx.org> References: <17611a485460ae3b870c75662b6680be.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130224175347.GP81985@mdounin.ru> Hello! On Sat, Feb 23, 2013 at 06:32:45PM -0500, refghbn wrote:

> Setup: php5-fpm listening on a socket + nginx 1.2.6
>
> I wanted to use mod_auth_request for www authentication. It works fine for GET requests; POST requests time out with error 504.

By "mod_auth_request" do you mean the auth request module, as available from http://mdounin.ru/hg/ngx_http_auth_request_module?
> Error log:
>
> 2013/02/24 00:13:03 [error] 24004#0: *13 auth request unexpected status: 504 while sending to client, client: 80.187.106.XXX, server: xxx.net, request: "POST /test.php HTTP/1.1", host: "xxx.net", referrer: "https://xxx.net/test.php"
>
> With GET, as stated, and when testing the subrequest script directly, everything works fine. Gives 201/401 header, empty body as it should.
>
> location ~ \.php$ {
>     root /var/www;
>     fastcgi_pass unix:/var/run/php5-fpm.sock;
>     fastcgi_index index.php;
>     fastcgi_intercept_errors on;
>     error_page 404 /index.html;
>     fastcgi_param DOCUMENT_ROOT /var/www;
>     fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
>     fastcgi_param PATH_TRANSLATED /var/www$fastcgi_script_name;
>     include /etc/nginx/fastcgi.conf;
>     auth_request /authenticator/www.php;
> }
>
> location = /authenitcator/www.php {
>     fastcgi_pass unix:/var/run/php5-fpm.sock;
>     fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
>     fastcgi_param PATH_TRANSLATED /var/www$fastcgi_script_name;
> }
>
> Does anyone have an idea?

You have a typo in the location: "authenITcator" instead of "authenTIcator". This results in "location ~ \.php$" being used for auth requests instead of the dedicated one.

In "location ~ \.php$" it doesn't work for POST requests but times out instead, as the auth subrequest doesn't have a request body (it's not yet read from the client), but there will be a Content-Length header. This might confuse the backend's code, and in general you should have something equivalent to the following line from the README:

proxy_set_header Content-Length "";

I.e. for fastcgi you should _not_ provide fastcgi_param CONTENT_LENGTH.
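Putting the two fixes together, the dedicated auth location from the original post might look like the sketch below. This is untested and based only on the advice above; the paths and socket are taken from the original post:

```
# Dedicated auth subrequest location -- spelling corrected so it
# actually matches "auth_request /authenticator/www.php;".
location = /authenticator/www.php {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
    fastcgi_param PATH_TRANSLATED /var/www$fastcgi_script_name;
    # Deliberately no "fastcgi_param CONTENT_LENGTH" and no
    # "include /etc/nginx/fastcgi.conf" here: the subrequest carries
    # no body, and a stale Content-Length would make PHP wait for one
    # (the fastcgi analogue of 'proxy_set_header Content-Length ""').
}
```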
-- Maxim Dounin http://nginx.com/support.html

From mdounin at mdounin.ru Sun Feb 24 18:01:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 24 Feb 2013 22:01:18 +0400 Subject: Problem with proxy_set_header $ssl_client_cert In-Reply-To: <6f06b644cc2d48619e33804c75063586.NginxMailingListEnglish@forum.nginx.org> References: <6f06b644cc2d48619e33804c75063586.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130224180117.GQ81985@mdounin.ru> Hello! On Sun, Feb 24, 2013 at 09:41:38AM -0500, jstrybis wrote:

> Hello,
>
> I am having an issue while verifying client SSL certificates. Everything works fine until I attempt to forward the cert to the upstream.
>
> Once I add a line similar to the following in my location block, all requests return a 400 Bad Request error.
>
> proxy_set_header X-SSL-Client_Cert $ssl_client_cert;
>
> (I've also tried $ssl_client_raw_cert, but the docs say "[$ssl_client_cert] is intended for the use in the proxy_set_header directive".)
>
> Here is my entire location block:
>
> location @unicorn {
>     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>     proxy_set_header X-SSL-Client-Cert $ssl_client_cert;
>     proxy_set_header X-Forwarded-Proto $scheme;
>     proxy_set_header Host $http_host;
>     proxy_redirect off;
>     proxy_pass http://unicorn;
> }
>
> Originally I was using add_header X-SSL-Client-Cert in the server block, which did not throw a 400, but my upstream app was not seeing the header.
>
> Once I remove the proxy_set_header line, the server works as expected: requests with a valid cert get passed through, while unauthenticated requests get a 403 (this is done by checking $ssl_client_verify).
>
> Am I missing something obvious? Any help would be very appreciated. Thank you.

The $ssl_client_cert variable abuses header continuation, and this doesn't work with many http servers (including nginx itself). There should be a more portable way to pass a client certificate to an upstream server.
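One workaround sometimes used for this (a sketch only, and it assumes the ngx_lua / OpenResty module with its set_by_lua directive is compiled in; the header name and upstream are taken from the original post) is to flatten the multi-line PEM blob into a single legal header line before proxying, and undo the substitution in the backend app:

```
location @unicorn {
    # Replace the newlines in the raw PEM certificate with tabs so the
    # header value fits on one line; the backend app reverses this
    # substitution before parsing the certificate.
    set_by_lua $client_cert_oneline '
        local cert = ngx.var.ssl_client_raw_cert
        if cert == nil then return "" end
        local flat = string.gsub(cert, "\n", "\t")
        return flat
    ';
    proxy_set_header X-SSL-Client-Cert $client_cert_oneline;
    proxy_pass http://unicorn;
}
```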
-- Maxim Dounin http://nginx.com/support.html

From chris+nginx at schug.net Sun Feb 24 19:40:24 2013 From: chris+nginx at schug.net (Christoph Schug) Date: Sun, 24 Feb 2013 20:40:24 +0100 Subject: How to remove the "IF" in this fcgi config In-Reply-To: <201302230205.56443.vbart@nginx.com> References: <51265DE5.4070209@wildgooses.com> <512670C8.40208@wildgooses.com> <20130222201126.GT32392@craic.sysops.org> <201302230205.56443.vbart@nginx.com> Message-ID: On 2013-02-22 23:05, Valentin V. Bartenev wrote:

> IMHO,
>
> location ~ ^(?<script_name>.+\.php)(?<path_info>/.*)?$ {
>
> looks better.

In order to put yet another iteration into the game, I prefer the script_name part to be non-greedy:

location ~ ^(?<script_name>.+?\.php)(?<path_info>/.*)?$ {

Of course it still depends on one's preference what to match in the case of /foo.php/bar.php/quux. -cs

From potxoka at gmail.com Sun Feb 24 21:17:38 2013 From: potxoka at gmail.com (Anto) Date: Sun, 24 Feb 2013 22:17:38 +0100 Subject: Optimize rewrite Message-ID: Hello, I have a script that works with Apache, but I want to migrate to nginx. I have this rule, but maybe it can be done differently/optimized.

## HTACCESS
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-s
RewriteRule ^(.*)$ index.php?url=$1 [QSA,NC,L]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^(.*)$ index.php [QSA,NC,L]
RewriteCond %{REQUEST_FILENAME} -s
RewriteRule ^(.*)$ index.php [QSA,NC,L]

## NGINX
location / {
    rewrite ^/(.*)/(.*)$ /$1/index.php?url=$2;
}

Thanks !! Regards

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From yaoweibin at gmail.com Mon Feb 25 02:13:42 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Mon, 25 Feb 2013 10:13:42 +0800 Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130222105052.GW8912@reaktio.net> References: <20130116151511.GS8912@reaktio.net> <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net> <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> Message-ID: Can you show me your configuration? It works for me with nginx-1.2.7. Thanks.

2013/2/22 Pasi Kärkkäinen:

> On Fri, Feb 22, 2013 at 11:25:24AM +0200, Pasi Kärkkäinen wrote:
> > On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Yao wrote:
> > > Use the patch I attached in this mail thread instead; don't use the pull request patch, which is for tengine. Thanks.
> >
> > Oh, sorry, I missed that attachment. It seems to apply and build OK. I'll start testing it.
>
> I added the patch on top of nginx 1.2.7 and enabled the following options:
>
> client_body_postpone_sending 64k;
> proxy_request_buffering off;
>
> After that, connections through the nginx reverse proxy started failing with errors like this:
>
> [error] 29087#0: *49 upstream prematurely closed connection while reading response header from upstream
> [error] 29087#0: *60 upstream sent invalid header while reading response header from upstream
>
> And the services are unusable. Commenting out the two config options above makes nginx happy again. Any idea what causes that? Any tips on how to troubleshoot it?
>
> Thanks! -- Pasi
>
> > > On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > > > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > > > Yes. It should work for any request method.
> > > >
> > > > Great, thanks, I'll let you know how it works for me. Probably in two weeks or so.
> > >
> > > Hi,
> > >
> > > Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:
> > >
> > > cc1: warnings being treated as errors
> > > src/http/ngx_http_request_body.c: In function 'ngx_http_read_non_buffered_client_request_body':
> > > src/http/ngx_http_request_body.c:506: error: implicit declaration of function 'ngx_http_top_input_body_filter'
> > > make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> > > make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> > > make: *** [build] Error 2
> > >
> > > ngx_http_top_input_body_filter() cannot be found in any .c/.h files. Which other patches should I apply? Perhaps this?
> > > https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
> > >
> > > Thanks, -- Pasi
> > >
> > > > > On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> > > > > > This patch should work between nginx-1.2.6 and nginx-1.3.8. The documentation is here:
> > > > > >
> > > > > > ## client_body_postpone_sending ##
> > > > > > Syntax: client_body_postpone_sending `size`
> > > > > > Default: 64k
> > > > > > Context: `http, server, location`
> > > > > > If you specify `proxy_request_buffering` or `fastcgi_request_buffering` to be off, nginx will send the body to the backend when it receives more than `size` data or when the whole request body has been received. It could save connections and reduce the IO number with the backend.
> > > > > >
> > > > > > ## proxy_request_buffering ##
> > > > > > Syntax: proxy_request_buffering `on | off`
> > > > > > Default: `on`
> > > > > > Context: `http, server, location`
> > > > > > Specifies whether the request body will be buffered to disk or not. If it's off, the request body will be stored in memory and sent to the backend after nginx receives more than `client_body_postpone_sending` data. It could save disk IO with large request bodies.
> > > > > >
> > > > > > Note that if you specify it to be off, the nginx retry mechanism for unsuccessful responses will be broken after you have sent part of the request to the backend. It will just return 500 when it encounters such an unsuccessful response. This directive also breaks the variables $request_body and $request_body_file. You should not use these variables any more while their values are undefined.
> > > > >
> > > > > Hello, this patch sounds exactly like what I need as well! I assume it works for both POST and PUT requests? Thanks, -- Pasi
> > > > >
> > > > > > On Fri, Jan 11, 2013, li zJay wrote:
> > > > > > > Hello! Is it possible that nginx will not buffer the client body before handing the request to the upstream? We want to use nginx as a reverse proxy to upload very, very big files to the upstream, but the default behavior of nginx is to save the whole request to the local disk first before handing it to the upstream, which makes it impossible for the upstream to process the file on the fly while it is uploading, and results in much higher request latency and server-side resource consumption. Thanks!
> > > > > >
> > > > > > I know the nginx team is working on it. You can wait for it. If you are eager for this feature, you could try my patch: https://github.com/taobao/tengine/pull/91. This patch has been running on our production servers.
> > > > > > -- Weibin Yao, Developer @ Server Platform Team of Taobao

-- Weibin Yao Developer @ Server Platform Team of Taobao

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mauro.stettler at gmail.com Mon Feb 25 07:52:38 2013 From: mauro.stettler at gmail.com (Mauro Stettler) Date: Mon, 25 Feb 2013 15:52:38 +0800 Subject: rds-json generate json with index key on first level of array Message-ID: Hello Nginx list, I'm using OpenResty with libdrizzle to provide a faster API for querying certain things from my DB. My current config is like this:

location ~* ^/resty/usersTable/userId/([0-9\,]+)$ {
    set_unescape_uri $uid $1;
    drizzle_query 'select id, nickname, age, age_p, city, plz, wio_plz, gender from users where id in ($uid)';
    drizzle_pass projectdb;
    rds_json on;
}

So this works fine and gives me the expected output. My problem is that if I query many user IDs I'm only getting a flat array without an index key. But to improve the processing speed on the client side, I would like the 'id' field to be the first-level index in the returned array.
I am trying to show what I mean.

Current output:

[
{"id":1971,"nickname":"Robby1","age":28,"age_p":42,"city":"Dresden","plz":"","wio_plz":"2,4,5","gender":"m"},
{"id":1972,"nickname":"Robby2","age":29,"age_p":43,"city":"Dresden2","plz":"","wio_plz":"4,5","gender":"f"},
{"id":1973,"nickname":"Robby3","age":30,"age_p":44,"city":"Dresden3","plz":"","wio_plz":"5","gender":"m"},
]

What I want:

[
1971:{"nickname":"Robby1","age":28,"age_p":42,"city":"Dresden","plz":"","wio_plz":"2,4,5","gender":"m"},
1972:{"nickname":"Robby2","age":29,"age_p":43,"city":"Dresden2","plz":"","wio_plz":"4,5","gender":"f"},
1973:{"nickname":"Robby3","age":30,"age_p":44,"city":"Dresden3","plz":"","wio_plz":"5","gender":"m"},
]

Is there some way I can define a first-level index key in rds-json? I already checked the `rds_json_root` parameter, but this doesn't seem to be what I'm looking for. Thanks, Mauro

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at nginx.us Mon Feb 25 08:04:33 2013 From: nginx-forum at nginx.us (replay) Date: Mon, 25 Feb 2013 03:04:33 -0500 Subject: rds-json generate json with index key on first level of array In-Reply-To: References: Message-ID: <4de1601a540fb9c15c347815472f941c.NginxMailingListEnglish@forum.nginx.org> I just realized that I messed up the formatting.
Actually, what I'm looking for should be like this:

{
1971:{"nickname":"Robby1","age":28,"age_p":42,"city":"Dresden","plz":"","wio_plz":"2,4,5","gender":"m"},
1972:{"nickname":"Robby2","age":29,"age_p":43,"city":"Dresden2","plz":"","wio_plz":"4,5","gender":"f"},
1973:{"nickname":"Robby3","age":30,"age_p":44,"city":"Dresden3","plz":"","wio_plz":"5","gender":"m"}
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236563,236564#msg-236564

From smallfish.xy at gmail.com Mon Feb 25 08:12:11 2013 From: smallfish.xy at gmail.com (smallfish) Date: Mon, 25 Feb 2013 16:12:11 +0800 Subject: rds-json generate json with index key on first level of array In-Reply-To: <4de1601a540fb9c15c347815472f941c.NginxMailingListEnglish@forum.nginx.org> References: <4de1601a540fb9c15c347815472f941c.NginxMailingListEnglish@forum.nginx.org> Message-ID: You can use ngx_lua (OpenResty) to filter/format the output. -- smallfish http://chenxiaoyu.org

On Mon, Feb 25, 2013 at 4:04 PM, replay wrote:

> I just realized that I messed up the formatting. Actually what I'm looking for should be like this:
>
> {
> 1971:{"nickname":"Robby1","age":28,"age_p":42,"city":"Dresden","plz":"","wio_plz":"2,4,5","gender":"m"},
> 1972:{"nickname":"Robby2","age":29,"age_p":43,"city":"Dresden2","plz":"","wio_plz":"4,5","gender":"f"},
> 1973:{"nickname":"Robby3","age":30,"age_p":44,"city":"Dresden3","plz":"","wio_plz":"5","gender":"m"}
> }
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236563,236564#msg-236564

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From mauro.stettler at gmail.com Mon Feb 25 08:56:28 2013 From: mauro.stettler at gmail.com (Mauro Stettler) Date: Mon, 25 Feb 2013 16:56:28 +0800 Subject: rds-json generate json with index key on first level of array In-Reply-To: References: <4de1601a540fb9c15c347815472f941c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks, that would probably be possible. It's just that this URL is called very frequently, so I'm trying to do everything as resource-efficiently as possible. I am worried that parsing the JSON in Lua in nginx and reformatting it would be too heavy on the server load. So I would have preferred some way to tell the rds-json module to format it the way I want. It seems there is no way to configure this in the module, so I will look for another solution. Thanks, Mauro

On Mon, Feb 25, 2013 at 4:12 PM, smallfish wrote:

> You can use ngx_lua (OpenResty) to filter/format the output.
> -- smallfish http://chenxiaoyu.org

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Mon Feb 25 09:39:53 2013 From: nginx-forum at nginx.us (double) Date: Mon, 25 Feb 2013 04:39:53 -0500 Subject: Proxy without buffering Message-ID: Hello, large POST requests are buffered to disk before being passed to the backend. The backend has trouble parsing the POST data if the requests are huge (several GB). We use haproxy in front of nginx to work around this issue, but this causes extra load on the server. Is there a chance to disable the request buffering? Thanks a lot, Markus

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236568#msg-236568

From pasik at iki.fi Mon Feb 25 10:13:05 2013 From: pasik at iki.fi (Pasi Kärkkäinen) Date: Mon, 25 Feb 2013 12:13:05 +0200 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: <20130116151511.GS8912@reaktio.net> <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net> <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> Message-ID: <20130225101304.GZ8912@reaktio.net> On Mon, Feb 25, 2013 at 10:13:42AM +0800, Weibin Yao wrote:

> Can you show me your configuration? It works for me with nginx-1.2.7.
> Thanks.

Hi, I'm using the nginx 1.2.7 el6 src.rpm rebuilt with the "headers more" module added, and your patch.
I'm using the following configuration:

server {
    listen public_ip:443 ssl;
    server_name service.domain.tld;
    ssl on;
    keepalive_timeout 70;

    access_log /var/log/nginx/access-service.log;
    access_log /var/log/nginx/access-service-full.log full;
    error_log /var/log/nginx/error-service.log;

    client_header_buffer_size 64k;
    client_header_timeout 120;

    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_buffering off;
    proxy_cache off;

    add_header Last-Modified "";
    if_modified_since off;

    client_max_body_size 262144M;
    client_body_buffer_size 1024k;
    client_body_timeout 240;
    chunked_transfer_encoding off;

    # client_body_postpone_sending 64k;
    # proxy_request_buffering off;

    location / {
        proxy_pass https://service-backend;
    }
}

Thanks!

-- Pasi

> 2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>
>
> On Fri, Feb 22, 2013 at 11:25:24AM +0200, Pasi Kärkkäinen wrote:
> > On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Yao wrote:
> > > Use the patch I attached in this mail thread instead, don't use the
> > > pull request patch which is for tengine.
> > > Thanks.
> >
> > Oh sorry I missed that attachment. It seems to apply and build OK.
> > I'll start testing it.
>
> I added the patch on top of nginx 1.2.7 and enabled the following options:
>
> client_body_postpone_sending 64k;
> proxy_request_buffering off;
>
> after that connections through the nginx reverse proxy started failing
> with errors like this:
>
> [error] 29087#0: *49 upstream prematurely closed connection while
> reading response header from upstream
> [error] 29087#0: *60 upstream sent invalid header while reading response
> header from upstream
>
> And the services are unusable.
>
> Commenting out the two config options above makes nginx happy again.
> Any idea what causes that? Any tips how to troubleshoot it?
> Thanks!
>
> -- Pasi
>
> > 2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>
> >
> > On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > > Yes. It should work for any request method.
> > >
> > > Great, thanks, I'll let you know how it works for me. Probably in
> > > two weeks or so.
> >
> > Hi,
> >
> > Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:
> >
> > cc1: warnings being treated as errors
> > src/http/ngx_http_request_body.c: In function
> > 'ngx_http_read_non_buffered_client_request_body':
> > src/http/ngx_http_request_body.c:506: error: implicit declaration of
> > function 'ngx_http_top_input_body_filter'
> > make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> > make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> > make: *** [build] Error 2
> >
> > ngx_http_top_input_body_filter() cannot be found from any .c/.h files.
> > Which other patches should I apply?
> >
> > Perhaps this?
> > https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
> >
> > Thanks,
> > -- Pasi
> >
> > > > 2013/1/16 Pasi Kärkkäinen <pasik at iki.fi>
> > > >
> > > > > On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> > > > > > This patch should work between nginx-1.2.6 and nginx-1.3.8.
> > > > > > The documentation is here:
> > > > > >
> > > > > > ## client_body_postpone_sending ##
> > > > > > Syntax: **client_body_postpone_sending** `size`
> > > > > > Default: 64k
> > > > > > Context: `http, server, location`
> > > > > > If you specify the `proxy_request_buffering` or
> > > > > > `fastcgi_request_buffering` to be off, Nginx will send the
> > > > > > body to backend when it receives more than `size` data or
> > > > > > the whole request body has been received. It could save the
> > > > > > connection and reduce the IO number with backend.
> > > > > >
> > > > > > ## proxy_request_buffering ##
> > > > > > Syntax: **proxy_request_buffering** `on | off`
> > > > > > Default: `on`
> > > > > > Context: `http, server, location`
> > > > > > Specify the request body will be buffered to the disk or
> > > > > > not. If it's off, the request body will be stored in memory
> > > > > > and sent to backend after Nginx receives more than
> > > > > > `client_body_postpone_sending` data. It could save the disk
> > > > > > IO with large request body.
> > > > > >
> > > > > > Note that, if you specify it to be off, the nginx retry
> > > > > > mechanism with unsuccessful response will be broken after
> > > > > > you sent part of the request to backend. It will just return
> > > > > > 500 when it encounters such unsuccessful response. This
> > > > > > directive also breaks these variables: $request_body,
> > > > > > $request_body_file. You should not use these variables any
> > > > > > more while their values are undefined.
> > > > >
> > > > > Hello,
> > > > >
> > > > > This patch sounds exactly like what I need as well!
> > > > > I assume it works for both POST and PUT requests?
> > > > >
> > > > > Thanks,
> > > > > -- Pasi
> > > > >
> > > > > > Hello!
> > > > > > @yaoweibin
> > > > > >
> > > > > > > If you are eager for this feature, you could try my patch:
> > > > > > > https://github.com/taobao/tengine/pull/91. This patch has
> > > > > > > been running in our production servers.
> > > > > >
> > > > > > what's the nginx version your patch based on?
> > > > > > Thanks!
> > > > > >
> > > > > > On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao
> > > > > > <yaoweibin at gmail.com> wrote:
> > > > > >
> > > > > > > I know nginx team are working on it. You can wait for it.
> > > > > > > If you are eager for this feature, you could try my patch:
> > > > > > > https://github.com/taobao/tengine/pull/91. This patch has
> > > > > > > been running in our production servers.
> > > > > > >
> > > > > > > 2013/1/11 li zJay <zjay1987 at gmail.com>
> > > > > > >
> > > > > > > > Hello!
> > > > > > > > is it possible that nginx will not buffer the client
> > > > > > > > body before handing the request to upstream?
> > > > > > > > we want to use nginx as a reverse proxy to upload very
> > > > > > > > very big files to the upstream, but the default behavior
> > > > > > > > of nginx is to save the whole request to the local disk
> > > > > > > > first before handing it to the upstream, which makes it
> > > > > > > > impossible for the upstream to process the file on the
> > > > > > > > fly while it is uploading, resulting in much higher
> > > > > > > > request latency and server-side resource consumption.
> > > > > > > > Thanks!
> > > >
> > > > --
> > > > Weibin Yao
> > > > Developer @ Server Platform Team of Taobao
From pasik at iki.fi Mon Feb 25 12:29:46 2013 From: pasik at iki.fi (Pasi Kärkkäinen) Date: Mon, 25 Feb 2013 14:29:46 +0200 Subject: Proxy without buffering In-Reply-To: References: Message-ID: <20130225122946.GA8912@reaktio.net>

On Mon, Feb 25, 2013 at 04:39:53AM -0500, double wrote:
> Hello,
>
> Large POST-request are buffered to disk, before passed to the backend.
> The backend has troubles to parse the POST-data, if the requests are
> huge (some GB).
>
> We use "haproxy" in front of "nginx", to workaround this issue.
> But this causes extra load on the server.
> Is there a chance to disable the request buffering?

Yep, it's possible, see the other active threads for a patch from tengine to implement non-buffered uploads. Also it'd be very nice to get this feature into standard nginx, many people need it!

-- Pasi

From nginx-forum at nginx.us Mon Feb 25 12:58:53 2013 From: nginx-forum at nginx.us (double) Date: Mon, 25 Feb 2013 07:58:53 -0500 Subject: Proxy without buffering In-Reply-To: <20130225122946.GA8912@reaktio.net> References: <20130225122946.GA8912@reaktio.net> Message-ID:

Thanks a lot!

> Also it'd be very nice to get this feature to standard nginx,
> many people need it!

100% true. I hate this workaround via "haproxy"

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236576#msg-236576

From iptablez at yahoo.com Mon Feb 25 14:07:58 2013 From: iptablez at yahoo.com (Indo Php) Date: Mon, 25 Feb 2013 06:07:58 -0800 (PST) Subject: proxy_temp_path is very slow Message-ID: <1361801278.84945.YahooMailNeo@web142305.mail.bf1.yahoo.com>

Hello,

I'm using proxy_cache with nginx. I have nginx set as a proxy cache to get files from servers in another country.
Sometimes I have almost 3000 files in the temp path, and my disk I/O is very high. Actually I'm using 2 SSDs with RAID-0. May I know if there are any other problems? Here's my config:

proxy_cache_path /var/nginx/folder levels=1 keys_zone=one:1000m inactive=7d max_size=300g;
proxy_temp_path /var/nginx/temp;

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at nginx.us Mon Feb 25 14:37:43 2013 From: nginx-forum at nginx.us (Lynoure) Date: Mon, 25 Feb 2013 09:37:43 -0500 Subject: Problem with proxy_set_header $ssl_client_cert In-Reply-To: <20130224180117.GQ81985@mdounin.ru> References: <20130224180117.GQ81985@mdounin.ru> Message-ID:

> The $ssl_client_cert variable abuses header continuation, and this
> doesn't work with many http servers (including nginx itself).

Noticed that with spray-can.

> There should be more portable way to pass client certificate to an
> upstream server.

Is there already, or is there one in plans? Any known workarounds? Encoding and decoding the $ssl_client_cert somehow? (I'm really new to nginx.)

-- Lynoure Braakman

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236546,236581#msg-236581

From contact at jpluscplusm.com Mon Feb 25 15:01:16 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 25 Feb 2013 15:01:16 +0000 Subject: proxy_temp_path is very slow In-Reply-To: <1361801278.84945.YahooMailNeo@web142305.mail.bf1.yahoo.com> References: <1361801278.84945.YahooMailNeo@web142305.mail.bf1.yahoo.com> Message-ID:

On 25 February 2013 14:07, Indo Php wrote:
> Hello,
>
> I'm using proxy_cache with nginx. I have nginx set as proxy cache to get the
> file from servers in another country.
>
> Sometimes I have almost 3000 files in the temp path, and my disk I/O is very
> high. Actually I'm using 2 SSDs with RAID-0.
>
> May I know if there are any other problems?
> Here's my config
> proxy_cache_path /var/nginx/folder levels=1 keys_zone=one:1000m
> inactive=7d max_size=300g;
> proxy_temp_path /var/nginx/temp;

If you believe that number of files causes slow IO, why not try increasing the number of cache directories via the "levels=" parameter? I have no opinion either way; it would just be one of the things I would try if attempting to fix this issue for myself.

Regards,
Jonathan

-- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html

From alexandernst at gmail.com Mon Feb 25 15:00:49 2013 From: alexandernst at gmail.com (Alexander Nestorov) Date: Mon, 25 Feb 2013 16:00:49 +0100 Subject: Default error_page for multiple vhosts In-Reply-To: <20130222130615.GS32392@craic.sysops.org> References: <51274c4a.6164b40a.7d86.ffffa5ad@mx.google.com> <20130222130615.GS32392@craic.sysops.org> Message-ID:

Gregory, hi, I tested your config and it's working, but that's even more complex than just writing the error_page in each server{}. Thank you for your help anyway :)

Francis, I filed a feature request (#307), let's see what happens

Regards!

-- alexandernst

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From agentzh at gmail.com Mon Feb 25 20:03:23 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 25 Feb 2013 12:03:23 -0800 Subject: rds-json generate json with index key on first level of array In-Reply-To: References: <4de1601a540fb9c15c347815472f941c.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Hello!

On Mon, Feb 25, 2013 at 12:56 AM, Mauro Stettler wrote:
> Thanks, that would probably be possible.
>
> It's just that this URL is called very frequently, so I'm trying to do
> everything as resource efficient as possible. I am worried that parsing the
> json in lua in Nginx and reformatting it would be too heavy on the server
> load. So I would have preferred if there is some way how I can tell the
> rds-json module to format it the way I want.
No, ngx_rds_json does not support this very JSON structure. Because there are tons of different possible JSON structures, I don't quite feel like implementing complex configuration templates in ngx_rds_json in pure C, which is a daunting task and adds complexity to this simple module. I believe using ngx_lua for complicated formatting requirements is the right way to go.

Best regards,
-agentzh

From nginx-forum at nginx.us Mon Feb 25 20:32:19 2013 From: nginx-forum at nginx.us (Varix) Date: Mon, 25 Feb 2013 15:32:19 -0500 Subject: split-clients for vhosts how? Message-ID: <3f993ad563c4f8fa4b0f8e56ba4da26b.NginxMailingListEnglish@forum.nginx.org>

Hello,

what is the right way to use the module ngx_http_split_clients_module with some vhosts?

Varix

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236593,236593#msg-236593

From sb at waeme.net Mon Feb 25 20:58:25 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Tue, 26 Feb 2013 00:58:25 +0400 Subject: Problem with proxy_set_header $ssl_client_cert In-Reply-To: References: <20130224180117.GQ81985@mdounin.ru> Message-ID:

On 25 Feb 2013, at 18:37, Lynoure wrote:
>> The $ssl_client_cert variable abuses header continuation, and this
>> doesn't work with many http servers (including nginx itself).
>
> Noticed that with spray-can.
>
>> There should be more portable way to pass client certificate to an
>> upstream server.
>
> Is there already, or is there one in plans? Any known workarounds? Encoding
> and decoding the $ssl_client_cert somehow? (I'm really new to nginx.)
You could hack the ngx_ssl_get_certificate() function to get the certificate in one line, or there is an ugly but possible way to remove a limited number of newline characters from the variable with the map directive:

map $ssl_client_raw_cert $a {
    "~^(-.*-\n)(?<1st>[^\n]+)\n((?<b>[^\n]+)\n)?((?<c>[^\n]+)\n)?((?<d>[^\n]+)\n)?((?<e>[^\n]+)\n)?((?<f>[^\n]+)\n)?((?<g>[^\n]+)\n)?((?<h>[^\n]+)\n)?((?<i>[^\n]+)\n)?((?<j>[^\n]+)\n)?((?<k>[^\n]+)\n)?((?<l>[^\n]+)\n)?((?<m>[^\n]+)\n)?((?<n>[^\n]+)\n)?((?<o>[^\n]+)\n)?((?<p>[^\n]+)\n)?((?<q>[^\n]+)\n)?((?<r>[^\n]+)\n)?((?<s>[^\n]+)\n)?((?<t>[^\n]+)\n)?((?<u>[^\n]+)\n)?((?<v>[^\n]+)\n)?((?<w>[^\n]+)\n)?((?<x>[^\n]+)\n)?((?<y>[^\n]+)\n)?((?<z>[^\n]+)\n)?(-.*-)$" $1st;
}

server {
    location / {
        proxy_set_header X-cert $a$b$c$d$e$f$g$h$i$j$k$l$m$n$o$p$q$r$s$t$v$u$w$x$y$z;
        proxy_pass http://localhost:8000;
    }
}

The example works for a certificate of up to 26 lines; you could extend it to a reasonable number of lines.

From lists at wildgooses.com Mon Feb 25 21:19:53 2013
From: lists at wildgooses.com (Ed W)
Date: Mon, 25 Feb 2013 21:19:53 +0000
Subject: How to remove the "IF" in this fcgi config
In-Reply-To: References: <51265DE5.4070209@wildgooses.com> <512670C8.40208@wildgooses.com> <20130222201126.GT32392@craic.sysops.org> <201302230205.56443.vbart@nginx.com>
Message-ID: <512BD579.7040503@wildgooses.com>

On 24/02/2013 19:40, Christoph Schug wrote:
> On 2013-02-22 23:05, Valentin V. Bartenev wrote:
>> IMHO,
>>
>> location ~ ^(?<script_name>.+\.php)(?<path_info>/.*)?$ {
>>
>> looks better.
>
> In order to put yet another iteration into the game, I prefer the
> script_name part to be non-greedy
>
> location ~ ^(?<script_name>.+?\.php)(?<path_info>/.*)?$ {
>
> Of course it still depends on someone's preference what to match in
> case of
>
> /foo.php/bar.php/quux
>
> -cs

You beat me - I was just about to post the same! I think the non-greedy version is probably the more common case.

Thanks!

Ed W

From nginx-forum at nginx.us Mon Feb 25 22:00:19 2013
From: nginx-forum at nginx.us (jstrybis)
Date: Mon, 25 Feb 2013 17:00:19 -0500
Subject: Problem with proxy_set_header $ssl_client_cert
In-Reply-To: <20130224180117.GQ81985@mdounin.ru>
References: <20130224180117.GQ81985@mdounin.ru>
Message-ID:

Thanks for your reply. My application does not require the entire certificate, so I am just forwarding $ssl_client_s_dn in a custom header without any problems.
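For readers who, like jstrybis, only need the subject DN rather than the whole certificate, the workaround shrinks to a single directive, since $ssl_client_s_dn is a one-line value. A minimal sketch; the X-Client-DN header name and the backend address are illustrative assumptions:

```nginx
server {
    listen 443 ssl;

    # Ask for (but do not require) a client certificate so that the
    # $ssl_client_* variables are populated.
    ssl_verify_client optional;

    location / {
        # $ssl_client_s_dn is a single line, so it survives being sent
        # as a proxied request header, unlike the multi-line PEM value
        # in $ssl_client_cert.
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_pass http://localhost:8000;
    }
}
```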
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236546,236599#msg-236599

From nginx-forum at nginx.us Tue Feb 26 03:15:55 2013
From: nginx-forum at nginx.us (michael.heuberger)
Date: Mon, 25 Feb 2013 22:15:55 -0500
Subject: Optimal nginx settings for websockets sending images
Message-ID: <6b76416f686aa1898caa5369dae4d494.NginxMailingListEnglish@forum.nginx.org>

Hello guys

The recent nginx 1.3.13 websocket support is fantastic! Big thanks to the nginx devs, it works like a charm.

I only have performance issues. Sending images through websockets turns out to be difficult and slow. I have a website sending 5 images per second to the server.

Sometimes I have warnings like "an upstream response is buffered to a temporary file", then sometimes it's lagging and the server isn't that fast.

I'm not sure if my settings for this scenario are optimal. Below you will find extracts of my nginx conf files. Maybe you spot some mistakes or have suggestions?

Thanks, Michael

nginx.conf:

user www-data;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 1536;
}

http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=1400m inactive=500m;
    proxy_temp_path /var/tmp;
    proxy_buffers 8 2m;
    proxy_buffer_size 10m;
    proxy_busy_buffers_size 10m;
    proxy_cache one;
    proxy_cache_key "$request_uri|$request_body";

    # Sendfile copies data between one FD and another from within the kernel.
    # More efficient than read() + write(), since those require transferring data to and from user space.
    sendfile on;

    # tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
    # instead of using partial frames. This is useful for prepending headers before calling sendfile,
    # or for throughput optimization.
    tcp_nopush on;

    # on = don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
    # Set off here because of large bursts of data.
    tcp_nodelay off;

    # Timeout for keep-alive connections. Server will close connections after this time.
    keepalive_timeout 30;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log notice;

    gzip on;
    gzip_min_length 10240;
    gzip_disable "MSIE [1-6]\.";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

/sites-available/other_site.conf:

upstream other_site_upstream {
    server 127.0.0.1:4443;
}

server {
    ...
    location / {
        proxy_next_upstream error timeout http_502;
        proxy_pass https://other_site_upstream;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_redirect off;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236601,236601#msg-236601

From nginx-forum at nginx.us Tue Feb 26 09:31:42 2013
From: nginx-forum at nginx.us (Varix)
Date: Tue, 26 Feb 2013 04:31:42 -0500
Subject: Virtualhosts and map
Message-ID:

What is the best way to configure virtualhost with map?

Varix

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236611,236611#msg-236611

From aps2891 at gmail.com Tue Feb 26 09:32:58 2013
From: aps2891 at gmail.com (Aparna Bhat)
Date: Tue, 26 Feb 2013 15:02:58 +0530
Subject: Floating Point
Message-ID:

Hi,

I am working on a module for load balancing. I have to make use of the type float. Can anyone please tell me if there is any nginx-specific data type for float such as the one that exists for int (ngx_int_t). I would also like to know how to write the float value to the logs. The type %f prints only the integer portion of the float value.
Thank you in advance.

With Regards,
Aparna.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From benj.saiz at gmail.com Tue Feb 26 09:57:12 2013
From: benj.saiz at gmail.com (Benjamin Saiz)
Date: Tue, 26 Feb 2013 10:57:12 +0100
Subject: Floating Point
In-Reply-To: References: Message-ID:

The default precision for %f is 6 digits (ISO C99). What's your test case?

On Tue, Feb 26, 2013 at 10:32 AM, Aparna Bhat wrote:
> Hi,
>
> I am working on a module for load balancing. I have to make use of the
> type float. Can anyone please tell me if there is any nginx-specific data
> type for float such as the one that exists for int (ngx_int_t). I would
> also like to know how to write the float value to the logs. The type %f
> prints only the integer portion of the float value.
>
> Thank you in advance.
>
> With Regards,
> Aparna.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Saiz Benjamin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From igor at sysoev.ru Tue Feb 26 10:09:30 2013
From: igor at sysoev.ru (Igor Sysoev)
Date: Tue, 26 Feb 2013 14:09:30 +0400
Subject: Floating Point
In-Reply-To: References: Message-ID:

On Feb 26, 2013, at 13:32, Aparna Bhat wrote:
> I am working on a module for load balancing. I have to make use of the type float. Can anyone please tell me if there is any nginx-specific data type for float such as the one that exists for int (ngx_int_t). I would also like to know how to write the float value to the logs. The type %f prints only the integer portion of the float value.

There is no special float type in nginx. The nginx %f format is for double, so you have to cast float to double in ngx_sprintf(). To print the fractional portion you should use "%.3f"; the maximum supported format is "%18.5f".
--
Igor Sysoev
http://nginx.com/support.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aeriksson at gmx.de Tue Feb 26 11:04:12 2013
From: aeriksson at gmx.de (Alina Eriksson)
Date: Tue, 26 Feb 2013 12:04:12 +0100 (CET)
Subject: lua-resty-upload instead of HttpUploadModule?
Message-ID:

An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Feb 26 11:27:16 2013
From: nginx-forum at nginx.us (Lynoure)
Date: Tue, 26 Feb 2013 06:27:16 -0500
Subject: Problem with proxy_set_header $ssl_client_cert
In-Reply-To: References: Message-ID: <20c1d028933848b8d623ab13c3106089.NginxMailingListEnglish@forum.nginx.org>

Thank you Sergey, this workaround suffices for us for now.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236546,236618#msg-236618

From mdounin at mdounin.ru Tue Feb 26 12:31:46 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 26 Feb 2013 16:31:46 +0400
Subject: Optimal nginx settings for websockets sending images
In-Reply-To: <6b76416f686aa1898caa5369dae4d494.NginxMailingListEnglish@forum.nginx.org>
References: <6b76416f686aa1898caa5369dae4d494.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130226123146.GN81985@mdounin.ru>

Hello!

On Mon, Feb 25, 2013 at 10:15:55PM -0500, michael.heuberger wrote:
> The recent nginx 1.3.13 websocket support is fantastic! Big thanks to the
> nginx devs, it works like a charm.

Good to hear. :)

> I only have performance issues. Sending images through websockets turns out
> to be difficult and slow. I have a website sending 5 images per second to
> the server.
>
> Sometimes I have warnings like "an upstream response is buffered to a
> temporary file", then sometimes it's lagging and the server isn't that
> fast.

These messages are not related to websocket connections, as websocket connections don't do any disk buffering and only use in-memory buffers.
(More specifically, a connection uses two in-memory buffers with the size of proxy_buffer_size - one for backend-to-client data, and one for client-to-backend data.)

Given the above, I would suppose that you actually have performance problems unrelated to websockets.

> I'm not sure if my settings for this scenario are optimal. Below you will
> find extracts of my nginx conf files. Maybe you spot some mistakes or have
> suggestions?

[...]

> proxy_buffers 8 2m;
> proxy_buffer_size 10m;
> proxy_busy_buffers_size 10m;

The buffers used look huge; make sure you have enough memory.

> proxy_cache one;
> proxy_cache_key "$request_uri|$request_body";

Using the request body as a cache key isn't really a good idea unless all request bodies are known to be small.

[...]

--
Maxim Dounin
http://nginx.com/support.html

From contact at jpluscplusm.com Tue Feb 26 13:53:08 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Tue, 26 Feb 2013 13:53:08 +0000
Subject: Virtualhosts and map
In-Reply-To: References: Message-ID:

On 26 February 2013 09:31, Varix wrote:
> What is the best way to configure virtualhost with map?

The question you have asked is very wide-ranging yet vague. The only answer I can give you is "it depends on what you're trying to achieve". I suggest you formulate a better question. If you include details of the things that you have tried already, this will serve as a good indicator to people on the list that you're not just being lazy, can't be bothered to google, or want people to do your job for you.

HTH,
Jonathan
--
Jonathan Matthews // Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From yaoweibin at gmail.com Tue Feb 26 14:13:11 2013
From: yaoweibin at gmail.com (Weibin Yao)
Date: Tue, 26 Feb 2013 22:13:11 +0800
Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130225101304.GZ8912@reaktio.net>
References: <20130116151511.GS8912@reaktio.net> <20130118083821.GA8912@reaktio.net> <20130221200805.GT8912@reaktio.net> <20130222092524.GV8912@reaktio.net> <20130222105052.GW8912@reaktio.net> <20130225101304.GZ8912@reaktio.net>
Message-ID:

It still worked in my box. Can you show me the debug.log (http://wiki.nginx.org/Debugging)? You need to recompile with the --with-debug configure argument and set the debug level in the error_log directive.

Thanks

2013/2/25 Pasi Kärkkäinen
> On Mon, Feb 25, 2013 at 10:13:42AM +0800, Weibin Yao wrote:
> > Can you show me your configure? It works for me with nginx-1.2.7.
> > Thanks.
>
> Hi,
>
> I'm using the nginx 1.2.7 el6 src.rpm rebuilt with the "headers more" module added, and your patch.
>
> I'm using the following configuration:
>
> server {
>     listen public_ip:443 ssl;
>     server_name service.domain.tld;
>
>     ssl on;
>     keepalive_timeout 70;
>
>     access_log /var/log/nginx/access-service.log;
>     access_log /var/log/nginx/access-service-full.log full;
>     error_log /var/log/nginx/error-service.log;
>
>     client_header_buffer_size 64k;
>     client_header_timeout 120;
>
>     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
>     proxy_set_header Host $host;
>     proxy_set_header X-Real-IP $remote_addr;
>     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>     proxy_redirect off;
>     proxy_buffering off;
>     proxy_cache off;
>
>     add_header Last-Modified "";
>     if_modified_since off;
>
>     client_max_body_size 262144M;
>     client_body_buffer_size 1024k;
>     client_body_timeout 240;
>
>     chunked_transfer_encoding off;
>
>     # client_body_postpone_sending 64k;
>     # proxy_request_buffering off;
>
>     location / {
>         proxy_pass https://service-backend;
>     }
> }
>
> Thanks!
>
> -- Pasi
>
> 2013/2/22 Pasi Kärkkäinen <pasik at iki.fi>
> > On Fri, Feb 22, 2013 at 11:25:24AM +0200, Pasi Kärkkäinen wrote:
> > > On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Yao wrote:
> > > > Use the patch I attached in this mail thread instead, don't use the pull request patch which is for tengine.
> > > > Thanks.
> > >
> > > Oh sorry I missed that attachment. It seems to apply and build OK.
> > > I'll start testing it.
> >
> > I added the patch on top of nginx 1.2.7 and enabled the following options:
> >
> >     client_body_postpone_sending 64k;
> >     proxy_request_buffering off;
> >
> > after that, connections through the nginx reverse proxy started failing with errors like this:
> >
> >     [error] 29087#0: *49 upstream prematurely closed connection while reading response header from upstream
> >     [error] 29087#0: *60 upstream sent invalid header while reading response header from upstream
> >
> > And the services are unusable.
> >
> > Commenting out the two config options above makes nginx happy again.
> > Any idea what causes that? Any tips on how to troubleshoot it?
> > Thanks!
> >
> > -- Pasi
> >
> > > On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:
> > > > On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Yao wrote:
> > > > > Yes. It should work for any request method.
> > > >
> > > > Great, thanks, I'll let you know how it works for me. Probably in two weeks or so.
> > >
> > > Hi,
> > >
> > > Adding the tengine pull request 91 on top of nginx 1.2.7 doesn't work:
> > >
> > >     cc1: warnings being treated as errors
> > >     src/http/ngx_http_request_body.c: In function 'ngx_http_read_non_buffered_client_request_body':
> > >     src/http/ngx_http_request_body.c:506: error: implicit declaration of function 'ngx_http_top_input_body_filter'
> > >     make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
> > >     make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7'
> > >     make: *** [build] Error 2
> > >
> > > ngx_http_top_input_body_filter() cannot be found in any .c/.h files. Which other patches should I apply?
> > > Perhaps this?
> > > https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch
> > >
> > > Thanks,
> > > -- Pasi
> > >
> > > > > This patch should work between nginx-1.2.6 and nginx-1.3.8.
> > > > > The documentation is here:
> > > > >
> > > > > ## client_body_postpone_sending ##
> > > > > Syntax: **client_body_postpone_sending** `size`
> > > > > Default: 64k
> > > > > Context: `http, server, location`
> > > > > If you specify `proxy_request_buffering` or `fastcgi_request_buffering` to be off, Nginx will send the body to the backend when it receives more than `size` data or when the whole request body has been received. It could save connections and reduce the IO number with the backend.
> > > > >
> > > > > ## proxy_request_buffering ##
> > > > > Syntax: **proxy_request_buffering** `on | off`
> > > > > Default: `on`
> > > > > Context: `http, server, location`
> > > > > Specify whether the request body will be buffered to the disk or not. If it's off, the request body will be stored in memory and sent to the backend after Nginx receives more than `client_body_postpone_sending` data. It could save disk IO with a large request body.
> > > > >
> > > > > Note that, if you specify it to be off, the nginx retry mechanism with an unsuccessful response will be broken after you have sent part of the request to the backend. It will just return 500 when it encounters such an unsuccessful response. This directive also breaks these variables: $request_body, $request_body_file. You should not use these variables any more while their values are undefined.
> > > >
> > > > Hello,
> > > >
> > > > This patch sounds exactly like what I need as well!
> > > > I assume it works for both POST and PUT requests?
> > > >
> > > > Thanks,
> > > > -- Pasi
> > > >
> > > > > @yaoweibin: what's the nginx version your patch is based on? Thanks!
> > > > >
> > > > > On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao wrote:
> > > > > > I know the nginx team are working on it. You can wait for it. If you are eager for this feature, you could try my patch: https://github.com/taobao/tengine/pull/91. This patch has been running in our production servers.
> > > > > >
> > > > > > 2013/1/11 li zJay:
> > > > > > > Hello!
> > > > > > > Is it possible that nginx will not buffer the client body before handing the request to the upstream? We want to use nginx as a reverse proxy to upload very, very big files to the upstream, but the default behavior of nginx is to save the whole request to the local disk first before handing it to the upstream, which makes it impossible for the upstream to process the file on the fly while it is uploading, and results in much higher request latency and server-side resource consumption.
> > > > > > > Thanks!
--
Weibin Yao
Developer @ Server Platform Team of Taobao
https://github.com/taobao/tengine/pull/91 > > 32. mailto:yaoweibin at gmail.com > > 33. https://github.com/taobao/tengine/pull/91 > > 34. mailto:zjay1987 at gmail.com > > 35. mailto:nginx at nginx.org > > 36. http://mailman.nginx.org/mailman/listinfo/nginx > > 37. mailto:nginx at nginx.org > > 38. http://mailman.nginx.org/mailman/listinfo/nginx > > 39. mailto:nginx at nginx.org > > 40. http://mailman.nginx.org/mailman/listinfo/nginx > > 41. mailto:zjay1987 at gmail.com > > 42. https://github.com/taobao/tengine/pull/91 > > 43. mailto:yaoweibin at gmail.com > > 44. https://github.com/taobao/tengine/pull/91 > > 45. mailto:zjay1987 at gmail.com > > 46. mailto:nginx at nginx.org > > 47. http://mailman.nginx.org/mailman/listinfo/nginx > > 48. mailto:nginx at nginx.org > > 49. http://mailman.nginx.org/mailman/listinfo/nginx > > 50. mailto:nginx at nginx.org > > 51. http://mailman.nginx.org/mailman/listinfo/nginx > > 52. mailto:nginx at nginx.org > > 53. http://mailman.nginx.org/mailman/listinfo/nginx > > 54. mailto:nginx at nginx.org > > 55. http://mailman.nginx.org/mailman/listinfo/nginx > > 56. mailto:nginx at nginx.org > > 57. http://mailman.nginx.org/mailman/listinfo/nginx > > 58. mailto:nginx at nginx.org > > 59. http://mailman.nginx.org/mailman/listinfo/nginx > > 60. mailto:nginx at nginx.org > > 61. http://mailman.nginx.org/mailman/listinfo/nginx > > 62. mailto:pasik at iki.fi > > 63. > https://github.com/cfsego/limit_upload_rate/blob/master/for-nginx.patch > > 64. mailto:pasik at iki.fi > > 65. https://github.com/taobao/tengine/pull/91 > > 66. mailto:yaoweibin at gmail.com > > 67. https://github.com/taobao/tengine/pull/91 > > 68. mailto:zjay1987 at gmail.com > > 69. mailto:nginx at nginx.org > > 70. http://mailman.nginx.org/mailman/listinfo/nginx > > 71. mailto:nginx at nginx.org > > 72. http://mailman.nginx.org/mailman/listinfo/nginx > > 73. mailto:nginx at nginx.org > > 74. http://mailman.nginx.org/mailman/listinfo/nginx > > 75. 
mailto:zjay1987 at gmail.com > > 76. https://github.com/taobao/tengine/pull/91 > > 77. mailto:yaoweibin at gmail.com > > 78. https://github.com/taobao/tengine/pull/91 > > 79. mailto:zjay1987 at gmail.com > > 80. mailto:nginx at nginx.org > > 81. http://mailman.nginx.org/mailman/listinfo/nginx > > 82. mailto:nginx at nginx.org > > 83. http://mailman.nginx.org/mailman/listinfo/nginx > > 84. mailto:nginx at nginx.org > > 85. http://mailman.nginx.org/mailman/listinfo/nginx > > 86. mailto:nginx at nginx.org > > 87. http://mailman.nginx.org/mailman/listinfo/nginx > > 88. mailto:nginx at nginx.org > > 89. http://mailman.nginx.org/mailman/listinfo/nginx > > 90. mailto:pasik at iki.fi > > 91. https://github.com/taobao/tengine/pull/91 > > 92. mailto:yaoweibin at gmail.com > > 93. https://github.com/taobao/tengine/pull/91 > > 94. mailto:zjay1987 at gmail.com > > 95. mailto:nginx at nginx.org > > 96. http://mailman.nginx.org/mailman/listinfo/nginx > > 97. mailto:nginx at nginx.org > > 98. http://mailman.nginx.org/mailman/listinfo/nginx > > 99. mailto:nginx at nginx.org > > 100. http://mailman.nginx.org/mailman/listinfo/nginx > > 101. mailto:zjay1987 at gmail.com > > 102. https://github.com/taobao/tengine/pull/91 > > 103. mailto:yaoweibin at gmail.com > > 104. https://github.com/taobao/tengine/pull/91 > > 105. mailto:zjay1987 at gmail.com > > 106. mailto:nginx at nginx.org > > 107. http://mailman.nginx.org/mailman/listinfo/nginx > > 108. mailto:nginx at nginx.org > > 109. http://mailman.nginx.org/mailman/listinfo/nginx > > 110. mailto:nginx at nginx.org > > 111. http://mailman.nginx.org/mailman/listinfo/nginx > > 112. mailto:nginx at nginx.org > > 113. http://mailman.nginx.org/mailman/listinfo/nginx > > 114. mailto:nginx at nginx.org > > 115. http://mailman.nginx.org/mailman/listinfo/nginx > > 116. mailto:nginx at nginx.org > > 117. http://mailman.nginx.org/mailman/listinfo/nginx > > 118. mailto:nginx at nginx.org > > 119. 
-- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 323.gif Type: image/gif Size: 100 bytes Desc: not available URL: From nginx-forum at nginx.us Tue Feb 26 14:38:52 2013 From: nginx-forum at nginx.us (aschlosberg) Date: Tue, 26 Feb 2013 09:38:52 -0500 Subject: FastCGI stderr being truncated Message-ID: <85944138ac5597ceafdb5531e6c4a4bd.NginxMailingListEnglish@forum.nginx.org> Whilst investigating a series of 502 errors, I have noticed that related error messages from FastCGI stderr are being truncated, and at other times the error logging seems to fail partway. Example: 2013/02/25 09:33:14 [error] 2032#0: *29484 FastCGI sent in stderr: "o/au/com/***domain-censored***/_public/wp-includes/capabilities.php on line 1026 I came across a really old bug that I thought might be related - http://mailman.nginx.org/pipermail/nginx-ru/2010-March/032836.html - but as I'm running 1.2.7 that patch has already been applied. I run upwards of 60 domains on the same server and this is the only one having these issues. Can anyone please help me shed some light on the issue?
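One way to check whether such an entry is merely wrapped rather than truncated is to look at the lines that follow it in the error log, since long FastCGI stderr output can continue onto subsequent log lines without a timestamp prefix. A minimal simulation (the log content below is invented for illustration; the real path and message will differ):

```shell
# Simulate an nginx error log in which a long FastCGI stderr message
# wraps onto a second line without a timestamp prefix (hypothetical text).
log=$(mktemp)
cat > "$log" <<'EOF'
2013/02/25 09:33:14 [error] 2032#0: *29484 FastCGI sent in stderr: "PHP Warning: something went wrong
in /_public/wp-includes/capabilities.php on line 1026" while reading response header from upstream
2013/02/25 09:33:15 [error] 2032#0: *29485 unrelated entry
EOF
# Print each stderr entry together with the line that follows it:
grep -A1 'FastCGI sent in stderr' "$log"
```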
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236627,236627#msg-236627 From reallfqq-nginx at yahoo.fr Tue Feb 26 20:58:56 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 26 Feb 2013 15:58:56 -0500 Subject: Using Nginx auth in PHP scripts Message-ID: Hello, The only information I got on the Web was about protecting folders with Nginx auth, just like the plain old Apache .htaccess. I am already using AuthPlain to secure a folder with Nginx, with PHP scripts inside. I wonder whether it is possible to use Nginx auth information inside those PHP scripts. More precisely, I would have a directory protected with AuthPlain and I would have a script (let's call it index.php) being called when the authentication is successful. The AuthPlain setup has several users registered, and I would like to know if index.php can be aware of who logged in, to welcome him/her with his/her username. Am I dreaming too much? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 26 21:10:39 2013 From: nginx-forum at nginx.us (double) Date: Tue, 26 Feb 2013 16:10:39 -0500 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: <3b1a4f4f10723e48737378e2e40db823.NginxMailingListEnglish@forum.nginx.org> This would be an excellent feature! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,236640#msg-236640 From steve at greengecko.co.nz Tue Feb 26 21:15:01 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 27 Feb 2013 10:15:01 +1300 Subject: Using Nginx auth in PHP scripts In-Reply-To: References: Message-ID: <1361913301.15187.232.camel@steve-new> You can pass environment vars to PHP no problem... Eg: if you set up geoip in nginx.conf geoip_country /usr/share/GeoIP/GeoIP.dat; this makes $geoip_country_code available to your nginx config.
If you then set fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; then $_SERVER['GEOIP_COUNTRY_CODE'] is available to PHP. This can be done for any variable set within nginx. hth, Stve On Tue, 2013-02-26 at 15:58 -0500, B.R. wrote: > Hello, > > > The only information I got on the Web was to protect folders with > Nginx auth, just like the plein old Apache's .htaccess. > > > I am already using AuthPlain to secure a folder with Nginx, using PHP > scripts inside. > > I wonder if it was possible to use Nginx auth information inside those > PHP scripts. > > > More precisely, I would have a directory protected with AuthPlain and > I would have a script (let's call it index.php) being called when the > authentication is successful. > > The AuthPlain has several users registered and I would like to know if > index.php might be aware who logged in to welcom him with his/her > username. > > > Am I dreaming too much ? > --- > B. R. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From reallfqq-nginx at yahoo.fr Tue Feb 26 21:23:14 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 26 Feb 2013 16:23:14 -0500 Subject: Using Nginx auth in PHP scripts In-Reply-To: <1361913301.15187.232.camel@steve-new> References: <1361913301.15187.232.camel@steve-new> Message-ID: Thanks, I learned something there. But does the auth_basic module allow you to load the username into an Nginx variable? From what I understand, everything is processed internally. --- *B. R.* On Tue, Feb 26, 2013 at 4:15 PM, Steve Holdoway wrote: > You can pass environment vars to PHP no problem...
> > Eg: if you set up geoip in nginx.conf > > geoip_country /usr/share/GeoIP/GeoIP.dat; > > this makes $geoip_country_code available to your nginx config. > > If you then set > fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; > > then > > $_SERVER['GEOIP_COUNTRY_CODE'] is then available to php. > > This can be done for any variable set within nginx. > > hth, > > Stve > > > On Tue, 2013-02-26 at 15:58 -0500, B.R. wrote: > > Hello, > > > > > > The only information I got on the Web was to protect folders with > > Nginx auth, just like the plein old Apache's .htaccess. > > > > > > I am already using AuthPlain to secure a folder with Nginx, using PHP > > scripts inside. > > > > I wonder if it was possible to use Nginx auth information inside those > > PHP scripts. > > > > > > More precisely, I would have a directory protected with AuthPlain and > > I would have a script (let's call it index.php) being called when the > > authentication is successful. > > > > The AuthPlain has several users registered and I would like to know if > > index.php might be aware who logged in to welcom him with his/her > > username. > > > > > > Am I dreaming too much ? > > --- > > B. R. > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 26 21:30:25 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 26 Feb 2013 21:30:25 +0000 Subject: Using Nginx auth in PHP scripts In-Reply-To: References: <1361913301.15187.232.camel@steve-new> Message-ID: <20130226213025.GX32392@craic.sysops.org> On Tue, Feb 26, 2013 at 04:23:14PM -0500, B.R. 
wrote: Hi there, > But does the auth_basic > moduleallows > you to load the username in a Nginx variable? > From what I understand everything is processed internally. http://nginx.org/en/docs/http/ngx_http_core_module.html#variables $remote_user is what you want as the nginx variable, if you are using http basic authentication. f -- Francis Daly francis at daoine.org From steve at greengecko.co.nz Tue Feb 26 21:30:41 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 27 Feb 2013 10:30:41 +1300 Subject: Using Nginx auth in PHP scripts In-Reply-To: References: <1361913301.15187.232.camel@steve-new> Message-ID: <1361914241.15187.247.camel@steve-new> It's my understanding the $remote_user is available. Sorry, a poor example. Could have used one that answers your question directly... Steve On Tue, 2013-02-26 at 16:23 -0500, B.R. wrote: > Thanks, I learned something there. > > But does the auth_basic module allows you to load the username in a > Nginx variable? > From what I understand everything is processed internally. > > > --- > B. R. > > > On Tue, Feb 26, 2013 at 4:15 PM, Steve Holdoway > wrote: > You can pass environment vars to PHP no problem... > > Eg: if you set up geoip in nginx.conf > > geoip_country /usr/share/GeoIP/GeoIP.dat; > > this makes $geoip_country_code available to your nginx config. > > If you then set > fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; > > then > > $_SERVER['GEOIP_COUNTRY_CODE'] is then available to php. > > This can be done for any variable set within nginx. > > hth, > > Stve > > > On Tue, 2013-02-26 at 15:58 -0500, B.R. wrote: > > Hello, > > > > > > The only information I got on the Web was to protect folders > with > > Nginx auth, just like the plein old Apache's .htaccess. > > > > > > I am already using AuthPlain to secure a folder with Nginx, > using PHP > > scripts inside. > > > > I wonder if it was possible to use Nginx auth information > inside those > > PHP scripts. 
> > > > > > More precisely, I would have a directory protected with > AuthPlain and > > I would have a script (let's call it index.php) being called > when the > > authentication is successful. > > > > The AuthPlain has several users registered and I would like > to know if > > index.php might be aware who logged in to welcom him with > his/her > > username. > > > > > > Am I dreaming too much ? > > --- > > B. R. > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From reallfqq-nginx at yahoo.fr Tue Feb 26 22:26:49 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 26 Feb 2013 17:26:49 -0500 Subject: Using Nginx auth in PHP scripts In-Reply-To: <1361914241.15187.247.camel@steve-new> References: <1361913301.15187.232.camel@steve-new> <1361914241.15187.247.camel@steve-new> Message-ID: Thanks to both of you, I got the info I was seeking. However, when I try to activate the fastcgi_param, the content doesn't seem to reach PHP anymore. No error seems to be logged in either Nginx or PHP. I feel like I am a noob...
oO

server {
    listen      80;
    server_name ~ab.cd$;

    root  /var/web/$host;
    index index.html index.php;

    try_files $uri $uri/ =404;

    access_log /var/log/nginx/ab.cd/access.log main;
    error_log  /var/log/nginx/ab.cd/error.log warn;

    include conf.d/includes/fastcgi.conf;

    auth_basic           "Get out!";
    auth_basic_user_file /var/web/ab.cd.htpasswd;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm.sock;
        # If the following is activated, no more PHP output... Something is wrong
        # fastcgi_param MY_GREAT_USER $remote_user;
    }
}

--- *B. R.* On Tue, Feb 26, 2013 at 4:30 PM, Steve Holdoway wrote: > It's my understanding the $remote_user is available. > > Sorry, a poor example. Could have used one that answers your question > directly... > > Steve > > On Tue, 2013-02-26 at 16:23 -0500, B.R. wrote: > > Thanks, I learned something there. > > > > But does the auth_basic module allows you to load the username in a > > Nginx variable? > > From what I understand everything is processed internally. > > > > > > --- > > B. R. > > > > > > On Tue, Feb 26, 2013 at 4:15 PM, Steve Holdoway > > wrote: > > You can pass environment vars to PHP no problem... > > > > Eg: if you set up geoip in nginx.conf > > > > geoip_country /usr/share/GeoIP/GeoIP.dat; > > > > this makes $geoip_country_code available to your nginx config. > > > > If you then set > > fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; > > > > then > > > > $_SERVER['GEOIP_COUNTRY_CODE'] is then available to php. > > > > This can be done for any variable set within nginx. > > > > hth, > > > > Stve > > > > > > On Tue, 2013-02-26 at 15:58 -0500, B.R. wrote: > > > Hello, > > > > > > > > > The only information I got on the Web was to protect folders > with > > Nginx auth, just like the plein old Apache's .htaccess. > > > > > > > > > I am already using AuthPlain to secure a folder with Nginx, > using PHP > > scripts inside. > > > > I wonder if it was possible to use Nginx auth information > inside those > > PHP scripts.
> > > > > > > > > More precisely, I would have a directory protected with > > AuthPlain and > > > I would have a script (let's call it index.php) being called > > when the > > > authentication is successful. > > > > > > The AuthPlain has several users registered and I would like > > to know if > > > index.php might be aware who logged in to welcom him with > > his/her > > > username. > > > > > > > > > Am I dreaming too much ? > > > --- > > > B. R. > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > Steve Holdoway BSc(Hons) MIITP > > http://www.greengecko.co.nz > > Skype: sholdowa > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 26 22:35:21 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 26 Feb 2013 22:35:21 +0000 Subject: Using Nginx auth in PHP scripts In-Reply-To: References: <1361913301.15187.232.camel@steve-new> <1361914241.15187.247.camel@steve-new> Message-ID: <20130226223521.GY32392@craic.sysops.org> On Tue, Feb 26, 2013 at 05:26:49PM -0500, B.R. wrote: > I feel like I am a noob... oO The problem is due to how nginx directives are inherited to different levels. It's consistent within nginx, so once you learn it you can apply it to all directives which inherit. 
Just put your new fastcgi_param directive at the same level as the others (which are in the "include" file). So, either: > include conf.d/includes/fastcgi.conf; put "fastcgi_param" here, or > # fastcgi_param MY_GREAT_USER $remote_user; put "include" here. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Feb 27 04:02:57 2013 From: nginx-forum at nginx.us (xinghua_hi) Date: Tue, 26 Feb 2013 23:02:57 -0500 Subject: Proxy without buffering In-Reply-To: References: Message-ID: <5cb97ee66d79dff436d1df935eec09cd.NginxMailingListEnglish@forum.nginx.org> Hello, I have two questions. 1. Why does the backend have trouble parsing the request? Is it because parsing a huge request creates heavy load? 2. Why does haproxy in front of nginx work around the issue? I think nginx will still buffer the request to a temp file. Thanks a lot Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236649#msg-236649 From reallfqq-nginx at yahoo.fr Wed Feb 27 04:32:35 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 26 Feb 2013 23:32:35 -0500 Subject: Using Nginx auth in PHP scripts In-Reply-To: <20130226213025.GX32392@craic.sysops.org> References: <1361913301.15187.232.camel@steve-new> <20130226213025.GX32392@craic.sysops.org> Message-ID: Thanks for your great help! Now I think I got it all. Well, it was nonsense to put fastcgi out of the PHP location anyway... but I didn't know about the environment problem. I'll try to think about that next time I play with the Nginx configuration. Everything is OK now. --- *B. R.* On Tue, Feb 26, 2013 at 4:30 PM, Francis Daly wrote: > On Tue, Feb 26, 2013 at 04:23:14PM -0500, B.R. wrote: > > Hi there, > > > But does the auth_basic > > module allows > > you to load the username in a Nginx variable? > > From what I understand everything is processed internally. > > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > > $remote_user is what you want as the nginx variable, if you are using > http basic authentication.
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mauro.stettler at gmail.com Wed Feb 27 04:56:14 2013 From: mauro.stettler at gmail.com (Mauro Stettler) Date: Wed, 27 Feb 2013 12:56:14 +0800 Subject: rds-json generate json with index key on first level of array In-Reply-To: References: <4de1601a540fb9c15c347815472f941c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks for the reply agentzh, Ok, I'll probably implement this either on lua in the Nginx or on the client side inside the javascript then. Will have to do some tests of both solutions to decide. Regards, Mauro On Tue, Feb 26, 2013 at 4:03 AM, agentzh wrote: > Hello! > > On Mon, Feb 25, 2013 at 12:56 AM, Mauro Stettler wrote: > > Thanks, that would probably be possible. > > > > It's just that this URL is called very frequently, so I'm trying to do > > everything as resource efficient as possible. I am worried that parsing > the > > json in lua in Nginx and reformatting it would be too heavy on the server > > load. So I would have preferred if there is some way how I can tell the > > rds-json module to format it the way I want. > > > > No, ngx_rds_json does not support this very JSON structure. Because > there's tons of different possible JSON structures, I don't quite feel > like implementing complex configuration templates in ngx_rds_json in > pure C, which is a daunting task and adds complexity to this simple > module. > > I believe using ngx_lua for complicated formatting requirements is the > right way to go. > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Feb 27 06:49:27 2013 From: nginx-forum at nginx.us (amodpandey) Date: Wed, 27 Feb 2013 01:49:27 -0500 Subject: Want to access UNIX environment variable Message-ID: Hi, I have a system where I have defined the OS environment variable $ENV=prod or dev. I want to access this variable inside my nginx configuration. Please help. I have tried -g "env ENV" on the command line. But how do I access it? $ENV does not work? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236654,236654#msg-236654 From nginx-forum at nginx.us Wed Feb 27 08:24:57 2013 From: nginx-forum at nginx.us (double) Date: Wed, 27 Feb 2013 03:24:57 -0500 Subject: Proxy without buffering In-Reply-To: <5cb97ee66d79dff436d1df935eec09cd.NginxMailingListEnglish@forum.nginx.org> References: <5cb97ee66d79dff436d1df935eec09cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Example: Size of POST-request: 10 GB (e.g. HD video) Upload-time: 4h The front-server proxies the request to the upload-server. After the upload has finished, the uploading client wants a quick "all fine"-message (maximum 30 seconds). If the upload-server gets the stream directly, the machine has 4h to parse the request (e.g. "upload-module" or "apache-fastcgi-module"). And this does not overload the system. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236656#msg-236656 From nginx-forum at nginx.us Wed Feb 27 09:56:07 2013 From: nginx-forum at nginx.us (traquila) Date: Wed, 27 Feb 2013 04:56:07 -0500 Subject: proxy_cache_path with huge zone size Message-ID: <91d068f7e7ea93bbba2ffa6942844e60.NginxMailingListEnglish@forum.nginx.org> Hello, I have some trouble configuring my reverse proxy cache. I would like to use a proxy cache of about 5 TB. If I estimate an average file size of 128KB, I need to define my zone size to be about 5GB.
(5 TB / (128KB / 128)) Here is the configuration line: proxy_cache_path /opt/.../hdd_storage/cache levels=1:2 keys_zone=proxyCacheZone_hdd:5000m max_size=5000g inactive=1d; Now, as I have 16 CPUs, I defined 16 worker_processes. The problem is, for each worker an allocation of 5 GB is done, for a total of 80 GB of RAM usage!!! Have I misunderstood something? Is there a way to use shared memory between workers for the zone cache? Thank you in advance Traquila Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236662,236662#msg-236662 From igor at sysoev.ru Wed Feb 27 09:59:51 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 27 Feb 2013 13:59:51 +0400 Subject: proxy_cache_path with huge zone size In-Reply-To: <91d068f7e7ea93bbba2ffa6942844e60.NginxMailingListEnglish@forum.nginx.org> References: <91d068f7e7ea93bbba2ffa6942844e60.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7C68EBD6-CA36-4312-ABD7-A347A9DA9DDE@sysoev.ru> On Feb 27, 2013, at 13:56 , traquila wrote: > Hello, > > I have some troubles to configure my reverse proxy cache. > > I would like to use a proxy cache with about 5 TB. > If I estimate an average file size of 128KB, I need to define my zone size > to about 5GB. (5 TB / (128KB / 128)) > > Here the configuration line: > proxy_cache_path /opt/.../hdd_storage/cache levels=1:2 > keys_zone=proxyCacheZone_hdd:5000m max_size=5000g inactive=1d; > Now, as I have 16 CPUs, I defined 16 worker_process. > > The problem is, for each worker an allocation of 5 GB is done for a total of > 80 GB of ram usage!!! > > Do I have misunderstood somethig? > Is there a way to used a shared memory between workers for the zone cache? Those 5 GB are shared memory.
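The arithmetic behind traquila's estimate can be checked quickly. A rough sketch (assuming the commonly documented figure of about 8,000 cache keys per megabyte of keys_zone; the exact per-key overhead varies by platform):

```python
# Rough keys_zone sizing check for a 5 TB cache of ~128 KB objects.
# Assumption: ~8000 cache keys fit in 1 MB of shared zone memory,
# the approximation given in the proxy_cache_path documentation.
cache_bytes = 5 * 1024**4        # 5 TB total cache size
avg_object  = 128 * 1024         # 128 KB average cached file
keys        = cache_bytes // avg_object
zone_mb     = keys / 8000        # megabytes of keys_zone needed

print(f"{keys} keys -> ~{zone_mb:.0f} MB zone")
# The zone is allocated once in shared memory and mapped into every
# worker, so 16 workers still use ~5 GB total, not 16 x 5 GB.
```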
-- Igor Sysoev From nginx-forum at nginx.us Wed Feb 27 10:16:48 2013 From: nginx-forum at nginx.us (traquila) Date: Wed, 27 Feb 2013 05:16:48 -0500 Subject: proxy_cache_path with huge zone size In-Reply-To: <7C68EBD6-CA36-4312-ABD7-A347A9DA9DDE@sysoev.ru> References: <7C68EBD6-CA36-4312-ABD7-A347A9DA9DDE@sysoev.ru> Message-ID: <70551fe919f5d1a2436b1fc68810261a.NginxMailingListEnglish@forum.nginx.org> Thank you, and sorry for my mistake. You have done a great job! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236662,236664#msg-236664 From nginx-forum at nginx.us Wed Feb 27 10:36:06 2013 From: nginx-forum at nginx.us (xinghua_hi) Date: Wed, 27 Feb 2013 05:36:06 -0500 Subject: Proxy without buffering In-Reply-To: References: <5cb97ee66d79dff436d1df935eec09cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, thanks for your reply, but I am still confused about the haproxy workaround. I know haproxy does not buffer the request, but if haproxy is used in front of nginx, nginx will still buffer the request to disk when haproxy passes it on. Thanks a lot Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236665#msg-236665 From mdounin at mdounin.ru Wed Feb 27 10:40:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Feb 2013 14:40:30 +0400 Subject: FastCGI stderr being truncated In-Reply-To: <85944138ac5597ceafdb5531e6c4a4bd.NginxMailingListEnglish@forum.nginx.org> References: <85944138ac5597ceafdb5531e6c4a4bd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130227104030.GR81985@mdounin.ru> Hello! On Tue, Feb 26, 2013 at 09:38:52AM -0500, aschlosberg wrote: > Whilst investigating a series of 502 errors I have noticed that related > error messages from FastCGI stderr are being truncated and at other times > the error logging seems to fail part way.
> > Example: > 2013/02/25 09:33:14 [error] 2032#0: *29484 FastCGI sent in stderr: > "o/au/com/***domain-censored***/_public/wp-includes/capabilities.php on line > 1026 > > I came across a really old bug that I thought may have been related - > http://mailman.nginx.org/pipermail/nginx-ru/2010-March/032836.html - but as > I'm running 1.2.7 this patch has been implemented. > > I run upwards of 60 domains on the same server and this is the only one that > is having these issues. Can anyone please help me shed some light on the > issue? Try looking at the following lines. The above error from fastcgi looks like a multiline one. -- Maxim Dounin http://nginx.com/support.html From contact at jpluscplusm.com Wed Feb 27 10:50:46 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 27 Feb 2013 10:50:46 +0000 Subject: Want to access UNIX environment variable In-Reply-To: References: Message-ID: On 27 February 2013 06:49, amodpandey wrote: > Hi, > > I have system where I have defined OS environment variable $ENV=prod or dev. > I want to access this variable inside my nginx configuration. Please help. > > I have tried -g "env ENV" in command line. But how do I access it? $ENV does > not work? No, this isn't supported - both specifically what you tried, and the more general "envvars in config" concept. You'll have to find another way to achieve what you want. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Wed Feb 27 11:25:52 2013 From: nginx-forum at nginx.us (double) Date: Wed, 27 Feb 2013 06:25:52 -0500 Subject: Proxy without buffering In-Reply-To: References: <5cb97ee66d79dff436d1df935eec09cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <74c14ca918330f0cd3185b83c745cfce.NginxMailingListEnglish@forum.nginx.org> haproxy does not buffer the request - it proxies immediately. 
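Worth noting for later readers: nginx itself eventually gained this behaviour natively. Since version 1.7.11 (which did not exist when this thread was written), request buffering can be switched off per location; a minimal sketch, with an illustrative upstream name:

```nginx
location /upload {
    # Disable request body buffering (nginx >= 1.7.11); the body is
    # streamed to the upstream as it arrives instead of being spooled
    # to a temp file first.
    proxy_request_buffering off;
    proxy_http_version      1.1;
    proxy_pass              http://upload_backend;   # illustrative upstream name
}
```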
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236668#msg-236668 From nginx-forum at nginx.us Wed Feb 27 11:40:27 2013 From: nginx-forum at nginx.us (Varix) Date: Wed, 27 Feb 2013 06:40:27 -0500 Subject: Virtualhosts and map In-Reply-To: References: Message-ID: Hallo Jonathan, I have some problems with the English language. This is bad in the IT sector. My problem is, I can't find a complete example of how to do that, for education. All I find is the "old way" with sites-available and sites-enabled, which I have done for years. In January I changed nginx to the new version. My files in the folder sites-available: default example1.com example2.com example3.com My default file: # default server # server { listen 80 default_server; server_name _; access_log logs/default/default.access.log; error_log logs/default/default.error.log; root /www/default; location / { root /www/default; index index.html index.htm; } location /i/ { alias /www/123/; } # redirect server error pages to the static page /40x.html # error_page 400 401 402 403 404 /40x.html; location = /40x.html { root /www/default/html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /www/default/html; } If the IP (xxx.xxx.xxx.xxx) is typed into the browser, it shows only the domain example1.com. After a few days I found the reason for this: Change: now if the "include" directive with mask is used on Unix systems, included files are sorted in alphabetical order. My solution for this is to make a new big nginx.conf. In the part with the server blocks the last one is the default and all is OK. This solution is OK for a few domains, but not for many domains. I read that this can be done with map, but I can't find an example for this. Now I am looking for examples of how I can do that with map. This from the nginx docs is not enough for me.
map $http_host $name { hostnames; default 0; example.com 1; *.example.com 1; example.org 2; *.example.org 2; .example.net 3; wap.* 4; } Varix Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236611,236669#msg-236669 From mdounin at mdounin.ru Wed Feb 27 13:03:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Feb 2013 17:03:54 +0400 Subject: Virtualhosts and map In-Reply-To: References: Message-ID: <20130227130354.GT81985@mdounin.ru> Hello! On Wed, Feb 27, 2013 at 06:40:27AM -0500, Varix wrote: > Hallo Jonathan, > > I have some proplems with the english language. This is bad in the IT > section. > > My problem is, i can't find an complett example to do that for education. > All I find is > the "old Way" with sites-availbled and sites-enabled, what I have done for > years. > > In January I chance nginx to the new version. > > My files in the folder sites-availabled > default > example1.com > example2.com > example3.com > > > My default file > > # default server > # > > server { > listen 80 default_server; > server_name _; > access_log logs/default/default.access.log; > error_log logs/default/default.error.log; > > root /www/default; > > location / { > root /www/default; > index index.html index.htm; > } > > location /i/ { > alias /www/123/; > } > > # redirect server error pages to the static page /40x.html > # > > error_page 400 401 402 403 404 /40x.html; > location = /40x.html { > root /www/default/html; > } > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /www/default/html; > } > > > If the IP (xxx.xxx.xxx.xxx) is type in the browser, it shows only domain > example1.com. > > After a few days I found the reason for this: > > Change: now if the "include" directive with mask is used on Unix > systems, included files are sorted in alphabetical order. 
This can't be a reason as long as you have "default_server" parameter of the "listen" directive properly set for listen sockets used. > My solution for this is to make a new big nginx.conf > In the part with the serverblocks the latest is the default one and all is > OK. > This solution is OK for a few domains, but not for many domains. > I read that this can be done with map. But I can't find an example for > this. > > Now I am looking for examples how I can do that with map. > > This from the nginx docu is not enough for me. > > map $http_host $name { > hostnames; > > default 0; > > example.com 1; > *.example.com 1; > example.org 2; > *.example.org 2; > .example.net 3; > wap.* 4; > } Maps may be used to handle multiple domains in one server block, e.g. to set document root depending on a domain: map $host $server_root { hostnames; default /www/default; foo.example.com /www/foo; foo.example.org /www/foo; bar.example.com /www/bar; } server { listen 80; server_name foo.example.com foo.example.org bar.example.com ...; root $server_root; ... } This approach may be usable if you have mostly identical handling for many domains, with some minor differences which can be handled using variables. It only makes sense if you have really many domains (thousands of), and can't afford distinct server{} blocks for them. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Wed Feb 27 15:42:48 2013 From: nginx-forum at nginx.us (Varix) Date: Wed, 27 Feb 2013 10:42:48 -0500 Subject: Virtualhosts and map In-Reply-To: <20130227130354.GT81985@mdounin.ru> References: <20130227130354.GT81985@mdounin.ru> Message-ID: <54d89a656e4f36b498d191eb7dbc60fa.NginxMailingListEnglish@forum.nginx.org> Hallo Maxim Dounin, thanks for your answer. It helps me. >This can't be a reason as long as you have "default_server" >parameter of the "listen" directive properly set for listen >sockets used. My thought was, the default file is the first alphabetical file in the folder.
And the next domain file is named do*.*. When I made the big nginx.conf for the first time I had the same problem. I had the default as the first server block. Then I moved this server block to the end of the server blocks and all was OK. This alphabetical sort managed something in another way, I thought. I will test some things in the next days. Thanks Varix Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236611,236677#msg-236677 From nginx-forum at nginx.us Wed Feb 27 17:47:26 2013 From: nginx-forum at nginx.us (xinghua_hi) Date: Wed, 27 Feb 2013 12:47:26 -0500 Subject: Proxy without buffering In-Reply-To: <74c14ca918330f0cd3185b83c745cfce.NginxMailingListEnglish@forum.nginx.org> References: <5cb97ee66d79dff436d1df935eec09cd.NginxMailingListEnglish@forum.nginx.org> <74c14ca918330f0cd3185b83c745cfce.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9aab789f934d54d4afcd391ef5140bf2.NginxMailingListEnglish@forum.nginx.org> Hello, as you said, you use haproxy in front of nginx, which means the request will be proxied by haproxy first, and then be proxied by nginx. Although haproxy can proxy the request immediately, nginx can not pass the request on the fly; it will buffer the request again. Have I made some mistake? thanks a lot Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236695#msg-236695 From nginx-forum at nginx.us Wed Feb 27 17:48:44 2013 From: nginx-forum at nginx.us (Varix) Date: Wed, 27 Feb 2013 12:48:44 -0500 Subject: split-clients for vhosts how? In-Reply-To: <3f993ad563c4f8fa4b0f8e56ba4da26b.NginxMailingListEnglish@forum.nginx.org> References: <3f993ad563c4f8fa4b0f8e56ba4da26b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3223c38cc78b18dbf614b10a278d92a5.NginxMailingListEnglish@forum.nginx.org> Or the same question asked in another way. For example I have a server with 10 virtual domains.
vhost 1, example.com with 5,000 visitors a month vhost 2, example.org with 300,000 visitors a month vhost 3, example.net with 10,000 visitors a month vhost 4, example.de with 20,0000 visitors a month vhost 5, example.fr with 800,000 visitors a month vhost 6, example.nl with 350,000 visitors a month vhost 7, example2.com with 20,000 visitors a month vhost 8, example3.com with 2,000 visitors a month vhost 9, example4.com with 175,000 visitors a month vhost 10, example5.com with 50,000 visitors a month We will split-test on vhosts 1, 6, 8 and 9. vhost 1 with two sites 50:50 vhost 6 with three sites 33.33:33.33:33.33 vhost 8 with two sites 50:50 vhost 9 with four sites 30:30:20:20 Where and how is the configuration for each domain? What does "AAA" mean in the first line? split_clients "${remote_addr}AAA" $variant { 0.5% .one; 2.0% .two; * ""; } Varix Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236593,236678#msg-236678 From nginx-forum at nginx.us Wed Feb 27 19:07:24 2013 From: nginx-forum at nginx.us (double) Date: Wed, 27 Feb 2013 14:07:24 -0500 Subject: Proxy without buffering In-Reply-To: <9aab789f934d54d4afcd391ef5140bf2.NginxMailingListEnglish@forum.nginx.org> References: <5cb97ee66d79dff436d1df935eec09cd.NginxMailingListEnglish@forum.nginx.org> <74c14ca918330f0cd3185b83c745cfce.NginxMailingListEnglish@forum.nginx.org> <9aab789f934d54d4afcd391ef5140bf2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2feb8c91afe7dabd12e11e330363e4f1.NginxMailingListEnglish@forum.nginx.org> Machine "front-end": haproxy (port 80) -> proxies to "port 8080" and "machine upload" nginx (port 8080) Machine "upload": nginx (port 80) via "front-end" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,236699#msg-236699 From nginx-forum at nginx.us Thu Feb 28 04:26:16 2013 From: nginx-forum at nginx.us (korneil) Date: Wed, 27 Feb 2013 23:26:16 -0500 Subject: include_shell Message-ID: I needed a small feature for my own purposes and pushed it to GitHub: include config
generated by a program at nginx startup. https://github.com/korneil/ngx_include_shell_module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236702,236702#msg-236702 From nginx-forum at nginx.us Thu Feb 28 06:04:57 2013 From: nginx-forum at nginx.us (amodpandey) Date: Thu, 28 Feb 2013 01:04:57 -0500 Subject: Want to access UNIX environment variable In-Reply-To: References: Message-ID: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> Let me put what I want to achieve. Before that I want to know what is this 'env' global directive for? I want to maintain a single set of configuration files for dev and prod. The machines would have the environment variable set as ENV=dev or ENV=prod. Based on the value of the ENV variable I wanted to pick my upstream conf, either upstream_dev.conf or upstream_prod.conf, through 'include upstream_$ENV.conf'. Please suggest an approach. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236654,236704#msg-236704 From zs at enternewmedia.com Thu Feb 28 06:14:20 2013 From: zs at enternewmedia.com (Zachary Stern) Date: Thu, 28 Feb 2013 01:14:20 -0500 Subject: Want to access UNIX environment variable In-Reply-To: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> References: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Have puppet generate the configuration based on that variable. -- Snet form my moblie phoen. Please excues tpyos. On Feb 28, 2013 1:05 AM, "amodpandey" wrote: > Let me put what I want to achieve. > > Before that I want to know what is this 'env' global directive for? > > I want to maintain a single set of configuration files for dev and prod. > The > machines would have the environment variable set as ENV=dev or ENV=prod. Based on > the value of the ENV variable I wanted to pick my upstream conf, either > upstream_dev.conf or upstream_prod.conf, through 'include > upstream_$ENV.conf'. > > Please suggest an approach.
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,236654,236704#msg-236704 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 28 06:22:33 2013 From: nginx-forum at nginx.us (amodpandey) Date: Thu, 28 Feb 2013 01:22:33 -0500 Subject: Want to access UNIX environment variable In-Reply-To: References: Message-ID: <27f59c792f1cde8d9cfcff61b589886a.NginxMailingListEnglish@forum.nginx.org> Thank you. I am looking for a simpler (direct) approach. For now I have put a sed script in my nginx bounce script which does that. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236654,236706#msg-236706 From nginx-forum at nginx.us Thu Feb 28 08:33:06 2013 From: nginx-forum at nginx.us (gmor) Date: Thu, 28 Feb 2013 03:33:06 -0500 Subject: Exchange / Outlook - RPC Method and Error 405 Message-ID: <01a0c9fbb6c39609b3ab81a237af4aeb.NginxMailingListEnglish@forum.nginx.org> Hi all, I've tried to do some research on this already, but without much luck. So I'm hoping others may be able to assist. I'm trying to use Nginx as a Reverse Proxy back to an Exchange 2007 environment. - Nginx is terminating the HTTPS / SSL connection for the Client - Nginx is then proxying to the Exchange environment over HTTP As others have found, this appears to be working fine for all Exchange Services, with the exception of 'Outlook Anywhere', which is also known as Outlook RPC/HTTP(S).
To better define the problem, an Exchange Client Access Server hosts the following Virtual Directories (or services) in IIS7: /Autodiscover - For devices to automatically configure their connection settings /owa - "Outlook Web Access" - Basically webmail /OAB - "Offline Address Book" - Where clients can download copies of the offline address book /EWS - "Exchange Web Services" - Good question as to what this is! /Public - "Public Folders" - Connection point for Public Folder access /Microsoft-Server-ActiveSync - Used with devices for ActiveSync /Rpc - "Outlook Anywhere" - Outlook connecting using RPC/HTTP(S) - This is the problematic one. (plus a few legacy and admin services) So, I have everything working through Nginx, with the exception of RPC/HTTP(S). Initially I was seeing this error: 10.110.2.15 - username [27/Feb/2013:17:24:31 +0000] "RPC_IN_DATA /rpc/rpcproxy.dll?EX-SERVER-2008.servers.null.org:6002 HTTP/1.1" 413 198 "-" "MSRPC" "-" After a bit of reading, this was resolved with the following directive: http { client_max_body_size 0; } (Yes, I know 0=unlimited and that may not be appropriate, but I'm still testing!) So now all that I'm left with is trying to resolve this error: 10.110.2.15 - username [28/Feb/2013:07:39:27 +0000] "RPC_OUT_DATA /rpc/rpcproxy.dll?EX-SERVER-2008.servers.null.or:6004 HTTP/1.1" 405 172 "-" "MSRPC" "-" 10.110.2.15 - username [28/Feb/2013:07:39:27 +0000] "RPC_IN_DATA /rpc/rpcproxy.dll?EX-SERVER-2008.servers.null.org:6004 HTTP/1.1" 405 172 "-" "MSRPC" "-" With my very limited knowledge, Error 405 is "Method Not Allowed". I've seen various solutions which suggest changing the 'error_page 405' directive to different things. Such as: location / { error_page 405 = @app; try_files $uri @app; } location @app { proxy_pass http://app_servers; } But these don't seem to solve the issue. So my questions are: 1. How can I allow this Method, if that is the issue? 2. If what I'm doing is fundamentally not possible, please just let me know!
For Reference: I'm running Nginx v1.2.7 from the nginx Repo on Centos 6.3 My main Nginx Config looks like this (Some of the names and IPs have been changed to protect the innocent): upstream exchange_all { ip_hash; server 10.1.1.1 max_fails=1 fail_timeout=10s; server 10.1.1.2 max_fails=1 fail_timeout=10s; # Do NOT Remove - this is needed for auth to work keepalive 32; } server { listen 10.2.1.1; return 301 https://webmail.null.com$request_uri; } server { listen 10.2.1.1:443 ; ssl on; ssl_certificate /etc/ssl/webmail.aeltc.com.crt; ssl_certificate_key /etc/ssl/ae-lb02-key.pem; ssl_session_cache shared:SSL:60m; ssl_session_timeout 60m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers RC4:HIGH:!aNULL:!MD5; proxy_redirect off; proxy_buffering off; proxy_read_timeout 3600; proxy_pass_header Date; proxy_pass_header Server; proxy_set_header Connection ""; proxy_set_header Accept-Encoding ""; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto off; add_header Front-End-Https on; proxy_http_version 1.1; location ~*/Autodiscover { proxy_pass http://exchange_all; } location ~*/owa { proxy_pass http://exchange_all; } location ~*/OAB { proxy_pass http://exchange_all; } # location ~*/rpc/ { location ~*/rpc/rpcproxy\.dll\? 
{ proxy_pass http://exchange_all; } location ~*/EWS { proxy_pass http://exchange_all; } location ~*/Public { proxy_pass http://exchange_all; } location ~*/Microsoft-Server-ActiveSync { proxy_pass http://exchange_all; } location ~*/$ { return 301 https://webmail.null.com/owa; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236709,236709#msg-236709 From contact at jpluscplusm.com Thu Feb 28 10:54:16 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 28 Feb 2013 10:54:16 +0000 Subject: Want to access UNIX environment variable In-Reply-To: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> References: <428dde917203bebddf25f1b01f33847d.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 28 February 2013 06:04, amodpandey wrote: > Let me put what I want to achieve. > > Before that I want to know what is this 'env' global directive for? I don't know. The documentation suggests it's not going to solve the problem you want; but I've not used it. > I want to maintain a single set of configuration files for dev and prod. The > machines would have environment variable set ENV=dev or ENV=prod. Based on > the values of the ENV varibale I wanted to pick my upstream conf, either > upstream_dev.conf or upstream_prod.conf through 'include > upstream_$ENV.conf'. > > Please suggest an approach. Use something that helps you template your config files. Chef, puppet, Erb, Perl, bash - whatever works for you. [ However, I will suggest that rolling your own templating setup is a strong indicator of an inexperienced sysadmin; one who doesn't yet understand the value of consistency, the principle of least surprise, and why NIH is a Bad Thing. 
] HTH, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nick_ik at mail.ru Thu Feb 28 11:50:19 2013 From: nick_ik at mail.ru (Nick) Date: Thu, 28 Feb 2013 15:50:19 +0400 Subject: proxy_cache and internal redirect Message-ID: <1362052219.499794677@f54.mail.ru> Hello. Can you please tell how to enable caching of responses with 'X-Accel-Redirect' headers. Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Feb 28 12:26:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Feb 2013 16:26:40 +0400 Subject: Exchange / Outlook - RPC Method and Error 405 In-Reply-To: <01a0c9fbb6c39609b3ab81a237af4aeb.NginxMailingListEnglish@forum.nginx.org> References: <01a0c9fbb6c39609b3ab81a237af4aeb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130228122640.GF81985@mdounin.ru> Hello! On Thu, Feb 28, 2013 at 03:33:06AM -0500, gmor wrote: [...] > 1. How can I allow this Method, if that is the issue? > 2. If what I'm doing is fundamentally not possible, please just let me > know! The 405 error, at least if returned by nginx, means that the module which handled the request doesn't understand the request method used. E.g. the static module will only handle GET and HEAD, and will return 405 to anything else as it doesn't know how to handle other methods. The proxy module, in contrast, just proxies the request and doesn't care which method was used. So first of all I would suggest making sure requests are handled in a location with proxy_pass properly configured. A simple way to do this would be to just throw away all funny regexp locations you wrote in your config, and start with a simple location / { proxy_pass http://backend; } to handle all requests. Using the debug log might also be helpful, see http://nginx.org/en/debugging_log.html. (Note: I'm not sure this will work at all, I've never tried.
I would rather suppose it won't work, as RPC likely tries to establish in/out data streams. I wouldn't expect a stream from a client to a backend to work as nginx will insist on reading a request body before it will be passed to the backend.) [...] > upstream exchange_all { > ip_hash; > server 10.1.1.1 max_fails=1 fail_timeout=10s; > server 10.1.1.2 max_fails=1 fail_timeout=10s; > > # Do NOT Remove - this is needed for auth to work > keepalive 32; Just a side note: This is wrong - there is no guarantee that the same upstream server connection will be used for requests from a particular client. While with keepalive Integrated Windows Authentication might appear to work, it in fact doesn't. You should switch to Basic authentication instead. See http://en.wikipedia.org/wiki/Integrated_Windows_Authentication for more details. [...] -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Feb 28 14:15:52 2013 From: nginx-forum at nginx.us (gmor) Date: Thu, 28 Feb 2013 09:15:52 -0500 Subject: Exchange / Outlook - RPC Method and Error 405 In-Reply-To: <20130228122640.GF81985@mdounin.ru> References: <20130228122640.GF81985@mdounin.ru> Message-ID: Hi, Thanks for the quick response. I've done what you suggested with the following results: >A >simple way to do this would be to just throw away all funny >regexp locations you wrote in your config, and start with a simple >location / { >proxy_pass http://backend; >} Absolutely happy to. Now I'm seeing slightly different behaviour.
No more 405 errors, the Outlook client just hangs now, eventually reporting 'Server not Available' In the debug log, I'm now seeing the following: 2013/02/28 13:56:55 [info] 3150#0: *1 client prematurely closed connection, client: 10.110.2.15, server: , request: "RPC_IN_DATA /rpc/rpcproxy.dll?EX-SERVER-2008.servers.null.org:6002 HTTP/1.1", host: "webmail.null.com" 2013/02/28 13:56:55 [info] 3150#0: *2 client prematurely closed connection, so upstream connection is closed too (104: Connection reset by peer) while reading upstream, client: 10.110.2.15, server: , request: "RPC_OUT_DATA /rpc/rpcproxy.dll?EX-SERVER-2008.servers.null.org:6002 HTTP/1.1", upstream: "http://10.1.1.2:80/rpc/rpcproxy.dll?EX-MBX-2008.servers.aeltc.org:6002", host: "webmail.null.com" Which is me eventually cancelling the Outlook connection. This is evident from tailing the debug log - nothing during the connection; just these events when the connection attempt is cancelled. So... Good news, Error 405 doesn't seem to be the cause of my issues, it would appear that my location directives were wrong in some way. Bad news, Outlook Anywhere still isn't working. If anyone has any other suggestions, please let me know. Thanks, Graham. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236709,236722#msg-236722 From nginx-forum at nginx.us Thu Feb 28 15:16:51 2013 From: nginx-forum at nginx.us (double) Date: Thu, 28 Feb 2013 10:16:51 -0500 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> > I know nginx team are working on it. You can wait for it. Hopefully they will find a solution!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,236723#msg-236723 From mdounin at mdounin.ru Thu Feb 28 15:20:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Feb 2013 19:20:03 +0400 Subject: Exchange / Outlook - RPC Method and Error 405 In-Reply-To: References: <20130228122640.GF81985@mdounin.ru> Message-ID: <20130228152003.GJ81985@mdounin.ru> Hello! On Thu, Feb 28, 2013 at 09:15:52AM -0500, gmor wrote: > Hi, > > Thanks for the quick response. I've done what you suggested with the > following results: > > >A > >simple way to do this would be to just throw away all funny > >regexp locations you wrote in your config, and start with a simple > > >location / { > >proxy_pass http://backend; > >} > > Absolutely happy to. > > Now I'm seeing slightly different behaviour. No more 405 errors, the Outlook > client just hangs now, eventually reporting 'Server not Available' > > In the debug log, I'm now seeing the following: > > 2013/02/28 13:56:55 [info] 3150#0: *1 client prematurely closed connection, > client: 10.110.2.15, server: , request: "RPC_IN_DATA > /rpc/rpcproxy.dll?EX-SERVER-2008.servers.null.org:6002 HTTP/1.1", host: > "webmail.null.com" > 2013/02/28 13:56:55 [info] 3150#0: *2 client prematurely closed connection, > so upstream connection is closed too (104: Connection reset by peer) while > reading upstream, client: 10.110.2.15, server: , request: "RPC_OUT_DATA > /rpc/rpcproxy.dll?EX-SERVER-2008.servers.null.org:6002 HTTP/1.1", upstream: > "http://10.1.1.2:80/rpc/rpcproxy.dll?EX-MBX-2008.servers.aeltc.org:6002", > host: "webmail.null.com" > > Which is me eventually cancelling the Outlook connection. This is evident > from tailing the debug log - Nothing during the connection; but these events > which the connection attempt is cancelled. 
This is what I've talked about - the RPC client tries to establish a data stream, presumably by faking a big request body, but fails because nginx actually waits for the body before passing the request to an upstream server. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Feb 28 16:35:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Feb 2013 20:35:41 +0400 Subject: proxy_cache and internal redirect In-Reply-To: <1362052219.499794677@f54.mail.ru> References: <1362052219.499794677@f54.mail.ru> Message-ID: <20130228163541.GN81985@mdounin.ru> Hello! On Thu, Feb 28, 2013 at 03:50:19PM +0400, Nick wrote: > Hello. > Can you please tell how to enable caching of responses with 'X-Accel-Redirect' headers. > Nick. You may do so by using proxy_ignore_headers X-Accel-Redirect; This will prevent nginx from doing a redirect based on X-Accel-Redirect though. See http://nginx.org/r/proxy_ignore_headers for details. -- Maxim Dounin http://nginx.org/en/donation.html From andre.cruz at co.sapo.pt Thu Feb 28 17:36:23 2013 From: andre.cruz at co.sapo.pt (André Cruz) Date: Thu, 28 Feb 2013 17:36:23 +0000 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> References: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> I'm also very interested in being able to configure nginx to NOT proxy the entire request. Regarding this patch, https://github.com/alibaba/tengine/pull/91, is anything fundamentally wrong with it? I don't understand Chinese so I'm at a loss here... Best regards, André Cruz On Feb 28, 2013, at 3:16 PM, double wrote: >> I know nginx team are working on it. You can wait for it. > > Hopefully they will find a solution!
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,236723#msg-236723 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Feb 28 18:12:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Feb 2013 22:12:47 +0400 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> References: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> Message-ID: <20130228181246.GT81985@mdounin.ru> Hello! On Thu, Feb 28, 2013 at 05:36:23PM +0000, Andr? Cruz wrote: > I'm also very interested in being able to configure nginx to NOT > proxy the entire request. > > Regarding this patch, > https://github.com/alibaba/tengine/pull/91, is anything > fundamentally wrong with it? I don't understand Chinese so I'm > at a loss here... As a non-default mode of operation the aproach taken is likely good enough (not looked into details), but the patch won't work with current nginx versions - at least it needs (likely major) adjustments to cope with changes introduced during work on chunked request body support as available in nginx 1.3.9+. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Thu Feb 28 18:51:03 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 28 Feb 2013 22:51:03 +0400 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> References: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> Message-ID: <201302282251.03866.vbart@nginx.com> On Thursday 28 February 2013 21:36:23 Andr? Cruz wrote: > I'm also very interested in being able to configure nginx to NOT proxy the > entire request. > [...] Actually, you can. 
http://nginx.org/r/proxy_set_body http://nginx.org/r/proxy_pass_request_body wbr, Valentin V. Bartenev From andre.cruz at co.sapo.pt Thu Feb 28 19:02:46 2013 From: andre.cruz at co.sapo.pt (André Cruz) Date: Thu, 28 Feb 2013 19:02:46 +0000 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: <201302282251.03866.vbart@nginx.com> References: <0b978a9f636d364e49e79bf5e91418bb.NginxMailingListEnglish@forum.nginx.org> <88C197B9-A5A4-4201-999B-4995243FE204@co.sapo.pt> <201302282251.03866.vbart@nginx.com> Message-ID: On Feb 28, 2013, at 6:51 PM, Valentin V. Bartenev wrote: > On Thursday 28 February 2013 21:36:23 André Cruz wrote: >> I'm also very interested in being able to configure nginx to NOT proxy the >> entire request. >> > [...] > > Actually, you can. > > http://nginx.org/r/proxy_set_body > http://nginx.org/r/proxy_pass_request_body I've probably explained myself wrong. What I want is for nginx to buffer only chunks of the request body and pass these chunks to the upstream server as they arrive. André From reallfqq-nginx at yahoo.fr Thu Feb 28 22:36:59 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 28 Feb 2013 17:36:59 -0500 Subject: Location regex + if + basic auth to restrict directory access Message-ID: Hello, I am using basic auth + the $remote_user variable sent to the back-end application to change context depending on the logged-in user. The thing is, even if the page rendered by the back-end uses nginx user authentication, resources from a directory are still allowed for everyone. My 'documents' directory is sorted as follows: documents/ abc/ --> stores content for user 'abc' def/ --> stores content for user 'def' ...
I tried the following: location ^~ /documents/(\w+) { if ($1 != $remote_user) { return 503; } } But Nginx refuses to validate the configuration: nginx: [emerg] unknown "1" variable nginx: configuration file /etc/nginx/nginx.conf test failed Does the 'if' directive have an environment isolated from that of the 'location' directive? Am I using the wrong syntax? Is there an 'IfIsEvil' case corresponding to my needs to avoid the use of the 'if' directive? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL:
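[The capture problem in the question above stems from the location modifier: `^~` marks a plain prefix match, so `(\w+)` is never evaluated as a regex and `$1` is never set. A minimal sketch of one common fix, assuming an nginx build with PCRE named-capture support (0.8.25+); the paths, the `\w+` pattern, and the choice of 403 instead of 503 are illustrative assumptions, not taken from the original thread:]

```nginx
# Regex location ("~") so the capture actually runs; the named capture
# (?<doc_user>...) exposes the matched directory name as $doc_user
# inside the block.
location ~ ^/documents/(?<doc_user>\w+)/ {
    # $remote_user is populated by basic auth; the right-hand side of
    # "=" / "!=" comparisons may contain variables.
    if ($doc_user != $remote_user) {
        return 403;
    }
    root /www;  # assumed document root - adjust to the real layout
}
```

[A bare `return` inside `if` is one of the documented safe patterns, so the usual "if is evil" caveats should not apply to this sketch.]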