From phil at pricom.com.au Tue Nov 1 03:05:41 2011 From: phil at pricom.com.au (Philip Rhoades) Date: Tue, 01 Nov 2011 14:05:41 +1100 Subject: First Question In-Reply-To: References: Message-ID: <8f5f460e1ade4806f4427aafb4484401@www.pricom.com.au> People, I want to switch my Apache SSL to Nginx - that might help me resolve a Ruby on Rails issue and it is too much work to change all the non-SSL stuff over as well - I will do that later when I have time. So for the time being, http will be handled by Apache and https will be handled by Nginx. I commented out the default server in: /etc/nginx/nginx.conf and changed: /etc/nginx/conf.d/ssl.conf thus: # # HTTPS server configuration # server { listen 443; server_name www.pricom.com.au; access_log /var/log/nginx/nginx.vhost.access.log; error_log /var/log/nginx/nginx.vhost.error.log; ssl on; ssl_certificate /etc/httpd/conf/ssl.crt/RapidSSL.crt; ssl_certificate_key /etc/httpd/conf/ssl.key/mars-server.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; ssl_prefer_server_ciphers on; location / { root /var/www/ssl/wac; } } but in: /var/log/nginx/nginx.vhost.error.log I get: 2011/11/01 13:50:28 [error] 9411#0: *7 open() "/usr/share/nginx/html/wac" failed (2: No such file or directory), client: 203.206.181.78, server: www.pricom.com.au, request: "GET /wac HTTP/1.1", host: "pricom.com.au" Why is nginx reverting to the default path and how is it getting the right dir without the correct path? Thanks, Phil. -- Philip Rhoades GPO Box 3411 Sydney NSW 2001 Australia E-mail: phil at pricom.com.au
From ilan at time4learning.com Tue Nov 1 03:09:18 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Mon, 31 Oct 2011 23:09:18 -0400 Subject: First Question In-Reply-To: <8f5f460e1ade4806f4427aafb4484401@www.pricom.com.au> References: <8f5f460e1ade4806f4427aafb4484401@www.pricom.com.au> Message-ID: A beginner's guess... Your server is listening on port 443 (HTTPS). Your log file shows that the "file not found" is being generated for a plain HTTP request, which your configuration is not listening for. Are you sure that you're accessing your site using "https://" and not "http://"? ...GET /wac HTTP/1.1", ... On Mon, Oct 31, 2011 at 11:05 PM, Philip Rhoades wrote: > People, > > I want to switch my Apache SSL to Nginx - that might help me resolve a > Ruby on > Rails issue and it is too much work to change all the non-SSL stuff over as > well - I will do that later when I have time. So for the time being, http > will > be handled by Apache and https will be handled by Nginx.
> > I commented out the default server in: > > /etc/nginx/nginx.conf > > and changed: > > /etc/nginx/conf.d/ssl.conf > > thus: > > # > # HTTPS server configuration > # > > server { > listen 443; > server_name www.pricom.com.au; > access_log /var/log/nginx/nginx.vhost.access.log; > error_log /var/log/nginx/nginx.vhost.error.log; > > ssl on; > ssl_certificate /etc/httpd/conf/ssl.crt/RapidSSL.crt; > ssl_certificate_key /etc/httpd/conf/ssl.key/mars-server.key; > > ssl_session_timeout 5m; > > ssl_protocols SSLv2 SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; > ssl_prefer_server_ciphers on; > > location / { > root /var/www/ssl/wac; > } > } > > but in: > > /var/log/nginx/nginx.vhost.error.log > > I get: > > 2011/11/01 13:50:28 [error] 9411#0: *7 open() "/usr/share/nginx/html/wac" > failed (2: No such file or directory), client: 203.206.181.78, server: > www.pricom.com.au, request: "GET /wac HTTP/1.1", host: "pricom.com.au" > > Why is nginx reverting to the default path and how is it getting the right > dir without the correct path? > > Thanks, > > Phil. > > -- > Philip Rhoades > > GPO Box 3411 > Sydney NSW 2001 > Australia > E-mail: phil at pricom.com.au > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From btm at loftninjas.org Tue Nov 1 05:00:25 2011 From: btm at loftninjas.org (Bryan McLellan) Date: Tue, 1 Nov 2011 01:00:25 -0400 Subject: Timeout when sending over 16k of data with UTF-8 characters Message-ID: I'm experiencing an issue with nginx and am having a difficult time determining the cause. It is timing out on an HTTPS PUT (to an HTTP backend) when content length is greater than 16384 (per $content_length) and it contains UTF-8 characters. The connection hangs and I get a 408 returned by nginx after 60 seconds. When I make the same call directly to the backend, it succeeds. Removing the UTF-8 characters or reducing the size to 16384 or below will both allow the request to succeed through nginx. client_body_buffer_size is defaulting to 8k, and using $request_body_file I've determined that it isn't the use of a temporary file triggering the error. Are there any other configuration values that default to 16k? My current nginx configuration: nginx: nginx version: nginx/1.0.5 nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) nginx: TLS SNI support enabled nginx: configure arguments: --conf-path=/etc/nginx --prefix=/srv/nginx/1.0.5 --with-http_ssl_module --with-http_stub_status_module --add-module=../nginx-x-rid-header --with-ld-opt=-luuid I also tried 1.1.6 to see if its UTF-8 patch would help, to no avail: nginx: nginx version: nginx/1.1.6 nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) nginx: TLS SNI support enabled nginx: configure arguments: --conf-path=/etc/nginx --prefix=/tmp/nginx-1.1.6 --with-http_ssl_module --with-http_stub_status_module --add-module=../nginx-x-rid-header --with-ld-opt=-luuid Successful PUT: $status 200 $request_time 0.943 $body_bytes_sent 16384 $upstream_status 200 $upstream_response_time 0.216 $request_length 17330 $content_length 16384 $request_body_file /srv/nginx/1.0.5/client_body_temp/0000000249 Failing PUT: $status 408 $request_time 60.911 $body_bytes_sent - $upstream_status - $upstream_response_time - $request_length 17330 $content_length 16385 $request_body_file /srv/nginx/1.0.5/client_body_temp/0000000250 Ideas?
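(For reference, the debug log that gets requested in the reply below needs an nginx binary built with --with-debug and an error_log set to the debug level; a minimal sketch, with a hypothetical log path:

    error_log  /var/log/nginx/debug.log  debug;

See http://nginx.org/en/docs/debugging_log.html for details.)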
Bryan From igor at sysoev.ru Tue Nov 1 06:21:12 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 1 Nov 2011 10:21:12 +0400 Subject: Timeout when sending over 16k of data with UTF-8 characters In-Reply-To: References: Message-ID: <20111101062112.GA73819@nginx.com> On Tue, Nov 01, 2011 at 01:00:25AM -0400, Bryan McLellan wrote: > I'm experiencing an issue where nginx and having a difficult time > determining the cause. It is timing out on an HTTPS PUT (to an HTTP > backend) when content length is greater than 16384 (per > $content_length) and it contains UTF-8 characters. The connection > hangs and I get a 408 returned by nginx after 60 seconds. When I make > the same call directly to the backend, it succeeds. Removing the UTF-8 > characters or reducing the seize to 16384 or below will both allow the > request to succeed through nginx. > > client_body_buffer_size is defaulting to 8k, and using > $request_body_file I've determined that it isn't the use of a > temporary file triggering the error. Are there any other configuration > values that default to 16k? > > My current nginx configuration: > nginx: nginx version: nginx/1.0.5 > nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) > nginx: TLS SNI support enabled > nginx: configure arguments: --conf-path=/etc/nginx > --prefix=/srv/nginx/1.0.5 --with-http_ssl_module > --with-http_stub_status_module --add-module=../nginx-x-rid-header > --with-ld-opt=-luuid > > I also tried 1.1.6 to see if its UTF-8 patch would help, to no avail: > nginx: nginx version: nginx/1.1.6 > nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) > nginx: TLS SNI support enabled > nginx: configure arguments: --conf-path=/etc/nginx > --prefix=/tmp/nginx-1.1.6 --with-http_ssl_module > --with-http_stub_status_module --add-module=../nginx-x-rid-header > --with-ld-opt=-luuid > > Successful PUT: > > $status 200 > $request_time 0.943 > $body_bytes_sent 16384 > $upstream_status 200 > $upstream_response_time 0.216 > $request_length 17330 > $content_length 16384 > $request_body_file /srv/nginx/1.0.5/client_body_temp/0000000249 > > Failing PUT: > $status 408 > $request_time 60.911 > $body_bytes_sent - > $upstream_status - > $upstream_response_time - > $request_length 17330 > $content_length 16385 > $request_body_file /srv/nginx/1.0.5/client_body_temp/0000000250 > > Ideas? > Bryan Could you create debug log: http://nginx.org/en/docs/debugging_log.html What does this module do ? --add-module=../nginx-x-rid-header -- Igor Sysoev From stef at caunter.ca Tue Nov 1 07:41:55 2011 From: stef at caunter.ca (Stefan Caunter) Date: Tue, 1 Nov 2011 03:41:55 -0400 Subject: First Question In-Reply-To: <8f5f460e1ade4806f4427aafb4484401@www.pricom.com.au> References: <8f5f460e1ade4806f4427aafb4484401@www.pricom.com.au> Message-ID: On Mon, Oct 31, 2011 at 11:05 PM, Philip Rhoades wrote: > People, > > I want to switch my Apache SSL to Nginx - that might help me resolve a Ruby > on > Rails issue and it is too much work to change all the non-SSL stuff over as > well - I will do that later when I have time. ?So for the time being, http > will > be handled by Apache and https will be handled by Nginx. > > I commented out the default server in: > > ?/etc/nginx/nginx.conf > > and changed: > > ?/etc/nginx/conf.d/ssl.conf > > thus: > > # > # HTTPS server configuration > # > > server { > ? ?listen ? ? ? 443; > ? ?server_name ?www.pricom.com.au; > ? ?access_log /var/log/nginx/nginx.vhost.access.log; > ? ?error_log /var/log/nginx/nginx.vhost.error.log; > > ? ?ssl ? ? ? ? ? ? ? ? ?on; > ? ?ssl_certificate ? ? 
?/etc/httpd/conf/ssl.crt/RapidSSL.crt; > ? ?ssl_certificate_key ?/etc/httpd/conf/ssl.key/mars-server.key; > > ? ?ssl_session_timeout ?5m; > > ? ?ssl_protocols ?SSLv2 SSLv3 TLSv1; > ? ?ssl_ciphers ?ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; > ? ?ssl_prefer_server_ciphers ? on; > > ? ?location / { > ? ? ? ?root ? /var/www/ssl/wac; > ? ?} > } Did you restart nginx after these changes? > > but in: > > ?/var/log/nginx/nginx.vhost.error.log > > I get: > > 2011/11/01 13:50:28 [error] 9411#0: *7 open() "/usr/share/nginx/html/wac" > failed (2: No such file or directory), client: 203.206.181.78, server: > www.pricom.com.au, request: "GET /wac HTTP/1.1", host: "pricom.com.au" You get 404, correct? Restart nginx and see if it still does this. > > Why is nginx reverting to the default path and how is it getting the right > dir without the correct path? > Why not use nginx to proxy_pass ssl to http apache back end? > Thanks, > > Phil. > > -- > Philip Rhoades > > GPO Box 3411 > Sydney NSW ? ? ?2001 > Australia > E-mail: ?phil at pricom.com.au > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From kworthington at gmail.com Tue Nov 1 08:20:35 2011 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 1 Nov 2011 04:20:35 -0400 Subject: nginx-1.1.7 In-Reply-To: <20111031145842.GD45607@nginx.com> References: <20111031145842.GD45607@nginx.com> Message-ID: Hello Nginx Users, I just released Nginx 1.1.7 For Windows http://goo.gl/3UWH0 (32-bit and 64-bit) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Official Windows binaries are at nginx.org Thanks, Kevin -- Kevin Worthington kworthington (at] gmail {dot) .com. http://www.kevinworthington.com/ On Mon, Oct 31, 2011 at 10:58 AM, Igor Sysoev wrote: > Changes with nginx 1.1.7 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 31 Oct 2011 > > ? ?*) Feature: support of several resolvers in the "resolver" directive. > ? ? ? Thanks to Kirill A. Korinskiy. > > ? ?*) Bugfix: a segmentation fault occurred on start or while > ? ? ? reconfiguration if the "ssl" directive was used at http level and > ? ? ? there was no "ssl_certificate" defined. > > ? ?*) Bugfix: reduced memory consumption while proxying of big files if > ? ? ? they were buffered to disk. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if > ? ? ? "proxy_http_version 1.1" directive was used. > > ? ?*) Bugfix: in the "expires @time" directive. > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From sslavic at gmail.com Tue Nov 1 10:54:27 2011 From: sslavic at gmail.com (=?UTF-8?Q?Stevo_Slavi=C4=87?=) Date: Tue, 1 Nov 2011 11:54:27 +0100 Subject: nginx, jmeter and xml-rpc Message-ID: Hello nginx users, I'm experiencing issue testing java based xml-rpc service deployed on tomcat 6 using jmeter 2.5.1. When the application server is fronted by nginx 1.0.8 I'm getting http 408 error code when message being sent is ~30+ lines of indented XML, nginx doesn't seem to get whole content of the message jmeter is sending and keeps waiting until timeout occurs. When message is ~10 lines of XML, it gets passed through to the tomcat and back to the client well. When I replace nginx with apache httpd it works well for both bigger and smaller message. Has anyone experienced anything similar? 
Any thoughts where to look for the root cause are more than welcome. IMO it's either a bug in nginx or configuration issue. At the moment I can share just the output of nginx -V (see [1]) Regards, Stevo. [1] "nginx -V" output [foo at bar ~]$ nginx -V nginx: nginx version: nginx/1.0.8 nginx: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-46) nginx: TLS SNI support disabled nginx: configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_sub_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-cc-opt='-O2 -g -m64 -mtune=generic' --with-cc-opt='-O2 -g -m64 -mtune=generic' --add-module=nginx_ajp_module --add-module=ngx_postgres-0.8 --add-module=agentzh-nginx-eval-module From j.j.molenaar at gmail.com Tue Nov 1 12:20:55 2011 From: j.j.molenaar at gmail.com (Joost Molenaar) Date: Tue, 1 Nov 2011 13:20:55 +0100 Subject: nginx, jmeter and xml-rpc In-Reply-To: References: Message-ID: Your issue is probably the buffer sizes: http://wiki.nginx.org/HttpProxyModule#proxy_buffer_size Greetings, Joost -------------- next part -------------- An HTML attachment was scrubbed... URL: From btm at loftninjas.org Tue Nov 1 13:13:44 2011 From: btm at loftninjas.org (Bryan McLellan) Date: Tue, 1 Nov 2011 09:13:44 -0400 Subject: Timeout when sending over 16k of data with UTF-8 characters In-Reply-To: <20111101062112.GA73819@nginx.com> References: <20111101062112.GA73819@nginx.com> Message-ID: On Tue, Nov 1, 2011 at 2:21 AM, Igor Sysoev wrote: > Could you create debug log: > http://nginx.org/en/docs/debugging_log.html https://gist.github.com/1330454 You Ruby throws an EOFError when this occurs: EOFError: end of file reached from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/protocol.rb:135:in `sysread' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/protocol.rb:135:in `block in rbuf_fill' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/timeout.rb:52:in `timeout' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/timeout.rb:82:in `timeout' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/protocol.rb:134:in `rbuf_fill' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/protocol.rb:116:in `readuntil' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/protocol.rb:126:in `readline' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/http.rb:2136:in `read_status_line' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/http.rb:2125:in `read_new' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/http.rb:1117:in `transport_request' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/http.rb:1103:in `request' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/http.rb:1096:in `block in request' from /home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/http.rb:564:in `start' from 
/home/btm/.rvm/rubies/ruby-1.9.1-p431/lib/ruby/1.9.1/net/http.rb:1094:in `request' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/rest/rest_request.rb:84:in `block in call' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/rest/rest_request.rb:99:in `hide_net_http_bug' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/rest/rest_request.rb:83:in `call' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/rest.rb:219:in `block in api_request' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/rest.rb:288:in `retriable_rest_request' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/rest.rb:218:in `api_request' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/rest.rb:130:in `put_rest' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/node.rb:626:in `save' from (irb):9 from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/shef.rb:73:in `block in start' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/shef.rb:72:in `catch' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/lib/chef/shef.rb:72:in `start' from /home/btm/.rvm/gems/ruby-1.9.1-p431/gems/chef-0.10.4/bin/shef:34:in `' from /home/btm/.rvm/gems/ruby-1.9.1-p431/bin/shef:19:in `load' from /home/btm/.rvm/gems/ruby-1.9.1-p431/bin/shef:19:in `
'chef > quit > What does this module do ? > --add-module=../nginx-x-rid-header https://github.com/newobj/nginx-x-rid-header It adds a header line with a uuid for associating frontend and backend requests. It fails without that module as well with this configuration: nginx: nginx version: nginx/1.1.6 nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) nginx: TLS SNI support enabled nginx: configure arguments: --conf-path=/etc/nginx --prefix=/tmp/nginx-1.1.6 --with-http_ssl_module --with-http_stub_status_module --with-debug Bryan From mdounin at mdounin.ru Tue Nov 1 13:56:22 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Nov 2011 17:56:22 +0400 Subject: Timeout when sending over 16k of data with UTF-8 characters In-Reply-To: References: <20111101062112.GA73819@nginx.com> Message-ID: <20111101135622.GI95664@mdounin.ru> Hello! On Tue, Nov 01, 2011 at 09:13:44AM -0400, Bryan McLellan wrote: > On Tue, Nov 1, 2011 at 2:21 AM, Igor Sysoev wrote: > > Could you create debug log: > > http://nginx.org/en/docs/debugging_log.html > > https://gist.github.com/1330454 2011/11/01 12:44:23 [debug] 13689#0: *129 http header: "Content-Length: 16385" ... 2011/11/01 12:44:23 [debug] 13689#0: *129 http header: "User-Agent: Chef Client/0.10.4 (ruby-1.9.1-p431; ohai-0.6.10; x86_64-linux; +http://opscode.com)" ... Note: content length is 16385. ... 2011/11/01 12:44:23 [debug] 13689#0: *129 http read client request body 2011/11/01 12:44:23 [debug] 13689#0: *129 SSL_read: -1 2011/11/01 12:44:23 [debug] 13689#0: *129 SSL_get_error: 2 2011/11/01 12:44:23 [debug] 13689#0: *129 http client request body recv -2 2011/11/01 12:44:23 [debug] 13689#0: *129 http client request body rest 16385 ... 2011/11/01 12:44:24 [debug] 13689#0: *129 http read client request body 2011/11/01 12:44:24 [debug] 13689#0: *129 SSL_read: 8192 2011/11/01 12:44:24 [debug] 13689#0: *129 http client request body recv 8192 ... 2011/11/01 12:44:24 [debug] 13689#0: *129 SSL_read: 8192 2011/11/01 12:44:24 [debug] 13689#0: *129 http client request body recv 8192 2011/11/01 12:44:24 [debug] 13689#0: *129 write: 13, 0870D8E0, 8192, 8192 2011/11/01 12:44:24 [debug] 13689#0: *129 SSL_read: -1 2011/11/01 12:44:24 [debug] 13689#0: *129 SSL_get_error: 2 2011/11/01 12:44:24 [debug] 13689#0: *129 http client request body recv -2 2011/11/01 12:44:24 [debug] 13689#0: *129 http client request body rest 1 Here 16384 bytes has been read from client, and one byte remains ("rest 1"). OpenSSL doesn't provide any more bytes and claims it needs more network input (2, SSL_ERROR_WANT_READ). 2011/11/01 12:44:24 [debug] 13689#0: *129 event timer del: 12: 1596563798 2011/11/01 12:44:24 [debug] 13689#0: *129 event timer add: 12: 60000:1596564354 2011/11/01 12:45:24 [debug] 13689#0: *129 event timer del: 12: 1596564354 2011/11/01 12:45:24 [debug] 13689#0: *129 http run request: "/organizations/opscode-btm/nodes/broken?" 2011/11/01 12:45:24 [debug] 13689#0: *129 http finalize request: 408, "/organizations/opscode-btm/nodes/broken?" a:1, c:1 Though client fails to provide one more byte. For me, it looks like problem in client. It's either calculate content length incorrectly or fails to properly flush ssl buffers on it's side. Maxim Dounin From igor at sysoev.ru Tue Nov 1 14:57:36 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 1 Nov 2011 18:57:36 +0400 Subject: nginx-1.0.9 Message-ID: <20111101145736.GA89884@nginx.com> Changes with nginx 1.0.9 01 Nov 2011 *) Change: now the 0x7F-0x1F characters are escaped as \xXX in an access_log. 
*) Change: now SIGWINCH signal works only in daemon mode. *) Feature: "proxy/fastcgi/scgi/uwsgi_ignore_headers" directives support the following additional values: X-Accel-Limit-Rate, X-Accel-Buffering, X-Accel-Charset. *) Feature: decrease of memory consumption if SSL is used. *) Feature: accept filters are now supported on NetBSD. *) Feature: the "uwsgi_buffering" and "scgi_buffering" directives. Thanks to Peter Smit. *) Bugfix: a segmentation fault occurred on start or while reconfiguration if the "ssl" directive was used at http level and there was no "ssl_certificate" defined. *) Bugfix: some UTF-8 characters were processed incorrectly. Thanks to Alexey Kuts. *) Bugfix: the ngx_http_rewrite_module directives specified at "server" level were executed twice if no matching locations were defined. *) Bugfix: a socket leak might occurred if "aio sendfile" was used. *) Bugfix: connections with fast clients might be closed after send_timeout if file AIO was used. *) Bugfix: in the ngx_http_autoindex_module. *) Bugfix: the module ngx_http_mp4_module did not support seeking on 32-bit platforms. *) Bugfix: non-cacheable responses might be cached if "proxy_cache_bypass" directive was used. Thanks to John Ferlito. *) Bugfix: cached responses with an empty body were returned incorrectly; the bug had appeared in 0.8.31. *) Bugfix: 201 responses of the ngx_http_dav_module were incorrect; the bug had appeared in 0.8.32. *) Bugfix: in the "return" directive. *) Bugfix: the "ssl_verify_client", "ssl_verify_depth", and "ssl_prefer_server_ciphers" directives might work incorrectly if SNI was used. -- Igor Sysoev From ilan at time4learning.com Tue Nov 1 15:00:19 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Tue, 1 Nov 2011 11:00:19 -0400 Subject: nginx-1.0.9 In-Reply-To: <20111101145736.GA89884@nginx.com> References: <20111101145736.GA89884@nginx.com> Message-ID: Thanks, question: Do these bug fixes and changes make it into the current development branch as well (1.1.17)? On Tue, Nov 1, 2011 at 10:57 AM, Igor Sysoev wrote: > Changes with nginx 1.0.9 01 Nov > 2011 > > *) Change: now the 0x7F-0x1F characters are escaped as \xXX in an > access_log. > > *) Change: now SIGWINCH signal works only in daemon mode. > > *) Feature: "proxy/fastcgi/scgi/uwsgi_ignore_headers" directives support > the following additional values: X-Accel-Limit-Rate, > X-Accel-Buffering, X-Accel-Charset. > > *) Feature: decrease of memory consumption if SSL is used. > > *) Feature: accept filters are now supported on NetBSD. > > *) Feature: the "uwsgi_buffering" and "scgi_buffering" directives. > Thanks to Peter Smit. > > *) Bugfix: a segmentation fault occurred on start or while > reconfiguration if the "ssl" directive was used at http level and > there was no "ssl_certificate" defined. > > *) Bugfix: some UTF-8 characters were processed incorrectly. > Thanks to Alexey Kuts. > > *) Bugfix: the ngx_http_rewrite_module directives specified at "server" > level were executed twice if no matching locations were defined. > > *) Bugfix: a socket leak might occurred if "aio sendfile" was used. > > *) Bugfix: connections with fast clients might be closed after > send_timeout if file AIO was used. > > *) Bugfix: in the ngx_http_autoindex_module. > > *) Bugfix: the module ngx_http_mp4_module did not support seeking on > 32-bit platforms. > > *) Bugfix: non-cacheable responses might be cached if > "proxy_cache_bypass" directive was used. > Thanks to John Ferlito. 
> > *) Bugfix: cached responses with an empty body were returned > incorrectly; the bug had appeared in 0.8.31. > > *) Bugfix: 201 responses of the ngx_http_dav_module were incorrect; the > bug had appeared in 0.8.32. > > *) Bugfix: in the "return" directive. > > *) Bugfix: the "ssl_verify_client", "ssl_verify_depth", and > "ssl_prefer_server_ciphers" directives might work incorrectly if SNI > was used. > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Tue Nov 1 15:04:48 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 1 Nov 2011 19:04:48 +0400 Subject: nginx-1.0.9 In-Reply-To: References: <20111101145736.GA89884@nginx.com> Message-ID: <20111101150448.GE89884@nginx.com> On Tue, Nov 01, 2011 at 11:00:19AM -0400, Ilan Berkner wrote: > Thanks, question: > > Do these bug fixes and changes make it into the current development branch > as well (1.1.17)? Quite the opposite, these fixes and changes were merged from development branch. -- Igor Sysoev From nginx-forum at nginx.us Tue Nov 1 15:29:44 2011 From: nginx-forum at nginx.us (artemg) Date: Tue, 01 Nov 2011 11:29:44 -0400 Subject: What is the right way if module doesnt provide data for upstream filters Message-ID: I have module, that matches data going via nginx. I am buffering data, so there is situation that there is no data to provide to upstream filters. What is the right way to do in such case? Now I am calling next_body_filter(r, NULL); Is that ok? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217654,217654#msg-217654 From quintinpar at gmail.com Tue Nov 1 15:44:29 2011 From: quintinpar at gmail.com (Quintin Par) Date: Tue, 1 Nov 2011 21:14:29 +0530 Subject: =?UTF-8?Q?Cache_HIT=E2=80=99s_not_happening?= Message-ID: Hi all, Can someone please answer this question on serverfault(Its difficult to retain code formatting and images in the group mail, hence the post in serverfault.com). http://serverfault.com/questions/326545/nginx-cache-hits-not-happening Thanks, Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 1 16:22:54 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Nov 2011 20:22:54 +0400 Subject: =?UTF-8?Q?Re=3A_Cache_HIT=E2=80=99s_not_happening?= In-Reply-To: References: Message-ID: <20111101162254.GK95664@mdounin.ru> Hello! On Tue, Nov 01, 2011 at 09:14:29PM +0530, Quintin Par wrote: > Hi all, > > Can someone please answer this question on serverfault(Its difficult to > retain code formatting and images in the group mail, hence the post in > serverfault.com). > > http://serverfault.com/questions/326545/nginx-cache-hits-not-happening Answer from kolbyjack suggesting to add proxy_ignore_headers Set-Cookie; is likely correct as long as your backend actually sends Set-Cookie header. If it doesn't fix your problem, please provide full config and debug log. See here http://wiki.nginx.org/Debugging for more details. Maxim Dounin From stef at caunter.ca Tue Nov 1 16:25:12 2011 From: stef at caunter.ca (Stefan Caunter) Date: Tue, 1 Nov 2011 12:25:12 -0400 Subject: nginx, jmeter and xml-rpc In-Reply-To: References: Message-ID: On Tue, Nov 1, 2011 at 6:54 AM, Stevo Slavi? wrote: > Hello nginx users, > > I'm experiencing issue testing java based xml-rpc service deployed on > tomcat 6 using jmeter 2.5.1. 
When the application server is fronted by > nginx 1.0.8 I'm getting http 408 error code when message being sent is > ~30+ lines of indented XML, nginx doesn't seem to get whole content of > the message jmeter is sending and keeps waiting until timeout occurs. > When message is ~10 lines of XML, it gets passed through to the tomcat > and back to the client well. When I replace nginx with apache httpd it > works well for both bigger and smaller message. > > Has anyone experienced anything similar? Any thoughts where to look > for the root cause are more than welcome. > > IMO it's either a bug in nginx or configuration issue. At the moment I > can share just the output of nginx -V (see [1]) > You need to provide nginx configuration, and log output at a minimum. It is probably configuration, but no one can help you without you providing data. > Regards, > Stevo. > > > [1] "nginx -V" output > [foo at bar ~]$ nginx -V > nginx: nginx version: nginx/1.0.8 > nginx: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-46) > nginx: TLS SNI support disabled > nginx: configure arguments: --user=nginx --group=nginx > --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi > --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx > --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_xslt_module > --with-http_sub_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-cc-opt='-O2 -g -m64 > -mtune=generic' --with-cc-opt='-O2 -g -m64 -mtune=generic' > --add-module=nginx_ajp_module --add-module=ngx_postgres-0.8 > --add-module=agentzh-nginx-eval-module > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Tue Nov 1 17:55:45 2011 From: nginx-forum at nginx.us (firestorm) Date: Tue, 01 Nov 2011 13:55:45 -0400 Subject: Problem with fastcgi cache In-Reply-To: <4974ddde02c516685929d60a0ee705f9.NginxMailingListEnglish@forum.nginx.org> References: <4974ddde02c516685929d60a0ee705f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <28963de8765052c595f6afb7a55e4e44.NginxMailingListEnglish@forum.nginx.org> For more information my nginx server is installed in Ubuntu 10.04, compiled from sources the following compilation options: ./configure --user=www-data --group=www-data --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-debug Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217538,217668#msg-217668 From diniz at freeddom.com Tue Nov 1 19:22:47 2011 From: diniz at freeddom.com (=?utf-8?Q?Felipe_Jos=C3=A9_Diniz?=) Date: Tue, 1 Nov 2011 17:22:47 -0200 (BRST) Subject: Proxy_upstrem error In-Reply-To: <344842845.212894.1320174426177.JavaMail.root@webmail> Message-ID: <1511750432.213022.1320175367217.JavaMail.root@webmail> Hi, We are using nginx 0.7.67, and we need to 
filter a specific error, and match it with proxy_next_upstream. The error we need is when a client sends a Connection Reset. I believe nginx filters it with "proxy_next_upstream error" along with all the other connection errors. For our application the reset error must be the only one filtered, and then nginx would use proxy_next_upstream to send the request to another machine. Is there a way to do such filtering in the nginx configuration? Thanx Felipe Diniz
From ehabkost at raisama.net Wed Nov 2 00:27:46 2011 From: ehabkost at raisama.net (Eduardo Habkost) Date: Tue, 1 Nov 2011 22:27:46 -0200 Subject: nginx-1.0.9 In-Reply-To: <20111101145736.GA89884@nginx.com> References: <20111101145736.GA89884@nginx.com> Message-ID: Hi, I don't know if others reported this already: I don't see a PGP signature file available for 1.0.9; the pgp link on nginx.org (http://nginx.org/download/nginx-1.0.9.tar.gz.asc) returns a 404 error. -- Eduardo On Tue, Nov 1, 2011 at 12:57 PM, Igor Sysoev wrote: > Changes with nginx 1.0.9    01 Nov 2011 > [...] > > -- > Igor Sysoev >
From igor at sysoev.ru Wed Nov 2 05:27:09 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 2 Nov 2011 09:27:09 +0400 Subject: nginx-1.0.9 In-Reply-To: References: <20111101145736.GA89884@nginx.com> Message-ID: <20111102052708.GA8238@nginx.com> On Tue, Nov 01, 2011 at 10:27:46PM -0200, Eduardo Habkost wrote: > Hi, > > I don't know if others reported this already: I don't see a PGP > signature file available for 1.0.9; the pgp link on nginx.org > (http://nginx.org/download/nginx-1.0.9.tar.gz.asc) returns a 404 > error. Fixed, thank you. -- Igor Sysoev
From nginx-forum at nginx.us Wed Nov 2 06:21:31 2011 From: nginx-forum at nginx.us (wangbin579) Date: Wed, 02 Nov 2011 02:21:31 -0400 Subject: Tcpcopy,an online request replication tool fit for nginx Message-ID: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> Tcpcopy is an online request replication tool. It may be useful for migrating to nginx from apache https://github.com/wangbin579/tcpcopy or http://code.google.com/p/tcpcopy/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217680,217680#msg-217680
From nginx-forum at nginx.us Wed Nov 2 06:32:21 2011 From: nginx-forum at nginx.us (wangbin579) Date: Wed, 02 Nov 2011 02:32:21 -0400 Subject: nginx_hmux_module - support hmux protocol proxy with Nginx Message-ID: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> With this module, Nginx can connect to Resin through the hmux protocol directly. You can also use tcpcopy to test this module. nginx_hmux_module: https://github.com/wangbin579/nginx-hmux-module or http://code.google.com/p/nginx-hmux-module/ tcpcopy: https://github.com/wangbin579/tcpcopy or http://code.google.com/p/tcpcopy/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217681,217681#msg-217681
From nginx-forum at nginx.us Wed Nov 2 06:34:46 2011 From: nginx-forum at nginx.us (wangbin579) Date: Wed, 02 Nov 2011 02:34:46 -0400 Subject: Tcpcopy,an online request replication tool fit for nginx In-Reply-To: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> References: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2845639b8a3be7293ab599241e0689ed.NginxMailingListEnglish@forum.nginx.org> You can use tcpcopy to compare the performance of apache and nginx.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217680,217682#msg-217682 From nginx-forum at nginx.us Wed Nov 2 06:56:38 2011 From: nginx-forum at nginx.us (wangbin579) Date: Wed, 02 Nov 2011 02:56:38 -0400 Subject: Tcpcopy,an online request replication tool fit for nginx In-Reply-To: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> References: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <913df437a76e6c067628e67705adcdfd.NginxMailingListEnglish@forum.nginx.org> tcpcopy helps me a lot when deveploping the netease ad system and other systems Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217680,217684#msg-217684 From kworthington at gmail.com Wed Nov 2 09:45:05 2011 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 2 Nov 2011 05:45:05 -0400 Subject: nginx-1.0.9 In-Reply-To: <20111101145736.GA89884@nginx.com> References: <20111101145736.GA89884@nginx.com> Message-ID: Hello Nginx Users, Just released: Nginx 1.0.9 For Windows http://goo.gl/pZnNA (32-bit and 64-bit) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Official Windows binaries are at nginx.org Thank you,Kevin--Kevin Worthingtonkworthington (at] gmail [dot} .com.http://www.kevinworthington.com/ On Tue, Nov 1, 2011 at 10:57 AM, Igor Sysoev wrote: > Changes with nginx 1.0.9 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 01 Nov 2011 > > ? ?*) Change: now the 0x7F-0x1F characters are escaped as \xXX in an > ? ? ? access_log. > > ? ?*) Change: now SIGWINCH signal works only in daemon mode. > > ? ?*) Feature: "proxy/fastcgi/scgi/uwsgi_ignore_headers" directives support > ? ? ? the following additional values: X-Accel-Limit-Rate, > ? ? ? X-Accel-Buffering, X-Accel-Charset. > > ? ?*) Feature: decrease of memory consumption if SSL is used. > > ? ?*) Feature: accept filters are now supported on NetBSD. > > ? ?*) Feature: the "uwsgi_buffering" and "scgi_buffering" directives. > ? ? ? Thanks to Peter Smit. > > ? ?*) Bugfix: a segmentation fault occurred on start or while > ? ? ? reconfiguration if the "ssl" directive was used at http level and > ? ? ? there was no "ssl_certificate" defined. > > ? ?*) Bugfix: some UTF-8 characters were processed incorrectly. > ? ? ? Thanks to Alexey Kuts. > > ? ?*) Bugfix: the ngx_http_rewrite_module directives specified at "server" > ? ? ? level were executed twice if no matching locations were defined. > > ? ?*) Bugfix: a socket leak might occurred if "aio sendfile" was used. > > ? ?*) Bugfix: connections with fast clients might be closed after > ? ? ? send_timeout if file AIO was used. > > ? ?*) Bugfix: in the ngx_http_autoindex_module. > > ? ?*) Bugfix: the module ngx_http_mp4_module did not support seeking on > ? ? ? 32-bit platforms. > > ? ?*) Bugfix: non-cacheable responses might be cached if > ? ? ? "proxy_cache_bypass" directive was used. > ? ? ? Thanks to John Ferlito. > > ? ?*) Bugfix: cached responses with an empty body were returned > ? ? ? incorrectly; the bug had appeared in 0.8.31. > > ? ?*) Bugfix: 201 responses of the ngx_http_dav_module were incorrect; the > ? ? ? bug had appeared in 0.8.32. > > ? ?*) Bugfix: in the "return" directive. > > ? ?*) Bugfix: the "ssl_verify_client", "ssl_verify_depth", and > ? ? ? "ssl_prefer_server_ciphers" directives might work incorrectly if SNI > ? ? ? was used. 
> > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Wed Nov 2 10:09:59 2011 From: nginx-forum at nginx.us (est) Date: Wed, 02 Nov 2011 06:09:59 -0400 Subject: nginx parse var in if file exist statement error? Message-ID: Hi guys, I found a stange bug in nginx 0.7.65, 0.8.54, and 1.0.6 Here is my setup of three machines $ nginx -V nginx version: nginx/0.7.65 TLS SNI support enabled configure arguments: --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug --with-http_stub_status_module --with-http_flv_module --with-http_ssl_module --with-http_dav_module --with-http_gzip_static_module --with-http_realip_module --with-mail --with-mail_ssl_module --with-ipv6 --add-module=/build/buildd/nginx-0.7.65/modules/nginx-upstream-fair $nginx -V nginx version: nginx/0.8.54 built by gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4) configure arguments: --user=www-data --group=www-data --prefix=/usr/local/nginx --with-http_stub_status_module $ nginx -V nginx: nginx version: nginx/1.0.6 nginx: TLS SNI support enabled nginx: configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwcgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 So I have dir to store images, either jpg or png, the file name (image id) is unique. I want the URL to provide image id only, nginx to serve static jpg or png. the config snippet is something like this: server { listen 80; error_log /var/log/nginx/error_imglib.log debug; root /home/develop/image_library; location ~* ^/img/small/(\d+)/?$ { set $fext 'jpg'; if (!-f '/home/develop/image_library/dump/$1-s.jpg'){ set $fext png; } alias '/home/develop/image_library/dump/$1-s.$fext' ; } } When running curl it returls something like this $ curl "http://127.0.0.1/img/small/1" 404 Not Found

404 Not Found


nginx/1.0.6
Now the bug: in three of my machines I set the error_log level to "debug", here is the output: 2011/11/02 14:01:09 [debug] 4194#0: *24 http script capture: "1" 2011/11/02 14:01:09 [debug] 4194#0: *24 http script copy: "-s." 2011/11/02 14:01:09 [debug] 4194#0: *24 http script var: "png" 2011/11/02 14:01:09 [debug] 4194#0: *24 http filename: "/home/steve/image_library/dump/1-s.png1 User-Agent" 2011/11/02 14:01:09 [debug] 4194#0: *24 add cleanup: 0833B2F0 2011/11/02 14:01:09 [error] 4194#0: *24 open() "/home/steve/image_library/dump/1" failed (2: No such file or directory), client: 127.0.0.1, request: "GET /img/small/1 HTTP/1.1" 2011/11/01 23:00:53 [alert] 7793#0: *38 "/home/develop/image_library/dump/1-s.jpgindex.html" is not a directory, request: "GET /img/small/1 HTTP/1.1" 2011/11/02 02:30:18 [error] 7666#0: *2491 open() "/home/develop/image_library/dump/1-s.pn" failed (2: No such file or directory), client: 127.0.0.1, request: "GET /img/small/1 HTTP/1.1" 2011/11/01 22:47:45 [error] 6740#0: *24 "/home/develop/image_library/dump/1-s.jpgTP/1.1 Hostindex.html" is not found (2: No such file or directory), request: "GET /img/small/1 HTTP/1.1" So clearly somehow, nginx managed to mess HTTP headers into filename parser. Can anyone help me? This is the weirdest bug I have ever encountered with nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217683,217683#msg-217683 From david.yu.ftw at gmail.com Wed Nov 2 13:14:40 2011 From: david.yu.ftw at gmail.com (David Yu) Date: Wed, 2 Nov 2011 21:14:40 +0800 Subject: nginx_hmux_module - support hmux protocol proxy with Nginx In-Reply-To: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> References: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Nov 2, 2011 at 2:32 PM, wangbin579 wrote: > With this module, Nginx can connect to Resin through hmux protocol > directly. > Cool. Do you know where I can some documentation on the hmux protocol (already tried googling)? Judging by the name "mux", I couldn't help but think this is a binary protocol with multiplexing+keep-alive (w/c is awesome if it is). > You also can use tcpcopy to test this module. > > nginx_hmux_module: > https://github.com/wangbin579/nginx-hmux-module or > http://code.google.com/p/nginx-hmux-module/ > tcpcopy: > https://github.com/wangbin579/tcpcopy or > http://code.google.com/p/tcpcopy/ > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,217681,217681#msg-217681 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- When the cat is away, the mouse is alone. - David Yu -------------- next part -------------- An HTML attachment was scrubbed... URL: From i at vbart.ru Wed Nov 2 13:34:44 2011 From: i at vbart.ru (Valentin V. Bartenev) Date: Wed, 2 Nov 2011 17:34:44 +0400 Subject: nginx parse var in if file exist statement error? In-Reply-To: References: Message-ID: <201111021734.44906.i@vbart.ru> On Wednesday 02 November 2011 14:09:59 est wrote: [...] > So I have dir to store images, either jpg or png, the file name (image > id) is unique. I want the URL to provide image id only, nginx to serve > static jpg or png. 
> > the config snippet is something like this: > > > server { > listen 80; > error_log /var/log/nginx/error_imglib.log debug; > root /home/develop/image_library; > > location ~* ^/img/small/(\d+)/?$ { > set $fext 'jpg'; > if (!-f '/home/develop/image_library/dump/$1-s.jpg'){ > set $fext png; > } > alias '/home/develop/image_library/dump/$1-s.$fext' ; > } > } > [...] server { listen 80; error_log /var/log/nginx/error_imglib.log debug; root /home/develop/image_library; location ~* ^/img/small/(\d+)/?$ { try_files /dump/$1-s.jpg /dump/$1-s.png =404 } } wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Nov 2 13:51:06 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Nov 2011 17:51:06 +0400 Subject: nginx parse var in if file exist statement error? In-Reply-To: References: Message-ID: <20111102135106.GR95664@mdounin.ru> Hello! On Wed, Nov 02, 2011 at 06:09:59AM -0400, est wrote: [...] > So clearly somehow, nginx managed to mess HTTP headers into filename > parser. > > Can anyone help me? This is the weirdest bug I have ever encountered > with nginx. Valentin already replied how to do this properly. As for the bug itself, it's documented here: http://wiki.nginx.org/IfIsEvil Maxim Dounin From phil at pricom.com.au Wed Nov 2 13:59:24 2011 From: phil at pricom.com.au (Philip Rhoades) Date: Thu, 03 Nov 2011 00:59:24 +1100 Subject: Nginx + RoundCubeMail + SSL ? Message-ID: <40a725e8b5d18b56d51c9bb938b71792@pricom.com.au> People, Does anyone have a working setup with this combination? If so, could I see the nginx.conf (ssl.conf) file? (I have tried all sorts of Google solutions with no success). Thanks, Phil. -- Philip Rhoades GPO Box 3411 Sydney NSW 2001 Australia E-mail: phil at pricom.com.au From ml-nginx at zu-con.org Wed Nov 2 16:54:24 2011 From: ml-nginx at zu-con.org (Matthias Rieber) Date: Wed, 02 Nov 2011 17:54:24 +0100 Subject: large_client_header_buffers - meaning of number Message-ID: <4EB175C0.8060504@zu-con.org> Hi, the doc says: large_client_header_buffers number size What's the meaning of number? At most number of large headers can be 'processed' at once? Per request or globally? If yes, what happens to the request that exceeds that limit? Matthias From mdounin at mdounin.ru Wed Nov 2 17:57:37 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Nov 2011 21:57:37 +0400 Subject: large_client_header_buffers - meaning of number In-Reply-To: <4EB175C0.8060504@zu-con.org> References: <4EB175C0.8060504@zu-con.org> Message-ID: <20111102175737.GV95664@mdounin.ru> Hello! On Wed, Nov 02, 2011 at 05:54:24PM +0100, Matthias Rieber wrote: > Hi, > > the doc says: large_client_header_buffers number size You probably mean wiki. > What's the meaning of number? At most number of large headers can be > 'processed' at once? Per request or globally? If yes, what happens to > the request that exceeds that limit? Docs may be found here: http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers I believe the description is quite complete and answers all of your questions. Maxim Dounin From al-nginx at none.at Wed Nov 2 19:10:12 2011 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 02 Nov 2011 20:10:12 +0100 Subject: Nginx + RoundCubeMail + SSL ? In-Reply-To: <40a725e8b5d18b56d51c9bb938b71792@pricom.com.au> References: <40a725e8b5d18b56d51c9bb938b71792@pricom.com.au> Message-ID: <2b62d3ebdbe92614df34890fd7718e7b@none.at> Hi Philip, On 02.11.2011 14:59, Philip Rhoades wrote: > People, > > Does anyone have a working setup with this combination? 
If so, could > I see the nginx.conf (ssl.conf) file? (I have tried all sorts of > Google solutions with no success). What's the problem. it's pretty straight forward. 1.) php setup ### location ~ \.php { include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass php; # => upstream php { ... } fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /installed$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } ### 2.) location setup. ### location /roundcube{ alias /installed/roundcube; index index.php; try_files $uri $uri/ index.php; } ### BR Aleks From nginx-forum at nginx.us Wed Nov 2 21:43:22 2011 From: nginx-forum at nginx.us (firestorm) Date: Wed, 02 Nov 2011 17:43:22 -0400 Subject: Problem with fastcgi cache In-Reply-To: <4974ddde02c516685929d60a0ee705f9.NginxMailingListEnglish@forum.nginx.org> References: <4974ddde02c516685929d60a0ee705f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <997d175cf15b53661ade07b51a4be200.NginxMailingListEnglish@forum.nginx.org> Problem solved. The solution was include the statements: fastcgi_pass_header Cookie; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; I added the configuration to log the cache request result (HIT or MISS): log_format cache '$remote_addr - $remote_user [$time_local] "$request" ' '$status $upstream_cache_status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log cache; And now you can see in the log: 10.35.9.129 - - [02/Nov/2011:12:41:00 -0400] "GET /administration.php/school HTTP/1.1" 200 HIT 2712 "http://10.128.50.101/administration.php/school" "Mozilla/5.0 (Windows NT 5.1; rv:7.0) Gecko/20100101 Firefox/7.0" "-" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217538,217725#msg-217725 From nginx-forum at nginx.us Thu Nov 3 01:24:08 2011 From: nginx-forum at nginx.us (est) Date: Wed, 02 Nov 2011 21:24:08 -0400 Subject: nginx parse var in if file exist statement error? In-Reply-To: <201111021734.44906.i@vbart.ru> References: <201111021734.44906.i@vbart.ru> Message-ID: <8ba8f7a6b724aefc68b0b7d469c45ef4.NginxMailingListEnglish@forum.nginx.org> Oh thanks very much guys. try_files worked well. I tried using try_files before, but mistakenly wrote it as something like this try_files dump/$1-s.jpg dump/$1-s.png 404; The correct working line should be like this try_files /dump/$1-s.jpg /dump/$1-s.png =404; And I didn't expect IF is evil. Suprise! Thanks again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217683,217730#msg-217730 From delta.yeh at gmail.com Thu Nov 3 02:49:52 2011 From: delta.yeh at gmail.com (Delta Yeh) Date: Thu, 3 Nov 2011 10:49:52 +0800 Subject: how to set the proxy buffer if web server return large cookie via Set-Cookie Message-ID: Hi, According to http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers, for long uri or large cookie from browser, I can use large_client_header_buffers . For the large cookie from web server via Set-Cookie, how to set proxy buffer? Should I set proxy_buffer_size the same size as large_client_header_buffers? Or set the proxy_buffers the same as large_client_header_buffers? For example, there is a 32k cookie location / { large_client_header_buffers 4 32k; proxy_buffer_size 32k; proxy_pass .... } or location / { large_client_header_buffers 4 32k; proxy_buffers 4 32k; proxy_pass .... } According to the WIKI, I prefer proxy_buffers directive. 
BR, DeltaY From nginx-forum at nginx.us Thu Nov 3 02:59:45 2011 From: nginx-forum at nginx.us (wangbin579) Date: Wed, 02 Nov 2011 22:59:45 -0400 Subject: nginx_hmux_module - support hmux protocol proxy with Nginx In-Reply-To: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> References: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2ac5444cde8a1d83cb20abb48b3593e3.NginxMailingListEnglish@forum.nginx.org> source code from mod_caucho.c (apache module) and resin source code Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217681,217733#msg-217733 From nginx-forum at nginx.us Thu Nov 3 03:03:02 2011 From: nginx-forum at nginx.us (wangbin579) Date: Wed, 02 Nov 2011 23:03:02 -0400 Subject: Tcpcopy,an online request replication tool fit for nginx In-Reply-To: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> References: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6f2273e5254cfc91047a0e409d84b19c.NginxMailingListEnglish@forum.nginx.org> An example: Suppose 13 and 14 are online machines, 148 is a target machine which is similar to the online machines, and 12321 is used both as local port and remote port. We use tcpcopy to test if 148 can endure two times of current online stress. Using tcpcopy to perform the above test task. the target machine(148) # modprobe ip_queue (if not run up) # iptables -I OUTPUT -p tcp --sport 12321 -j QUEUE (if not set) # ./interception online machine(13): # ./tcpcopy xx.xx.xx.13 12321 xx.xx.xx.148 12321 online machine(14): # ./tcpcopy xx.xx.xx.14 12321 xx.xx.xx.148 12321 Cpu load and memory usage are as follows: 13 cpu: 11124 adrun 15 0 193m 146m 744 S 18.6 7.3 495:31.56 asyn_server 11281 root 15 0 65144 40m 1076 S 12.3 2.0 0:47.89 tcpcopy 14 cpu: 16855 adrun 15 0 98.7m 55m 744 S 21.6 2.7 487:49.51 asyn_server 16429 root 15 0 41156 17m 1076 S 14.0 0.9 0:33.63 tcpcopy 148 cpu : 25609 root 15 0 76892 59m 764 S 49.6 2.9 63:03.14 asyn_server 20184 root 15 0 5624 4232 292 S 17.0 0.2 0:52.82 interception Access log analysis: 13 online machine: grep 'Tue 11:08' access_0913_11.log |wc -l :89316, 1489 reqs/sec 14 online machine: grep 'Tue 11:08' access_0913_11.log |wc -l :89309, 1488 reqs/sec 148 test machine: grep 'Tue 11:08' access_0913_11.log |wc -l :178175, 2969 reqs/sec request loss rate: (89316+89309-178175)/(89316+89309)=0.25% From the above, we can see that the target machine can endure two times of current online stress. What about the cpu load ? tcpcopy on online machine 13 occupies 12.3% of cpu load, tcpcopy on online 14 occupies 14% and interception on target machine 148 occupies 17%. We can see that the cpu load is very low here, and so is the memory usage. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217680,217734#msg-217734 From nginx-forum at nginx.us Thu Nov 3 03:07:32 2011 From: nginx-forum at nginx.us (wangbin579) Date: Wed, 02 Nov 2011 23:07:32 -0400 Subject: Tcpcopy,an online request replication tool fit for nginx In-Reply-To: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> References: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: a stress test for nginx written in chinese. 
http://blog.csdn.net/wangbin579/article/details/6929495 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217680,217735#msg-217735 From phil at pricom.com.au Thu Nov 3 05:17:52 2011 From: phil at pricom.com.au (Philip Rhoades) Date: Thu, 03 Nov 2011 16:17:52 +1100 Subject: Getting my own posts but NOT replies? Message-ID: <7990b6f18be850ab94f4d37ee4f0a868@pricom.com.au> People, I am getting my own posts to this list but NOT any replies (I have to look them up on the nginx email archive pages). I have checked out my preferences and everything looks OK - anyone got any ideas why this might be happening? Thanks, Phil. -- Philip Rhoades GPO Box 3411 Sydney NSW 2001 Australia E-mail: phil at pricom.com.au From nginx-forum at nginx.us Thu Nov 3 07:06:35 2011 From: nginx-forum at nginx.us (forum_id) Date: Thu, 03 Nov 2011 03:06:35 -0400 Subject: HTTP Request filter module Message-ID: I have a requirement for HTTP request filter. Whenever I receive a HTTP request this filter module should read the request and does some internal statistical work, it doesn't manipulate the request page nor creates response page. It just need to peep into request that's it. After this filter module, nginx HTTP server should continue to process the request and serve the response to user. Can we do it without a proxy handler, because nginx serves the responses directly here. Please help me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217739,217739#msg-217739 From matt.starcrest at yahoo.com Thu Nov 3 07:08:49 2011 From: matt.starcrest at yahoo.com (Matt Starcrest) Date: Thu, 3 Nov 2011 00:08:49 -0700 (PDT) Subject: python wsgi behind nginx: yield not behaving as expected Message-ID: <1320304129.63380.YahooMailNeo@web46009.mail.sp1.yahoo.com> Hi, I want to run wsgi python code on a web server behind nginx. ?The server needs to respond quickly to an http request, then continue to do some (slow) work after responding. ?Python's yield statement seems to fit the bill, as follows: def application(environ, start_response): ? ? output = get_response_quickly(environ) ? ? start_response('200 OK',?[('Content-type', 'text/plain'),?('Content-Length', str(len(output)))]) ? ? yield output ? ? do_slow_work() If I run this wsgi in a standalone python server (I tried wsgiref, fapws, uwsgi), the caller receives the output immediately (after get_response_quickly()), as desired. ?Great. ?However, if I run any of these servers behind nginx, the caller doesn't receive a response until *after* do_slow_work() -- thus defeating the purpose. Is there a way to make this pattern work with nginx? ?Or is there a better way in general to respond quickly but continue work, without manually creating python threads / other clumsiness? Thanks, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Thu Nov 3 07:12:47 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 3 Nov 2011 11:12:47 +0400 Subject: python wsgi behind nginx: yield not behaving as expected In-Reply-To: <1320304129.63380.YahooMailNeo@web46009.mail.sp1.yahoo.com> References: <1320304129.63380.YahooMailNeo@web46009.mail.sp1.yahoo.com> Message-ID: <20111103071247.GB37071@nginx.com> On Thu, Nov 03, 2011 at 12:08:49AM -0700, Matt Starcrest wrote: > Hi, > I want to run wsgi python code on a web server behind nginx. ?The server needs to respond quickly to an http request, then continue to do some (slow) work after responding. 
?Python's yield statement seems to fit the bill, as follows: > > def application(environ, start_response): > ? ? output = get_response_quickly(environ) > > ? ? start_response('200 OK',?[('Content-type', 'text/plain'),?('Content-Length', str(len(output)))]) > > ? ? yield output > ? ? do_slow_work() > > If I run this wsgi in a standalone python server (I tried wsgiref, fapws, uwsgi), the caller receives the output immediately (after get_response_quickly()), as desired. ?Great. ?However, if I run any of these servers behind nginx, the caller doesn't receive a response until *after* do_slow_work() -- thus defeating the purpose. > > Is there a way to make this pattern work with nginx? ?Or is there a better way in general to respond quickly but continue work, without manually creating python threads / other clumsiness? In nginx-1.0.9 or nginx-1.1.5 you can try uwsgi_buffering off; or sgi_buffering off; depending on protocol. In any modern enough nginx version you can use proxy_buffering off; There is no way currently to disable buffering of FastCGI servers. -- Igor Sysoev From mdounin at mdounin.ru Thu Nov 3 07:44:43 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Nov 2011 11:44:43 +0400 Subject: how to set the proxy buffer if web server return large cookie via Set-Cookie In-Reply-To: References: Message-ID: <20111103074443.GA95664@mdounin.ru> Hello! On Thu, Nov 03, 2011 at 10:49:52AM +0800, Delta Yeh wrote: > Hi, > > According to http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers, > for long uri or large cookie from browser, I can use > large_client_header_buffers . > For the large cookie from web server via Set-Cookie, how to set proxy buffer? > > Should I set proxy_buffer_size the same size as large_client_header_buffers? > Or set the proxy_buffers the same as large_client_header_buffers? > > For example, there is a 32k cookie > location / { > large_client_header_buffers 4 32k; > proxy_buffer_size 32k; > proxy_pass .... > } > > or > > location / { > large_client_header_buffers 4 32k; > proxy_buffers 4 32k; > > proxy_pass .... > } > > > According to the WIKI, I prefer proxy_buffers directive. Response headers from upstream have to fit into proxy_buffer_size buffer. The proxy_buffers directive is used when reading response body from upstream, not headers. Maxim Dounin From matt.starcrest at yahoo.com Thu Nov 3 07:48:30 2011 From: matt.starcrest at yahoo.com (Matt Starcrest) Date: Thu, 3 Nov 2011 00:48:30 -0700 (PDT) Subject: python wsgi behind nginx: yield not behaving as expected In-Reply-To: <20111103071247.GB37071@nginx.com> References: <1320304129.63380.YahooMailNeo@web46009.mail.sp1.yahoo.com> <20111103071247.GB37071@nginx.com> Message-ID: <1320306510.61261.YahooMailNeo@web46005.mail.sp1.yahoo.com> Looks like that solved it! ?Thanks Igor! ________________________________ From: Igor Sysoev To: nginx at nginx.org Sent: Thursday, November 3, 2011 12:12 AM Subject: Re: python wsgi behind nginx: yield not behaving as expected On Thu, Nov 03, 2011 at 12:08:49AM -0700, Matt Starcrest wrote: > Hi, > I want to run wsgi python code on a web server behind nginx. ?The server needs to respond quickly to an http request, then continue to do some (slow) work after responding. ?Python's yield statement seems to fit the bill, as follows: > > def application(environ, start_response): > ? ? output = get_response_quickly(environ) > > ? ? start_response('200 OK',?[('Content-type', 'text/plain'),?('Content-Length', str(len(output)))]) > > ? ? yield output > ? ? 
do_slow_work() > > If I run this wsgi in a standalone python server (I tried wsgiref, fapws, uwsgi), the caller receives the output immediately (after get_response_quickly()), as desired. ?Great. ?However, if I run any of these servers behind nginx, the caller doesn't receive a response until *after* do_slow_work() -- thus defeating the purpose. > > Is there a way to make this pattern work with nginx? ?Or is there a better way in general to respond quickly but continue work, without manually creating python threads / other clumsiness? In nginx-1.0.9 or nginx-1.1.5 you can try ? uwsgi_buffering? off; or ? sgi_buffering? off; depending on protocol. In any modern enough nginx version you can use ? ? proxy_buffering off; There is no way currently to disable buffering of FastCGI servers. -- Igor Sysoev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From delta.yeh at gmail.com Thu Nov 3 07:51:07 2011 From: delta.yeh at gmail.com (Delta Yeh) Date: Thu, 3 Nov 2011 15:51:07 +0800 Subject: how to set the proxy buffer if web server return large cookie via Set-Cookie In-Reply-To: <20111103074443.GA95664@mdounin.ru> References: <20111103074443.GA95664@mdounin.ru> Message-ID: So to resolve large cookie issue, the config should be: location / { large_client_header_buffers 4 32k; proxy_buffer_size 32k; proxy_pass .... } For response, default proxy_buffers should be OK for most web application. 2011/11/3 Maxim Dounin : > Hello! > > On Thu, Nov 03, 2011 at 10:49:52AM +0800, Delta Yeh wrote: > >> Hi, >> >> ? According to http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers, >> ? for long uri or large cookie from browser, I can use >> large_client_header_buffers ?. >> ? For the large cookie from web server via Set-Cookie, how to set proxy buffer? >> >> ? Should I set proxy_buffer_size ?the same size ?as large_client_header_buffers? >> ? Or set the proxy_buffers ?the same as ? large_client_header_buffers? >> >> ? For example, there is a 32k cookie >> ? ? location / { >> ? ? large_client_header_buffers ?4 32k; >> ? ? proxy_buffer_size ?32k; >> ? ?proxy_pass .... >> } >> >> ? or >> >> ?location / { >> ? ? large_client_header_buffers ?4 32k; >> ? ? proxy_buffers ?4 32k; >> >> ? ?proxy_pass .... >> } >> >> >> According to the WIKI, I prefer proxy_buffers directive. > > Response headers from upstream have to fit into proxy_buffer_size > buffer. ?The proxy_buffers directive is used when reading response > body from upstream, not headers. > > Maxim Dounin > From igor at sysoev.ru Thu Nov 3 07:51:55 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 3 Nov 2011 11:51:55 +0400 Subject: python wsgi behind nginx: yield not behaving as expected In-Reply-To: <1320306510.61261.YahooMailNeo@web46005.mail.sp1.yahoo.com> References: <1320304129.63380.YahooMailNeo@web46009.mail.sp1.yahoo.com> <20111103071247.GB37071@nginx.com> <1320306510.61261.YahooMailNeo@web46005.mail.sp1.yahoo.com> Message-ID: <20111103075154.GB37679@nginx.com> On Thu, Nov 03, 2011 at 12:48:30AM -0700, Matt Starcrest wrote: > Looks like that solved it! ?Thanks Igor! 
Please note, that any "..._buffering off" disables caching, so you can set it exactly on locations which should be cached: location /some/page { ..._buffering off; } > From: Igor Sysoev > To: nginx at nginx.org > Sent: Thursday, November 3, 2011 12:12 AM > Subject: Re: python wsgi behind nginx: yield not behaving as expected > > On Thu, Nov 03, 2011 at 12:08:49AM -0700, Matt Starcrest wrote: > > Hi, > > I want to run wsgi python code on a web server behind nginx. ?The server needs to respond quickly to an http request, then continue to do some (slow) work after responding. ?Python's yield statement seems to fit the bill, as follows: > > > > def application(environ, start_response): > > ? ? output = get_response_quickly(environ) > > > > ? ? start_response('200 OK',?[('Content-type', 'text/plain'),?('Content-Length', str(len(output)))]) > > > > ? ? yield output > > ? ? do_slow_work() > > > > If I run this wsgi in a standalone python server (I tried wsgiref, fapws, uwsgi), the caller receives the output immediately (after get_response_quickly()), as desired. ?Great. ?However, if I run any of these servers behind nginx, the caller doesn't receive a response until *after* do_slow_work() -- thus defeating the purpose. > > > > Is there a way to make this pattern work with nginx? ?Or is there a better way in general to respond quickly but continue work, without manually creating python threads / other clumsiness? > > In nginx-1.0.9 or nginx-1.1.5 you can try > ? uwsgi_buffering? off; > or > ? sgi_buffering? off; > depending on protocol. In any modern enough nginx version you can use > ? ? proxy_buffering off; > > There is no way currently to disable buffering of FastCGI servers. > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Igor Sysoev From quintinpar at gmail.com Thu Nov 3 09:17:34 2011 From: quintinpar at gmail.com (Quintin Par) Date: Thu, 3 Nov 2011 14:47:34 +0530 Subject: Nginx & long poll: Best practices to reduce memory and bandwidth footprint Message-ID: Can someone please answer this question on serverfault.com? http://serverfault.com/questions/327301/nginx-long-poll-best-practices-to-reduce-memory-and-bandwidth-footprint -Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Thu Nov 3 09:34:09 2011 From: agentzh at gmail.com (agentzh) Date: Thu, 3 Nov 2011 17:34:09 +0800 Subject: [ANN] ngx_openresty 1.0.8.26 (stable) released In-Reply-To: References: Message-ID: Hi, folks! I'm happy to announce that the new stable release of ngx_openresty, 1.0.8.26, has just been kicked out of door: http://openresty.org/#Download This is the first stable release of ngx_openresty that is based on the Nginx core 1.0.8. And this is a big release with a *lot* of bug fixes and new features. Special thanks go to all our contributors and users to help make this happen over the last month :) Here goes the complete change log for this release, as compared to the last stable release, 1.0.6.22, released nearly a month ago: - upgraded the Nginx core to 1.0.8. - upgraded LuaNginxModule to 0.3.1rc23. 
- feature: added new directive lua_shared_dict: http://wiki.nginx.org/HttpLuaModule#lua_shared_dict - feature: added Lua API for the shm-based dictionary: http://wiki.nginx.org/HttpLuaModule#ngx.shared.DICT - feature: now we apply the patch to the nginx core so as to allow main request body modifications. - feature: added new Lua API ngx.req.set_body_file(): http://wiki.nginx.org/HttpLuaModule#ngx.req.set_body_file - feature: added new Lua API ngx.req.set_body_data(): http://wiki.nginx.org/HttpLuaModule#ngx.req.set_body_data - feature: added new Lua functions ngx.req.read_body(), ngx.req.discard_body(), ngx.req.get_body_data(), and ngx.req.get_body_file(). see the docs here: http://wiki.nginx.org/HttpLuaModule#ngx.req.read_body - feature: now we implemented ngx.req.set_uri() and ngx.req.set_uri_args() to emulate ngx_rewrite's rewrite directive (without redirect or permanent modifiers). thanks Vladimir Protasov (utros) and Nginx User. - feature: added constant ngx.HTTP_METHOD_NOT_IMPLEMENTED (501). thanks Nginx User. - feature: now for HTTP 1.0 requests, we disable the automatic full buffering mode if the user sets the Content-Length response header before sending out the headers. this allows streaming output for HTTP 1.0 requests if the content length can be calculated beforehand. thanks Li Ziyi. - bugfix: now we properly support setting the Cache-Control response header via the ngx.header.HEADER interface. - bugfix: no longer set header hash to 1. use the ngx_hash_key_lcinstead. - bugfix: now we skip rewrite phase Lua handlers altogether if ngx_rewrite's rewrite directive issue a location re-lookup by changing URIs (but not including rewrite ... break). thanks Nginx User. - bugfix: fixed hanging issues when using ngx.exec() within rewrite_by_lua and access_by_lua. thanks Nginx User for reporting it. - bugfix: lua_need_request_body should not skip requests with methods other than POST and PUT. thanks Nginx User. - bugfix: ndk.set_var.DIRECTIVE had a memory issue and might pass empty argument values to the directive being called. thanks dannynoonan. - bugfix: no longer free request body buffers that are not allocated by ourselves. - bugfix: now we allow setting ngx.var.VARIABLE to nil. - bugfix: now we explicitly clear all the modules' contexts before dump to named location with ngx.exec. thanks Nginx User. - upgraded EchoNginxModule to 0.37rc7. - bugfix: fixed a memory issue in both echo_sleep and echo_blocking_sleep: we should not pass ngx_str_t strings to atof()which expects C strings. - bugfix: now we explicitly clear all the modules' contexts before dump to named location with echo_exec. - bugfix: bugfix: echo_exec may hang when running after echo_sleep(or other I/O interruption calls): we should have called ngx_http_finalize_request on NGX_DONE to decrement r->main->countanyway. - bugfix: now we properly set the Content-Length request header for subrequests. - upgraded SrcacheNginxModule to 0.13rc2. - feature: implemented response status line and general response header cachin and added new directives srcache_store_hide_header and srcache_store_pass_header to control which headers to cache and which not. - feature: added new directive srcache_response_cache_control to control whether honor response headers Cache-Control and Expires, default on. - feature: we disable srcache_store automatically by default when Cache-Control: max-age=0 and Expires: are seen. 
- feature: implemented builtin nginx variable $srcache_expire for automatic expiration time calculation based on response headers Cache-Control (max-age) and Expires; also added new directives srcache_max_expire and srcache_default_expire. - feature: implemented the srcache_store_no_cache directive; now by default, we do not store responses with the header Cache-Control: no-cache into the cache. - feature: implemented the srcache_store_no_store directive (default off). Now by default, responses with the header Cache-Control: no-store will not be stored into the cache. - feature: implemented the srcache_store_private directive to control whether to store responses with the header Cache-Control: private. - feature: implemented the srcache_request_cache_control directive to allow request headers Cache-Control: no-cache or Pragma: no-cache to force bypassing cache lookup. it also honors the request header Cache-Control: no-store. this directive is turned off by default. - feature: now we check response header Content-Encoding by default and a non-empty header value will skip srcache_store; also introduced a new directive named srcache_ignore_content_encoding to ignore this response header. - feature: implemented the srcache_methods directive to specify request methods that are cacheable, by default, only GET and HEAD are cacheable. - bugfix: we no longer set header hash to 1; we use ngx_hash_key_lcinstead. - bugfix: when we skip srcache_fetch by means of srcache_fetch_skip, we should not automatically skip srcache_store. - bugfix: now we ignore the Content-Length header (if any) of the main request for the subrequests. - bugfix: there might be a segfault when failing to allocate memory in ngx_http_srcache_add_copy_chain. thanks Shaun savage. - feature: implemented new directive srcache_store_statuses to allow the user to specify the response status code list that is to be stored into the cache. - bugfix: we now only cache 200, 301, and 302 responses by default. - upgraded IconvNginxModule to 0.10rc5. - bugfix: fixed -Wset-but-not-used warnings issued by gcc 4.6.0. thanks Zhi Jiale (Calio). - upgraded HeadersMoreNginxModule to 0.16rc3. - bugfix: we should set header hash using ngx_hash_key_lc, not simply to 1. - bugfix: fixed setting Cache-Control response headers. we should properly prepare the r->cache_control array as well. - upgraded RdsJsonNginxModule to 0.12rc6. - bugfix: fixed compatibility with nginx 1.1.4+. - upgraded RdsCsvNginxModule to 0.04. - bugfix: fixed compatibility issues with nginx 1.1.4+. - optimization: now we only register our filters when rds_csv is actually used in nginx.conf. - upgraded Redis2NginxModule to 0.08rc1. - bugfix: fixed compatibility with nginx 1.1.4+. - upgraded DrizzleNginxModule to 0.1.2rc2. - bugfix: fixed compatibility with nginx 1.1.4+ - upgraded MemcNginxModule to 0.13rc1. - bugfix: fixed compatibility with nginx 1.1.4+. - upgraded SetMiscNginxModule to v0.22rc3. - minor code cleanup. - applied the patch to the Nginx core that always clears all modules' contexts in ngx_http_named_location. - applied the patch for the variable-header-ignore-no-hash issue. see http://forum.nginx.org/read.php?29,216062 for details. As always, you're welcome to report bugs and feature requests either here or directly to me :) It'll also be highly appreciated to try out the devel releases (based on the Nginx core 1.0.9+) that are coming out later ;) OpenResty (aka. 
ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. By taking adantage of various well-designed Nginx modules, OpenResty effectively turns the nginx server into a powerful web app server, in which the web developers can use the Lua programming language to script various existing nginx C modules and Lua modules and construct extremely high-performance web applications that is capable to handle 10K+ connections. OpenResty aims to run your server-side web app completely in the Nginx server, leveraging Nginx's event model to do non-blocking I/O not only with the HTTP clients, but also with remote backends like MySQL, PostgreSQL, Memcached, and Redis. You can find more details on the homepage of ngx_openresty here: http://openresty.org Have fun! -agentzh -------------- next part -------------- An HTML attachment was scrubbed... URL: From ft at falkotimme.com Thu Nov 3 09:43:06 2011 From: ft at falkotimme.com (Falko Timme) Date: Thu, 3 Nov 2011 10:43:06 +0100 Subject: ISPConfig 3 now with full nginx support Message-ID: Hi, ISPConfig 3 is an open source hosting control panel for Linux which is capable of managing multiple servers from one control panel. ISPConfig is licensed under the BSD license. Its feature list is here: http://www.ispconfig.org/ispconfig-3/ Today we have released ISPConfig 3.0.4, and one of the new features is full support for nginx, i.e., ISPConfig can now create and manage websites on an nginx server. It has support for SSL (SNI is possible as well), rewrites/redirects, CGI (through fcgiwrap), PHP-FPM (both TCP connections and sockets are supported), custom php.ini settings, custom nginx directives, basic http authentication, subdomains, alias domains, IPv6, etc. You can find the full announcement on http://www.ispconfig.org/releases/ispconfig-3-0-4-released/ ISPConfig can be downloaded from http://www.ispconfig.org/ispconfig-3/download/ Best Regards, Falko Timme ISPConfig Team From lists at ruby-forum.com Thu Nov 3 09:46:14 2011 From: lists at ruby-forum.com (Noah C.) Date: Thu, 03 Nov 2011 10:46:14 +0100 Subject: DNS TTLs being ignored In-Reply-To: References: Message-ID: Thanks for the reply Andrew. Do you have any idea when it's likely to be generally available? This is a pretty big nuisance for us, and I'd like to be able to figure out if I need to look at using a new reverse proxy, at least for the time being. --Noah -- Posted via http://www.ruby-forum.com/. From andrew at nginx.com Thu Nov 3 09:50:58 2011 From: andrew at nginx.com (Andrew Alexeev) Date: Thu, 3 Nov 2011 13:50:58 +0400 Subject: DNS TTLs being ignored In-Reply-To: References: Message-ID: Noah, This fix/improvement be introduced in 1.1.8 which will come out around Nov 14. Hope this helps On Nov 3, 2011, at 1:46 PM, Noah C. wrote: > Thanks for the reply Andrew. Do you have any idea when it's likely to be > generally available? This is a pretty big nuisance for us, and I'd like > to be able to figure out if I need to look at using a new reverse proxy, > at least for the time being. > > --Noah > > -- > Posted via http://www.ruby-forum.com/. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From calin.don at gmail.com Thu Nov 3 10:41:56 2011 From: calin.don at gmail.com (Calin Don) Date: Thu, 3 Nov 2011 12:41:56 +0200 Subject: HTTP Request filter module In-Reply-To: References: Message-ID: Doesn't post_action do what you need? http://wiki.nginx.org/HttpCoreModule#post_action On Thu, Nov 3, 2011 at 09:06, forum_id wrote: > I have a requirement for HTTP request filter. Whenever I receive a HTTP > request this filter module should read the request and does some > internal statistical work, it doesn't manipulate the request page nor > creates response page. It just need to peep into request that's it. > After this filter module, nginx HTTP server should continue to process > the request and serve the response to user. Can we do it without a proxy > handler, because nginx serves the responses directly here. > > Please help me. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,217739,217739#msg-217739 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Thu Nov 3 11:00:37 2011 From: agentzh at gmail.com (agentzh) Date: Thu, 3 Nov 2011 19:00:37 +0800 Subject: HTTP Request filter module In-Reply-To: References: Message-ID: On Thu, Nov 3, 2011 at 3:06 PM, forum_id wrote: > > I have a requirement for HTTP request filter. Whenever I receive a HTTP > request this filter module should read the request and does some > internal statistical work, it doesn't manipulate the request page nor > creates response page. It just need to peep into request that's it. > After this filter module, nginx HTTP server should continue to process > the request and serve the response to user. Can we do it without a proxy > handler, because nginx serves the responses directly here. > I think this is a perfect use case for the rewrite_by_lua or access_by_lua directives provided by the ngx_lua module. See ??? http://wiki.nginx.org/HttpLuaModule You can read and modify the request URI, query args, or even request bodies in your Lua code and save the statistical results in the shm-based dictionary (shared_dict) provided by ngx_lua or issue a subrequest from within Lua to internal locations configured by ngx_memc or ngx_redis2 via the ngx.location.capture API. Because LuaJIT 2.0 is so light and so fast, this setting can achieve performance comparable with an Nginx C module and also provide scripting flexibility :) Regards, -agentzh From nginx-forum at nginx.us Thu Nov 3 12:12:02 2011 From: nginx-forum at nginx.us (forum_id) Date: Thu, 03 Nov 2011 08:12:02 -0400 Subject: HTTP Request filter module In-Reply-To: References: Message-ID: <415d07b2a31d75210dfceb8ef4c979b1.NginxMailingListEnglish@forum.nginx.org> Thank you. I am new to nginx, so spare me if I talk anything nonsense. I want through link you provided, and then went through code as well. I couldn't understand few things here 1) Can we override nginx core's directive? 2) Since that post action is been used in multiple place, how far it safe to over write it? Thank you once again. 
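For reference, the access_by_lua plus lua_shared_dict approach suggested above could look roughly like the sketch below. This is untested and only illustrates the idea: the zone name "stats", its 10m size, the per-URI counter key and the static root are all made-up placeholders, not part of any existing setup.

    http {
        # shared memory zone, visible to all worker processes
        lua_shared_dict stats 10m;

        server {
            listen 80;

            location / {
                # access phase: look at the request, record a counter,
                # then let nginx continue to serve the response as usual
                access_by_lua '
                    local stats = ngx.shared.stats
                    local newval, err = stats:incr(ngx.var.uri, 1)
                    if not newval then
                        stats:set(ngx.var.uri, 1)
                    end
                ';

                root /var/www;
            }
        }
    }

Because the handler runs in the access phase and returns nothing, the request falls through to the normal content handler (static files here, or a proxy_pass/fastcgi_pass location in a real setup).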
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217739,217761#msg-217761 From nginx-forum at nginx.us Thu Nov 3 12:13:24 2011 From: nginx-forum at nginx.us (forum_id) Date: Thu, 03 Nov 2011 08:13:24 -0400 Subject: HTTP Request filter module In-Reply-To: References: Message-ID: <917477f9cde52b53ea5f9cd28ea54872.NginxMailingListEnglish@forum.nginx.org> Thank you. As per the project requirements need to develop C modules only, so if @agentzh can suggest me in this way, that would be helpful. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217739,217762#msg-217762 From agentzh at gmail.com Thu Nov 3 12:54:57 2011 From: agentzh at gmail.com (agentzh) Date: Thu, 3 Nov 2011 20:54:57 +0800 Subject: HTTP Request filter module In-Reply-To: <917477f9cde52b53ea5f9cd28ea54872.NginxMailingListEnglish@forum.nginx.org> References: <917477f9cde52b53ea5f9cd28ea54872.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Thu, Nov 3, 2011 at 8:13 PM, forum_id wrote: > Thank you. As per the project requirements need to develop C modules > only, so if @agentzh can suggest me in this way, that would be helpful. > Just take a look at how ngx_lua does those things on the C level :) Regards, -agentzh From wendal1985 at gmail.com Thu Nov 3 13:39:45 2011 From: wendal1985 at gmail.com (Wendal Chen) Date: Thu, 3 Nov 2011 21:39:45 +0800 Subject: [ANN] ngx_openresty 1.0.8.26 (stable) released In-Reply-To: References: Message-ID: Great!! 2011/11/3 agentzh > Hi, folks! > > I'm happy to announce that the new stable release of ngx_openresty, > 1.0.8.26, has just been kicked out of door: > > http://openresty.org/#Download > > This is the first stable release of ngx_openresty that is based on the > Nginx core 1.0.8. And this is a big release with a *lot* of bug fixes and > new features. > > Special thanks go to all our contributors and users to help make this > happen over the last month :) > > Here goes the complete change log for this release, as compared to the > last stable release, 1.0.6.22, released nearly a month ago: > > - upgraded the Nginx core to 1.0.8. > - upgraded LuaNginxModule to 0.3.1rc23. > - feature: added new directive lua_shared_dict: > http://wiki.nginx.org/HttpLuaModule#lua_shared_dict > - feature: added Lua API for the shm-based dictionary: > http://wiki.nginx.org/HttpLuaModule#ngx.shared.DICT > - feature: now we apply the patch to the nginx core so as to allow > main request body modifications. > - feature: added new Lua API ngx.req.set_body_file(): > http://wiki.nginx.org/HttpLuaModule#ngx.req.set_body_file > - feature: added new Lua API ngx.req.set_body_data(): > http://wiki.nginx.org/HttpLuaModule#ngx.req.set_body_data > - feature: added new Lua functions ngx.req.read_body(), > ngx.req.discard_body(), ngx.req.get_body_data(), and > ngx.req.get_body_file(). see the docs here: > http://wiki.nginx.org/HttpLuaModule#ngx.req.read_body > - feature: now we implemented ngx.req.set_uri() and > ngx.req.set_uri_args() to emulate ngx_rewrite's rewrite directive > (without redirect or permanent modifiers). thanks Vladimir Protasov > (utros) and Nginx User. > - feature: added constant ngx.HTTP_METHOD_NOT_IMPLEMENTED (501). > thanks Nginx User. > - feature: now for HTTP 1.0 requests, we disable the automatic full > buffering mode if the user sets the Content-Length response header > before sending out the headers. this allows streaming output for HTTP 1.0 > requests if the content length can be calculated beforehand. thanks Li Ziyi. 
> - bugfix: now we properly support setting the Cache-Controlresponse header via the > ngx.header.HEADER interface. > - bugfix: no longer set header hash to 1. use the ngx_hash_key_lcinstead. > - bugfix: now we skip rewrite phase Lua handlers altogether if > ngx_rewrite's rewrite directive issue a location re-lookup by > changing URIs (but not including rewrite ... break). thanks Nginx User. > - bugfix: fixed hanging issues when using ngx.exec() within > rewrite_by_lua and access_by_lua. thanks Nginx User for reporting > it. > - bugfix: lua_need_request_body should not skip requests with > methods other than POST and PUT. thanks Nginx User. > - bugfix: ndk.set_var.DIRECTIVE had a memory issue and might pass > empty argument values to the directive being called. thanks dannynoonan. > - bugfix: no longer free request body buffers that are not > allocated by ourselves. > - bugfix: now we allow setting ngx.var.VARIABLE to nil. > - bugfix: now we explicitly clear all the modules' contexts before > dump to named location with ngx.exec. thanks Nginx User. > - upgraded EchoNginxModule to 0.37rc7. > - bugfix: fixed a memory issue in both echo_sleep and > echo_blocking_sleep: we should not pass ngx_str_t strings to atof()which expects C strings. > - bugfix: now we explicitly clear all the modules' contexts before > dump to named location with echo_exec. > - bugfix: bugfix: echo_exec may hang when running after echo_sleep(or other I/O interruption calls): we should have called > ngx_http_finalize_request on NGX_DONE to decrement r->main->countanyway. > - bugfix: now we properly set the Content-Length request header for > subrequests. > - upgraded SrcacheNginxModule to 0.13rc2. > - feature: implemented response status line and general response > header cachin and added new directives srcache_store_hide_headerand > srcache_store_pass_header to control which headers to cache and > which not. > - feature: added new directive srcache_response_cache_control to > control whether honor response headers Cache-Control and Expires, > default on. > - feature: we disable srcache_store automatically by default when Cache-Control: > max-age=0 and Expires: are seen. > - feature: implemented builtin nginx variable $srcache_expire for > automatic expiration time calculation based on response headers > Cache-Control (max-age) and Expires; also added new directives > srcache_max_expire and srcache_default_expire. > - feature: implemented the srcache_store_no_cache directive; now by > default, we do not store responses with the header Cache-Control: > no-cache into the cache. > - feature: implemented the srcache_store_no_store directive(default > off). Now by default, responses with the header Cache-Control: > no-store will not be stored into the cache. > - feature: implemented the srcache_store_private directive to > control whether to store responses with the header Cache-Control: > private. > - feature: implemented the srcache_request_cache_control directive > to allow request headers Cache-Control: no-cache or Pragma: no-cacheto force bypassing cache lookup. it also honors the request header Cache-Control: > no-store. this directive is turned off by default. > - feature: now we check response header Content-Encoding by default > and a non-empty header value will skip srcache_store; also > introduced a new directive named srcache_ignore_content_encoding to > ignore this response header. 
> - feature: implemented the srcache_methods directive to specify > request methods that are cacheable, by default, only GET and HEADare cacheable. > - bugfix: we no longer set header hash to 1; we use ngx_hash_key_lcinstead. > - bugfix: when we skip srcache_fetch by means of srcache_fetch_skip, > we should not automatically skip srcache_store. > - bugfix: now we ignore the Content-Length header (if any) of the > main request for the subrequests. > - bugfix: there might be a segfault when failing to allocate memory > in ngx_http_srcache_add_copy_chain. thanks Shaun savage. > - feature: implemented new directive srcache_store_statuses to > allow the user to specify the response status code list that is to be > stored into the cache. > - bugfix: we now only cache 200, 301, and 302 responses by default. > - upgraded IconvNginxModule to 0.10rc5. > - bugfix: fixed -Wset-but-not-used warnings issued by gcc 4.6.0. > thanks Zhi Jiale (Calio). > - upgraded HeadersMoreNginxModule to 0.16rc3. > - bugfix: we should set header hash using ngx_hash_key_lc, not > simply to 1. > - bugfix: fixed setting Cache-Control response headers. we should > properly prepare the r->cache_control array as well. > - upgraded RdsJsonNginxModule to 0.12rc6. > - bugfix: fixed compatibility with nginx 1.1.4+. > - upgraded RdsCsvNginxModule to 0.04. > - bugfix: fixed compatibility issues with nginx 1.1.4+. > - optimization: now we only register our filters when rds_csv is > actually used in nginx.conf. > - upgraded Redis2NginxModule to 0.08rc1. > - bugfix: fixed compatibility with nginx 1.1.4+. > - upgraded DrizzleNginxModule to 0.1.2rc2. > - bugfix: fixed compatibility with nginx 1.1.4+ > - upgraded MemcNginxModule to 0.13rc1. > - bugfix: fixed compatibility with nginx 1.1.4+. > - upgraded SetMiscNginxModule to v0.22rc3. > - minor code cleanup. > - applied the patch to the Nginx core that always clears all modules' > contexts in ngx_http_named_location. > - applied the patch for the variable-header-ignore-no-hash issue. see > http://forum.nginx.org/read.php?29,216062 for details. > > As always, you're welcome to report bugs and feature requests either here > or directly to me :) > > It'll also be highly appreciated to try out the devel releases (based on > the Nginx core 1.0.9+) that are coming out later ;) > > OpenResty (aka. ngx_openresty) is a full-fledged web application server > by bundling the standard Nginx core, lots of 3rd-party Nginx modules, > as well as most of their external dependencies. > > By taking adantage of various well-designed Nginx modules, OpenResty > effectively turns the nginx server into a powerful web app server, in which > the web developers can use the Lua programming language to script various > existing nginx C modules and Lua modules and construct extremely > high-performance web applications that is capable to handle 10K+ > connections. > > OpenResty aims to run your server-side web app completely in the Nginx > server, leveraging Nginx's event model to do non-blocking I/O not only with > the HTTP clients, but also with remote backends like MySQL, PostgreSQL, > Memcached, and Redis. > > You can find more details on the homepage of ngx_openresty here: > > http://openresty.org > > Have fun! > -agentzh > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Wendal Chen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at ruby-forum.com Thu Nov 3 15:29:45 2011 From: lists at ruby-forum.com (Roger Gue) Date: Thu, 03 Nov 2011 16:29:45 +0100 Subject: Upstream time out Message-ID: I am using NGINX 1.0.4 on Redhat Linux 5 and I am having the error message: 504 Gateway Time out NGINX 1.0.4 HERE IS THE DETAIL OF MY ERROR.LOG FILE; 2011/11/03 09:05:11 [error] 22348#0: *898 upstream timed out (110 Connection timed out) while reading response header from upstream, client:172.16.19.180, server: mcb-http-t.jlg.com, request: "POST /ole/findshipto.jsp HTTP/1.1", upstream: "http://172.21.68:80/ole/findshipto.jsp", host: "mcb-htp-t.jlg.com", referrer: :https://mcb-http-t.jlg.com/ole/findshipto.jsp" Please some one help me as I am new to NGINX: HERE IS MY PROXY CONFIGURATION FILE: proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body-size 50m; client_body_buffer_size 256k; proxy_connect_timeout 300; proxy_send_timeout 300; proxy_read_timeout 300; proxy_buffers 32 4k; client_body_timeout 300; proxy_ignore_client_abort on; I WILL APPRECIATE YOUR HELP IN SOLVING THIS PROBLEM. THANKS IN ADVANCE -- Posted via http://www.ruby-forum.com/. From ilan at time4learning.com Thu Nov 3 15:34:05 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Thu, 3 Nov 2011 11:34:05 -0400 Subject: Slow loading SWF files during high load Message-ID: During our busier days and hours, pages that have SWF files loading on them take a long time to load. Granted, some of our SWF files are around half a meg (.5 MB) so that's expected, but what specific settings in the Nginx configuration file could assist with speeding up downloads of large files? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 3 16:58:42 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Nov 2011 20:58:42 +0400 Subject: Upstream time out In-Reply-To: References: Message-ID: <20111103165842.GD95664@mdounin.ru> Hello! On Thu, Nov 03, 2011 at 04:29:45PM +0100, Roger Gue wrote: > I am using NGINX 1.0.4 on Redhat Linux 5 and I am having the error > message: 504 Gateway Time out NGINX 1.0.4 > > HERE IS THE DETAIL OF MY ERROR.LOG FILE; > > 2011/11/03 09:05:11 [error] 22348#0: *898 upstream timed out (110 > Connection timed out) while reading response header from upstream, > client:172.16.19.180, server: mcb-http-t.jlg.com, request: "POST > /ole/findshipto.jsp HTTP/1.1", upstream: > "http://172.21.68:80/ole/findshipto.jsp", host: "mcb-htp-t.jlg.com", > referrer: :https://mcb-http-t.jlg.com/ole/findshipto.jsp" [...] > proxy_read_timeout 300; The error suggests you backend doesn't handle requests properly/in time. If it's expected that request may take longer than 300 seconds to complete - you may want to tune proxy_read_timeout. If not - your backend is probably overloaded or not working at all, and you may want to fix it or add more backends. Maxim Dounin From lists at ruby-forum.com Thu Nov 3 17:47:51 2011 From: lists at ruby-forum.com (Roger Gue) Date: Thu, 03 Nov 2011 18:47:51 +0100 Subject: 504 Gateway Time out Nginx 1.0.4: Upstream time out In-Reply-To: References: Message-ID: <0c4c2869127d45aaf119cad0fb6df639@ruby-forum.com> Thanks for your help. I know my question will sound little ackward but I am new to all this: How do I tune proxy_read_tomeout? How do I check the backend to see if it is overloaded? or not working at all? How do I fix the backends or add more to it? Thanks a lot for taken time to help me on this. 
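To make Maxim's suggestion concrete: proxy_read_timeout (and the related proxy_connect_timeout / proxy_send_timeout) can be raised either in the shared proxy.conf or per location. A rough example follows, with purely illustrative values and a placeholder backend name rather than the real upstream address:

    location /ole/ {
        # give the backend up to 10 minutes to start sending a response;
        # only raise this if the JSP really is expected to take that long
        proxy_read_timeout    600s;
        proxy_connect_timeout 30s;
        proxy_send_timeout    300s;
        proxy_pass http://backend.example.com;
    }

As for checking whether the backend is overloaded, that is outside nginx itself: hitting the backend directly (for example with curl against its own port) and watching its load and logs during the slow periods is the usual first step.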
-- Posted via http://www.ruby-forum.com/. From quintinpar at gmail.com Thu Nov 3 18:30:00 2011 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 4 Nov 2011 00:00:00 +0530 Subject: Correct way to setup maintenance page in nginx Message-ID: Hi all, I tried using this method with `try_files`. Didn?t work. Also the HTTP status code(503) is not being set. location / { try_files /var/www/during_build.html @maintenance; proxy_pass http://localhost:82; } location @maintenance { return 503; } This method outside of `location` directive error_page 503 /var/www/during_build.html; ## System Maintenance (Service Unavailable) if (-f /var/www/during_build.html) { return 503; } Is also not working. Nginx just returns 503 without the custom page. What is the correct way to show system down pages? -Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From i at vbart.ru Thu Nov 3 20:18:03 2011 From: i at vbart.ru (Valentin V. Bartenev) Date: Fri, 4 Nov 2011 00:18:03 +0400 Subject: Correct way to setup maintenance page in nginx In-Reply-To: References: Message-ID: <201111040018.04115.i@vbart.ru> On Thursday 03 November 2011 22:30:00 Quintin Par wrote: [...] > This method outside of `location` directive > > error_page 503 /var/www/during_build.html; > > ## System Maintenance (Service Unavailable) > if (-f /var/www/during_build.html) { > return 503; > } > > Is also not working. Nginx just returns 503 without the custom page. > > What is the correct way to show system down pages? > Try this one: error_page 503 /during_build.html; location / { if (-f /var/www/during_build.html) { return 503; } } location = /during_build.html { root /var/www/; internal; } wbr, Valentin V. Bartenev From appa at perusio.net Thu Nov 3 21:02:11 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Thu, 03 Nov 2011 21:02:11 +0000 Subject: Correct way to setup maintenance page in nginx In-Reply-To: <201111040018.04115.i@vbart.ru> References: <201111040018.04115.i@vbart.ru> Message-ID: <87bossrkbw.wl%appa@perusio.net> On 3 Nov 2011 20h18 WET, i at vbart.ru wrote: > On Thursday 03 November 2011 22:30:00 Quintin Par wrote: > [...] >> This method outside of `location` directive >> >> error_page 503 /var/www/during_build.html; >> >> ## System Maintenance (Service Unavailable) >> if (-f /var/www/during_build.html) { >> return 503; >> } >> >> Is also not working. Nginx just returns 503 without the custom >> page. >> >> What is the correct way to show system down pages? >> > > Try this one: > > error_page 503 /during_build.html; > > location / { > > if (-f /var/www/during_build.html) { > return 503; > } > > } > > location = /during_build.html { > root /var/www/; > internal; > } Why not this? root /var/www; location / { error_page 503 @unavailable; location @unavailable { try_files /during_building.html @503; } } location @503 { return 503; } No need to use the if and the internal is implicit on the try_files. --- appa From i at vbart.ru Thu Nov 3 21:42:50 2011 From: i at vbart.ru (Valentin V. Bartenev) Date: Fri, 4 Nov 2011 01:42:50 +0400 Subject: Correct way to setup maintenance page in nginx In-Reply-To: <87bossrkbw.wl%appa@perusio.net> References: <201111040018.04115.i@vbart.ru> <87bossrkbw.wl%appa@perusio.net> Message-ID: <201111040142.50947.i@vbart.ru> On Friday 04 November 2011 01:02:11 Ant?nio P. P. Almeida wrote: > Why not this? 
> > root /var/www; > > location / { > > error_page 503 @unavailable; > > location @unavailable { > try_files /during_building.html @503; > } > } > > location @503 { > return 503; > } > > No need to use the if and the internal is implicit on the try_files. First of all, named locations can be on the server level only. And, how does a request get into @unavailable? Only after 503 has occurred. But, what will cause it? I think the main idea was: when we need to do some maintenance, we just create a specific file (probably with explaining message for users) and Nginx starts to return 503 on every request. wbr, Valentin V. Bartenev From zzz at zzz.org.ua Thu Nov 3 21:54:02 2011 From: zzz at zzz.org.ua (Alexandr Gomoliako) Date: Thu, 3 Nov 2011 23:54:02 +0200 Subject: Correct way to setup maintenance page in nginx In-Reply-To: <201111040142.50947.i@vbart.ru> References: <201111040018.04115.i@vbart.ru> <87bossrkbw.wl%appa@perusio.net> <201111040142.50947.i@vbart.ru> Message-ID: On 11/3/11, Valentin V. Bartenev wrote: > I think the main idea was: when we need to do some maintenance, we just > create > a specific file (probably with explaining message for users) and Nginx > starts > to return 503 on every request. If you are willing to create some file for this you can just as well swap nginx.conf with special maintenance config, where you can specify a single server { } block with return 503 or anything you like. From i at vbart.ru Thu Nov 3 22:06:24 2011 From: i at vbart.ru (Valentin V. Bartenev) Date: Fri, 4 Nov 2011 02:06:24 +0400 Subject: Correct way to setup maintenance page in nginx In-Reply-To: References: <201111040142.50947.i@vbart.ru> Message-ID: <201111040206.24783.i@vbart.ru> On Friday 04 November 2011 01:54:02 Alexandr Gomoliako wrote: > If you are willing to create some file for this you can just as well swap > nginx.conf with special maintenance config, where you can specify a single > server { } block with return 503 or anything you like. Of course, you have many options to do this with different pros and cons. And I just help to solve the problem the way that topic starter wants. wbr, Valentin V. Bartenev From coderight at gmail.com Fri Nov 4 01:51:25 2011 From: coderight at gmail.com (Chris) Date: Thu, 3 Nov 2011 21:51:25 -0400 Subject: "if" statement breaks try_files? Message-ID: I'm seeing something really weird. If I put an "if" statement anywhere within the same block as a "try_files" then it stops working. It always returns that the file exists even when it doesn't. I'm testing against nginx-1.0.9 on a Linux host. 
This can be tested with this simple.conf: worker_processes 1; events { worker_connections 1024; } http { server { listen 80; server_name localhost; root /var/www; index index.html index.htm index.php; location ~ \.php($|/) { try_files doesnotexist =405; fastcgi_index index.php; fastcgi_pass unix:/tmp/php.socket; if ($uri) {} fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name; fastcgi_param HTTPS off; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param REDIRECT_STATUS 200; } } } If that "if ($uri)" statement is left in it never returns the 405 error when accessing whatever.php. If you comment out that "if" then it works as expected and always returns the 405 error. Any idea what is going on here? CR From quintinpar at gmail.com Fri Nov 4 03:13:40 2011 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 4 Nov 2011 08:43:40 +0530 Subject: Correct way to setup maintenance page in nginx In-Reply-To: <201111040018.04115.i@vbart.ru> References: <201111040018.04115.i@vbart.ru> Message-ID: Thanks Valentin. This worked like a charm. I spend nearly 2-3 hours trying for a solution that evaded ?if? and just ?try_files?. Even the official wiki supported that. There are a lot of solutions out in the wild and most for them don?t work. Thanks again for the help. -Quintin On Fri, Nov 4, 2011 at 1:48 AM, Valentin V. Bartenev wrote: > location = /during_build.html { > root /var/www/; > internal; > } > -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Fri Nov 4 03:15:20 2011 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 4 Nov 2011 08:45:20 +0530 Subject: Nginx & long poll: Best practices to reduce memory and bandwidth footprint In-Reply-To: References: Message-ID: Posting the contents inline. Can someone review this? ----------------------------------------------------------------------------------------------------- I use nginx in this mode for [BOSH][1] and chat clients along with gzip. location ~* /http-bind/ { proxy_buffering off; keepalive_timeout 55; access_log off; tcp_nodelay on; proxy_pass http://x.x.x.x:1111; } Is this the best approach to **managing long polling** in nginx. I also use just one worker process for altogether for web & chat (single CPU). Is that fine? [1]: http://xmpp.org/extensions/xep-0206.html -Quintin On Thu, Nov 3, 2011 at 2:47 PM, Quintin Par wrote: > Can someone please answer this question on serverfault.com? > > > http://serverfault.com/questions/327301/nginx-long-poll-best-practices-to-reduce-memory-and-bandwidth-footprint > > > -Quintin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at nginxuser.net Fri Nov 4 06:13:47 2011 From: nginx at nginxuser.net (Nginx User) Date: Fri, 4 Nov 2011 09:13:47 +0300 Subject: "if" statement breaks try_files? 
In-Reply-To: References: Message-ID: See: http://wiki.nginx.org/IfIsEvil From quintinpar at gmail.com Fri Nov 4 08:55:45 2011 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 4 Nov 2011 14:25:45 +0530 Subject: Correct way to setup maintenance page in nginx In-Reply-To: References: <201111040018.04115.i@vbart.ru> Message-ID: On second thought don?t you think the file check in the ?if? condition is expensive considering it will be executed on every hit? -Quintin On Fri, Nov 4, 2011 at 8:43 AM, Quintin Par wrote: > Thanks Valentin. This worked like a charm. > > I spend nearly 2-3 hours trying for a solution that evaded ?if? and just > ?try_files?. Even the official wiki supported that. > > There are a lot of solutions out in the wild and most for them don?t work. > Thanks again for the help. > > -Quintin > On Fri, Nov 4, 2011 at 1:48 AM, Valentin V. Bartenev wrote: > >> location = /during_build.html { >> root /var/www/; >> internal; >> } >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From i at vbart.ru Fri Nov 4 09:48:15 2011 From: i at vbart.ru (Valentin V. Bartenev) Date: Fri, 4 Nov 2011 13:48:15 +0400 Subject: Correct way to setup maintenance page in nginx In-Reply-To: References: Message-ID: <201111041348.15401.i@vbart.ru> On Friday 04 November 2011 12:55:45 Quintin Par wrote: > On second thought don?t you think the file check in the ?if? condition is > expensive considering it will be executed on every hit? If you don't deal with high-load or dedicated file storage with big access latency, then don't worry. IMHO. Also, you can try to tune Nginx open file cache: http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache wbr, Valentin V. Bartenev From quintinpar at gmail.com Fri Nov 4 09:57:19 2011 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 4 Nov 2011 15:27:19 +0530 Subject: Correct way to setup maintenance page in nginx In-Reply-To: <201111041348.15401.i@vbart.ru> References: <201111041348.15401.i@vbart.ru> Message-ID: Thanks. On Fri, Nov 4, 2011 at 3:18 PM, Valentin V. Bartenev wrote: > On Friday 04 November 2011 12:55:45 Quintin Par wrote: > > On second thought don?t you think the file check in the ?if? condition is > > expensive considering it will be executed on every hit? > > If you don't deal with high-load or dedicated file storage with big access > latency, then don't worry. IMHO. > > Also, you can try to tune Nginx open file cache: > http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 4 10:50:58 2011 From: nginx-forum at nginx.us (forum_id) Date: Fri, 04 Nov 2011 06:50:58 -0400 Subject: HTTP Request filter module In-Reply-To: References: Message-ID: <16ecfbaa9bb2dbf62dda34a3415b2c44.NginxMailingListEnglish@forum.nginx.org> @agentzh, thank you. As you suggested, I went through the ngx_lua code. Since I am new to nginx, my understanding was not so good, but I want to write it in few points, 1) Lua handler receives request 2) Forwards request to Lua interpreter 3) Send Lua response back to user. Let me know if my understanding itself is wrong. Give me some simple pointers to understand, because I need to start with my module as soon as possible. Thank you once again. 
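On the open_file_cache suggestion a little further up (the maintenance-page thread): a minimal, untested example of enabling it at the http level might look like the following; the numbers are illustrative, not tuned recommendations.

    http {
        # cache open file descriptors, file existence and lookup errors;
        # this can also take some pressure off per-request checks such as
        # the if (-f ...) maintenance test discussed above
        open_file_cache          max=1000 inactive=20s;
        open_file_cache_valid    30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors   on;
    }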
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217739,217810#msg-217810 From nginx-forum at nginx.us Fri Nov 4 11:00:53 2011 From: nginx-forum at nginx.us (forum_id) Date: Fri, 04 Nov 2011 07:00:53 -0400 Subject: Proxy handler to multiple servers for single request. Message-ID: <8424e8ab817f240b5c6e51e54bd3e760.NginxMailingListEnglish@forum.nginx.org> I have requirement where proxy handler needs to talk to multiple server before sending request to actual server where it gets processed. For example proxy handler receives user request, talks to virus scanner to see user data contain any virus or not then only forward to actual server. Here virus scanner may be another server or a thread part of nginx. My doubts are 1) When I went through proxy module, ngx_http_proxy_create_request is the callback responsible to prepare request to send to another server. As per understanding, I need to call here virus scanner before sending request. Is it correct? Or is there any other place I can do this work? In this function if I call virus scanner API's, how will the request forwarded the server which serves request? 2) How to make it asynchronous with out effecting nginx's performance. -- Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217811,217811#msg-217811 From nginx-forum at nginx.us Fri Nov 4 11:58:10 2011 From: nginx-forum at nginx.us (forum_id) Date: Fri, 04 Nov 2011 07:58:10 -0400 Subject: Proxy handler to multiple servers for single request. In-Reply-To: <8424e8ab817f240b5c6e51e54bd3e760.NginxMailingListEnglish@forum.nginx.org> References: <8424e8ab817f240b5c6e51e54bd3e760.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9596b7b7d94f7d74660ee753aea0a05b.NginxMailingListEnglish@forum.nginx.org> I am trying to develop some thing similar to mod_clamav for apache. This module scans only responses, I need to scan even requests. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217811,217814#msg-217814 From tomlove at gmail.com Fri Nov 4 12:50:52 2011 From: tomlove at gmail.com (Thomas Love) Date: Fri, 4 Nov 2011 14:50:52 +0200 Subject: Nginx & long poll: Best practices to reduce memory and bandwidth footprint In-Reply-To: References: Message-ID: On 4 November 2011 05:15, Quintin Par wrote: > Posting the contents inline. Can someone review this? > > ----------------------------------------------------------------------------------------------------- > I use nginx in this mode for [BOSH][1] and chat clients along with gzip. > > location ~* /http-bind/ { > proxy_buffering off; > keepalive_timeout 55; > access_log off; > tcp_nodelay on; > proxy_pass http://x.x.x.x:1111; > } > > Is this the best approach to **managing long polling** in nginx. > > I also use just one worker process for altogether for web & chat (single > CPU). Is that fine? > > It looks fine. Use that until you have a performance problem, and then gather evidence identifying your bottleneck before changing anything. You might as well start with n worker processes though, where n is the number of cores on your CPU. Give yourself a few thousand worker_connections, because you'll have a relatively large number of relatively idle sockets. Long-polling is very simple and nginx is good at it. I would advise not to try to fix anything until you find a real problem. Thomas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From agentzh at gmail.com Fri Nov 4 13:02:23 2011 From: agentzh at gmail.com (agentzh) Date: Fri, 4 Nov 2011 21:02:23 +0800 Subject: HTTP Request filter module In-Reply-To: <16ecfbaa9bb2dbf62dda34a3415b2c44.NginxMailingListEnglish@forum.nginx.org> References: <16ecfbaa9bb2dbf62dda34a3415b2c44.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Fri, Nov 4, 2011 at 6:50 PM, forum_id wrote: > @agentzh, thank you. As you suggested, I went through the ngx_lua code. > Since I am new to nginx, my understanding was not so good, but I want to > write it in few points, If you're not familiar with Nginx internals, then I think use ngx_lua and Lua scripting is the quickest and easiest way to get the job done. I don't think I have the time to explain every thing here because it's really complicated for freshmen :) If you insist in Nginx C module development, I suggest you start with the resources listed here: http://wiki.nginx.org/Resources Regards, -agentzh From coderight at gmail.com Fri Nov 4 14:02:04 2011 From: coderight at gmail.com (Chris) Date: Fri, 4 Nov 2011 10:02:04 -0400 Subject: "if" statement breaks try_files? In-Reply-To: References: Message-ID: On Fri, Nov 4, 2011 at 2:13 AM, Nginx User wrote: > See: http://wiki.nginx.org/IfIsEvil Thanks, I should have known. I had read that but forgotten the most important parts. Shame so many projects design their own language when so many proven works are out there, but I digress. CR From simone.fumagalli at contactlab.com Fri Nov 4 14:04:01 2011 From: simone.fumagalli at contactlab.com (Simone Fumagalli) Date: Fri, 4 Nov 2011 15:04:01 +0100 Subject: NGINX cache. Real meaning of zone_size Message-ID: <4EB3F0D1.6040604@contactlab.com> Hello, I read from the list "zone_size" is a size of keys_zone, i.e. shared memory zone used to store cache keys (some minimal metadata about cached pages, about 64 bytes on 32-bit platforms). but what does this practically mean ? Let say a new request arrive and the request has to be cached. What does NGINX do if the "space" in the shared memory is over ? It delete the oldest file in the cache (and the key too) ? Is there a rule to size this parameter ? Thanks -- Simone From nginx-forum at nginx.us Fri Nov 4 14:11:59 2011 From: nginx-forum at nginx.us (forum_id) Date: Fri, 04 Nov 2011 10:11:59 -0400 Subject: HTTP Request filter module In-Reply-To: References: Message-ID: @agentzh, thank you. Even though I am new to nginx I went through to understand how nginx works and understood how nginx modules work. I just need very few pointers, like lua modules receives request and forwards to lua interpreter, after processing the request how it sends details. I need to overall details not complete in depth details, so that I can spend more time on code in productive way. It will make easy to understand in better way. I need to use C language. Please give very small notes with 5 points are also fine for me to start on it. 
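A very rough picture of how ngx_lua handles a request, plus a tiny sketch: there is no separate interpreter process; a Lua VM is embedded inside each nginx worker, the *_by_lua handler registered for a phase (rewrite, access or content) runs in-process when the request reaches that phase, and for the content phase the Lua code emits the response itself through ngx.say/ngx.print. The /hello location and the response text below are made up for illustration only:

    location /hello {
        # content phase handler: the embedded Lua code produces the
        # whole response; no upstream or separate interpreter involved
        content_by_lua '
            ngx.header.content_type = "text/plain"
            ngx.say("request method was ", ngx.var.request_method)
        ';
    }

With rewrite_by_lua or access_by_lua the same embedded code only inspects or adjusts the request and then returns, and nginx carries on to whatever content handler the location defines.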
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217739,217820#msg-217820 From lists at ruby-forum.com Fri Nov 4 19:38:37 2011 From: lists at ruby-forum.com (Roger Gue) Date: Fri, 04 Nov 2011 20:38:37 +0100 Subject: 504 Gateway Time out Nginx 1.0.4: Upstream time out In-Reply-To: References: Message-ID: <073867d2c99fa075d96095ce39dd647c@ruby-forum.com> HERE IS MY NGINX.CONF FILE DETAIL: user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx/nginx.pid; events { worker_connections 2048; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 3; tcp_nodelay off; ignore_invalid_headers on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_comp_level 6; gzip_static on; gzip_min_length 2200; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/x-javascript image/x-icon text/xml application/xml application/xml+rss text/javascript; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } AND HERE IS MY PROXY.CONF FILE DETAIL #proxy.conf #proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_body_buffer_size 256k; proxy_connect_timeout 10m; proxy_send_timeout 10m; proxy_read_timeout 8m; client_header_timeout 10m; proxy_buffers 32 4k; client_body_timeout 800; proxy_ignore_client_abort on; client_header_buffer_size 1k; large_client_header_buffers 8 8k; client_max_body_size 2g; postpone_output 1460; -- Posted via http://www.ruby-forum.com/. From djdarkbeat at gmail.com Fri Nov 4 23:04:36 2011 From: djdarkbeat at gmail.com (Brian Loomis) Date: Fri, 4 Nov 2011 17:04:36 -0600 Subject: beyond proxy_pass Message-ID: I have a situation where I have a very high write intensive site in nginx. This site aggregates GPS positions for transit from a high amount of devices. The incoming traffic generates push notifications with the push_module from slack.net (Leo) I need to get nginx to send the same packet it receives to a development server so that we can in essence `fork` the traffic so the second server can make it's own set of push notifications to have live http activity but completely sandboxed from the production site. I've got the two packt books and have been going through them and am using proxy_pass already o serve JSON across domains in productions but can't seem to get proxy_pass to send to the development server successfully. What's the best way to fork or mirror https requests to two locations like this? Brian Loomis From nginx at nginxuser.net Sat Nov 5 16:04:41 2011 From: nginx at nginxuser.net (Nginx User) Date: Sat, 5 Nov 2011 19:04:41 +0300 Subject: ngx.req.get_post_args Issue Message-ID: -************************ START -************************ if ngx.var.request_method == "POST" then ngx.req.read_body() local post_args = ngx.req.get_post_args() for my_key, v in pairs( post_args ) do if type(v) == "table" then my_arg = table.concat(v, " ") else my_arg = v end regex_rules.log_alert("Alert:- " .. my_key .. " ::: " .. 
my_arg) end end -************************ END -************************ The code in ngx_lua below which should log key/value pairs from a post request results in the log output below: -************************ START -************************ [alert] 17670#0: *1 Alert:- app_form[localServerPath] ::: /home/user/testsite.com/public_html/share/data/505 /home/user/testsite.com/public_html/share/data/505, client: 89.148.6.249, server: testsite.com, request: "POST // HTTP/1.1", host: "testsite.com", referrer: "http://testsite.com/?app_view=core.ItemAdmin&app_subView=core.ItemAdd&app_addPlugin=ItemAddFromServer&app_form%5BlocalServerPath%5D=/home/user/testsite.com/public_html/share/data/505&app_itemId=470&app_form%5Baction%5D%5BfindFilesFromLocalServer%5D=1&app_form%5BformName%5D=ItemAddFromServer" [alert] 17670#0: *1 Alert:- app_form[formName] ::: ItemAddFromServer ItemAddFromServer, client: 89.148.6.249, server: testsite.com, request: "POST // HTTP/1.1", host: "testsite.com", referrer: "http://testsite.com/?app_view=core.ItemAdmin&app_subView=core.ItemAdd&app_addPlugin=ItemAddFromServer&app_form%5BlocalServerPath%5D=/home/user/testsite.com/public_html/share/data/505&app_itemId=470&app_form%5Baction%5D%5BfindFilesFromLocalServer%5D=1&app_form%5BformName%5D=ItemAddFromServer" [debug] 17670#0: *1 posix_memalign: 08A6A910:4096 @16 [alert] 17670#0: *1 Alert:- app_itemId ::: 470 ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_formUrl" //?app_view=core.ItemAdmin 470 ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_authToken" 8845e98e9296 ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_controller" core.ItemAdd ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_form[formName]" ItemAddFromServer ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_itemId" 470 ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_addPlugin" ItemAddFromServer ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_form[localServerPath]" /home/user/testsite.com/public_html/share/data/505 ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_form[set][title]" on ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_form[CreateThumbnailOption][createThumbnail]" on ------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name="app_form[action][addFromLocalServer]" Add Files ------WebKitFormBoundary3c6CizwarDCoyCst-- , client: 89.148.6.249, server: testsite.com, request: "POST // HTTP/1.1", host: "testsite.com", referrer: "http://testsite.com/?app_view=core.ItemAdmin&app_subView=core.ItemAdd&app_addPlugin=ItemAddFromServer&app_form%5BlocalServerPath%5D=/home/user/testsite.com/public_html/share/data/505&app_itemId=470&app_form%5Baction%5D%5BfindFilesFromLocalServer%5D=1&app_form%5BformName%5D=ItemAddFromServer" [alert] 17670#0: *1 Alert:- app_form[action][findFilesFromLocalServer] ::: 1 1, client: 89.148.6.249, server: testsite.com, request: "POST // HTTP/1.1", host: "testsite.com", referrer: "http://testsite.com/?app_view=core.ItemAdmin&app_subView=core.ItemAdd&app_addPlugin=ItemAddFromServer&app_form%5BlocalServerPath%5D=/home/user/testsite.com/public_html/share/data/505&app_itemId=470&app_form%5Baction%5D%5BfindFilesFromLocalServer%5D=1&app_form%5BformName%5D=ItemAddFromServer" [alert] 17670#0: *1 Alert:- 
------WebKitFormBoundary3c6CizwarDCoyCst Content-Disposition: form-data; name ::: "app_return" //?app_view=core.ItemAdmin, client: 89.148.6.249, server: testsite.com, request: "POST // HTTP/1.1", host: "testsite.com", referrer: "http://testsite.com/?app_view=core.ItemAdmin&app_subView=core.ItemAdd&app_addPlugin=ItemAddFromServer&app_form%5BlocalServerPath%5D=/home/user/testsite.com/public_html/share/data/505&app_itemId=470&app_form%5Baction%5D%5BfindFilesFromLocalServer%5D=1&app_form%5BformName%5D=ItemAddFromServer" [alert] 17670#0: *1 Alert:- app_subView ::: core.ItemAdd core.ItemAdd, client: 89.148.6.249, server: testsite.com, request: "POST // HTTP/1.1", host: "testsite.com", referrer: "http://testsite.com/?app_view=core.ItemAdmin&app_subView=core.ItemAdd&app_addPlugin=ItemAddFromServer&app_form%5BlocalServerPath%5D=/home/user/testsite.com/public_html/share/data/505&app_itemId=470&app_form%5Baction%5D%5BfindFilesFromLocalServer%5D=1&app_form%5BformName%5D=ItemAddFromServer" [alert] 17670#0: *1 Alert:- app_addPlugin ::: ItemAddFromServer ItemAddFromServer, client: 89.148.6.249, server: testsite.com, request: "POST // HTTP/1.1", host: "testsite.com", referrer: "http://testsite.com/?app_view=core.ItemAdmin&app_subView=core.ItemAdd&app_addPlugin=ItemAddFromServer&app_form%5BlocalServerPath%5D=/home/user/testsite.com/public_html/share/data/505&app_itemId=470&app_form%5Baction%5D%5BfindFilesFromLocalServer%5D=1&app_form%5BformName%5D=ItemAddFromServer" -************************ END -************************ 1. The concat command suggests that when "v" is a table, the table holds the same value twice. E.G. "[alert] 17670#0: *1 Alert:- app_form[action][findFilesFromLocalServer] ::: 1 1," and "[alert] 17670#0: *1 Alert:- app_form[localServerPath] ::: /home/user/testsite.com/public_html/share/data/505 /home/user/testsite.com/public_html/share/data/505," 2. The "------WebKitFormBoundary3c6CizwarDCoyCst" lines seem to suggest the request body has been somehow attached to one of the variables "app_itemId". This should just have a value of 140. I have posted the form that generates this at: http://pastebin.com/6scK6cWP. (Pls widen page view to see properly) Thanks From agentzh at gmail.com Sun Nov 6 02:38:53 2011 From: agentzh at gmail.com (agentzh) Date: Sun, 6 Nov 2011 10:38:53 +0800 Subject: ngx.req.get_post_args Issue In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 12:04 AM, Nginx User wrote: > [alert] 17670#0: *1 Alert:- app_itemId ::: 470 > ------WebKitFormBoundary3c6CizwarDCoyCst > Content-Disposition: form-data; name="app_formUrl" > > //?app_view=core.ItemAdmin 470 > ------WebKitFormBoundary3c6CizwarDCoyCst > Content-Disposition: form-data; name="app_authToken" > Multipart form format is not supported in ngx.req.get_post_args. Only urlencoded format is supported for now. I'll make this clear in the documentation. Regards, -agentzh From nginx at nginxuser.net Sun Nov 6 07:12:28 2011 From: nginx at nginxuser.net (Nginx User) Date: Sun, 6 Nov 2011 10:12:28 +0300 Subject: ngx.req.get_post_args Issue In-Reply-To: References: Message-ID: On 6 November 2011 05:38, agentzh wrote: > Multipart form format is not supported in ngx.req.get_post_args. Only > urlencoded format is supported for now. I'll make this clear in the > documentation. Thanks for the clarification.. As I need to have a mix of both types, I have combined three files from the cgilua package into one module (cgilua_module) and using this instead. 
Handles both multipart and urlencoded formats and outputs a table of key/pair values. (Also has "application/xml", "text/xml", and "text/plain" under the POST request handler. Not yet sure where these fit in and just left them in) My code (work in progress) is now -************************ START -************************ local ngx_cgi = require "cgilua_module" if ngx.var.request_method == "POST" then ngx.req.read_body() local post_args = ngx_cgi.get_post_args() for my_key, v in pairs( post_args ) do if type(v) == "table" then my_arg = table.concat(v, " ") else my_arg = v end regex_rules.log_alert("Alert:- " .. my_key .. " ::: " .. my_arg) end end -************************ END -************************ Since cgilua has a permissive license, perhaps this, although in lua and not C, will be helpful in extending the coverage of ngx.req.get_post_args to multipart encoded forms. Cheers! From nginx at nginxuser.net Sun Nov 6 10:20:45 2011 From: nginx at nginxuser.net (Nginx User) Date: Sun, 6 Nov 2011 13:20:45 +0300 Subject: ngx.req.get_post_args Issue In-Reply-To: References: Message-ID: On 6 November 2011 10:12, Nginx User wrote: > My code (work in progress) is now > -************************ > ? ? ? ? ?START > -************************ > local ngx_cgi = require "cgilua_module" > if ngx.var.request_method == "POST" then > ? ? ? ngx.req.read_body() > ? ? ? local post_args = ngx_cgi.get_post_args() > ? ? ? for my_key, v in pairs( post_args ) do > ? ? ? ? ? ? ? if type(v) == "table" then > ? ? ? ? ? ? ? ? ? ? ? my_arg = table.concat(v, " ") > ? ? ? ? ? ? ? else > ? ? ? ? ? ? ? ? ? ? ? my_arg = v > ? ? ? ? ? ? ? end > ? ? ? ? ? ? ? regex_rules.log_alert("Alert:- " .. my_key .. " ::: " .. my_arg) > ? ? ? end > end > -************************ > ? ? ? ? ?END > -************************ UPDATED -************************ START -************************ if ngx.var.request_method == "POST" then -- ngx.req.read_body() -- Not needed as cgilua handles this local ngx_cgi = require "cgilua_module" local post_args = ngx_cgi.get_post_args() for my_key, v in pairs( post_args ) do if type(v) == "table" then my_arg = table.concat(v, " ") else my_arg = v end regex_rules.log_alert("Alert:- " .. my_key .. " ::: " .. my_arg) end end -************************ END -************************ From kgorlo at gmail.com Sun Nov 6 20:48:08 2011 From: kgorlo at gmail.com (Kamil Gorlo) Date: Sun, 6 Nov 2011 21:48:08 +0100 Subject: Log response header from upstream Message-ID: Hi, I have some problem with access log combined with X-Accel-Redirect requests. In my case I have Nginx set up as load-balancer to group of application servers. These servers return some special header in every request - I need to log value of this special header (lets call it 'X-user') in access log - also I do not want to expose this header to the world (proxy_hide_header helps here). Everything seems to work, but when there is X-Accel-Redirect request I have empty field in access log because of subrequest ($upstream_http_x_special is cleared because of subrequest, if I understand this mechanism correctly). How to make this work for every request? Here is my config: http { log_format extended '$request $upstream_http_x_user'; access_log /var/log/nginx/access.log extended; ... 
server { listen 80; location / { proxy_pass http://backend; proxy_hide_header X-User; } location /files { internal; proxy_pass http://filestore; } } } Cheers, -- Kamil Gorlo From mdounin at mdounin.ru Sun Nov 6 21:15:04 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Nov 2011 01:15:04 +0400 Subject: Log response header from upstream In-Reply-To: References: Message-ID: <20111106211504.GY95664@mdounin.ru> Hello! On Sun, Nov 06, 2011 at 09:48:08PM +0100, Kamil Gorlo wrote: > Hi, > > I have some problem with access log combined with X-Accel-Redirect > requests. In my case I have Nginx set up as load-balancer to group of > application servers. These servers return some special header in every > request - I need to log value of this special header (lets call it > 'X-user') in access log - also I do not want to expose this header to > the world (proxy_hide_header helps here). > > Everything seems to work, but when there is X-Accel-Redirect request I > have empty field in access log because of subrequest > ($upstream_http_x_special is cleared because of subrequest, if I > understand this mechanism correctly). How to make this work for every > request? > > Here is my config: > > http { > log_format extended '$request $upstream_http_x_user'; > access_log /var/log/nginx/access.log extended; > > ... > > server { > listen 80; > > location / { > proxy_pass http://backend; > proxy_hide_header X-User; > } > > location /files { > internal; > proxy_pass http://filestore; Workaround is to use set $x_user $upstream_http_x_user; here (and to log $x_user instead). > } > } > } Maxim Dounin From nginx-forum at nginx.us Mon Nov 7 05:34:13 2011 From: nginx-forum at nginx.us (Mark) Date: Mon, 07 Nov 2011 00:34:13 -0500 Subject: Rewrite /index.html to / Message-ID: <25358168f10d160cb15bb45891cbfc78.NginxMailingListEnglish@forum.nginx.org> Developing static files on the local filesystem I use index.html instead of / for the home page. When I push them to the remote server I want http://domain.com/index.html to be redirected to http://domain.com/ for SEO reasons. I've Googled but the best I seem to be able to achieve is a redirect loop. What do I need to do? Here are my failed rules : # rewrite ^index.html$ $scheme://domain.com/ permanent; # rewrite /index.html $scheme://domain.com/ permanent; # location = /index.html { # rewrite ^(.*) $scheme://domain.com/ permanent; # } Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217899,217899#msg-217899 From kgorlo at gmail.com Mon Nov 7 05:42:09 2011 From: kgorlo at gmail.com (Kamil Gorlo) Date: Mon, 7 Nov 2011 06:42:09 +0100 Subject: Log response header from upstream In-Reply-To: <20111106211504.GY95664@mdounin.ru> References: <20111106211504.GY95664@mdounin.ru> Message-ID: Yeah, I have tried this, but this way non x-accel-redirect requests must be handled in old way which means I have to use 2 fields in access log. Is there any way to use only one field? 06-11-2011 22:15 u?ytkownik "Maxim Dounin" napisa?: > Hello! > > On Sun, Nov 06, 2011 at 09:48:08PM +0100, Kamil Gorlo wrote: > > > Hi, > > > > I have some problem with access log combined with X-Accel-Redirect > > requests. In my case I have Nginx set up as load-balancer to group of > > application servers. These servers return some special header in every > > request - I need to log value of this special header (lets call it > > 'X-user') in access log - also I do not want to expose this header to > > the world (proxy_hide_header helps here). 
> > > > Everything seems to work, but when there is X-Accel-Redirect request I > > have empty field in access log because of subrequest > > ($upstream_http_x_special is cleared because of subrequest, if I > > understand this mechanism correctly). How to make this work for every > > request? > > > > Here is my config: > > > > http { > > log_format extended '$request $upstream_http_x_user'; > > access_log /var/log/nginx/access.log extended; > > > > ... > > > > server { > > listen 80; > > > > location / { > > proxy_pass http://backend; > > proxy_hide_header X-User; > > } > > > > location /files { > > internal; > > proxy_pass http://filestore; > > Workaround is to use > > set $x_user $upstream_http_x_user; > > here (and to log $x_user instead). > > > } > > } > > } > > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Mon Nov 7 07:24:36 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 7 Nov 2011 11:24:36 +0400 Subject: Rewrite /index.html to / In-Reply-To: <25358168f10d160cb15bb45891cbfc78.NginxMailingListEnglish@forum.nginx.org> References: <25358168f10d160cb15bb45891cbfc78.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111107072436.GA75036@nginx.com> On Mon, Nov 07, 2011 at 12:34:13AM -0500, Mark wrote: > Developing static files on the local filesystem I use index.html instead > of / for the home page. > When I push them to the remote server I want > http://domain.com/index.html to be redirected to http://domain.com/ for > SEO reasons. > I've Googled but the best I seem to be able to achieve is a redirect > loop. What do I need to do? > > Here are my failed rules : > > # rewrite ^index.html$ $scheme://domain.com/ permanent; > # rewrite /index.html $scheme://domain.com/ permanent; > # location = /index.html { > # rewrite ^(.*) $scheme://domain.com/ permanent; > # } > > Thanks. Either location = / { try_files /index.html =404; } location = /index.html { internal; error_page 404 =301 $scheme://domain.com/; } or location = / { index index.html; } location = /index.html { internal; error_page 404 =301 $scheme://domain.com/; } -- Igor Sysoev From nginx-forum at nginx.us Mon Nov 7 07:55:22 2011 From: nginx-forum at nginx.us (mikiso) Date: Mon, 07 Nov 2011 02:55:22 -0500 Subject: Calculating the max. clients by worker_connections In-Reply-To: <819481.24136.qm@web120511.mail.ne1.yahoo.com> References: <819481.24136.qm@web120511.mail.ne1.yahoo.com> Message-ID: <05599f0f70e8bc3be50bb4cac31a4166.NginxMailingListEnglish@forum.nginx.org> Hi, Regarding max clients , I understand it's normal to have 4 in reverse proxy since browser uses 2 connections in HTTP 1.1. How about mail reverse proxy? It seems POP command from telnet client uses 4 also, although I couldn't find POP3 uses more than 1 connection in RFC 1939. Regards, Soichiro Posted at Nginx Forum: http://forum.nginx.org/read.php?2,171776,217904#msg-217904 From nginx-forum at nginx.us Mon Nov 7 08:48:53 2011 From: nginx-forum at nginx.us (roger.moffatt) Date: Mon, 07 Nov 2011 03:48:53 -0500 Subject: nginx config: multiple locations, authentication in one, triggered for both? 
Message-ID: <1800a639381b1c7a0d427faad62c9f16.NginxMailingListEnglish@forum.nginx.org> I originally posted this question on SO, but it might of course be more logical to ask here; http://stackoverflow.com/questions/8031471/nginx-location-directive-authentication-happening-in-wrong-location-block I'm flummoxed. I have a server that is primarily running couchdb over ssl (using nginx to proxy the ssl connection) but also has to serve some apache stuff. Basically I want everything that DOESN'T start /www to be sent to the couchdb backend. If a url DOES start /www then it should be mapped to the local apache server on port 8080. My config below works with the exception that I'm getting prompted for authentication on the /www paths as well. I'm a bit more used to configuring Apache than nginx, so I suspect I'm mis-understanding something, but if anyone can see what is wrong from my configuration (below) I'd be most grateful. To clarify my use scenario; https://my-domain.com/www/script.cgi should be proxied to http://localhost:8080/script.cgi https://my-domain.com/anythingelse should be proxied to http://localhost:5984/anythingelse ONLY the second should require authentication. It is the authentication issue that is causing problems - as I mentioned, I am being challenged on https://my-domain.com/www/anything as well :-( Here's the config, thanks for any insight. server { listen 443; ssl on; # Any url starting /www needs to be mapped to the root # of the back end application server on 8080 location ^~ /www/ { proxy_pass http://localhost:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } # Everything else has to be sent to the couchdb server running on # port 5984 and for security, this is protected with auth_basic # authentication. location / { auth_basic "Restricted"; auth_basic_user_file /path-to-passwords; proxy_pass http://localhost:5984; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; } } Thanks for some pointers - I'm not sure how I can resolve this correctly. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217906,217906#msg-217906 From nginx-forum at nginx.us Mon Nov 7 09:34:41 2011 From: nginx-forum at nginx.us (attiks) Date: Mon, 07 Nov 2011 04:34:41 -0500 Subject: Finer grained control on caching Message-ID: I posted and idea last week to Ideas and Feature Requests: "Finer grained control on caching" (http://forum.nginx.org/read.php?10,217748), but since I'm new to this forum/mailing list I have no idea if anybody saw this. I want to find out if this is an acceptable feature request and if anybody else sees the benefits of this. Short summary: Add a new directive fastcgi_cache_key_suffix Construct the caching filename like this md5(fastcgi_cache_key) . '_' . fastcgi_cache_key_suffix Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217909,217909#msg-217909 From mdounin at mdounin.ru Mon Nov 7 10:19:03 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Nov 2011 14:19:03 +0400 Subject: Log response header from upstream In-Reply-To: References: <20111106211504.GY95664@mdounin.ru> Message-ID: <20111107101902.GD95664@mdounin.ru> Hello! 
On Mon, Nov 07, 2011 at 06:42:09AM +0100, Kamil Gorlo wrote: > Yeah, I have tried this, but this way non x-accel-redirect requests must be > handled in old way which means I have to use 2 fields in access log. Is > there any way to use only one field? E.g. you may use different log format for internal location. Maxim Dounin > 06-11-2011 22:15 u?ytkownik "Maxim Dounin" napisa?: > > > Hello! > > > > On Sun, Nov 06, 2011 at 09:48:08PM +0100, Kamil Gorlo wrote: > > > > > Hi, > > > > > > I have some problem with access log combined with X-Accel-Redirect > > > requests. In my case I have Nginx set up as load-balancer to group of > > > application servers. These servers return some special header in every > > > request - I need to log value of this special header (lets call it > > > 'X-user') in access log - also I do not want to expose this header to > > > the world (proxy_hide_header helps here). > > > > > > Everything seems to work, but when there is X-Accel-Redirect request I > > > have empty field in access log because of subrequest > > > ($upstream_http_x_special is cleared because of subrequest, if I > > > understand this mechanism correctly). How to make this work for every > > > request? > > > > > > Here is my config: > > > > > > http { > > > log_format extended '$request $upstream_http_x_user'; > > > access_log /var/log/nginx/access.log extended; > > > > > > ... > > > > > > server { > > > listen 80; > > > > > > location / { > > > proxy_pass http://backend; > > > proxy_hide_header X-User; > > > } > > > > > > location /files { > > > internal; > > > proxy_pass http://filestore; > > > > Workaround is to use > > > > set $x_user $upstream_http_x_user; > > > > here (and to log $x_user instead). > > > > > } > > > } > > > } > > > > > > Maxim Dounin > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Nov 7 10:25:26 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Nov 2011 14:25:26 +0400 Subject: nginx config: multiple locations, authentication in one, triggered for both? In-Reply-To: <1800a639381b1c7a0d427faad62c9f16.NginxMailingListEnglish@forum.nginx.org> References: <1800a639381b1c7a0d427faad62c9f16.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111107102526.GE95664@mdounin.ru> Hello! On Mon, Nov 07, 2011 at 03:48:53AM -0500, roger.moffatt wrote: > I originally posted this question on SO, but it might of course be more > logical to ask here; > > http://stackoverflow.com/questions/8031471/nginx-location-directive-authentication-happening-in-wrong-location-block > > I'm flummoxed. > > I have a server that is primarily running couchdb over ssl (using nginx > to proxy the ssl connection) but also has to serve some apache stuff. > > Basically I want everything that DOESN'T start /www to be sent to the > couchdb backend. If a url DOES start /www then it should be mapped to > the local apache server on port 8080. > > My config below works with the exception that I'm getting prompted for > authentication on the /www paths as well. I'm a bit more used to > configuring Apache than nginx, so I suspect I'm mis-understanding > something, but if anyone can see what is wrong from my configuration > (below) I'd be most grateful. 
> > To clarify my use scenario; > > https://my-domain.com/www/script.cgi should be proxied to > http://localhost:8080/script.cgi > https://my-domain.com/anythingelse should be proxied to > http://localhost:5984/anythingelse > > ONLY the second should require authentication. It is the authentication > issue that is causing problems - as I mentioned, I am being challenged > on https://my-domain.com/www/anything as well :-( Most likely, the authentication request appears due to your browser doing automatic requests to /favicon.ico or something like. Try adding location = /favicon.ico { return 404; } to see if it helps. > > Here's the config, thanks for any insight. > > server { > listen 443; > ssl on; > > # Any url starting /www needs to be mapped to the root > # of the back end application server on 8080 > > location ^~ /www/ { > proxy_pass http://localhost:8080/; > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > } > > # Everything else has to be sent to the couchdb server running > on > # port 5984 and for security, this is protected with auth_basic > # authentication. > > location / { > > auth_basic "Restricted"; > auth_basic_user_file /path-to-passwords; > > proxy_pass http://localhost:5984; > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Ssl on; > > } > } > > Thanks for some pointers - I'm not sure how I can resolve this > correctly. Config looks correct and should work. Try testing it by hand (e.g. nc/telnet/fetch/wget/curl) to see if it actually works. See above for a possible cause of the authentication request. Maxim Dounin From kgorlo at gmail.com Mon Nov 7 10:57:15 2011 From: kgorlo at gmail.com (Kamil Gorlo) Date: Mon, 7 Nov 2011 11:57:15 +0100 Subject: Log response header from upstream In-Reply-To: <20111107101902.GD95664@mdounin.ru> References: <20111106211504.GY95664@mdounin.ru> <20111107101902.GD95664@mdounin.ru> Message-ID: OK, thanks - it works. One more question: I have to use this variables in quotes because sometimes it resolves to just empty string and there is problem with parsing such logs. Why sometimes empty variables ends in single hyphen ("-", this is what I allways want) in access log and sometimes its just empty string ("")? -- Kamil Gorlo On Mon, Nov 7, 2011 at 11:19 AM, Maxim Dounin wrote: > Hello! > > On Mon, Nov 07, 2011 at 06:42:09AM +0100, Kamil Gorlo wrote: > >> Yeah, I have tried this, but this way non x-accel-redirect requests must be >> handled in old way which means I have to use 2 fields in access log. Is >> there any way to use only one field? > > E.g. you may use different log format for internal location. > > Maxim Dounin > >> 06-11-2011 22:15 u?ytkownik "Maxim Dounin" napisa?: >> >> > Hello! >> > >> > On Sun, Nov 06, 2011 at 09:48:08PM +0100, Kamil Gorlo wrote: >> > >> > > Hi, >> > > >> > > I have some problem with access log combined with X-Accel-Redirect >> > > requests. In my case I have Nginx set up as load-balancer to group of >> > > application servers. These servers return some special header in every >> > > request - I need to log value of this special header (lets call it >> > > 'X-user') in access log - also I do not want to expose this header to >> > > the world (proxy_hide_header helps here). 
>> > > >> > > Everything seems to work, but when there is X-Accel-Redirect request I >> > > have empty field in access log because of subrequest >> > > ($upstream_http_x_special is cleared because of subrequest, if I >> > > understand this mechanism correctly). How to make this work for every >> > > request? >> > > >> > > Here is my config: >> > > >> > > http { >> > > ? log_format extended '$request $upstream_http_x_user'; >> > > ? access_log /var/log/nginx/access.log extended; >> > > >> > > ? ... >> > > >> > > ? server { >> > > ? ? listen 80; >> > > >> > > ? ? location / { >> > > ? ? ? proxy_pass http://backend; >> > > ? ? ? proxy_hide_header X-User; >> > > ? ? } >> > > >> > > ? ? location /files { >> > > ? ? ? internal; >> > > ? ? ? proxy_pass http://filestore; >> > >> > Workaround is to use >> > >> > ? ? ? ?set $x_user $upstream_http_x_user; >> > >> > here (and to log $x_user instead). >> > >> > > ? ? } >> > > ? } >> > > } >> > >> > >> > Maxim Dounin >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> > > >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Nov 7 11:21:05 2011 From: nginx-forum at nginx.us (imran) Date: Mon, 07 Nov 2011 06:21:05 -0500 Subject: Can I log HTTP 400s to a different log file instead of my default access logs Message-ID: Hello I'd like to know if its possible for me to log HTTP 400 errors to a different log file. Right now they go in the access log. To give some background, we've got this problem where if a single resource is requested from chrome, after the resource request is served, a http 400 error is logged in the access logs. This I've understood to be is due to the nature of connections opened by Chrome (2 for every request) and one of the connections not been used before its closed. And nginx reports a 400 in this instance. I've gathered this based on a different forum entry. And when this 400 is logged, the $request_filename is /etc/nginx//html So I'm looking at any possibility where the 400s can be logged else where. I tried the following with no luck as well; server { location / { if ($request_filename ~* /etc/nginx) { access_log /someotherlog.log myformat } } } Any help/advice is much appreciated. Thanks!! Cheers -- Imran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217913,217913#msg-217913 From mdounin at mdounin.ru Mon Nov 7 11:29:11 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Nov 2011 15:29:11 +0400 Subject: Log response header from upstream In-Reply-To: References: <20111106211504.GY95664@mdounin.ru> <20111107101902.GD95664@mdounin.ru> Message-ID: <20111107112911.GI95664@mdounin.ru> Hello! On Mon, Nov 07, 2011 at 11:57:15AM +0100, Kamil Gorlo wrote: > OK, thanks - it works. > > One more question: I have to use this variables in quotes because > sometimes it resolves to just empty string and there is problem with > parsing such logs. > > Why sometimes empty variables ends in single hyphen ("-", this is what > I allways want) in access log and sometimes its just empty string > ("")? Hyphen is used when variable isn't found. Empty string - if it's found, but empty. 
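If a single, consistent placeholder is wanted for parsing, one option is to normalize the empty case with a map at the http level. This is only an untested sketch - $x_user_log is an invented name, and it changes nothing about which value is available after an X-Accel-Redirect; it only makes the empty or missing case print as a literal "-":

    map $upstream_http_x_user $x_user_log {
        ""      "-";                       # header absent or empty: log a hyphen
        default $upstream_http_x_user;     # otherwise log the header value as-is
    }

    log_format extended '$request $x_user_log';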
Maxim Dounin > > -- > Kamil Gorlo > > > > On Mon, Nov 7, 2011 at 11:19 AM, Maxim Dounin wrote: > > Hello! > > > > On Mon, Nov 07, 2011 at 06:42:09AM +0100, Kamil Gorlo wrote: > > > >> Yeah, I have tried this, but this way non x-accel-redirect requests must be > >> handled in old way which means I have to use 2 fields in access log. Is > >> there any way to use only one field? > > > > E.g. you may use different log format for internal location. > > > > Maxim Dounin > > > >> 06-11-2011 22:15 u?ytkownik "Maxim Dounin" napisa?: > >> > >> > Hello! > >> > > >> > On Sun, Nov 06, 2011 at 09:48:08PM +0100, Kamil Gorlo wrote: > >> > > >> > > Hi, > >> > > > >> > > I have some problem with access log combined with X-Accel-Redirect > >> > > requests. In my case I have Nginx set up as load-balancer to group of > >> > > application servers. These servers return some special header in every > >> > > request - I need to log value of this special header (lets call it > >> > > 'X-user') in access log - also I do not want to expose this header to > >> > > the world (proxy_hide_header helps here). > >> > > > >> > > Everything seems to work, but when there is X-Accel-Redirect request I > >> > > have empty field in access log because of subrequest > >> > > ($upstream_http_x_special is cleared because of subrequest, if I > >> > > understand this mechanism correctly). How to make this work for every > >> > > request? > >> > > > >> > > Here is my config: > >> > > > >> > > http { > >> > > ? log_format extended '$request $upstream_http_x_user'; > >> > > ? access_log /var/log/nginx/access.log extended; > >> > > > >> > > ? ... > >> > > > >> > > ? server { > >> > > ? ? listen 80; > >> > > > >> > > ? ? location / { > >> > > ? ? ? proxy_pass http://backend; > >> > > ? ? ? proxy_hide_header X-User; > >> > > ? ? } > >> > > > >> > > ? ? location /files { > >> > > ? ? ? internal; > >> > > ? ? ? proxy_pass http://filestore; > >> > > >> > Workaround is to use > >> > > >> > ? ? ? ?set $x_user $upstream_http_x_user; > >> > > >> > here (and to log $x_user instead). > >> > > >> > > ? ? } > >> > > ? } > >> > > } > >> > > >> > > >> > Maxim Dounin > >> > > >> > _______________________________________________ > >> > nginx mailing list > >> > nginx at nginx.org > >> > http://mailman.nginx.org/mailman/listinfo/nginx > >> > > > > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Nov 7 11:33:06 2011 From: nginx-forum at nginx.us (Mark) Date: Mon, 07 Nov 2011 06:33:06 -0500 Subject: Rewrite /index.html to / In-Reply-To: <20111107072436.GA75036@nginx.com> References: <20111107072436.GA75036@nginx.com> Message-ID: <17abaaff9bd33c5e96dd8651f9e7d368.NginxMailingListEnglish@forum.nginx.org> Perfect! Many thanks. ??????? ???????. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217899,217915#msg-217915 From nginx-forum at nginx.us Mon Nov 7 11:38:01 2011 From: nginx-forum at nginx.us (imran) Date: Mon, 07 Nov 2011 06:38:01 -0500 Subject: Can I log HTTP 400s to a different log file instead of my default access logs In-Reply-To: References: Message-ID: Also to add on a bit more about the chrome connection issue which is causing the 400, I turned on info logging for error logs and when I get the 400, it logs a 'client closed prematurely connection while reading client request...' in the error logs. Cheers -- Imran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217913,217916#msg-217916 From appa at perusio.net Mon Nov 7 11:38:54 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 07 Nov 2011 11:38:54 +0000 Subject: Can I log HTTP 400s to a different log file instead of my default access logs In-Reply-To: References: Message-ID: <87bosonovl.wl%appa@perusio.net> On 7 Nov 2011 11h21 WET, nginx-forum at nginx.us wrote: > Hello > > I'd like to know if its possible for me to log HTTP 400 errors to a > different log file. Right now they go in the access log. > > To give some background, we've got this problem where if a single > resource is requested from chrome, after the resource request is > served, a http 400 error is logged in the access logs. This I've > understood to be is due to the nature of connections opened by > Chrome (2 for every request) and one of the connections not been > used before its closed. And nginx reports a 400 in this > instance. I've gathered this based on a different forum entry. And > when this 400 is logged, the $request_filename is /etc/nginx//html Try (untested): at the http level: map $request_filename $is_400 { default 0; /etc/nginx/html 1; } on the vhost config: error_page 400 @log-400; location @log-400 { access_log /path/to/400.log; } Caveat emptor: I've never played with the access_log directive that much. --- appa From nginx-forum at nginx.us Mon Nov 7 11:48:59 2011 From: nginx-forum at nginx.us (roger.moffatt) Date: Mon, 07 Nov 2011 06:48:59 -0500 Subject: nginx config: multiple locations, authentication in one, triggered for both? In-Reply-To: <20111107102526.GE95664@mdounin.ru> References: <20111107102526.GE95664@mdounin.ru> Message-ID: <60cd72290ee1886f5f936a65de4d1920.NginxMailingListEnglish@forum.nginx.org> Doh! Of course ... I had a note on my list about favicon showing the wrong thing, and of course it was showing the wrong thing on my logged in system precisely because of this! >Most likely, the authentication request appears due to your >browser doing automatic requests to /favicon.ico or something >like. I can't test it at present but I'm certain this will be the problem, It makes me think that perhaps my config is a little dangerous so now I know the approach is correct, I'll perhaps swap things around so that I can keep / unprotected completely just in case and add the auth for known paths to the couch back end. That should work fine in my case as I only have a couple of databases to secure. Many thanks Maxim! Roger Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217906,217918#msg-217918 From appa at perusio.net Mon Nov 7 12:04:22 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Mon, 07 Nov 2011 12:04:22 +0000 Subject: Can I log HTTP 400s to a different log file instead of my default access logs In-Reply-To: <87bosonovl.wl%appa@perusio.net> References: <87bosonovl.wl%appa@perusio.net> Message-ID: <878vnsnnp5.wl%appa@perusio.net> On 7 Nov 2011 11h38 WET, appa at perusio.net wrote: > On 7 Nov 2011 11h21 WET, nginx-forum at nginx.us wrote: > >> Hello >> >> I'd like to know if its possible for me to log HTTP 400 errors to a >> different log file. Right now they go in the access log. >> >> To give some background, we've got this problem where if a single >> resource is requested from chrome, after the resource request is >> served, a http 400 error is logged in the access logs. This I've >> understood to be is due to the nature of connections opened by >> Chrome (2 for every request) and one of the connections not been >> used before its closed. And nginx reports a 400 in this >> instance. I've gathered this based on a different forum entry. And >> when this 400 is logged, the $request_filename is /etc/nginx//html > > Try (untested): > > at the http level: > > map $request_filename $is_400 { > default 0; > /etc/nginx/html 1; > } Oops. Solly I haven't my coffe yet :) Try (untested): at the http level: map $request_filename $is_400 { default 0; /etc/nginx/html 1; } On the vhost: if ($is_400) { return 302 @log-400; } location @log-400 { access_log /path/to/400.log; } --- appa From nginx-forum at nginx.us Mon Nov 7 13:03:10 2011 From: nginx-forum at nginx.us (imran) Date: Mon, 07 Nov 2011 08:03:10 -0500 Subject: Can I log HTTP 400s to a different log file instead of my default access logs In-Reply-To: References: Message-ID: <1ca59b34305b7f9c4a0bae770c7d2736.NginxMailingListEnglish@forum.nginx.org> Hi Thanks for your reply. I tried this but didn't have any luck with it. Based on the error reported on the error log (client closed permaturely connection while reading client request line), I'm wondering if i can instead ignore logging those errors in the access logs. Would you if that's possible? Thanks!! Cheers -- Imran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217913,217921#msg-217921 From appa at perusio.net Mon Nov 7 14:19:34 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 07 Nov 2011 14:19:34 +0000 Subject: Can I log HTTP 400s to a different log file instead of my default access logs In-Reply-To: <1ca59b34305b7f9c4a0bae770c7d2736.NginxMailingListEnglish@forum.nginx.org> References: <1ca59b34305b7f9c4a0bae770c7d2736.NginxMailingListEnglish@forum.nginx.org> Message-ID: <877h3cnhft.wl%appa@perusio.net> On 7 Nov 2011 13h03 WET, nginx-forum at nginx.us wrote: Well you can try this: location @is-400 { access_log off; } If you want to re-purpose the 400 code you have to use the error_page directive. --- appa From nginx-forum at nginx.us Mon Nov 7 14:43:43 2011 From: nginx-forum at nginx.us (mikolajj) Date: Mon, 07 Nov 2011 09:43:43 -0500 Subject: problem with url http://x.x.x.x/folder_name/index.php/css/column.css Message-ID: <7646443abbb1e9ceb9028a179f308ca6.NginxMailingListEnglish@forum.nginx.org> Problematic URL is http://x.x.x.x/folder_name/index.php/css/column.css Why apache and lighttpd can serve this url passing "/css/column.css" as REQUEST_URI to index.php file. Both in default configuration (right after clean apt-get installation on debian) ...and nginx is throwing 404 not found error What rewrite rules or other settings do I have to add/change to achieve the same functionality? 
(because nginx is really much faster ;-) no doubt about this) Regards Mikolaj Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217926,217926#msg-217926 From nginx-forum at nginx.us Mon Nov 7 16:17:14 2011 From: nginx-forum at nginx.us (mikolajj) Date: Mon, 07 Nov 2011 11:17:14 -0500 Subject: Rewrite problem http://x.x.x.x/folder_name/index.php/css/first.css Message-ID: <974ef8502ef4834d08bd7bfd138fc699.NginxMailingListEnglish@forum.nginx.org> Hello, I made test against lighttpd and ngninx is 50% faster... So I tried to use it with my already made application. (postgres & php). After setting evrything I found the URL to be a problem. Apache and lighttpd can use URL http://x.x.x.x/folder_name/index.php/css/first.css as http://x.x.x.x/folder_name/index.php with REQUEST_URI part = "/css/first.css" - this is handle by PHP and processed along... Nginx is trying to open index.php as folder an of course 404 is thrown... I was trying to rewrite this URL but no success my /etc/nginx/sites-available/default [code] ... server { listen 80; # listen 443 ssl; server_name jakasnazwa.localhost; ssl off; ssl_certificate /etc/nginx/server.crt; ssl_certificate_key /etc/nginx/server.key; root /home/main-www; index index.php index.htm index.html; error_log /var/log/nginx/website.error_log notice; #rewrite_log on; location ~ .php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/main-www$fastcgi_script_name; include fastcgi_params; } location / { autoindex on; if (-f $request_filename) { break; } if (!-e $request_filename) { rewrite ^(.*)/index.php/(.+)$ $1/index.php/$2 last; break; } } } [/code] Of course rewrite don't change anything (cut and paste, part of url in the same place) but I have no idea how to force nginx to fire php-fpm na index.php file passing everything what is after this as parameter? Can you point me to some clues or solution? PS I know that URL should look different, but application is ready, I don't want to change it in this moment . Regards Mikolaj Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217923,217923#msg-217923 From nginx-forum at nginx.us Mon Nov 7 16:17:28 2011 From: nginx-forum at nginx.us (olan) Date: Mon, 07 Nov 2011 11:17:28 -0500 Subject: port_in_redirect not working? Message-ID: <81d98cc4e630e880491f56113535bf85.NginxMailingListEnglish@forum.nginx.org> Hi all, I posted this question on SO but haven't received much attention, so I figured here would be better. Basically I am attempting to run varnish on port 80 and my nginx server on 8080, however any request to 80 redirects and appends the port to the URL. i.e. http://site.com/ -> http://site.com:8080/ >From everything I've read it seems that 'port_in_redirect off' should disable this, however it does not appear to be working. I've used a workaround for php by using "fastcgi_param SERVER_PORT 80;", which works, but requests to my "location /" are getting the port appended. My server setup is: server { listen 8080 default; server_name "" xxx.xxx.xxx.xxx; #just using IP here (no domain yet) port_in_redirect off; server_name_in_redirect off; access_log /var/log/nginx/localhost.access.log; location / { root /var/www/site/html/; index index index.php; try_files $uri/ $uri /index.php?q=$uri&$args; } } upstream backend { server 127.0.0.1:9000; } I have tried port_in_redirect in both the "location /" and the server blocks, but neither works. Can anyone shed any light on this problem? I'm currently using nginx v1.0.9 on a ubuntu server. 
[Link to SO question (full .conf there): http://stackoverflow.com/questions/8026763/nginx-port-in-redirect-not-working] Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217928,217928#msg-217928 From nginx-forum at nginx.us Mon Nov 7 16:36:10 2011 From: nginx-forum at nginx.us (CarlWang) Date: Mon, 07 Nov 2011 11:36:10 -0500 Subject: How to handle NGX_AGAIN returned by ngx_http_read_client_request_body() within handler module? In-Reply-To: <15d512b5e71760db17d3b689b07a7fee.NginxMailingList@forum.nginx.org> References: <593ae63ad4bdca3873ae92148badf3a4.NginxMailingList@forum.nginx.org> <15d512b5e71760db17d3b689b07a7fee.NginxMailingList@forum.nginx.org> Message-ID: <801e5194299b4667b56f5d6ac28791e7.NginxMailingListEnglish@forum.nginx.org> I have a similar problem here. My handler module in Nginx is supposed to intercept POST requests, analyse it and send response( the response body maybe larger than 64k). Of course, I used ngx_http_read_client_request_body to register my request_body_handler. The request_body_handler does some computing and calls ngx_http_output_filter(r,out) to send the response. Here is the problem. If this response is less than 64k, it works fine. But if this response is larger than 64k, the ngx_http_output_filter will return NGX_AGAIN. However, since it's in the request_body_handler function and it has nothing to return. I can't figure out how to handle this NGX_AGAIN. Can anyone help me out? Thanks a lot. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3545,217932#msg-217932 From mdounin at mdounin.ru Mon Nov 7 17:26:34 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Nov 2011 21:26:34 +0400 Subject: port_in_redirect not working? In-Reply-To: <81d98cc4e630e880491f56113535bf85.NginxMailingListEnglish@forum.nginx.org> References: <81d98cc4e630e880491f56113535bf85.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111107172634.GR95664@mdounin.ru> Hello! On Mon, Nov 07, 2011 at 11:17:28AM -0500, olan wrote: > Hi all, > > I posted this question on SO but haven't received much attention, so I > figured here would be better. > > Basically I am attempting to run varnish on port 80 and my nginx server > on 8080, however any request to 80 redirects and appends the port to the > URL. > > i.e. http://site.com/ -> http://site.com:8080/ > > From everything I've read it seems that 'port_in_redirect off' should > disable this, however it does not appear to be working. I've used a > workaround for php by using "fastcgi_param SERVER_PORT 80;", which > works, but requests to my "location /" are getting the port appended. > > My server setup is: > > server { > listen 8080 default; > server_name "" xxx.xxx.xxx.xxx; #just using IP here (no domain > yet) > > port_in_redirect off; > server_name_in_redirect off; > > access_log /var/log/nginx/localhost.access.log; > > > location / { > root /var/www/site/html/; > index index index.php; > try_files $uri/ $uri /index.php?q=$uri&$args; > } > } > upstream backend { > server 127.0.0.1:9000; > } > > I have tried port_in_redirect in both the "location /" and the server > blocks, but neither works. > > Can anyone shed any light on this problem? I'm currently using nginx > v1.0.9 on a ubuntu server. > > [Link to SO question (full .conf there): > http://stackoverflow.com/questions/8026763/nginx-port-in-redirect-not-working] The "port_in_redirect" directive only alters behaviour of nginx itself (for trailing slash redirects, rewrites and so on), and it's expected that it does nothing for redirects returned by php. 
With "fastcgi_param SERVER_PORT 80;" php should be fine, and I suspect that problem you see is actually related to your browser (or varnish) cache sitting here from previous configurations. If clearing cache doesn't help, please provide debug log (see http://wiki.nginx.org/Debugging), it will show what goes on here. Maxim Dounin From mdounin at mdounin.ru Mon Nov 7 17:47:59 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Nov 2011 21:47:59 +0400 Subject: How to handle NGX_AGAIN returned by ngx_http_read_client_request_body() within handler module? In-Reply-To: <801e5194299b4667b56f5d6ac28791e7.NginxMailingListEnglish@forum.nginx.org> References: <593ae63ad4bdca3873ae92148badf3a4.NginxMailingList@forum.nginx.org> <15d512b5e71760db17d3b689b07a7fee.NginxMailingList@forum.nginx.org> <801e5194299b4667b56f5d6ac28791e7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111107174759.GT95664@mdounin.ru> Hello! On Mon, Nov 07, 2011 at 11:36:10AM -0500, CarlWang wrote: > I have a similar problem here. > My handler module in Nginx is supposed to intercept POST requests, > analyse it and send response( the response body maybe larger than 64k). > Of course, I used ngx_http_read_client_request_body to register my > request_body_handler. > The request_body_handler does some computing and calls > ngx_http_output_filter(r,out) to send the response. > Here is the problem. If this response is less than 64k, it works fine. > But if this response is larger than 64k, the ngx_http_output_filter will > return NGX_AGAIN. However, since it's in the request_body_handler > function and it has nothing to return. I can't figure out how to handle > this NGX_AGAIN. > > Can anyone help me out? Thanks a lot. You have to finalize request as usual, ngx_http_finalize_request(r, ngx_http_output_filter(r, out)); will do all work as long as you have full response in the "out" chain. Maxim Dounin From nginx-forum at nginx.us Mon Nov 7 18:25:44 2011 From: nginx-forum at nginx.us (olan) Date: Mon, 07 Nov 2011 13:25:44 -0500 Subject: port_in_redirect not working? In-Reply-To: <20111107172634.GR95664@mdounin.ru> References: <20111107172634.GR95664@mdounin.ru> Message-ID: <54dbeaf04d18eba2fa2b0b178cb9de48.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, thanks for getting back to me. I've enabled debugging and the output is here: http://pastebin.com/Dmw2nJMY. I've cleared all caches and restarted varnish, nginx and php5-fpm and mysql. I didn't explain myself very well earlier. The "fastcgi_param SERVER_PORT 80" does work for php pages (it's in my location ~ .php$ block) but the problem I'm having is with my server root. http://site.com/page.php <-- works fine http://site.com/ redirects to http://site.com:8080 http://site.com/index.php redirects to http://site.com:8080 One thing I noticed is "location /" block is used first, the index.php is tried, and then the "location ~ .php$" block processes this page. I would have thought that the SERVER_PORT would have rewritten the port used at this stage...? Any help is greatly appreciated! 
Thanks, Olan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217928,217939#msg-217939 From francis at daoine.org Mon Nov 7 19:42:11 2011 From: francis at daoine.org (Francis Daly) Date: Mon, 7 Nov 2011 19:42:11 +0000 Subject: Rewrite problem http://x.x.x.x/folder_name/index.php/css/first.css In-Reply-To: <974ef8502ef4834d08bd7bfd138fc699.NginxMailingListEnglish@forum.nginx.org> References: <974ef8502ef4834d08bd7bfd138fc699.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111107194211.GM27078@craic.sysops.org> On Mon, Nov 07, 2011 at 11:17:14AM -0500, mikolajj wrote: Hi there, > Apache and lighttpd can use URL > http://x.x.x.x/folder_name/index.php/css/first.css as > http://x.x.x.x/folder_name/index.php with REQUEST_URI part = > "/css/first.css" - this is handle by PHP and processed along... > > Nginx is trying to open index.php as folder an of course 404 is > thrown... Yes, that's how it is frequently configured by default. You'll want to configure it to match your application. > location ~ .php$ { > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /home/main-www$fastcgi_script_name; > include fastcgi_params; > } That location matches urls that end in "php". Your url is /folder_name/index.php/css/first.css, which doesn't end in php, so you'll want to use some location setting that matches that url -- perhaps "starts with /folder_name/index.php/" will be best. For testing, you could use "includes php", which would be location ~ php {} but that is unlikely to be good for the live site. Once you have the location definition correct, you'll want to use fastcgi_split_path_info http://wiki.nginx.org/HttpFcgiModule#fastcgi_split_path_info and then you'll probably want to set PATH_INFO or REQUEST_URI or whatever your application requires, like in the example there. > Can you point me to some clues or solution? Hopefully the above makes sense. Either change your current "php" location to match all urls you want handled by php; or make a new one that matches this location. And use fastcgi_split_path_info. And the debug log can be very useful if you get lost. Good luck, f -- Francis Daly francis at daoine.org From nginx at nginxuser.net Mon Nov 7 19:44:43 2011 From: nginx at nginxuser.net (Nginx User) Date: Mon, 7 Nov 2011 22:44:43 +0300 Subject: port_in_redirect not working? In-Reply-To: <54dbeaf04d18eba2fa2b0b178cb9de48.NginxMailingListEnglish@forum.nginx.org> References: <20111107172634.GR95664@mdounin.ru> <54dbeaf04d18eba2fa2b0b178cb9de48.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 7 November 2011 21:25, olan wrote: > Hi Maxim, thanks for getting back to me. > > I've enabled debugging and the output is here: > http://pastebin.com/Dmw2nJMY. I've cleared all caches and restarted > varnish, nginx and php5-fpm and mysql. Line 245 of your debug shows the issue. php is sending a header - http fastcgi header: "Location: http://XXX.XXX.XXX.XXX:8080/". The proxy_redirect directive handles this for proxy_pass setups but there isn't an equivalent for fastcgi. You can try tricking php by putting $_SERVER["SERVER_PORT"] right at the top of your index php (or use autoprepend in php.ini so it applies to all php files). I just had a problem with this with a proxy setup as the proxy_redirect only accepts variables in the redirect part such that I had to do .... 
proxy_redirect http://example-1.com:8080/ http://example-1.com/; proxy_redirect http://example-2.com:8080/ http://example-2.com/; proxy_redirect http://example-3.com:8080/ http://example-3.com/; ... proxy_redirect http://example-n.com:8080/ http://example-n.com/; If the directive accepted variables in the original uri part, a single line ... proxy_redirect http://$host:8080/ http://$host/; ... would have done the job and given flexibility. I have to remember to add every domain I create to the list. Anyway, I digress. Try the trick on PHP. Nothing can be done in Nginx until a fastcgi_redirect directive is introduced. From nginx at nginxuser.net Mon Nov 7 19:46:03 2011 From: nginx at nginxuser.net (Nginx User) Date: Mon, 7 Nov 2011 22:46:03 +0300 Subject: port_in_redirect not working? In-Reply-To: References: <20111107172634.GR95664@mdounin.ru> <54dbeaf04d18eba2fa2b0b178cb9de48.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 7 November 2011 22:44, Nginx User wrote: > You can try tricking php by putting $_SERVER["SERVER_PORT"] right at > the top of your index php (or use autoprepend in php.ini so it applies > to all php files). That should read - You can try tricking php by putting $_SERVER["SERVER_PORT"] = "80"; From varia at e-healthexpert.org Mon Nov 7 20:53:12 2011 From: varia at e-healthexpert.org (Mark Alan) Date: Mon, 7 Nov 2011 20:53:12 +0000 Subject: error: too many arguments to function 'ngx_time_update' Message-ID: <20111107205312.3eb23869@e-healthexpert.org> Hello, While trying to compile with: pdebuild ../nginx_1.1.7-0.ubuntu.2.dsc the latest nginx code using an Ubuntu 11.10 machine with these parameters: ./configure \\\ \t --prefix=/etc/nginx \\\ \t --conf-path=/etc/nginx/nginx.conf \\\ \t --error-log-path=/var/log/nginx/error.log \\\ \t --http-client-body-temp-path=/var/lib/nginx/body \\\ \t --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \\\ \t --http-log-path=/var/log/nginx/access.log \\\ \t --http-proxy-temp-path=/var/lib/nginx/proxy \\\ \t --lock-path=/var/lock/nginx.lock \\\ \t --pid-path=/var/run/nginx.pid \\\ \t --with-http_gzip_static_module \\\ \t --with-http_ssl_module \\\ \t --without-http-cache \\\ \t --without-http_browser_module \\\ \t --without-http_geo_module \\\ \t --without-http_limit_req_module \\\ \t --without-http_limit_zone_module \\\ \t --without-http_map_module \\\ \t --without-http_memcached_module \\\ \t --without-http_referer_module \\\ \t --without-http_scgi_module \\\ \t --without-http_split_clients_module \\\ \t --without-http_ssi_module \\\ \t --without-http_upstream_keepalive_module \\\ \t --without-http_userid_module \\\ \t --without-http_uwsgi_module \\\ $CONFIGURE_OPTS >$@ \ \ttouch $@ Can you help me to avoid the following errors? M. (...) 
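Pulling the nginx side of this thread together, here is a rough, untested sketch of the pieces that were reported to work; the listen port and backend address follow the earlier messages, while the SCRIPT_FILENAME mapping is only an assumed, typical value:

    server {
        listen 8080 default;
        # keep nginx's own redirects (e.g. trailing-slash ones) from adding :8080
        port_in_redirect off;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # the workaround discussed above: PHP then builds its Location: headers against port 80
            fastcgi_param SERVER_PORT 80;
            fastcgi_pass 127.0.0.1:9000;
        }
    }

The PHP-side alternative is the one just described: forcing $_SERVER["SERVER_PORT"] = "80"; early, e.g. via auto_prepend_file.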
gcc -c -Wall -g -O2 -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/src/ngx_http_echo_timer.o \ /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c: In function 'ngx_http_echo_timer_elapsed_variable': /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c:32:5: error: too many arguments to function 'ngx_time_update' src/core/ngx_times.h:23:6: note: declared here /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c: In function 'ngx_http_echo_exec_echo_reset_timer': /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c:70:5: error: too many arguments to function 'ngx_time_update' src/core/ngx_times.h:23:6: note: declared here make[2]: *** [objs/addon/src/ngx_http_echo_timer.o] Error 1 make[2]: Leaving directory `/tmp/buildd/nginx-1.1.7/debian/build-full' make[1]: *** [build] Error 2 make[1]: Leaving directory `/tmp/buildd/nginx-1.1.7/debian/build-full' make: *** [build-arch.full] Error 2 dpkg-buildpackage: error: debian/rules build gave error exit status 2 E: Failed autobuilding of package I: unmounting /var/cache/pbuilder/ccache filesystem I: unmounting dev/pts filesystem I: unmounting proc filesystem I: cleaning the build env I: removing directory /var/cache/pbuilder/build//23240 and its subdirectories From nginx-forum at nginx.us Mon Nov 7 20:54:24 2011 From: nginx-forum at nginx.us (token) Date: Mon, 07 Nov 2011 15:54:24 -0500 Subject: Multipule Map modules Message-ID: <848848260804f79dc05def8e8126ab0c.NginxMailingListEnglish@forum.nginx.org> What i would like to do is like Apache is to use multiple map files and get the vars form them i have tried the following and can only get one value `$variable1` what is a rewrite to get `$variable1` and `$variable2` in one rewrite rewrite ^(^\/*)/(.*)$ /index.php?key1=$variable1&key2=$variable2 last; map $uri $variable1 { default 11; /sub 7; } map $uri $variable2 { default 78; /pep 23; } Regards, David Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217936,217936#msg-217936 From appa at perusio.net Mon Nov 7 22:07:17 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 07 Nov 2011 22:07:17 +0000 Subject: Multipule Map modules In-Reply-To: <848848260804f79dc05def8e8126ab0c.NginxMailingListEnglish@forum.nginx.org> References: <848848260804f79dc05def8e8126ab0c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <871utjoacq.wl%appa@perusio.net> On 7 Nov 2011 20h54 WET, nginx-forum at nginx.us wrote: > What i would like to do is like Apache is to use multiple map files > and get the vars form them > > i have tried the following and can only get one value `$variable1` > what is a rewrite to get `$variable1` and `$variable2` in one > rewrite > > rewrite ^(^\/*)/(.*)$ /index.php?key1=$variable1&key2=$variable2 > last; > What are you trying to accomplish? > map $uri $variable1 { > default 11; > /sub 7; If your URI is /sub $variable1 becomes 7. > } > map $uri $variable2 { > default 78; > /pep 23; >} If your URI is /pep $variable2 becomes 23. It works as it should. I don't see any issue. Also your rewrite is unnecessary.
Try this: return 301 /index.php?key1=$variable1&key2=$variable2; Do you mean that the request URI can contain both /pep and /sub?Like this: http://example.com/sub/pep/other-stuff-if-any If so then your matching must be done with a regex: map $uri $variable1 { default 11; ~/sub 7; } map $uri $variable2 { default 78; ~/pep 23; } --- appa From francis at daoine.org Mon Nov 7 22:12:37 2011 From: francis at daoine.org (Francis Daly) Date: Mon, 7 Nov 2011 22:12:37 +0000 Subject: Multipule Map modules In-Reply-To: <848848260804f79dc05def8e8126ab0c.NginxMailingListEnglish@forum.nginx.org> References: <848848260804f79dc05def8e8126ab0c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111107221237.GN27078@craic.sysops.org> On Mon, Nov 07, 2011 at 03:54:24PM -0500, token wrote: Hi there, > What i would like to do is like Apache is to use multiple map files and > get the vars form them > > i have tried the following and can only get one value `$variable1` > what is a rewrite to get `$variable1` and `$variable2` in one rewrite > > rewrite ^(^\/*)/(.*)$ /index.php?key1=$variable1&key2=$variable2 last; I confess I'm not sure exactly what it is you are trying to do. For certain urls, grab parts of the url and send them as-is within QUERY_STRING to a php-processor? (In which case: doing the grabbing in the location definition is probably easiest.) Or grab parts of the url, and set other values in QUERY_STRING based on the url parts? (In which case, matching on $uri at server level and using the map-ped value within the location block is probably easiest.) Or maybe something else? > map $uri $variable1 { > default 11; > /sub 7; > > } > map $uri $variable2 { > default 78; > /pep 23; > } The map documentation is at http://wiki.nginx.org/HttpMapModule#map and there's an example of if/set/map in the thread at http://forum.nginx.org/read.php?2,194480 And there are examples of php-without-rewrite around as well -- in the location block, set "fastcgi_param SCRIPT_FILENAME" explicitly, as well as fastcgi_pass. Good luck, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Nov 7 22:22:29 2011 From: nginx-forum at nginx.us (nchainani) Date: Mon, 07 Nov 2011 17:22:29 -0500 Subject: Path to a directory should return 404 Message-ID: Under the document root, I have few directories like /1/file1.html /1/file2.html /2/file1.html /2/file2.html If I go to the url http://localhost:8080/1, I get back a 301 with the new location http://localhost/1/ autoindexing is off, and so I end up with a 403 Forbidden page. Can I change this default behavior such that all the requests for the directories get the universal 404 page i.e. instead of a redirection, it should simply return the configured 404 page for location /. Sorry, I couldn't find a similar post on the forums. Thanks for any pointers. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217950,217950#msg-217950 From xiaohanhoho at gmail.com Tue Nov 8 02:08:31 2011 From: xiaohanhoho at gmail.com (=?GB2312?B?0KS6rQ==?=) Date: Tue, 8 Nov 2011 10:08:31 +0800 Subject: 504 Gateway Time out Nginx 1.0.4: Upstream time out In-Reply-To: <073867d2c99fa075d96095ce39dd647c@ruby-forum.com> References: <073867d2c99fa075d96095ce39dd647c@ruby-forum.com> Message-ID: proxy_connect_timeout 10m; proxy_send_timeout 10m; proxy_read_timeout 8m; Is it too long? 
2011/11/5 Roger Gue > HERE IS MY NGINX.CONF FILE DETAIL: > > user nginx; > worker_processes 1; > > error_log /var/log/nginx/error.log; > pid /var/run/nginx/nginx.pid; > > events { > worker_connections 2048; > # multi_accept on; > } > > http { > include /etc/nginx/mime.types; > > access_log /var/log/nginx/access.log; > > sendfile on; > tcp_nopush on; > > #keepalive_timeout 0; > keepalive_timeout 3; > tcp_nodelay off; > ignore_invalid_headers on; > > gzip on; > gzip_disable "MSIE [1-6]\.(?!.*SV1)"; > > gzip_comp_level 6; > gzip_static on; > gzip_min_length 2200; > gzip_buffers 16 8k; > gzip_http_version 1.1; > gzip_types text/plain text/css application/x-javascript image/x-icon > text/xml application/xml application/xml+rss text/javascript; > > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > } > > > AND HERE IS MY PROXY.CONF FILE DETAIL > > #proxy.conf > #proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > client_body_buffer_size 256k; > proxy_connect_timeout 10m; > proxy_send_timeout 10m; > proxy_read_timeout 8m; > client_header_timeout 10m; > proxy_buffers 32 4k; > client_body_timeout 800; > proxy_ignore_client_abort on; > > client_header_buffer_size 1k; > large_client_header_buffers 8 8k; > client_max_body_size 2g; > > postpone_output 1460; > > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 8 02:42:20 2011 From: nginx-forum at nginx.us (wangbin579) Date: Mon, 07 Nov 2011 21:42:20 -0500 Subject: Tcpcopy,an online request replication tool fit for nginx In-Reply-To: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> References: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Name: tcpcopy It is a request replication tool and is mainly for testing tasks using netlink and raw sockets Description: It can help you find bugs in your online project without actually being online. And it can also be used to test the stress that a system could endure. For example, if your system already has "memcached" subsystem and you want to use "membase" to replace it, tcpcopy can assist you to test "membase". While your old memcached system is still running online, tcpcopy could copy the flow of packets from memcached to membase. From the point view of membase, the flow is accessing membase(just like membase online), and it will not affect memcached at all except network bandwidth and a little cpu load. Functionalities? 1) Distributed Stress Test You can use online data to test the stress that your target machine can endure. It is better than apache ab tool and you can find bugs that only occur during high-stress situations. 2) Hot Backup It is very suitable for backup tasks if connections are short-lived and the request loss rate is very low(1/100000). 3) Normal Online Test You can find whether the new system is stable and find bugs that only occur in actual online environments. 4) Comparison Test For example, you can use tcpcopy to compare the performances of apache and nginx. Characteristics? 
1)real time 2)realistic 3)efficient 4)easy to use 5)distributed 6)significant Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217680,217954#msg-217954 From nginx-forum at nginx.us Tue Nov 8 02:45:27 2011 From: nginx-forum at nginx.us (wangbin579) Date: Mon, 07 Nov 2011 21:45:27 -0500 Subject: nginx_hmux_module - support hmux protocol proxy with Nginx In-Reply-To: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> References: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> Message-ID: <767a7623e78585d135e98eafb97bd091.NginxMailingListEnglish@forum.nginx.org> this module is running stable in a large project for more than 2 months Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217681,217955#msg-217955 From nginx-forum at nginx.us Tue Nov 8 02:49:12 2011 From: nginx-forum at nginx.us (wangbin579) Date: Mon, 07 Nov 2011 21:49:12 -0500 Subject: nginx_hmux_module - support hmux protocol proxy with Nginx In-Reply-To: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> References: <2d7909af72c23eb07f1385d88d6ae095.NginxMailingListEnglish@forum.nginx.org> Message-ID: <34eb287fcb3598c48d68c88648144899.NginxMailingListEnglish@forum.nginx.org> this module has been running stably in a large project for more than 2 months Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217681,217956#msg-217956 From nginx-forum at nginx.us Tue Nov 8 04:31:04 2011 From: nginx-forum at nginx.us (CarlWang) Date: Mon, 07 Nov 2011 23:31:04 -0500 Subject: How to handle NGX_AGAIN returned by ngx_http_read_client_request_body() within handler module? In-Reply-To: <20111107174759.GT95664@mdounin.ru> References: <20111107174759.GT95664@mdounin.ru> Message-ID: <8cd48bd50b12c0e8ea5d0bb1812c66eb.NginxMailingListEnglish@forum.nginx.org> Here are my code: static void v8_embed_handler ( ngx_http_request_t * r ) { ...// generating out chain. rc = ngx_http_output_filter ( r , out ); while( rc == NGX_AGAIN ) { if( out->next == NULL ) break; rc = ngx_http_output_filter ( r , out->next ); out = out->next; } ngx_http_finalize_request ( r , rc ); } static ngx_int_t ngx_http_v8_handler_request(ngx_http_request_t *r) { ngx_int_t rc = NGX_DONE ; rc = ngx_http_read_client_request_body ( r , v8_embed_handler ) ; // call the v8_embed_handler handler to process the post data if ( rc >= NGX_HTTP_SPECIAL_RESPONSE ) return rc; return NGX_DONE; } Then I changed the code as : static void v8_embed_handler ( ngx_http_request_t * r ) { ...// generating out chain. ngx_http_finalize_request ( r , ngx_http_output_filter ( r , out ) ); } However, it doesn't change the test result. If the response is larger than 64Kb, the problem is still there. I'm using curl to test it. It firstly output 64k response and it says "curl: (18) transfer closed with 19013 bytes remaining to read". Then it output a little part of the remaining response. I don't really understand why. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3545,217958#msg-217958 From nginx-forum at nginx.us Tue Nov 8 05:20:21 2011 From: nginx-forum at nginx.us (token) Date: Tue, 08 Nov 2011 00:20:21 -0500 Subject: Multipule Map modules In-Reply-To: <848848260804f79dc05def8e8126ab0c.NginxMailingListEnglish@forum.nginx.org> References: <848848260804f79dc05def8e8126ab0c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9339d4215213908c3f55b8ac38ecadbc.NginxMailingListEnglish@forum.nginx.org> Hi, Ant?nio that is exactly wath is was trying to accomplish id didn't know i had to use the "~" at the beginning of the map key. Thank you >Do you mean that the request URI can contain both /pep and /sub?Like > this: > http://example.com/sub/pep/other-stuff-if-any > If so then your matching must be done with a regex: > map $uri $variable1 { > default 11; > ~/sub 7; > } > map $uri $variable2 { > default 78; > ~/pep 23; > } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217936,217959#msg-217959 From agentzh at gmail.com Tue Nov 8 06:56:17 2011 From: agentzh at gmail.com (agentzh) Date: Tue, 8 Nov 2011 14:56:17 +0800 Subject: How to handle NGX_AGAIN returned by ngx_http_read_client_request_body() within handler module? In-Reply-To: <8cd48bd50b12c0e8ea5d0bb1812c66eb.NginxMailingListEnglish@forum.nginx.org> References: <20111107174759.GT95664@mdounin.ru> <8cd48bd50b12c0e8ea5d0bb1812c66eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Nov 8, 2011 at 12:31 PM, CarlWang wrote: > Here are my code: > static void v8_embed_handler ( ngx_http_request_t * r ) > { > ? ? ? ?...// generating out chain. > ? ? ? ?rc = ngx_http_output_filter ( r , out ); > ? ? ? ?while( rc == NGX_AGAIN ) { > ? ? ? ? ? ? ? ?if( out->next == NULL ) > ? ? ? ? ? ? ? ? ? ? ? ?break; > ? ? ? ? ? ? ? ?rc = ngx_http_output_filter ( r , out->next ); > ? ? ? ? ? ? ? ?out = out->next; > ? ? ? ?} > ? ? ? ?ngx_http_finalize_request ( r , rc ); > } Please note that you shouldn't call ngx_http_output_filter on your data again when NGX_AGAIN is returned. The underlying copy and writer filters will buffer the output chains when the system send buffer is full. Furthermore, the ngx_http_finalize_request function will automatically register the ngx_http_writer as the write event handler to actually emit the outputs at every write event for you. And that's why Maxim suggested the simple call ngx_http_finalize_request(r, ngx_http_output_filter(r, out)) > static ngx_int_t ngx_http_v8_handler_request(ngx_http_request_t *r) > { > ? ? ? ?ngx_int_t rc = NGX_DONE ; > ? ? ? ?rc = ngx_http_read_client_request_body ( r , v8_embed_handler ) ; // > call the v8_embed_handler handler to process the post data > ? ? ? ?if ( rc >= NGX_HTTP_SPECIAL_RESPONSE ) > ? ? ? ? ? ? ? ?return rc; > ? ? ? ?return NGX_DONE; > } > Which phase is your ngx_http_v8_handler_request function running in? If it's running in the access or rewrite phase, then the coding structure can be quite different here. 
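Assuming it stays a plain content-phase handler, the minimal shape of the pattern suggested above is roughly the following sketch (the v8_* names just mirror your module; v8_build_output_chain() is a hypothetical helper that is assumed to set r->headers_out, call ngx_http_send_header() and return the response chain):

    static void
    ngx_http_v8_body_handler(ngx_http_request_t *r)
    {
        ngx_chain_t  *out;

        out = v8_build_output_chain(r);
        if (out == NULL) {
            ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
            return;
        }

        /* Send the chain once.  If the socket buffer fills up, the copy and
         * write filters keep the unsent part buffered, and
         * ngx_http_finalize_request() installs ngx_http_writer() to flush it
         * on later write events -- no manual NGX_AGAIN loop is needed. */
        ngx_http_finalize_request(r, ngx_http_output_filter(r, out));
    }

    static ngx_int_t
    ngx_http_v8_content_handler(ngx_http_request_t *r)
    {
        ngx_int_t  rc;

        rc = ngx_http_read_client_request_body(r, ngx_http_v8_body_handler);

        if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
            return rc;
        }

        return NGX_DONE;    /* the body handler above finalizes the request */
    }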
I suggest you take a look at how our ngx_lua module handle all of this in access, write, and content phases: http://wiki.nginx.org/HttpLuaModule BTW, taking a close look at the error.log when building nginx with --with-debug and enabling the debug error log level will be very helpful to debug such issues ;) Regards, -agentzh From mdounin at mdounin.ru Tue Nov 8 08:08:20 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Nov 2011 12:08:20 +0400 Subject: error: too many arguments to function 'ngx_time_update' In-Reply-To: <20111107205312.3eb23869@e-healthexpert.org> References: <20111107205312.3eb23869@e-healthexpert.org> Message-ID: <20111108080820.GW95664@mdounin.ru> Hello! On Mon, Nov 07, 2011 at 08:53:12PM +0000, Mark Alan wrote: [...] > Can you help me to avoid the following errors? [...] > /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c:32:5: error: too many arguments to function 'ngx_time_update' > src/core/ngx_times.h:23:6: note: declared here Errors are caused by 3rd party module you are trying to use. Either do not compile it, or update it (likely new version of the module exists which resolves this). Maxim Dounin From ru at nginx.com Tue Nov 8 08:35:29 2011 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 8 Nov 2011 08:35:29 +0000 Subject: Path to a directory should return 404 In-Reply-To: References: Message-ID: <20111108083529.GA19051@lo0.su> On Mon, Nov 07, 2011 at 05:22:29PM -0500, nchainani wrote: > Under the document root, I have few directories like > /1/file1.html > /1/file2.html > /2/file1.html > /2/file2.html > > If I go to the url http://localhost:8080/1, I get back a 301 with the > new location http://localhost/1/ autoindexing is off, and so I end > up with a 403 Forbidden page. > > Can I change this default behavior such that all the requests for the > directories get the universal 404 page i.e. instead of a redirection, it > should simply return the configured 404 page for location /. > > Sorry, I couldn't find a similar post on the forums. Thanks for any > pointers. Something like that maybe? : location / { : #error_page 404 ...; : location ~ /$ { return 404; } : } From agentzh at gmail.com Tue Nov 8 08:33:51 2011 From: agentzh at gmail.com (agentzh) Date: Tue, 8 Nov 2011 16:33:51 +0800 Subject: error: too many arguments to function 'ngx_time_update' In-Reply-To: <20111107205312.3eb23869@e-healthexpert.org> References: <20111107205312.3eb23869@e-healthexpert.org> Message-ID: On Tue, Nov 8, 2011 at 4:53 AM, Mark Alan wrote: > /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c: > In function > 'ngx_http_echo_timer_elapsed_variable': > > /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c:32:5: error: too many arguments to function 'ngx_time_update' > src/core/ngx_times.h:23:6: note: declared here > > /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c: > In function > 'ngx_http_echo_exec_echo_reset_timer': > > /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c:70:5: error: too many arguments to function 'ngx_time_update' > src/core/ngx_times.h:23:6: note: declared here > There was an incompatible API change in 0.7.66 that the ngx_time_update macro then takes a different number of arguments. 
This issue was already fixed in ngx_echo v0.33+, see http://wiki.nginx.org/HttpEchoModule#v0.33 I suggest you upgrade your ngx_echo module to the latest v0.37rc7 release, which can be downloaded here: https://github.com/agentzh/echo-nginx-module/tags Regards, -agentzh From nginx-forum at nginx.us Tue Nov 8 09:25:57 2011 From: nginx-forum at nginx.us (abcomp01) Date: Tue, 08 Nov 2011 04:25:57 -0500 Subject: php-fpm and nginx return "The page you are looking for is temporarily unavailable. Please try again later. " Message-ID: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> php-fpm and nginx return "The page you are looking for is temporarily unavailable. Please try again later. " call http://173.212.207.146/upload/info.php location ~ \.php$ { root /home/admin/18loadnet; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/admin/info.php; include /etc/nginx/fastcgi_params; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217967,217967#msg-217967 From igor at sysoev.ru Tue Nov 8 09:47:21 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 8 Nov 2011 13:47:21 +0400 Subject: php-fpm and nginx return "The page you are looking for is temporarily unavailable. Please try again later. " In-Reply-To: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> References: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111108094721.GB9232@nginx.com> On Tue, Nov 08, 2011 at 04:25:57AM -0500, abcomp01 wrote: > php-fpm and nginx return "The page you are looking for is temporarily > unavailable. Please try again later. " > > call http://173.212.207.146/upload/info.php > > location ~ \.php$ { > root /home/admin/18loadnet; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /home/admin/info.php; > include /etc/nginx/fastcgi_params; > } Please look in error_log for reason of this issue. -- Igor Sysoev From mdounin at mdounin.ru Tue Nov 8 09:47:42 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Nov 2011 13:47:42 +0400 Subject: php-fpm and nginx return "The page you are looking for is temporarily unavailable. Please try again later. " In-Reply-To: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> References: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111108094742.GY95664@mdounin.ru> Hello! On Tue, Nov 08, 2011 at 04:25:57AM -0500, abcomp01 wrote: > php-fpm and nginx return "The page you are looking for is temporarily > unavailable. Please try again later. " > > call http://173.212.207.146/upload/info.php > > location ~ \.php$ { > root /home/admin/18loadnet; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /home/admin/info.php; > include /etc/nginx/fastcgi_params; > } Try looking into error_log, it likely has explanation of the error returned. Maxim Dounin From nginx-forum at nginx.us Tue Nov 8 09:54:57 2011 From: nginx-forum at nginx.us (abcomp01) Date: Tue, 08 Nov 2011 04:54:57 -0500 Subject: php-fpm and nginx return "The page you are looking for is temporarily unavailable. Please try again later. 
" In-Reply-To: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> References: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9a8b677624d53347a2f3ef680decb9ba.NginxMailingListEnglish@forum.nginx.org> 2011/11/08 02:53:31 [error] 3983#0: *513879 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 171.98.8.155, server: _, request: "GET /upload/info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "173.212.207.146" ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217967,217973#msg-217973 From mdounin at mdounin.ru Tue Nov 8 10:00:23 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Nov 2011 14:00:23 +0400 Subject: php-fpm and nginx return "The page you are looking for is temporarily unavailable. Please try again later. " In-Reply-To: <9a8b677624d53347a2f3ef680decb9ba.NginxMailingListEnglish@forum.nginx.org> References: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> <9a8b677624d53347a2f3ef680decb9ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111108100023.GZ95664@mdounin.ru> Hello! On Tue, Nov 08, 2011 at 04:54:57AM -0500, abcomp01 wrote: > 2011/11/08 02:53:31 [error] 3983#0: *513879 recv() failed (104: > Connection reset by peer) while reading response header from upstream, > client: 171.98.8.155, server: _, request: "GET /upload/info.php > HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: > "173.212.207.146" > > ? Backend (php-fpm) sent RST (i.e. terminated connection abnormally) before replying anything to nginx. Most likely it died. Try looking into php-fpm / system logs for more details. In any case it doesn't look like nginx issue. Maxim Dounin From nginx-forum at nginx.us Tue Nov 8 10:01:07 2011 From: nginx-forum at nginx.us (etrader) Date: Tue, 08 Nov 2011 05:01:07 -0500 Subject: Running Shell Script from html webpage Message-ID: I have a Nginx installed on Linux (Ubuntu or Centos) without any scripting language such as PHP or Python (connected to the webserver). Can I run Shell commands from a webpage (e.g. html)? I mean is it possible to use nginx without script languages by shell scripts? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217975,217975#msg-217975 From varia at e-healthexpert.org Tue Nov 8 10:58:31 2011 From: varia at e-healthexpert.org (Mark Alan) Date: Tue, 8 Nov 2011 10:58:31 +0000 Subject: solved: error: too many arguments to function 'ngx_time_update' In-Reply-To: <20111108080820.GW95664@mdounin.ru> References: <20111107205312.3eb23869@e-healthexpert.org> <20111108080820.GW95664@mdounin.ru> Message-ID: <20111108105831.5c6613c5@e-healthexpert.org> On Tue, 8 Nov 2011 12:08:20 +0400, Maxim Dounin wrote: > > /tmp/buildd/nginx-1.1.7/debian/modules/nginx-echo/src/ngx_http_echo_timer.c:32:5: > > error: too many arguments to function 'ngx_time_update' > > src/core/ngx_times.h:23:6: note: declared here > > Errors are caused by 3rd party module you are trying to use. > Either do not compile it, or update it (likely new version of the > module exists which resolves this). Thank you. As usual you were right. A simple sed -i '/MODULESDIR/d' debian/rules took care of the offending 'nginx-echo' module and of all the other modules that bloat the current ubuntu/debian version of Nginx code. Regards, M. 
> > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From varia at e-healthexpert.org Tue Nov 8 11:05:07 2011 From: varia at e-healthexpert.org (Mark Alan) Date: Tue, 8 Nov 2011 11:05:07 +0000 Subject: Is the 607418-ipv6-addresses.diff patch still needed? Message-ID: <20111108110507.1a0bb3ae@e-healthexpert.org> Hello, The current version of the debianized Nginx code applies the attached 607418-ipv6-addresses.diff patch. Do you still see any need to apply this patch to the current nginx_1.1.7 code? Regards, M. ######## 607418-ipv6-addresses.diff patch follows ######## Description: $host variable mis-parses IPv6 literal addresses from HTTP Author: Steven Chamberlain Debian-Bug: http://bugs.debian.org/607418 Last-Update: 2010-12-30 Index: trunk/src/http/ngx_http_request.c =================================================================== --- trunk.orig/src/http/ngx_http_request.c 2010-12-30 01:46:10.308926973 -0600 +++ trunk/src/http/ngx_http_request.c 2010-12-30 01:48:21.638927393 -0600 @@ -1650,11 +1650,12 @@ { u_char *h, ch; size_t i, last; - ngx_uint_t dot; + ngx_uint_t dot, in_brackets; last = len; h = *host; dot = 0; + in_brackets = 0; for (i = 0; i < len; i++) { ch = h[i]; @@ -1670,11 +1671,27 @@ dot = 0; - if (ch == ':') { + if (ch == '[' && i == 0) { + /* start of literal IPv6 address */ + in_brackets = 1; + continue; + } + + /* + * Inside square brackets, the colon is a delimeter for an IPv6 address. + * Otherwise it comes before the port number, so remove it. + */ + if (ch == ':' && !in_brackets) { last = i; continue; } + if (ch == ']') { + /* end of literal IPv6 address */ + in_brackets = 0; + continue; + } + if (ngx_path_separator(ch) || ch == '\0') { return 0; } @@ -1684,6 +1701,11 @@ } } + if (in_brackets) { + /* missing the closing square bracket for IPv6 address */ + return 0; + } + if (dot) { last--; } From cyril.lavier at davromaniak.eu Tue Nov 8 11:17:43 2011 From: cyril.lavier at davromaniak.eu (Cyril LAVIER) Date: Tue, 08 Nov 2011 12:17:43 +0100 Subject: Is the 607418-ipv6-addresses.diff patch still needed? In-Reply-To: <20111108110507.1a0bb3ae@e-healthexpert.org> References: <20111108110507.1a0bb3ae@e-healthexpert.org> Message-ID: Hi Mark. I think it can be removed. I'm compiling the 1.1.7 packages without the patch, and I will test it after. If this patch is not needed anymore, I will apply the changes in the SVN. Thanks for this remark. On Tue, 8 Nov 2011 11:05:07 +0000, Mark Alan wrote: > Hello, > > The current version of the debianized Nginx code applies the > attached 607418-ipv6-addresses.diff patch. > > Do you still see any need to apply this patch to the current > nginx_1.1.7 code? > > Regards, > > M. 
> > > ######## 607418-ipv6-addresses.diff patch follows ######## > Description: $host variable mis-parses IPv6 literal addresses from > HTTP > Author: Steven Chamberlain > Debian-Bug: http://bugs.debian.org/607418 > Last-Update: 2010-12-30 > > Index: trunk/src/http/ngx_http_request.c > =================================================================== > --- trunk.orig/src/http/ngx_http_request.c 2010-12-30 > 01:46:10.308926973 -0600 +++ trunk/src/http/ngx_http_request.c > 2010-12-30 01:48:21.638927393 -0600 @@ -1650,11 +1650,12 @@ > { > u_char *h, ch; > size_t i, last; > - ngx_uint_t dot; > + ngx_uint_t dot, in_brackets; > > last = len; > h = *host; > dot = 0; > + in_brackets = 0; > > for (i = 0; i < len; i++) { > ch = h[i]; > @@ -1670,11 +1671,27 @@ > > dot = 0; > > - if (ch == ':') { > + if (ch == '[' && i == 0) { > + /* start of literal IPv6 address */ > + in_brackets = 1; > + continue; > + } > + > + /* > + * Inside square brackets, the colon is a delimeter for an > IPv6 address. > + * Otherwise it comes before the port number, so remove it. > + */ > + if (ch == ':' && !in_brackets) { > last = i; > continue; > } > > + if (ch == ']') { > + /* end of literal IPv6 address */ > + in_brackets = 0; > + continue; > + } > + > if (ngx_path_separator(ch) || ch == '\0') { > return 0; > } > @@ -1684,6 +1701,11 @@ > } > } > > + if (in_brackets) { > + /* missing the closing square bracket for IPv6 address */ > + return 0; > + } > + > if (dot) { > last--; > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Cyril "Davromaniak" Lavier From kgorlo at gmail.com Tue Nov 8 11:18:29 2011 From: kgorlo at gmail.com (Kamil Gorlo) Date: Tue, 8 Nov 2011 12:18:29 +0100 Subject: Log response header from upstream In-Reply-To: <20111107112911.GI95664@mdounin.ru> References: <20111106211504.GY95664@mdounin.ru> <20111107101902.GD95664@mdounin.ru> <20111107112911.GI95664@mdounin.ru> Message-ID: > Hyphen is used when variable isn't found. ?Empty string - if it's > found, but empty. > I see that variable defined by user is always empty (no hyphen) even if it is not defined in location section connected with request (but defined somewhere else in nginx config - other location section not some section which is inherited). Is it normal? Cheers, Kamil Gorlo From masterkorp at gmail.com Tue Nov 8 11:22:38 2011 From: masterkorp at gmail.com (Alfredo Palhares) Date: Tue, 8 Nov 2011 11:22:38 +0000 Subject: Running Shell Script from html webpage In-Reply-To: References: Message-ID: Hello, well you can use a web anti-framework, that will enable you to use shell power. You can also use a RC templata to place the code into the html. Here is a link to werc http://www.werc.org/ -- Cumprimentos, Alfredo Palhares -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril.lavier at davromaniak.eu Tue Nov 8 11:24:40 2011 From: cyril.lavier at davromaniak.eu (Cyril LAVIER) Date: Tue, 08 Nov 2011 12:24:40 +0100 Subject: Is the 607418-ipv6-addresses.diff patch still needed? In-Reply-To: References: <20111108110507.1a0bb3ae@e-healthexpert.org> Message-ID: <7a11c714169e41c5faf133af0f69d71f@davromaniak.eu> I just tested it. It seems to work without the patch. Thanks for this usefull remark. On Tue, 08 Nov 2011 12:17:43 +0100, Cyril LAVIER wrote: > Hi Mark. > > I think it can be removed. > > I'm compiling the 1.1.7 packages without the patch, and I will test > it after. 
> > If this patch is not needed anymore, I will apply the changes in the > SVN. > > Thanks for this remark. > > On Tue, 8 Nov 2011 11:05:07 +0000, Mark Alan wrote: >> Hello, >> >> The current version of the debianized Nginx code applies the >> attached 607418-ipv6-addresses.diff patch. >> >> Do you still see any need to apply this patch to the current >> nginx_1.1.7 code? >> >> Regards, >> >> M. >> >> >> ######## 607418-ipv6-addresses.diff patch follows ######## >> Description: $host variable mis-parses IPv6 literal addresses from >> HTTP >> Author: Steven Chamberlain >> Debian-Bug: http://bugs.debian.org/607418 >> Last-Update: 2010-12-30 >> >> Index: trunk/src/http/ngx_http_request.c >> =================================================================== >> --- trunk.orig/src/http/ngx_http_request.c 2010-12-30 >> 01:46:10.308926973 -0600 +++ trunk/src/http/ngx_http_request.c >> 2010-12-30 01:48:21.638927393 -0600 @@ -1650,11 +1650,12 @@ >> { >> u_char *h, ch; >> size_t i, last; >> - ngx_uint_t dot; >> + ngx_uint_t dot, in_brackets; >> >> last = len; >> h = *host; >> dot = 0; >> + in_brackets = 0; >> >> for (i = 0; i < len; i++) { >> ch = h[i]; >> @@ -1670,11 +1671,27 @@ >> >> dot = 0; >> >> - if (ch == ':') { >> + if (ch == '[' && i == 0) { >> + /* start of literal IPv6 address */ >> + in_brackets = 1; >> + continue; >> + } >> + >> + /* >> + * Inside square brackets, the colon is a delimeter for an >> IPv6 address. >> + * Otherwise it comes before the port number, so remove it. >> + */ >> + if (ch == ':' && !in_brackets) { >> last = i; >> continue; >> } >> >> + if (ch == ']') { >> + /* end of literal IPv6 address */ >> + in_brackets = 0; >> + continue; >> + } >> + >> if (ngx_path_separator(ch) || ch == '\0') { >> return 0; >> } >> @@ -1684,6 +1701,11 @@ >> } >> } >> >> + if (in_brackets) { >> + /* missing the closing square bracket for IPv6 address */ >> + return 0; >> + } >> + >> if (dot) { >> last--; >> } >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx -- Cyril "Davromaniak" Lavier From varia at e-healthexpert.org Tue Nov 8 11:52:11 2011 From: varia at e-healthexpert.org (Mark Alan) Date: Tue, 8 Nov 2011 11:52:11 +0000 Subject: Is the 607418-ipv6-addresses.diff patch still needed? In-Reply-To: <7a11c714169e41c5faf133af0f69d71f@davromaniak.eu> References: <20111108110507.1a0bb3ae@e-healthexpert.org> <7a11c714169e41c5faf133af0f69d71f@davromaniak.eu> Message-ID: <20111108115211.5310be68@e-healthexpert.org> On Tue, 08 Nov 2011 12:24:40 +0100, Cyril LAVIER wrote: > I just tested it. > It seems to work without the patch. > Thanks for this usefull remark. You are welcome. Although unrelated, I should note that there seems to be a market for a high security nginx-ultralight version. Such a version would be especially suited for those websites (most of the current CMS?) that process loads of fastcgi using php5-fpm and php-apc. 
I am attaching the debian/rules section that I have been using to get such 'nginx-ultralight' config.status.ultralight: config.env.ultralight config.sub config.guess \ cd $(BUILDDIR_ultralight) && ./configure \ --prefix=/etc/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-client-body-temp-path=/var/lib/nginx/body \ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-log-path=/var/log/nginx/access.log \ --http-proxy-temp-path=/var/lib/nginx/proxy \ --lock-path=/var/lock/nginx.lock \ --pid-path=/var/run/nginx.pid \ --with-http_gzip_static_module \ --with-http_ssl_module \ --without-http-cache \ --without-http_browser_module \ --without-http_geo_module \ --without-http_limit_req_module \ --without-http_limit_zone_module \ --without-http_map_module \ --without-http_memcached_module \ --without-http_referer_module \ --without-http_scgi_module \ --without-http_split_clients_module \ --without-http_ssi_module \ --without-http_upstream_keepalive_module \ --without-http_userid_module \ --without-http_uwsgi_module \ $(CONFIGURE_OPTS) >$@\ touch $@' Regards, M. From cyril.lavier at davromaniak.eu Tue Nov 8 12:00:39 2011 From: cyril.lavier at davromaniak.eu (Cyril LAVIER) Date: Tue, 08 Nov 2011 13:00:39 +0100 Subject: Is the 607418-ipv6-addresses.diff patch still needed? In-Reply-To: <20111108115211.5310be68@e-healthexpert.org> References: <20111108110507.1a0bb3ae@e-healthexpert.org> <7a11c714169e41c5faf133af0f69d71f@davromaniak.eu> <20111108115211.5310be68@e-healthexpert.org> Message-ID: <1c5231cd03c59e01dcc437d1ef37276f@davromaniak.eu> On Tue, 8 Nov 2011 11:52:11 +0000, Mark Alan wrote: > On Tue, 08 Nov 2011 12:24:40 +0100, Cyril LAVIER > wrote: > >> I just tested it. >> It seems to work without the patch. >> Thanks for this usefull remark. > > You are welcome. > > Although unrelated, I should note that there seems to be a market for > a high security nginx-ultralight version. > Such a version would be especially suited for those websites (most of > the current CMS?) that process loads of fastcgi using php5-fpm and > php-apc. > > I am attaching the debian/rules section that I have been using to get > such 'nginx-ultralight' > > config.status.ultralight: config.env.ultralight config.sub > config.guess > \ cd $(BUILDDIR_ultralight) && ./configure \ > --prefix=/etc/nginx \ > --conf-path=/etc/nginx/nginx.conf \ > --error-log-path=/var/log/nginx/error.log \ > --http-client-body-temp-path=/var/lib/nginx/body \ > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ > --http-log-path=/var/log/nginx/access.log \ > --http-proxy-temp-path=/var/lib/nginx/proxy \ > --lock-path=/var/lock/nginx.lock \ > --pid-path=/var/run/nginx.pid \ > --with-http_gzip_static_module \ > --with-http_ssl_module \ > --without-http-cache \ > --without-http_browser_module \ > --without-http_geo_module \ > --without-http_limit_req_module \ > --without-http_limit_zone_module \ > --without-http_map_module \ > --without-http_memcached_module \ > --without-http_referer_module \ > --without-http_scgi_module \ > --without-http_split_clients_module \ > --without-http_ssi_module \ > --without-http_upstream_keepalive_module \ > --without-http_userid_module \ > --without-http_uwsgi_module \ > $(CONFIGURE_OPTS) >$@\ > touch $@' > Could you open a Debian Bug on the source package nginx for this purpose ? I think it's a good idea to have a special nginx build for all CMS related uses, but it needs to be discussed between the maintainers. Thanks. > Regards, > M. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Cyril "Davromaniak" Lavier From mdounin at mdounin.ru Tue Nov 8 13:01:16 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Nov 2011 17:01:16 +0400 Subject: Is the 607418-ipv6-addresses.diff patch still needed? In-Reply-To: <20111108110507.1a0bb3ae@e-healthexpert.org> References: <20111108110507.1a0bb3ae@e-healthexpert.org> Message-ID: <20111108130116.GD95664@mdounin.ru> Hello! On Tue, Nov 08, 2011 at 11:05:07AM +0000, Mark Alan wrote: > The current version of the debianized Nginx code applies the > attached 607418-ipv6-addresses.diff patch. > > Do you still see any need to apply this patch to the current > nginx_1.1.7 code? See http://trac.nginx.org/nginx/ticket/1. Maxim Dounin From jelledejong at powercraft.nl Tue Nov 8 13:15:07 2011 From: jelledejong at powercraft.nl (Jelle de Jong) Date: Tue, 08 Nov 2011 14:15:07 +0100 Subject: Is the 607418-ipv6-addresses.diff patch still needed? In-Reply-To: <1c5231cd03c59e01dcc437d1ef37276f@davromaniak.eu> References: <20111108110507.1a0bb3ae@e-healthexpert.org> <7a11c714169e41c5faf133af0f69d71f@davromaniak.eu> <20111108115211.5310be68@e-healthexpert.org> <1c5231cd03c59e01dcc437d1ef37276f@davromaniak.eu> Message-ID: <4EB92B5B.4040208@powercraft.nl> On 08-11-11 13:00, Cyril LAVIER wrote: > Could you open a Debian Bug on the source package nginx for this purpose ? > I think it's a good idea to have a special nginx build for all CMS > related uses, but it needs to be discussed between the maintainers. +1 for wanting such a package :) Kind regards, Thanks in advance, Jelle de Jong -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 316 bytes Desc: OpenPGP digital signature URL: From mdounin at mdounin.ru Tue Nov 8 13:31:15 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Nov 2011 17:31:15 +0400 Subject: Log response header from upstream In-Reply-To: References: <20111106211504.GY95664@mdounin.ru> <20111107101902.GD95664@mdounin.ru> <20111107112911.GI95664@mdounin.ru> Message-ID: <20111108133115.GE95664@mdounin.ru> Hello! On Tue, Nov 08, 2011 at 12:18:29PM +0100, Kamil Gorlo wrote: > > Hyphen is used when variable isn't found. ?Empty string - if it's > > found, but empty. > > > > I see that variable defined by user is always empty (no hyphen) even > if it is not defined in location section connected with request (but > defined somewhere else in nginx config - other location section not > some section which is inherited). Is it normal? Yes. Maxim Dounin From me at keithfernie.co.uk Tue Nov 8 14:13:03 2011 From: me at keithfernie.co.uk (Keith Fernie) Date: Tue, 08 Nov 2011 14:13:03 -0000 Subject: Running Shell Script from html webpage In-Reply-To: References: Message-ID: I'am doing this with Debian Squeeze & fcgiwrap installed. I've seen fcgiwrap present in Ubuntu. Example script (Must be executable) #!/bin/sh # -*- coding: utf-8 -*- NAME=`"cpuinfo"` echo "Content-type:text/html\r\n" echo "" echo "$NAME" echo '' echo '' echo '' echo '' echo "
"
date
echo "\nuname -a"
uname -a
echo "\ncpuinfo"
cat /proc/cpuinfo
echo "
" Run it here http://newhost.qaq.me/cpuinfo.sh Also using this as an include file, not restricted to only shell scripts. location ~ (\.cgi|\.py|\.sh|\.pl|\.lua)$ { gzip off; root /var/www/$server_name; autoindex on; fastcgi_pass unix:/var/run/fcgiwrap.socket; include /etc/nginx/fastcgi_params; fastcgi_param DOCUMENT_ROOT /var/www/$server_name; fastcgi_param SCRIPT_FILENAME /var/www/$server_name$fastcgi_script_name; } On Tue, 08 Nov 2011 10:01:07 -0000, etrader wrote: > I have a Nginx installed on Linux (Ubuntu or Centos) without any > scripting language such as PHP or Python (connected to the webserver). > Can I run Shell commands from a webpage (e.g. html)? > > I mean is it possible to use nginx without script languages by shell > scripts? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,217975,217975#msg-217975 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 8 15:23:58 2011 From: nginx-forum at nginx.us (CarlWang) Date: Tue, 08 Nov 2011 10:23:58 -0500 Subject: How to handle NGX_AGAIN returned by ngx_http_read_client_request_body() within handler module? In-Reply-To: References: Message-ID: <19de791219ef7a970c5c38a0be50c9cc.NginxMailingListEnglish@forum.nginx.org> Thanks for your help. I just followed Emiller's Guide To Nginx Module Development and build my Non-proxying Handler. I'm not quite sure which phase is the handler function running in. Maybe NGX_HTTP_ACCESS_PHASE or NGX_HTTP_CONTENT_PHASE. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3545,218016#msg-218016 From nginx-forum at nginx.us Tue Nov 8 16:03:03 2011 From: nginx-forum at nginx.us (etrader) Date: Tue, 08 Nov 2011 11:03:03 -0500 Subject: Speed up static content Message-ID: <37afec7fa5501b7d6f876ac308769295.NginxMailingListEnglish@forum.nginx.org> Is there a special configuration for static content like image files? I have no problem with nginx, and the reason that I am asking is to know if there is a different nginx conf to load static files faster. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218021,218021#msg-218021 From daniel.carrillo at gmail.com Tue Nov 8 16:23:13 2011 From: daniel.carrillo at gmail.com (Daniel Carrillo) Date: Tue, 8 Nov 2011 17:23:13 +0100 Subject: Speed up static content In-Reply-To: <37afec7fa5501b7d6f876ac308769295.NginxMailingListEnglish@forum.nginx.org> References: <37afec7fa5501b7d6f876ac308769295.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2011/11/8 etrader : > Is there a special configuration for static content like image files? I > have no problem with nginx, and the reason that I am asking is to know > if there is a different nginx conf to load static files faster. Hard to say without know your hardware, OS, processor, etc, but look at this: http://mailman.nginx.org/pipermail/nginx/2010-October/023025.html Hope it helps. From nginx-forum at nginx.us Tue Nov 8 16:44:26 2011 From: nginx-forum at nginx.us (CarlWang) Date: Tue, 08 Nov 2011 11:44:26 -0500 Subject: How to handle NGX_AGAIN returned by ngx_http_read_client_request_body() within handler module? 
In-Reply-To: <19de791219ef7a970c5c38a0be50c9cc.NginxMailingListEnglish@forum.nginx.org> References: <19de791219ef7a970c5c38a0be50c9cc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <316a9980abe0fd5ee19249b475960982.NginxMailingListEnglish@forum.nginx.org> I checked the error.log and found the ngx_http_v8_handler_request called in the post access phase. Then how should the coding structure be? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3545,218023#msg-218023 From nginx-forum at nginx.us Tue Nov 8 16:56:53 2011 From: nginx-forum at nginx.us (lpugoy) Date: Tue, 08 Nov 2011 11:56:53 -0500 Subject: Proxy cache for php site In-Reply-To: References: Message-ID: <205f048aeb2995eebccc12918de5a285.NginxMailingListEnglish@forum.nginx.org> I'm sorry, but I don't understand the effect of cookies on proxy caching. I'm trying to implement the same configuration as the one referred to, and I'm also having trouble in that it doesn't seem to be caching. I'm also using httperf. For every request the backend does return a different cookie. Is this what is preventing the caching from happening? In the debug log, I found some entries that refer to the cache. They are below: 2011/11/08 20:51:45 [debug] 21434#0: *1 http cache key: "httpwww.site.comGET/" 2011/11/08 20:51:45 [debug] 21434#0: *1 http script var: "" 2011/11/08 20:51:45 [debug] 21434#0: *1 add cleanup: 091D56A4 2011/11/08 20:51:45 [debug] 21434#0: shmtx lock 2011/11/08 20:51:45 [debug] 21434#0: slab alloc: 76 slot: 4 2011/11/08 20:51:45 [debug] 21434#0: slab alloc: B6F6B000 2011/11/08 20:51:45 [debug] 21434#0: shmtx unlock 2011/11/08 20:51:45 [debug] 21434#0: *1 http file cache exists: -5 e:0 2011/11/08 20:51:45 [debug] 21434#0: *1 cache file: "/var/cache/nginx/5/86/629d786ceca8ff787779a3e4ccdc8865" 2011/11/08 20:51:45 [debug] 21434#0: *1 add cleanup: 091D56E8 2011/11/08 20:51:45 [debug] 21434#0: *1 http upstream cache: -5 What do they mean? Thank you. Jon Bennett Wrote: ------------------------------------------------------- > Hi Appa, > > got it sorted thanks, had cookies being written > all the time and that > was stopping it. Now in production, many thanks! > > Jon Posted at Nginx Forum: http://forum.nginx.org/read.php?2,216756,217993#msg-217993 From nginx-forum at nginx.us Tue Nov 8 17:06:08 2011 From: nginx-forum at nginx.us (etrader) Date: Tue, 08 Nov 2011 12:06:08 -0500 Subject: Speed up static content In-Reply-To: References: Message-ID: <9c4be9f7898f71bf17580caf07153850.NginxMailingListEnglish@forum.nginx.org> Thanks Daniel! I run nginx (1.0.4 or 1.0.5) on different servers, but in general, they are based on Centos or Ubuntu running php via php-fpm. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218021,218026#msg-218026 From ianevans at digitalhit.com Tue Nov 8 17:59:54 2011 From: ianevans at digitalhit.com (Ian M. Evans) Date: Tue, 8 Nov 2011 12:59:54 -0500 Subject: Mailing list software that plays nice with nginx Message-ID: <9ff9e5633d4d8a7efa6229b329e6f092.squirrel@www.digitalhit.com> Hi everyone, Looking for s script (preferably PHP) that plays nice with nginx "out of the box" i.e. doesn't require rewriting .htaccess files into nginx rules, etc. Just lloking for something that will let me send out an announcement-only list, handle people adding/deleting themselves, etc. Scheduled emails would be a blessing. Anything out there fit the bill? From ianevans at digitalhit.com Tue Nov 8 18:08:31 2011 From: ianevans at digitalhit.com (Ian M. 
Evans) Date: Tue, 8 Nov 2011 13:08:31 -0500 Subject: Mailing list software that plays nice with nginx In-Reply-To: <9ff9e5633d4d8a7efa6229b329e6f092.squirrel@www.digitalhit.com> References: <9ff9e5633d4d8a7efa6229b329e6f092.squirrel@www.digitalhit.com> Message-ID: On Tue, November 8, 2011 12:59 pm, Ian M. Evans wrote: > Looking for s script (preferably PHP) that plays nice with nginx "out of > the box" i.e. doesn't require rewriting .htaccess files into nginx rules, > etc. > Okay, just discovered http://wiki.nginx.org/PHPList minutes after sending this email. If only I had waited. :-) Anyway, the settings there seem to be if the list software is installed at the document root. How would I change the locations if the software is installed at /lists? [I suck at locations and haven't had much sleep!] Thanks. From ionathan at gmail.com Tue Nov 8 18:36:14 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Tue, 8 Nov 2011 15:36:14 -0300 Subject: @fallback is not working in version 1.0.4? Message-ID: Hi! I'm using nginx 1.0.4 and I have the following configuration: server { listen 8080; server_name xxx.yyy.com; location / { error_page 500 502 503 @fallback; proxy_pass http://upstream1; } location @fallback { proxy_pass http://upstream2; } } And for some reason I don't get to upstream2 even if requests from upstream1 are 500. Am I doing something wrong? Thanks! Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Tue Nov 8 18:50:42 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 8 Nov 2011 22:50:42 +0400 Subject: @fallback is not working in version 1.0.4? In-Reply-To: References: Message-ID: <20111108185042.GA21062@nginx.com> On Tue, Nov 08, 2011 at 03:36:14PM -0300, Jonathan Leibiusky wrote: > Hi! I'm using nginx 1.0.4 and I have the following configuration: > > server { > listen 8080; > server_name xxx.yyy.com; > location / { > error_page 500 502 503 @fallback; > proxy_pass http://upstream1; proxy_intercept_errors on; > } > location @fallback { > proxy_pass http://upstream2; > } > } > > And for some reason I don't get to upstream2 even if requests from > upstream1 are 500. > Am I doing something wrong? -- Igor Sysoev From ionathan at gmail.com Tue Nov 8 18:58:44 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Tue, 8 Nov 2011 15:58:44 -0300 Subject: @fallback is not working in version 1.0.4? In-Reply-To: <20111108185042.GA21062@nginx.com> References: <20111108185042.GA21062@nginx.com> Message-ID: awesome! thanks a lot! On Tue, Nov 8, 2011 at 3:50 PM, Igor Sysoev wrote: > On Tue, Nov 08, 2011 at 03:36:14PM -0300, Jonathan Leibiusky wrote: > > Hi! I'm using nginx 1.0.4 and I have the following configuration: > > > > server { > > listen 8080; > > server_name xxx.yyy.com; > > location / { > > error_page 500 502 503 @fallback; > > proxy_pass http://upstream1; > > proxy_intercept_errors on; > > > } > > location @fallback { > > proxy_pass http://upstream2; > > } > > } > > > > And for some reason I don't get to upstream2 even if requests from > > upstream1 are 500. > > Am I doing something wrong? > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Tue Nov 8 21:29:04 2011 From: francis at daoine.org (Francis Daly) Date: Tue, 8 Nov 2011 21:29:04 +0000 Subject: Proxy cache for php site In-Reply-To: <205f048aeb2995eebccc12918de5a285.NginxMailingListEnglish@forum.nginx.org> References: <205f048aeb2995eebccc12918de5a285.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111108212904.GO27078@craic.sysops.org> On Tue, Nov 08, 2011 at 11:56:53AM -0500, lpugoy wrote: Hi there, > I'm sorry, but I don't understand the effect of cookies on proxy > caching. Caching involves fetching something once from the backend, and then sending the same response to many requests. A cookie is something that is set differently per request. If you have one, you don't have the other. By default. > I'm trying to implement the same configuration as the one > referred to, and I'm also having trouble in that it doesn't seem to be > caching. I'm also using httperf. For every request the backend does > return a different cookie. Is this what is preventing the caching from > happening? Probably. There are (rfc) rules on caching and cacheability of a http response -- but be aware that not all of them apply to a reverse proxy like nginx. Look at the http headers sent from your backend. If they indicate that the response is not cacheable, nginx won't cache it -- unless you configure it to. Setting a cookie is one way to make a response non-cacheable. If you don't want the cookie set, don't set it. Out-of-the-box, nginx obeys the rules. Get your backend to obey the rules too, and it will all work fine. If you won't do that, then you can (probably) configure nginx to break the rules in the specific way that you want it to when proxying your backend. For example, if the backend sets a cookie and you want nginx to cache that, do you want it to set the same cookie for all requests; or to set no cookie for all requests; or something else? You're likely much better off fixing the backend. Good luck, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Nov 8 21:30:52 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Nov 2011 01:30:52 +0400 Subject: Proxy cache for php site In-Reply-To: <205f048aeb2995eebccc12918de5a285.NginxMailingListEnglish@forum.nginx.org> References: <205f048aeb2995eebccc12918de5a285.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111108213052.GK95664@mdounin.ru> Hello! On Tue, Nov 08, 2011 at 11:56:53AM -0500, lpugoy wrote: > I'm sorry, but I don't understand the effect of cookies on proxy > caching. I'm trying to implement the same configuration as the one > referred to, and I'm also having trouble in that it doesn't seem to be > caching. I'm also using httperf. For every request the backend does > return a different cookie. Is this what is preventing the caching from > happening? Yes, nginx won't cache responses with cookies unless you specifically ask it to via proxy_ignore_headers directive[1]. Please also note that if you do so, you may want to also instruct nginx to hide returned cookies from clients (via proxy_hide_header directive[2]), or you'll end up with multiple clients with the same cookie. Example configuration should look like: proxy_ignore_headers Set-Cookie; proxy_hide_header Set-Cookie; Depending on the actual headers your backend returns you may also need to ignore other headers as well for cache to work, notably Expires and Cache-Control. Alternatively, you may want to instruct your backend to not return headers which prevent caching. 
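Put together with a cache zone, the relevant part of the configuration might look roughly like this (the upstream name, zone name, sizes and validity times are only placeholders, not recommendations):

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site_cache:10m max_size=1g;

    server {
        location / {
            proxy_pass http://backend;

            proxy_cache       site_cache;
            proxy_cache_valid 200 301 302 10m;

            # cache despite the backend's Set-Cookie, and avoid handing
            # one client's cookie to every other client
            proxy_ignore_headers Set-Cookie;
            proxy_hide_header    Set-Cookie;
        }
    }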
[1] http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers [2] http://wiki.nginx.org/HttpProxyModule#proxy_hide_header > In the debug log, I found some entries that refer to the cache. They are > below: > > 2011/11/08 20:51:45 [debug] 21434#0: *1 http cache key: > "httpwww.site.comGET/" > 2011/11/08 20:51:45 [debug] 21434#0: *1 http script var: "" > 2011/11/08 20:51:45 [debug] 21434#0: *1 add cleanup: 091D56A4 > 2011/11/08 20:51:45 [debug] 21434#0: shmtx lock > 2011/11/08 20:51:45 [debug] 21434#0: slab alloc: 76 slot: 4 > 2011/11/08 20:51:45 [debug] 21434#0: slab alloc: B6F6B000 > 2011/11/08 20:51:45 [debug] 21434#0: shmtx unlock > 2011/11/08 20:51:45 [debug] 21434#0: *1 http file cache exists: -5 e:0 > 2011/11/08 20:51:45 [debug] 21434#0: *1 cache file: > "/var/cache/nginx/5/86/629d786ceca8ff787779a3e4ccdc8865" > 2011/11/08 20:51:45 [debug] 21434#0: *1 add cleanup: 091D56E8 > 2011/11/08 20:51:45 [debug] 21434#0: *1 http upstream cache: -5 > > What do they mean? Thank you. This debug lines correspond to lookup of the request in cache, and they show that no cached response exists. Maxim Dounin From keith at scott-land.net Wed Nov 9 01:27:47 2011 From: keith at scott-land.net (Keith) Date: Wed, 09 Nov 2011 01:27:47 +0000 Subject: php-fpm and nginx return "The page you are looking for is temporarily unavailable. Please try again later. " In-Reply-To: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> References: <9636a5d8da2b172192e56ee838fe791e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4EB9D713.9040902@scott-land.net> Hi, I was getting the same error earlier today. I eventually figured out that for me it was the mssql (NOT MySql) extension that was messing things up. A phpinfo() page would work fine but the mssql data import script that I was writing wouldn't work through a browser request. I noticed some Segfault errors in one of the logs. My script ran fine from the command line but just not through php-fpm. Didn't try fast-cgi though. Keith. On 08/11/2011 09:25, abcomp01 wrote: > php-fpm and nginx return "The page you are looking for is temporarily > unavailable. Please try again later. " > > call http://173.212.207.146/upload/info.php > > location ~ \.php$ { > root /home/admin/18loadnet; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /home/admin/info.php; > include /etc/nginx/fastcgi_params; > } > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217967,217967#msg-217967 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From agentzh at gmail.com Wed Nov 9 01:28:48 2011 From: agentzh at gmail.com (agentzh) Date: Wed, 9 Nov 2011 09:28:48 +0800 Subject: How to handle NGX_AGAIN returned by ngx_http_read_client_request_body() within handler module? In-Reply-To: <316a9980abe0fd5ee19249b475960982.NginxMailingListEnglish@forum.nginx.org> References: <19de791219ef7a970c5c38a0be50c9cc.NginxMailingListEnglish@forum.nginx.org> <316a9980abe0fd5ee19249b475960982.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Nov 9, 2011 at 12:44 AM, CarlWang wrote: > I checked the error.log and found the ngx_http_v8_handler_request called > in the post access phase. > Then how should ?the coding structure be? > I've recommended our ngx_lua module for your reference: https://github.com/chaoslawful/lua-nginx-module Please check its source out. 
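As a rough illustration of how little is left to write when ngx_lua drives that read-the-body-then-reply flow (the location name and the handler body are purely illustrative):

    location /v8 {
        # have ngx_lua read the request body before the handler runs
        lua_need_request_body on;

        content_by_lua '
            local body = ngx.var.request_body  -- empty if the body was spooled to a temp file
            ngx.say("received ", body and #body or 0, " bytes")
        ';
    }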
Best, -agentzh From btm at loftninjas.org Wed Nov 9 01:30:50 2011 From: btm at loftninjas.org (Bryan McLellan) Date: Tue, 8 Nov 2011 20:30:50 -0500 Subject: SOLVED: Timeout when sending over 16k of data with UTF-8 characters Message-ID: On Tue, Nov 1, 2011 at 9:56 AM, Maxim Dounin wrote: > Here 16384 bytes has been read from client, and one byte remains > ("rest 1"). ?OpenSSL doesn't provide any more bytes and claims it > needs more network input (2, SSL_ERROR_WANT_READ). > > 2011/11/01 12:44:24 [debug] 13689#0: *129 event timer del: 12: 1596563798 > 2011/11/01 12:44:24 [debug] 13689#0: *129 event timer add: 12: 60000:1596564354 > 2011/11/01 12:45:24 [debug] 13689#0: *129 event timer del: 12: 1596564354 > 2011/11/01 12:45:24 [debug] 13689#0: *129 http run request: "/organizations/opscode-btm/nodes/broken?" > 2011/11/01 12:45:24 [debug] 13689#0: *129 http finalize request: 408, "/organizations/opscode-btm/nodes/broken?" a:1, c:1 > > Though client fails to provide one more byte. > > For me, it looks like problem in client. ?It's either calculate > content length incorrectly or fails to properly flush ssl buffers > on it's side. Thanks! I confirmed that it was likely on the client side SSL by creating a duplicate nginx configuration that lacked SSL, which did not present the symptoms. I then narrowed it down to Ruby's SSL implementation and found a recent bug fix that resolves the issue. http://redmine.ruby-lang.org/issues/5233 Bryan From nginx-forum at nginx.us Wed Nov 9 02:14:04 2011 From: nginx-forum at nginx.us (dbanks) Date: Tue, 08 Nov 2011 21:14:04 -0500 Subject: gzip - unexplained side effects Message-ID: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> Hi, I'm running nginx 1.0.0 in front of a FCGI backend. We've been running in production for about 4 months, and have really been impressed with the performance and stability of nginx. We run a medium-volume appliction: 1000 to 4000 requests/sec spread over 2 instances by an upstream round robin load balancer. We use keepalives, but keep them rather short given the request volume to keep the number of open connections manageable. Recently, I was working to improve our gzip settings. The largest change was that I added an explicit gzip buffer line (find my config below - the three lines that are commented out seem to be correlated with this issue) After the change, the volume of outbound traffic decreased measurably, as if gzip was not originally doing much due to inadequate buffer space. Great news!, or so I thought. What also changed was that the number of active connections fell by about 75% - the gzip change was somehow causing keepalives to be closed prematurely. Also, our volume of incoming requests decreased a bit: as if some requests were being aborted (though accepts == handled). The request volume makes debugging this particular issue somewhat troublesome, since I have yet to replicate it in a quiet instance. The guts of my configuration appear below. This is such an unexpected issue that I'm not doing a great job of setting up my question well. What I think I'd like to know is how could a change to the gzip buffers (or the other two commented changes) impact keepalives or overall connection negotiation? Also, any suggestions as to how to go about debugging it? 
sendfile off; tcp_nodelay on; ignore_invalid_headers on; if_modified_since off; gzip on; gzip_comp_level 9; gzip_types text/javascript text/plain application/x-javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)" #gzip_buffers 512 4k; #gzip_min_length 1100; #if it fits in one packet, no worries #gzip_http_version 1.1; keepalive_timeout 6; keepalive_requests 4; Cheers, Dean Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218044,218044#msg-218044 From mike503 at gmail.com Wed Nov 9 04:59:56 2011 From: mike503 at gmail.com (Michael Shadle) Date: Tue, 8 Nov 2011 20:59:56 -0800 Subject: t-shirts In-Reply-To: <1307994899.2744.465.camel@portable-evil> References: <1307994899.2744.465.camel@portable-evil> Message-ID: anyone still know of any T-shirt options? On Mon, Jun 13, 2011 at 12:54 PM, Cliff Wells wrote: > For anyone interested, the guy who printed the Nginx t-shirts a couple > of years ago finally put the remainder up on eBay: > > http://cgi.ebay.com/NGINX-t-shirt-/170654481676?pt=US_Mens_Tshirts&var=&hash=item6d700b391a#ht_500wt_1156 > > He went out of business, so this is it. > > Regards, > Cliff > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://nginx.org/mailman/listinfo/nginx > From amoiz.shine at gmail.com Wed Nov 9 06:07:32 2011 From: amoiz.shine at gmail.com (Sharl.Jimh.Tsin) Date: Wed, 09 Nov 2011 14:07:32 +0800 Subject: t-shirts In-Reply-To: References: <1307994899.2744.465.camel@portable-evil> Message-ID: <1320818852.2227.1.camel@sharl-desktop> ? 2011-11-08?? 20:59 -0800?Michael Shadle??? > anyone still know of any T-shirt options? > > On Mon, Jun 13, 2011 at 12:54 PM, Cliff Wells wrote: > > For anyone interested, the guy who printed the Nginx t-shirts a couple > > of years ago finally put the remainder up on eBay: > > > > http://cgi.ebay.com/NGINX-t-shirt-/170654481676?pt=US_Mens_Tshirts&var=&hash=item6d700b391a#ht_500wt_1156 > > > > He went out of business, so this is it. > > > > Regards, > > Cliff > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx is this t-shirt sold by NGINX official? -- Best regards, Sharl.Jimh.Tsin (From China **Obviously Taiwan INCLUDED**) Using Gmail? Please read this important notice: http://www.fsf.org/campaigns/jstrap/gmail?10073. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: This is a digitally signed message part URL: From mike503 at gmail.com Wed Nov 9 06:11:07 2011 From: mike503 at gmail.com (Michael Shadle) Date: Tue, 8 Nov 2011 22:11:07 -0800 Subject: t-shirts In-Reply-To: <1320818852.2227.1.camel@sharl-desktop> References: <1307994899.2744.465.camel@portable-evil> <1320818852.2227.1.camel@sharl-desktop> Message-ID: No I don't believe they were ever "official" - I don't believe there are official ones. On Tue, Nov 8, 2011 at 10:07 PM, Sharl.Jimh.Tsin wrote: > is this t-shirt sold by NGINX official? From amoiz.shine at gmail.com Wed Nov 9 06:25:06 2011 From: amoiz.shine at gmail.com (Sharl.Jimh.Tsin) Date: Wed, 09 Nov 2011 14:25:06 +0800 Subject: t-shirts In-Reply-To: References: <1307994899.2744.465.camel@portable-evil> <1320818852.2227.1.camel@sharl-desktop> Message-ID: <1320819906.2227.5.camel@sharl-desktop> ? 2011-11-08?? 
22:11 -0800?Michael Shadle??? > No I don't believe they were ever "official" - I don't believe there > are official ones. > > On Tue, Nov 8, 2011 at 10:07 PM, Sharl.Jimh.Tsin wrote: > > > is this t-shirt sold by NGINX official? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx well,if it is official one,i am willing to buy one for support. but for someone's personal action,i will worry about the quality of this clothes. -- Best regards, Sharl.Jimh.Tsin (From China **Obviously Taiwan INCLUDED**) Using Gmail? Please read this important notice: http://www.fsf.org/campaigns/jstrap/gmail?10073. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: This is a digitally signed message part URL: From cliff at develix.com Wed Nov 9 07:18:55 2011 From: cliff at develix.com (Cliff Wells) Date: Tue, 08 Nov 2011 23:18:55 -0800 Subject: t-shirts In-Reply-To: <1320819906.2227.5.camel@sharl-desktop> References: <1307994899.2744.465.camel@portable-evil> <1320818852.2227.1.camel@sharl-desktop> <1320819906.2227.5.camel@sharl-desktop> Message-ID: <1320823135.12168.4.camel@portable-evil> I can ask if he has any more and see if he will re-list them if he does. As far as quality, the screen printing is good (my wife and I have had ours for a few years now). The shirts themselves are fairly heavy material (for a t-shirt). The biggest issue is that white t-shirts get not-white over time. Regards, Cliff On Wed, 2011-11-09 at 14:25 +0800, Sharl.Jimh.Tsin wrote: > ? 2011-11-08?? 22:11 -0800?Michael Shadle??? > > No I don't believe they were ever "official" - I don't believe there > > are official ones. > > > > On Tue, Nov 8, 2011 at 10:07 PM, Sharl.Jimh.Tsin wrote: > > > > > is this t-shirt sold by NGINX official? > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > well,if it is official one,i am willing to buy one for support. > > but for someone's personal action,i will worry about the quality of this > clothes. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Nov 9 11:02:52 2011 From: nginx-forum at nginx.us (lpugoy) Date: Wed, 09 Nov 2011 06:02:52 -0500 Subject: Proxy cache for php site In-Reply-To: <20111108213052.GK95664@mdounin.ru> References: <20111108213052.GK95664@mdounin.ru> Message-ID: <6a7d9d2ee046df8d9c9cfdee121e16d8.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Yes, nginx won't cache responses with cookies > unless you > specifically ask it to via proxy_ignore_headers > directive[1]. > > Please also note that if you do so, you may want > to also instruct > nginx to hide returned cookies from clients (via > proxy_hide_header > directive[2]), or you'll end up with multiple > clients with the > same cookie. > > Example configuration should look like: > > proxy_ignore_headers Set-Cookie; > proxy_hide_header Set-Cookie; > > Depending on the actual headers your backend > returns you may also > need to ignore other headers as well for cache to > work, notably > Expires and Cache-Control. > > Alternatively, you may want to instruct your > backend to not return > headers which prevent caching. 
> > [1] > http://wiki.nginx.org/HttpProxyModule#proxy_ignore > _headers > [2] > http://wiki.nginx.org/HttpProxyModule#proxy_hide_h > eader > Hello. Thank you for the explanation. I now understand the problem and it is now working. I want to add a condition where if a cookie with the name sess_id is included in the request, then the cache would be bypassed. I tried the following if ($http_cookie ~* "sess_id") { set $no_cache "1"; } proxy_no_cache $no_cache; proxy_cache_bypass $no_cache; but it doesn't seem to work. What is the correct way to do this? Thank you very much. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,216756,218066#msg-218066 From brian at akins.org Wed Nov 9 11:56:07 2011 From: brian at akins.org (Brian Akins) Date: Wed, 9 Nov 2011 06:56:07 -0500 Subject: gzip - unexplained side effects In-Reply-To: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> References: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> Message-ID: <82C7407F-B8AB-41F9-BE78-CB6F19198710@akins.org> On Nov 8, 2011, at 9:14 PM, dbanks wrote: > > gzip_comp_level 9; > Not really related to your question, but this uses much more cpu than the default for very little gain. We always run this at "1" Unless you have some special requirements (with benchmarks), I'd do the same. --Brian From igor at sysoev.ru Wed Nov 9 12:41:33 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 9 Nov 2011 16:41:33 +0400 Subject: gzip - unexplained side effects In-Reply-To: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> References: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111109124133.GB42247@nginx.com> On Tue, Nov 08, 2011 at 09:14:04PM -0500, dbanks wrote: > Hi, > > I'm running nginx 1.0.0 in front of a FCGI backend. We've been running > in production for about 4 months, and have really been impressed with > the performance and stability of nginx. > > We run a medium-volume appliction: 1000 to 4000 requests/sec spread over > 2 instances by an upstream round robin load balancer. We use > keepalives, but keep them rather short given the request volume to keep > the number of open connections manageable. > > Recently, I was working to improve our gzip settings. The largest > change was that I added an explicit gzip buffer line (find my config > below - the three lines that are commented out seem to be correlated > with this issue) > > After the change, the volume of outbound traffic decreased measurably, > as if gzip was not originally doing much due to inadequate buffer space. > Great news!, or so I thought. What also changed was that the number of > active connections fell by about 75% - the gzip change was somehow > causing keepalives to be closed prematurely. Also, our volume of > incoming requests decreased a bit: as if some requests were being > aborted (though accepts == handled). The request volume makes > debugging this particular issue somewhat troublesome, since I have yet > to replicate it in a quiet instance. > > The guts of my configuration appear below. This is such an unexpected > issue that I'm not doing a great job of setting up my question well. > What I think I'd like to know is how could a change to the gzip buffers > (or the other two commented changes) impact keepalives or overall > connection negotiation? Also, any suggestions as to how to go about > debugging it? 
> > sendfile off; > tcp_nodelay on; > ignore_invalid_headers on; > if_modified_since off; > > gzip on; > gzip_comp_level 9; > gzip_types text/javascript text/plain application/x-javascript; > gzip_disable "MSIE [1-6]\.(?!.*SV1)" > #gzip_buffers 512 4k; > #gzip_min_length 1100; #if it fits in one packet, no worries > #gzip_http_version 1.1; > > keepalive_timeout 6; > keepalive_requests 4; If you comment gzip_buffers, you see the previous site state ? What is typical uncompressed and compressed response size ? The default gzip_buffers are "32 4k", so they can keep up to 128K. And as it was already suggested it's better to use default gzip_comp_level 1. -- Igor Sysoev From nginx-forum at nginx.us Wed Nov 9 13:18:55 2011 From: nginx-forum at nginx.us (lpugoy) Date: Wed, 09 Nov 2011 08:18:55 -0500 Subject: Proxy cache for php site In-Reply-To: <6a7d9d2ee046df8d9c9cfdee121e16d8.NginxMailingListEnglish@forum.nginx.org> References: <20111108213052.GK95664@mdounin.ru> <6a7d9d2ee046df8d9c9cfdee121e16d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <865ea5535942c65c4bd29e049108398d.NginxMailingListEnglish@forum.nginx.org> lpugoy Wrote: > Hello. > > Thank you for the explanation. I now understand > the problem and it is now working. > > I want to add a condition where if a cookie with > the name sess_id is included in the request, then > the cache would be bypassed. I tried the following > > > if ($http_cookie ~* "sess_id") { > set $no_cache "1"; > } > > proxy_no_cache $no_cache; > proxy_cache_bypass $no_cache; > > but it doesn't seem to work. What is the correct > way to do this? Thank you very much. An update. I don't think that the if statement is being entered. There are these messages in the debug log 2011/11/09 21:14:03 [debug] 6397#0: *3 http script var 2011/11/09 21:14:03 [debug] 6397#0: *3 http script regex: "sess_id" 2011/11/09 21:14:03 [notice] 6397#0: *3 "sess_id" does not match "", client: 10.214.99.16, server: *.site.com, request: "GET / HTTP/1.1", host: "www.site.com" 2011/11/09 21:14:03 [debug] 6397#0: *3 http script if 2011/11/09 21:14:03 [debug] 6397#0: *3 http script if: false Is there another way I can test for existence of cookies? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,216756,218084#msg-218084 From zjay1987 at gmail.com Wed Nov 9 14:30:57 2011 From: zjay1987 at gmail.com (li zJay) Date: Wed, 9 Nov 2011 22:30:57 +0800 Subject: Nginx HttpAccessModule command invalid when rewrite command existed Message-ID: In the following simple case: location /entry1 { allow 127.0.0.1; } -- ?? -------------- next part -------------- An HTML attachment was scrubbed... URL: From zjay1987 at gmail.com Wed Nov 9 14:35:44 2011 From: zjay1987 at gmail.com (li zJay) Date: Wed, 9 Nov 2011 22:35:44 +0800 Subject: Nginx HttpAccessModule command invalid when rewrite command existed In-Reply-To: References: Message-ID: In the following simple case: location /entry1 { allow 127.0.0.1; deny all; rewrite ***; } the allow/deny command has no effect. Is that because rewrite command works in the earlier phase? Thanks! On Wed, Nov 9, 2011 at 10:30 PM, li zJay wrote: > In the following simple case: > > location /entry1 { > allow 127.0.0.1; > > } > > -- > > > -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igor at sysoev.ru Wed Nov 9 14:42:59 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 9 Nov 2011 18:42:59 +0400 Subject: Nginx HttpAccessModule command invalid when rewrite command existed In-Reply-To: References: Message-ID: <20111109144259.GF42247@nginx.com> On Wed, Nov 09, 2011 at 10:35:44PM +0800, li zJay wrote: > In the following simple case: > > location /entry1 { > allow 127.0.0.1; > deny all; > rewrite ***; > } > > the allow/deny command has no effect. Is that because rewrite command works > in the earlier phase? Yes, rewrites run before allow/deny. -- Igor Sysoev From ianevans at digitalhit.com Wed Nov 9 14:50:05 2011 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 09 Nov 2011 09:50:05 -0500 Subject: Mailing list software that plays nice with nginx - locations help In-Reply-To: References: <9ff9e5633d4d8a7efa6229b329e6f092.squirrel@www.digitalhit.com> Message-ID: <4EBA931D.8070301@digitalhit.com> On 08/11/2011 1:08 PM, Ian M. Evans wrote: > Anyway, the settings there seem to be if the list software is installed at > the document root. > How would I change the locations if the software is installed at /lists? > [I suck at locations and haven't had much sleep!] Here are the locations on the wiki: location ~* \.(txt|log|inc)$ { location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { location /config { location ~* (index\.php|upload\.php|connector\.php|dl\.php|ut\.php|lt\.php|download\.php)$ { fastcgi_split_path_info ^(.|\.php)(/.+)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; [etc] } location ~ \.php$ { If phplists is being installed in /lists and not the root as in the wiki example (http://wiki.nginx.org/PHPList), how would I change these locations? Would it be location ~* /lists\.(txt|log|inc)$ { etc? And how would I change fastcgi_split_path_info ^(.|\.php)(/.+)$; and fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; Thanks. From zjay1987 at gmail.com Wed Nov 9 14:58:13 2011 From: zjay1987 at gmail.com (li zJay) Date: Wed, 9 Nov 2011 22:58:13 +0800 Subject: Nginx HttpAccessModule command invalid when rewrite command existed In-Reply-To: <20111109144259.GF42247@nginx.com> References: <20111109144259.GF42247@nginx.com> Message-ID: Thanks Igor. That is OK, and I had a think and use the following commands instead: location /entry1 { if ( ! $remote_addr ~ "^(127\.0\.0|10\.10\.10)" ) { return 403; } rewrite *** } On Wed, Nov 9, 2011 at 10:42 PM, Igor Sysoev wrote: > On Wed, Nov 09, 2011 at 10:35:44PM +0800, li zJay wrote: > > In the following simple case: > > > > location /entry1 { > > allow 127.0.0.1; > > deny all; > > rewrite ***; > > } > > > > the allow/deny command has no effect. Is that because rewrite command > works > > in the earlier phase? > > Yes, rewrites run before allow/deny. > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Nov 9 19:09:29 2011 From: nginx-forum at nginx.us (dbanks) Date: Wed, 09 Nov 2011 14:09:29 -0500 Subject: gzip - unexplained side effects In-Reply-To: <20111109124133.GB42247@nginx.com> References: <20111109124133.GB42247@nginx.com> Message-ID: Hi Igor, thanks for your response! >>If you comment gzip_buffers, you see the previous site state ? If I comment gzip_buffers, gzip_min_length, and gzip_http_version, I see the previous (desired) behavior. 
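One detail worth double-checking here, in case it is not just a copy-paste artifact: the gzip_disable line in the configuration quoted earlier has no trailing semicolon, so nginx would read the tokens of the next uncommented directive as extra gzip_disable patterns, and toggling those commented lines would then change which following directive gets swallowed (for example, whether keepalive_timeout 6 takes effect at all). A conservative variant along the lines already suggested in this thread, shown only as a sketch (the types and the level are illustrative):

    gzip              on;
    gzip_comp_level   1;    # the default level; far cheaper than 9 for little size difference
    gzip_types        text/plain text/javascript application/x-javascript;
    gzip_disable      "MSIE [1-6]\.(?!.*SV1)";    # note the terminating semicolon
    # gzip_buffers left at the built-in default of 32 4k

    keepalive_timeout  6;
    keepalive_requests 4;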
>>What is typical uncompressed and compressed response size ? Responses are 6.8k or smaller uncompressed, 2.6k compressed (gzip_comp_level=9), 2.8k compressed (gzip_comp_level=1). About half of the responses are this size, and the other half are less than 1k uncompressed. We currently have more than adequate CPU and want to minimize bandwidth costs, so I had assumed that more compression was better. Is there another reason that I should stay with the default gzip_comp_level=1? (I'm happy to try it--just curious.) Cheers, Dean Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218044,218117#msg-218117 From nginx-forum at nginx.us Thu Nov 10 07:13:17 2011 From: nginx-forum at nginx.us (wangbin579) Date: Thu, 10 Nov 2011 02:13:17 -0500 Subject: Tcpcopy,an online request replication tool fit for nginx In-Reply-To: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> References: <141f5701aa1bdc50e4d7a29c237508a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <18a32992e0352db3ca2a68e1370790c6.NginxMailingListEnglish@forum.nginx.org> tcpcopy-0.2.1.tar.gz is now available. Try it and I believe you will like this tool Posted at Nginx Forum: http://forum.nginx.org/read.php?2,217680,218138#msg-218138 From nginx-forum at nginx.us Thu Nov 10 07:50:03 2011 From: nginx-forum at nginx.us (lpugoy) Date: Thu, 10 Nov 2011 02:50:03 -0500 Subject: Proxy cache for php site In-Reply-To: <865ea5535942c65c4bd29e049108398d.NginxMailingListEnglish@forum.nginx.org> References: <20111108213052.GK95664@mdounin.ru> <6a7d9d2ee046df8d9c9cfdee121e16d8.NginxMailingListEnglish@forum.nginx.org> <865ea5535942c65c4bd29e049108398d.NginxMailingListEnglish@forum.nginx.org> Message-ID: > An update. I don't think that the if statement is > being entered. There are these messages in the > debug log > > > 2011/11/09 21:14:03 [debug] 6397#0: *3 http script > var > 2011/11/09 21:14:03 [debug] 6397#0: *3 http script > regex: "sess_id" > 2011/11/09 21:14:03 [notice] 6397#0: *3 "sess_id" > does not match "", client: 10.214.99.16, server: > *.site.com, request: "GET / HTTP/1.1", host: > "www.site.com" > 2011/11/09 21:14:03 [debug] 6397#0: *3 http script > if > 2011/11/09 21:14:03 [debug] 6397#0: *3 http script > if: false > > Is there another way I can test for existence of > cookies? The problem is apparently with my method of passing cookies. curl -b does not work. curl -b sess_id=... does. It's now working again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,216756,218143#msg-218143 From nginx-forum at nginx.us Thu Nov 10 11:12:54 2011 From: nginx-forum at nginx.us (etrader) Date: Thu, 10 Nov 2011 06:12:54 -0500 Subject: Reducing Connect time for static files Message-ID: Exploring the webpage load by pingdom service, I just found that the main delay of page load is due to slow load of images. The images come from a cookieless domain. 
I changed the nginx conf for that domain as location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf|wma|wmv)$ { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; access_log off; } It did speed up the load of images a lot; but still, the load of images is not fast enough Connect: 100 ms Send: 1 ms Wait 200 ms Receive: 100 ms Receive is controlled by the data transfer, but I believe it is possible to reduce the length of connect and wait time by nginx configuration. Any idea? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218159,218159#msg-218159 From r at roze.lv Thu Nov 10 14:27:14 2011 From: r at roze.lv (Reinis Rozitis) Date: Thu, 10 Nov 2011 16:27:14 +0200 Subject: proxy_store HEAD Message-ID: Hello, is there a (simple) way to make 'proxy_store on' store the files also on HEAD requests instead of just returning the response from upstream? I understand the reasons behind the current behaviour but I'm trying to build some sort of synchronisation between few nginx boxes without the need to actually download the files to the client - so far the only option seems to use 'proxy_ignore_client_abort' make a GET request and forcefully abort it (since even ranged 1 byte requests don't store the file on the "proxying" host). p.s. it seems that wiki is a bit outdated and 'proxy_cache_methods' is deprecated/notexisting at least in 1.1.x tree rr From mdounin at mdounin.ru Thu Nov 10 15:55:42 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Nov 2011 19:55:42 +0400 Subject: proxy_store HEAD In-Reply-To: References: Message-ID: <20111110155542.GU95664@mdounin.ru> Hello! On Thu, Nov 10, 2011 at 04:27:14PM +0200, Reinis Rozitis wrote: > Hello, > is there a (simple) way to make 'proxy_store on' store the files > also on HEAD requests instead of just returning the response from > upstream? > > I understand the reasons behind the current behaviour but I'm trying > to build some sort of synchronisation between few nginx boxes > without the need to actually download the files to the client - so > far the only option seems to use 'proxy_ignore_client_abort' make a > GET request and forcefully abort it (since even ranged 1 byte > requests don't store the file on the "proxying" host). You may try proxy_method GET; in a server/location dedicated for synchronization. It will cause nginx to use GET requests to upstream (though it won't sent anything to client as long as original request was HEAD one). This is basically identical to what proxy_cache does with HEAD requests. BTW, proxy_ignore_client_abort is meaningless with proxy_store, as proxy_store activates the same logic unconditionally. > p.s. it seems that wiki is a bit outdated and > 'proxy_cache_methods' is deprecated/notexisting at least in > 1.1.x tree The "proxy_cache_methods" directive exists and not deprecated. It's for proxy_cache though, not proxy_store. 
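A rough sketch of such a dedicated synchronization location; the upstream host and the local mirror directory below are made-up placeholders, not taken from this thread:

    location /mirror/ {
        proxy_pass http://origin.example.com;
        # issue GET to the upstream even when the client only sent HEAD
        proxy_method GET;
        # keep a copy of the fetched file on the local filesystem
        proxy_store /data/mirror$uri;
        proxy_store_access user:rw group:r all:r;
    }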
Maxim Dounin From r at roze.lv Thu Nov 10 16:20:02 2011 From: r at roze.lv (Reinis Rozitis) Date: Thu, 10 Nov 2011 18:20:02 +0200 Subject: proxy_store HEAD In-Reply-To: <20111110155542.GU95664@mdounin.ru> References: <20111110155542.GU95664@mdounin.ru> Message-ID: <201D4C9150264FEAA3CEE70940A13360@DD21> >You may try > proxy_method GET; Thx Maxim, works like a charm > The "proxy_cache_methods" directive exists and not deprecated. It's for proxy_cache though, not proxy_store. My fault, I had my own typical nginx istallation with --without-http-cache also quick look at Igors docs http://nginx.org/ru/docs/http/ngx_http_proxy_module.html didn't show directive so came to wrong conclusion after experimenting with the config and nginx throwing 'unknown directive' at startup. rr From nginx-forum at nginx.us Thu Nov 10 16:47:52 2011 From: nginx-forum at nginx.us (ceh329) Date: Thu, 10 Nov 2011 11:47:52 -0500 Subject: Alter Config On Startup Message-ID: Hello, I have a SSL section in my nginx.conf file and I want to start up nginx without SSL enabled. Is this possible without removing the SSL info in the conf file? Thanks, Charlie Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218180,218180#msg-218180 From lvella at gmail.com Thu Nov 10 17:17:32 2011 From: lvella at gmail.com (Lucas Clemente Vella) Date: Thu, 10 Nov 2011 15:17:32 -0200 Subject: Trac authentication failing Message-ID: I am running Trac inside uWSGI, with nginx as front-end server. I have also set up basic http authentication in nginx. Now, theoretically, when authenticated in nginx, Trac should recognize the user and display its name, identify the new tickets, etc. But that is not happening. How can make sure the authentication information is being sent to uWSGI and received by Trac? -- Lucas Clemente Vella lvella at gmail.com From roberto at unbit.it Thu Nov 10 18:45:54 2011 From: roberto at unbit.it (Roberto De Ioris) Date: Thu, 10 Nov 2011 19:45:54 +0100 (CET) Subject: Trac authentication failing In-Reply-To: References: Message-ID: <7a243fb4ebd93ec437f66af9b028b74e.squirrel@manage.unbit.it> > I am running Trac inside uWSGI, with nginx as front-end server. I have > also set up basic http authentication in nginx. Now, theoretically, > when authenticated in nginx, Trac should recognize the user and > display its name, identify the new tickets, etc. But that is not > happening. > > How can make sure the authentication information is being sent to > uWSGI and received by Trac? Be sure to have uwsgi_param REMOTE_USER $remote_user in your config or includes file -- Roberto De Ioris http://unbit.it From lvella at gmail.com Thu Nov 10 23:30:13 2011 From: lvella at gmail.com (Lucas Clemente Vella) Date: Thu, 10 Nov 2011 21:30:13 -0200 Subject: Trac authentication failing In-Reply-To: <7a243fb4ebd93ec437f66af9b028b74e.squirrel@manage.unbit.it> References: <7a243fb4ebd93ec437f66af9b028b74e.squirrel@manage.unbit.it> Message-ID: 2011/11/10 Roberto De Ioris : > Be sure to have > > uwsgi_param REMOTE_USER $remote_user > > in your config or includes file Like a charm. Thanks! -- Lucas Clemente Vella lvella at gmail.com From appa at perusio.net Fri Nov 11 00:31:01 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 11 Nov 2011 00:31:01 +0000 Subject: Alter Config On Startup In-Reply-To: References: Message-ID: <878vnnmrei.wl%appa@perusio.net> On 10 Nov 2011 16h47 WET, nginx-forum at nginx.us wrote: > Hello, I have a SSL section in my nginx.conf file and I want to > start up nginx without SSL enabled. 
Is this possible without > removing the SSL info in the conf file? Yes. You'll have to: 1. Create a script that enables the SSL part. The most viable way seems to be through the include directive. 2. Reload nginx. 3. Done. Using sed supposing that you start Nginx without SSL enabled. The SSL config is in a file sslhost.conf. In the file you have the regular HTTP config: server { # HTTP (no SSL) } ##SSL include sslhost.conf; sed -i 's/##SSL //' service nginx reload Now the SSL config is active. --- appa From nginx-forum at nginx.us Fri Nov 11 08:56:09 2011 From: nginx-forum at nginx.us (zhenwei) Date: Fri, 11 Nov 2011 03:56:09 -0500 Subject: error: deadlock happened in ngx_http_file_cache_expire() Message-ID: {{{ proxy_cache_path /nginx/cache/three levels=1:1:2 keys_zone=three:1000m; proxy_cache_valid 200 304 302 60m; }}} Nginx version: 1.0.1 nginx: configure arguments: --with-pcre --group=admin --user=admin --prefix=/var/www/nginx --with-http_ssl_module --with-http_perl_module --with-http_stub_status_module --with-perl_modules_path=./src/http/modules/perl/ --add-module=../ngx_cache_purge The cache entries timeout is set as 60 minutes, on production environment(for website hosting) service became unavailable after running several hours. After dive into backtraces and source code, it's a deadlock issue on reload nginx(reload occurs every several minutes), here is the bt information. {{{ root 28779 0.0 0.1 1124788 6464 ? Ss 11:15 0:00 nginx: master process /var/www/nginx//sbin/nginx nobody 28781 48.8 0.2 1126424 8268 ? R 11:15 77:41 nginx: worker process is shutting down nobody 28784 47.2 0.0 1122576 3460 ? R 11:15 75:05 nginx: cache manager process nobody 28785 47.6 0.0 1122724 3496 ? R 11:15 75:45 nginx: cache loader process nobody 30156 50.8 0.2 1128800 9648 ? R 11:43 66:27 nginx: worker process nobody 30157 54.1 0.2 1128800 9648 ? R 11:43 70:46 nginx: worker process nobody 30158 54.9 0.2 1128800 9572 ? R 11:43 71:52 nginx: worker process nobody 30159 55.3 0.2 1128800 9648 ? R 11:43 72:22 nginx: worker process nobody 30160 8.5 26.3 1124788 1069964 ? R 11:43 11:12 nginx: cache manager process }}} >From the ps list Nginx is under reloading, when all worker process and a cache manager process is spinlock to get the cache, and another cache manager acquired the lock and fall into a while loop. 
{{{ #0 ngx_http_file_cache_expire (cache=0x1ea7dc00) at src/http/ngx_http_file_cache.c:1103 1103 if (fcn->deleting) { (gdb) bt #0 ngx_http_file_cache_expire (cache=0x1ea7dc00) at src/http/ngx_http_file_cache.c:1103 #1 0x0000000000474e8b in ngx_http_file_cache_manager (data=0x1ea7dc00) at src/http/ngx_http_file_cache.c:1193 #2 0x0000000000439e9d in ngx_cache_manager_process_handler (ev=0x7fff5f5f3120) at src/os/unix/ngx_process_cycle.c:1346 #3 0x00000000004303f0 in ngx_event_expire_timers () at src/event/ngx_event_timer.c:149 #4 0x000000000042e537 in ngx_process_events_and_timers (cycle=0x1e9b5a70) at src/event/ngx_event.c:261 #5 0x0000000000439d6e in ngx_cache_manager_process_cycle (cycle=0x1e9b5a70, data=0x6c4440) at src/os/unix/ngx_process_cycle.c:1328 #6 0x00000000004361bd in ngx_spawn_process (cycle=0x1e9b5a70, proc=0x439bdc , data=0x6c4440, name=0x4abc88 "cache manager process", respawn=-4) at src/os/unix/ngx_process.c:196 #7 0x00000000004382ca in ngx_start_cache_manager_processes (cycle=0x1e9b5a70, respawn=1) at src/os/unix/ngx_process_cycle.c:398 #8 0x0000000000437dc9 in ngx_master_process_cycle (cycle=0x1e9b5a70) at src/os/unix/ngx_process_cycle.c:251 #9 0x000000000040e36c in main (argc=1, argv=0x7fff5f5f36d8) at src/core/nginx.c:405 (gdb) p fcn->deleting $1 = 1 (gdb) p wait $2 = -67 }}} Source code {{{ 1075 for ( ;; ) { 1076 1077 if (ngx_queue_empty(&cache->sh->queue)) { 1078 wait = 10; 1079 break; 1080 } 1081 1082 q = ngx_queue_last(&cache->sh->queue); 1083 1084 fcn = ngx_queue_data(q, ngx_http_file_cache_node_t, queue); 1085 1086 wait = fcn->expire - now; 1087 1088 if (wait > 0) { 1089 wait = wait > 10 ? 10 : wait; 1090 break; 1091 } ..... }}} The for loop never exit if "wait" is a negative number and unfortunately it becomes -67, I hope decrease the timeout to 10m would work around it, and it failed, either. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218202,218202#msg-218202 From mdounin at mdounin.ru Fri Nov 11 10:27:47 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Nov 2011 14:27:47 +0400 Subject: error: deadlock happened in ngx_http_file_cache_expire() In-Reply-To: References: Message-ID: <20111111102747.GA95664@mdounin.ru> Hello! On Fri, Nov 11, 2011 at 03:56:09AM -0500, zhenwei wrote: > Nginx version: 1.0.1 [...] > #0 ngx_http_file_cache_expire (cache=0x1ea7dc00) at > src/http/ngx_http_file_cache.c:1103 > 1103 if (fcn->deleting) { [...] Please upgreade, this was fixed in 1.0.5: *) Bugfix: worker processes may got caught in an endless loop during reconfiguration, if a caching was used; the bug had appeared in 0.8.48. Maxim Dounin From nginx-forum at nginx.us Fri Nov 11 11:28:00 2011 From: nginx-forum at nginx.us (zhenwei) Date: Fri, 11 Nov 2011 06:28:00 -0500 Subject: error: deadlock happened in ngx_http_file_cache_expire() In-Reply-To: <20111111102747.GA95664@mdounin.ru> References: <20111111102747.GA95664@mdounin.ru> Message-ID: <4e56ac8fb1a6d11ac985bb9dcb2c3bc2.NginxMailingListEnglish@forum.nginx.org> Wonderful, thanks a lot! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218202,218205#msg-218205 From al-nginx at none.at Fri Nov 11 12:16:25 2011 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 11 Nov 2011 13:16:25 +0100 Subject: Q: about try_files and regex location Message-ID: <48dc5cf04b3d067ab54f333ab78fdef4@none.at> Dear all, please can you help me to fix the issue with try_files an regex location, thank you. 
I use ### sbin/nginx -V nginx: nginx version: nginx/1.1.4 nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) nginx: TLS SNI support enabled nginx: configure arguments: --with-debug --with-libatomic --without-http_ssi_module --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --with-http_ssl_module --user=nginx --group=nginx --prefix=server/nginx --with-http_stub_status_module ### with the following config. ### http { server { ... location ~ ^/(share|alfresco)(/res)?(.*$) { alias /home/alfresco/alfresco-4.0.b/tomcat/webapps; try_files $uri /$1$2 /$1$3 @alfresco; } location @alfresco { proxy_pass http://alfresco; } } } ### I get the following error: ### 2011/11/11 12:58:09 [error] 5618#0: *33782 open() "/home/alfresco/alfresco-4.0.b/tomcat/webapps/share/components/document-details/document-link/home/al" failed (2: No such file or directory), client: xxx, server: xxx, request: "GET /share/components/document-details/document-links-min.js HTTP/1.1", host: "xxx", referrer: "https://xxx/share/page/site/aleks-glossar/document-details?nodeRef=workspace://SpacesStore/985297e8-ff1b-4423-b6e9-8dacc0011196" ### Could it be that try_files match on '/$1$2' but the open call get '/$1$3' or something else? attached the debug log. BR Aleks #### debug 2011/11/11 12:58:09 [debug] 5618#0: *33782 posix_memalign: 000000000CD7D350:4096 @16 2011/11/11 12:58:09 [debug] 5618#0: *33782 http process request line 2011/11/11 12:58:09 [debug] 5618#0: *33782 http request line: "GET /share/components/document-details/document-links-min.js HTTP/1.1" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http uri: "/share/components/document-details/document-links-min.js" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http args: "" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http exten: "js" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http process request header line 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Host: XXXX" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Accept: */*" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Accept-Language: de-de,de;q=0.8,en;q=0.5,en-us;q=0.3" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Accept-Encoding: gzip, deflate" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Accept-Charset: UTF-8,*" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "DNT: 1" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Referer: https://xxx/share/page/site/aleks-glossar/document-details?nodeRef=workspace://SpacesStore/985297e8-ff1b-4423-b6e9-8dacc0011196" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Cookie: JSESSIONID=C3A5E3E0985A055C732E884981544337; alfLogin=1321012572; alfUsername2="YWxyaXQ="" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header: "Connection: keep-alive" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http header done 2011/11/11 12:58:09 [debug] 5618#0: *33782 event timer del: 7: 1321012754907 2011/11/11 12:58:09 [debug] 5618#0: *33782 rewrite phase: 0 2011/11/11 12:58:09 [debug] 5618#0: *33782 test location: "/redmine" 2011/11/11 12:58:09 [debug] 5618#0: *33782 test location: ~ "^/(share|alfresco)(/res)?(.*$)" 2011/11/11 12:58:09 [debug] 5618#0: *33782 using configuration "^/(share|alfresco)(/res)?(.*$)" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http cl:-1 max:52428800 2011/11/11 12:58:09 [debug] 5618#0: *33782 rewrite phase: 2 2011/11/11 12:58:09 [debug] 5618#0: 
*33782 post rewrite phase: 3 2011/11/11 12:58:09 [debug] 5618#0: *33782 generic phase: 4 2011/11/11 12:58:09 [debug] 5618#0: *33782 generic phase: 5 2011/11/11 12:58:09 [debug] 5618#0: *33782 access phase: 6 2011/11/11 12:58:09 [debug] 5618#0: *33782 access phase: 7 2011/11/11 12:58:09 [debug] 5618#0: *33782 post access phase: 8 2011/11/11 12:58:09 [debug] 5618#0: *33782 try files phase: 9 2011/11/11 12:58:09 [debug] 5618#0: *33782 http script copy: "/home/alfresco/alfresco-4.0.b/tomcat/webapps" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http script var: "/share/components/document-details/document-links-min.js" 2011/11/11 12:58:09 [debug] 5618#0: *33782 trying to use file: "/share/components/document-details/document-links-min.js" "/home/alfresco/alfresco-4.0.b/tomcat/webapps/share/components/document-details/document-links-min.js" 2011/11/11 12:58:09 [debug] 5618#0: *33782 try file uri: "/share/components/document-details/document-links-min.js" 2011/11/11 12:58:09 [debug] 5618#0: *33782 content phase: 10 2011/11/11 12:58:09 [debug] 5618#0: *33782 content phase: 11 2011/11/11 12:58:09 [debug] 5618#0: *33782 content phase: 12 2011/11/11 12:58:09 [debug] 5618#0: *33782 http script copy: "/home/alfresco/alfresco-4.0.b/tomcat/webapps" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http filename: "/home/alfresco/alfresco-4.0.b/tomcat/webapps/share/components/document-details/document-link/home/al" 2011/11/11 12:58:09 [debug] 5618#0: *33782 add cleanup: 000000000CD7DF38 2011/11/11 12:58:09 [error] 5618#0: *33782 open() "/home/alfresco/alfresco-4.0.b/tomcat/webapps/share/components/document-details/document-link/home/al" failed (2: No such file or directory), client: xxx, server: xxx, request: "GET /share/components/document-details/document-links-min.js HTTP/1.1", host: "xxx", referrer: "https://external.none.at/share/page/site/aleks-glossar/document-details?nodeRef=workspace://SpacesStore/985297e8-ff1b-4423-b6e9-8dacc0011196" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http finalize request: 404, "/share/components/document-details/document-link/home/al?" a:1, c:1 2011/11/11 12:58:09 [debug] 5618#0: *33782 http special response: 404, "/share/components/document-details/document-link/home/al?" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http set discard body 2011/11/11 12:58:09 [debug] 5618#0: *33782 HTTP/1.1 404 Not Found Server: nginx/1.1.4 Date: Fri, 11 Nov 2011 11:58:09 GMT Content-Type: text/html Content-Length: 168 Connection: keep-alive 2011/11/11 12:58:09 [debug] 5618#0: *33782 write new buf t:1 f:0 000000000CD7DFB8, pos 000000000CD7DFB8, size: 154 file: 0, size: 0 2011/11/11 12:58:09 [debug] 5618#0: *33782 http write filter: l:0 f:0 s:154 2011/11/11 12:58:09 [debug] 5618#0: *33782 http output filter "/share/components/document-details/document-link/home/al?" 2011/11/11 12:58:09 [debug] 5618#0: *33782 http copy filter: "/share/components/document-details/document-link/home/al?" 2011/11/11 12:58:09 [debug] 5618#0: *33782 write old buf t:1 f:0 000000000CD7DFB8, pos 000000000CD7DFB8, size: 154 file: 0, size: 0 2011/11/11 12:58:09 [debug] 5618#0: *33782 write new buf t:0 f:0 0000000000000000, pos 00000000006802C0, size: 116 file: 0, size: 0 #### From ne at vbart.ru Fri Nov 11 13:15:02 2011 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Fri, 11 Nov 2011 17:15:02 +0400 Subject: Q: about try_files and regex location In-Reply-To: <48dc5cf04b3d067ab54f333ab78fdef4@none.at> References: <48dc5cf04b3d067ab54f333ab78fdef4@none.at> Message-ID: <201111111715.02934.ne@vbart.ru> On Friday 11 November 2011 16:16:25 Aleksandar Lazic wrote: [...] > ### > http { > server { > ... > location ~ ^/(share|alfresco)(/res)?(.*$) { - location ~ ^/(share|alfresco)(/res)?(.*$) { + location ~ ^/(share|alfresco)(/res)?(.*)$ { > alias /home/alfresco/alfresco-4.0.b/tomcat/webapps; [...] http://nginx.org/en/docs/http/ngx_http_core_module.html#alias The "alias" directive in your config replaces request part of path to file after successful "try_files" check. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Fri Nov 11 13:18:27 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Nov 2011 17:18:27 +0400 Subject: Q: about try_files and regex location In-Reply-To: <48dc5cf04b3d067ab54f333ab78fdef4@none.at> References: <48dc5cf04b3d067ab54f333ab78fdef4@none.at> Message-ID: <20111111131827.GG95664@mdounin.ru> Hello! On Fri, Nov 11, 2011 at 01:16:25PM +0100, Aleksandar Lazic wrote: > Dear all, > > please can you help me to fix the issue with try_files an regex > location, thank you. > > I use > > ### > sbin/nginx -V > nginx: nginx version: nginx/1.1.4 > nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) > nginx: TLS SNI support enabled > nginx: configure arguments: --with-debug --with-libatomic > --without-http_ssi_module --without-http_uwsgi_module > --without-http_scgi_module --without-http_memcached_module > --with-http_ssl_module --user=nginx --group=nginx > --prefix=server/nginx --with-http_stub_status_module > ### > > with the following config. > > ### > http { > server { > ... > location ~ ^/(share|alfresco)(/res)?(.*$) { > alias /home/alfresco/alfresco-4.0.b/tomcat/webapps; > try_files $uri /$1$2 /$1$3 @alfresco; > } > > location @alfresco { > proxy_pass http://alfresco; > } > } > } > ### > > I get the following error: > > ### > 2011/11/11 12:58:09 [error] 5618#0: *33782 open() > "/home/alfresco/alfresco-4.0.b/tomcat/webapps/share/components/document-details/document-link/home/al" > failed (2: No such file or directory), client: xxx, server: xxx, > request: "GET > /share/components/document-details/document-links-min.js HTTP/1.1", > host: "xxx", referrer: "https://xxx/share/page/site/aleks-glossar/document-details?nodeRef=workspace://SpacesStore/985297e8-ff1b-4423-b6e9-8dacc0011196" > ### > > Could it be that try_files match on '/$1$2' but the open call get > '/$1$3' or something else? > > attached the debug log. This is the bug in alias and try_files interaction. Or, more strictly, such configuration should be rejected during testing configuration as it's not really make sense: alias within regex location specifies full path to a file, and try_files is meaningless here. You probably mean to use "root" instead, i.e. location ~ ^/(share|alfresco)(/res)?(.*$) { root /home/alfresco/alfresco-4.0.b/tomcat/webapps; try_files $uri /$1$2 /$1$3 @alfresco; } Maxim Dounin From tseveendorj at gmail.com Fri Nov 11 13:17:04 2011 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Fri, 11 Nov 2011 21:17:04 +0800 Subject: Rewrite conversion In-Reply-To: <4E2CC27A.2090705@gmail.com> References: <20110723071826.GC73233@sysoev.ru> <4E2CC27A.2090705@gmail.com> Message-ID: Dear Igor, You have converted apache rewrite to nginx. I appreciated very much for you. But I need redirecting request www to non-www in my rewrite. 
Please see your converted rewrite below. Any help will be appreciated. server { ... root /path/to/files; error_page 403 /index.php?do=/public/error/403/; error_page 404 /index.php?do=/public/error/404/; location /file/ { } location /install/ { } location /design/ { } location /plugins/ { } location = /robots.txt { } location = /favicon.ico { } location / { fastcgi_pass ... include fastcgi_params; fastcgi_param SCRIPT_FILENAME /path/to/files/index.php; fastcgi_param QUERY_STRING do=$uri; } location /index.php { location = /index.php { fastcgi_pass ... include fastcgi_params; fastcgi_param SCRIPT_FILENAME /path/to/files/index.php; fastcgi_param QUERY_STRING $args; } location ~ ^/index.php(/.*)$ { fastcgi_pass ... include fastcgi_params; fastcgi_param SCRIPT_FILENAME /path/to/files/index.php; fastcgi_param QUERY_STRING do=$1; } return 404; } On 7/25/11, Tseveendorj wrote: > On 11.07.23 23:05, Edho Arief wrote: >> On Sat, Jul 23, 2011 at 9:33 PM, Tseveendorj Ochirlantuu >> wrote: >>> Dear Igor, >>> I just tested rewrite but one thing did not work. When I'm >>> accessing >>> http://www.xac.mn/index.php?do=/mytunes/view/song_40/module_popout/ this >>> popup but I got >>> >>> 404 Not Found >>> >>> ________________________________ >>> nginx/0.7.65 >>> on the screen. Above url is working on Apache with rewrite. >>> I do not know difference between these two rewrites. >>> Apache >>> RewriteRule ^(.*)$ /index.php?do=/$1 [L] >>> Nginx location ~ ^/index.php(/.*)$ { fastcgi_pass backend; >>> include fastcgi_params; fastcgi_param >>> SCRIPT_FILENAME /var/www/xac/index.php; >>> fastcgi_param QUERY_STRING do=$1; } >>> >> the url >> http://www.xac.mn/index.php?do=/mytunes/view/song_40/module_popout/ >> is handled by this location block: >> >> location = /index.php { >> fastcgi_pass ... >> include fastcgi_params; >> fastcgi_param SCRIPT_FILENAME /path/to/files/index.php; >> fastcgi_param QUERY_STRING $args; >> } >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > Really, I thought above rewrite handled by another. What do you think > this popup does not work ? > > > > From al-nginx at none.at Fri Nov 11 23:13:42 2011 From: al-nginx at none.at (Aleksandar Lazic) Date: Sat, 12 Nov 2011 00:13:42 +0100 Subject: Q: about try_files and regex location In-Reply-To: <20111111131827.GG95664@mdounin.ru> References: <48dc5cf04b3d067ab54f333ab78fdef4@none.at> <20111111131827.GG95664@mdounin.ru> Message-ID: Thanks this was the solution. Also thanks to Valentin. BR Aleks On 11.11.2011 14:18, Maxim Dounin wrote: > Hello! > > On Fri, Nov 11, 2011 at 01:16:25PM +0100, Aleksandar Lazic wrote: > >> Dear all, >> >> please can you help me to fix the issue with try_files an regex >> location, thank you. >> >> I use >> >> ### >> sbin/nginx -V >> nginx: nginx version: nginx/1.1.4 >> nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5) >> nginx: TLS SNI support enabled >> nginx: configure arguments: --with-debug --with-libatomic >> --without-http_ssi_module --without-http_uwsgi_module >> --without-http_scgi_module --without-http_memcached_module >> --with-http_ssl_module --user=nginx --group=nginx >> --prefix=server/nginx --with-http_stub_status_module >> ### >> >> with the following config. >> >> ### >> http { >> server { >> ... 
>> location ~ ^/(share|alfresco)(/res)?(.*$) { >> alias /home/alfresco/alfresco-4.0.b/tomcat/webapps; >> try_files $uri /$1$2 /$1$3 @alfresco; >> } >> >> location @alfresco { >> proxy_pass http://alfresco; >> } >> } >> } >> ### >> >> I get the following error: >> >> ### >> 2011/11/11 12:58:09 [error] 5618#0: *33782 open() >> >> "/home/alfresco/alfresco-4.0.b/tomcat/webapps/share/components/document-details/document-link/home/al" >> failed (2: No such file or directory), client: xxx, server: xxx, >> request: "GET >> /share/components/document-details/document-links-min.js HTTP/1.1", >> host: "xxx", referrer: >> "https://xxx/share/page/site/aleks-glossar/document-details?nodeRef=workspace://SpacesStore/985297e8-ff1b-4423-b6e9-8dacc0011196" >> ### >> >> Could it be that try_files match on '/$1$2' but the open call get >> '/$1$3' or something else? >> >> attached the debug log. > > This is the bug in alias and try_files interaction. Or, more > strictly, such configuration should be rejected during testing > configuration as it's not really make sense: alias within regex > location specifies full path to a file, and try_files is > meaningless here. > > You probably mean to use "root" instead, i.e. > > location ~ ^/(share|alfresco)(/res)?(.*$) { > root /home/alfresco/alfresco-4.0.b/tomcat/webapps; > try_files $uri /$1$2 /$1$3 @alfresco; > } > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Nov 12 12:05:40 2011 From: nginx-forum at nginx.us (TECK) Date: Sat, 12 Nov 2011 07:05:40 -0500 Subject: Nginx 1.0.9 reporting wrong version in PHP 5.3.8 Message-ID: <85ae2cb4a0f8b6be2a5f6ab73923ca00.NginxMailingListEnglish@forum.nginx.org> Hi, I installed Nginx 1.0.9 and when I look at the phpinfo() details, it reports as version 1.0.4. Anyone else has this issue? Regards, Floren Munteanu Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218241,218241#msg-218241 From nginx-forum at nginx.us Sat Nov 12 12:08:13 2011 From: nginx-forum at nginx.us (TECK) Date: Sat, 12 Nov 2011 07:08:13 -0500 Subject: Nginx 1.0.9 reporting wrong version in PHP 5.3.8 In-Reply-To: <85ae2cb4a0f8b6be2a5f6ab73923ca00.NginxMailingListEnglish@forum.nginx.org> References: <85ae2cb4a0f8b6be2a5f6ab73923ca00.NginxMailingListEnglish@forum.nginx.org> Message-ID: Never mind, I was on the wrong server. Sorry about that. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218241,218242#msg-218242 From nginx-forum at nginx.us Sat Nov 12 19:47:44 2011 From: nginx-forum at nginx.us (xin) Date: Sat, 12 Nov 2011 14:47:44 -0500 Subject: Rails App, Nginx, Virtual Hosts and bandwidth shaping In-Reply-To: References: Message-ID: Anton Yuzhaninov Hi! This means that EVERY virtualhost have that limit or all virtualhosts together? How I could do that to limit for example 10 virtualhosts to have all together 25mbps limited bandwidth, is it possible? 10 virtualhosts together = 25mbps limit not each.. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,265,218247#msg-218247 From ilan at time4learning.com Sun Nov 13 01:59:04 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Sat, 12 Nov 2011 20:59:04 -0500 Subject: Loading a PHP file without a query string parameter downlo Message-ID: We're upgrading our web server (physical machine). Our site works fine. However, I noticed that going to a PHP file, i.e. /index.php causes the file to download. 
Going to /index.php?asdfasdf causes the file to work correctly. Help? In our Nginx configuration we have: location ~ \.php$ { fastcgi_param HTTPS on; include fcgi; fastcgi_pass joomphp; } In our old server, this works fine and we don't have this issue. -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Nov 13 02:35:40 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 13 Nov 2011 06:35:40 +0400 Subject: Loading a PHP file without a query string parameter downlo In-Reply-To: References: Message-ID: <20111113023534.GL95664@mdounin.ru> Hello! On Sat, Nov 12, 2011 at 08:59:04PM -0500, Ilan Berkner wrote: > We're upgrading our web server (physical machine). Our site works fine. > > However, I noticed that going to a PHP file, i.e. /index.php causes the > file to download. Going to /index.php?asdfasdf causes the file to work > correctly. Help? > > In our Nginx configuration we have: > > location ~ \.php$ > { > fastcgi_param HTTPS on; > include fcgi; > fastcgi_pass joomphp; > } > > In our old server, this works fine and we don't have this issue. Browser cache? Maxim Dounin From ilan at time4learning.com Sun Nov 13 02:41:43 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Sat, 12 Nov 2011 21:41:43 -0500 Subject: Loading a PHP file without a query string parameter downlo In-Reply-To: <20111113023534.GL95664@mdounin.ru> References: <20111113023534.GL95664@mdounin.ru> Message-ID: That's what it was, thanks. On Sat, Nov 12, 2011 at 9:35 PM, Maxim Dounin wrote: > Hello! > > On Sat, Nov 12, 2011 at 08:59:04PM -0500, Ilan Berkner wrote: > > > We're upgrading our web server (physical machine). Our site works fine. > > > > However, I noticed that going to a PHP file, i.e. /index.php causes the > > file to download. Going to /index.php?asdfasdf causes the file to work > > correctly. Help? > > > > In our Nginx configuration we have: > > > > location ~ \.php$ > > { > > fastcgi_param HTTPS on; > > include fcgi; > > fastcgi_pass joomphp; > > } > > > > In our old server, this works fine and we don't have this issue. > > Browser cache? > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilan at time4learning.com Sun Nov 13 06:58:39 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Sun, 13 Nov 2011 01:58:39 -0500 Subject: Can anyone explain what is nginx.old? 
Message-ID: I'm seeing some warnings in root log files regarding excessive processes for nginx.old... -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Sun Nov 13 08:12:07 2011 From: ne at vbart.ru (Valentin V. Bartenev) Date: Sun, 13 Nov 2011 12:12:07 +0400 Subject: Can anyone explain what is nginx.old? In-Reply-To: References: Message-ID: <201111131212.07416.ne@vbart.ru> On Sunday 13 November 2011 10:58:39 Ilan Berkner wrote: > I'm seeing some warnings in root log files regarding excessive processes > for nginx.old... When you do "make install" to some destination directory, and installation script finds out that there is an old "nginx" binary, then it renames the old binary to the "nginx.old", before copying the new one. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sun Nov 13 12:06:02 2011 From: nginx-forum at nginx.us (Long Wan) Date: Sun, 13 Nov 2011 07:06:02 -0500 Subject: nginx worker process hang,cpu load 100% Message-ID: Hi, I have faced a trouble with nginx runs as a http revers proxy server,the worker process sometimes hanging there, cpu usage up to 100%,it's never recovey until i kill the process,below is the detail informations: system environment: [root at host-22 ~]# lsb_release -a LSB Version: :core-3.1-amd64:core-3.1-ia32:core-3.1-noarch:graphics-3.1-amd64:graphics-3.1-ia32:graphics-3.1-noarch Distributor ID: CentOS Description: CentOS release 5.5 (Final) Release: 5.5 Codename: Final [root at host-22 ~]# [root at host-22 ~]# uname -a Linux host-22 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux nginx version: [root at host-22 ~]# /usr/local/nginx/sbin/nginx -V nginx: nginx version: nginx/1.0.4 nginx: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-46) nginx: TLS SNI support disabled nginx: configure arguments: --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-openssl-opt=enable-tlsext --with-http_sub_module --with-cc-opt=-O2 --with-cpu-opt=opteron [root at host-22 ~]# (also tested under 1.0.6 and 1.0.9,have the same problem) nginx config( nginx runs as a http revers proxy server): worker_processes 8; events { use epoll; worker_connections 5120; } http { sendfile on; keepalive_timeout 15; ... upstream 2012_servers { server 10.0.7.5:80 max_fails=2 fail_timeout=30s; server 10.0.7.6:80 max_fails=2 fail_timeout=30s; server 10.0.7.7:80 max_fails=2 fail_timeout=30s; server 10.0.7.8:80 max_fails=2 fail_timeout=30s; } server { listen 80; server_name test.2012.com ; ... location / { include proxy.conf; proxy_pass http://2012_servers; } ... } trouble: [root at host-22 ~]# ps aux|grep -e CPU -e nginx USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 936 0.0 0.0 49328 7572 ? Ss Nov11 2:37 nginx: master process /usr/local/nginx/sbin/nginx www 1130 99.9 0.0 55764 13472 ? R Nov11 2664:28 nginx: worker process www 1216 99.9 0.0 53668 11092 ? R Nov11 2660:23 nginx: worker process www 31057 0.0 0.0 50816 8820 ? 
S 19:40 0:00 nginx: worker process www 31058 0.0 0.0 50816 8820 ? S 19:40 0:00 nginx: worker process www 31059 0.0 0.0 50816 8820 ? S 19:40 0:00 nginx: worker process www 31060 0.0 0.0 50816 8820 ? S 19:40 0:00 nginx: worker process www 31061 0.0 0.0 50816 8820 ? S 19:40 0:00 nginx: worker process www 31062 0.8 0.0 50816 8820 ? S 19:40 0:00 nginx: worker process www 31063 0.1 0.0 50816 9012 ? S 19:40 0:00 nginx: worker process www 31064 0.2 0.0 50816 8820 ? S 19:40 0:00 nginx: worker process two nginx worker processes(pid 1130,1216) are hanging. there is nothing significant message i can found in error.log or strace (-p 1130|1216). Grateful for any advice. thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218259,218259#msg-218259 From mdounin at mdounin.ru Sun Nov 13 12:35:30 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 13 Nov 2011 16:35:30 +0400 Subject: nginx worker process hang,cpu load 100% In-Reply-To: References: Message-ID: <20111113123530.GM95664@mdounin.ru> Hello! On Sun, Nov 13, 2011 at 07:06:02AM -0500, Long Wan wrote: > I have faced a trouble with nginx runs as a http revers proxy server,the > worker process sometimes hanging there, cpu usage up to 100%,it's never > recovey until i kill the process,below is the detail informations: [...] > trouble: > [root at host-22 ~]# ps aux|grep -e CPU -e nginx > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME > COMMAND > root 936 0.0 0.0 49328 7572 ? Ss Nov11 2:37 nginx: > master process /usr/local/nginx/sbin/nginx > www 1130 99.9 0.0 55764 13472 ? R Nov11 2664:28 nginx: > worker process > www 1216 99.9 0.0 53668 11092 ? R Nov11 2660:23 nginx: > worker process > www 31057 0.0 0.0 50816 8820 ? S 19:40 0:00 nginx: > worker process [...] > two nginx worker processes(pid 1130,1216) are hanging. there is nothing > significant message i can found in error.log or strace (-p 1130|1216). Please try attaching to a runaway process with gdb and check where it loops, i.e. gdb /path/to/nginx bt n ... (repeat 'n' several times to see loop) Maxim Dounin From nginx-forum at nginx.us Sun Nov 13 15:10:29 2011 From: nginx-forum at nginx.us (zhenwei) Date: Sun, 13 Nov 2011 10:10:29 -0500 Subject: what's the difference between proxy_store and proxy_cache? Message-ID: >From Nginx official site there are two method to store remote upstream on local file system, "proxy_cache" and "proxy_store", the document describes clearly on how to configure, but still I have problem on the differences. Months ago I did some test over the both and found proxy_cache owns far better performance, and I thought proxy_cache is "cache in memory", recently after diving into source code, I recognized both are file based, so i'm not sure if it could afford high connection where disk probably becomes the bottleneck. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218263,218263#msg-218263 From ilan at time4learning.com Sun Nov 13 15:57:53 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Sun, 13 Nov 2011 10:57:53 -0500 Subject: Match all requests Message-ID: I have this location configuration: location / { index maintenance.htm; error_page 404 = maintenance.htm; log_not_found off; } which I thought captures all requests, however, entering "/index.php" for example, causes the file to be downloaded instead of going to the "maintenance.htm" file. How can I capture all requests? -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. 
Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lvella at gmail.com Sun Nov 13 16:30:44 2011 From: lvella at gmail.com (Lucas Clemente Vella) Date: Sun, 13 Nov 2011 14:30:44 -0200 Subject: Match all requests In-Reply-To: References: Message-ID: 2011/11/13 Ilan Berkner > I have this location configuration: > > location / > { > index maintenance.htm; > error_page 404 = maintenance.htm; > log_not_found off; > } > > which I thought captures all requests, however, entering "/index.php" for > example, causes the file to be downloaded instead of going to the > "maintenance.htm" file. How can I capture all requests? > Maybe if you put like this before the other locations: location ~ .*$ { index maintenance.htm; error_page 404 = maintenance.htm; log_not_found off; } -- Lucas Clemente Vella lvella at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Sun Nov 13 16:32:46 2011 From: ne at vbart.ru (Valentin V. Bartenev) Date: Sun, 13 Nov 2011 20:32:46 +0400 Subject: Match all requests In-Reply-To: References: Message-ID: <201111132032.46316.ne@vbart.ru> On Sunday 13 November 2011 19:57:53 Ilan Berkner wrote: > I have this location configuration: > > location / > { This: > index maintenance.htm; only captures requests to directories (i.e., ends with a /). This: > error_page 404 = maintenance.htm; only captures 404 responses. > log_not_found off; > } > > which I thought captures all requests, however, entering "/index.php" for > example, causes the file to be downloaded instead of going to the > "maintenance.htm" file. How can I capture all requests? Probably, you want something like this: error_page 404 = /maintenance.htm; location / { return 404; } location = /maintenance.htm {} wbr, Valentin V. Bartenev From igor at sysoev.ru Sun Nov 13 16:44:04 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Sun, 13 Nov 2011 20:44:04 +0400 Subject: Match all requests In-Reply-To: References: Message-ID: <20111113164404.GA96350@nginx.com> On Sun, Nov 13, 2011 at 10:57:53AM -0500, Ilan Berkner wrote: > I have this location configuration: > > location / > { > index maintenance.htm; > error_page 404 = maintenance.htm; > log_not_found off; > } > > which I thought captures all requests, however, entering "/index.php" for > example, causes the file to be downloaded instead of going to the > "maintenance.htm" file. How can I capture all requests? location / { try_files /maintenance.htm =404; } -- Igor Sysoev From mdounin at mdounin.ru Sun Nov 13 17:04:33 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 13 Nov 2011 21:04:33 +0400 Subject: what's the difference between proxy_store and proxy_cache? In-Reply-To: References: Message-ID: <20111113170433.GN95664@mdounin.ru> Hello! On Sun, Nov 13, 2011 at 10:10:29AM -0500, zhenwei wrote: > From Nginx official site there are two method to store remote upstream > on local file system, "proxy_cache" and "proxy_store", the document > describes clearly on how to configure, but still I have problem on the > differences. 
Months ago I did some test over the both and found > proxy_cache owns far better performance, and I thought proxy_cache is > "cache in memory", recently after diving into source code, I recognized > both are file based, so i'm not sure if it could afford high connection > where disk probably becomes the bottleneck. "proxy_cache" is general-purpose cache with automatic lookups before proxy_pass, expiration support and so on. It is usually what you need if you need caching capabilities. "proxy_store" is just a method to store proxied files on disk. It may be used to construct cache-like setups (usually involving try_files and/or error_page-based fallback), though it's up to you to implement any required logic. Maxim Dounin From vbart at nginx.com Sun Nov 13 16:50:05 2011 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 13 Nov 2011 20:50:05 +0400 Subject: what's the difference between proxy_store and proxy_cache? In-Reply-To: References: Message-ID: <201111132050.06153.vbart@nginx.com> On Sunday 13 November 2011 19:10:29 zhenwei wrote: > From Nginx official site there are two method to store remote upstream > on local file system, "proxy_cache" and "proxy_store", the document > describes clearly on how to configure, but still I have problem on the > differences. The "proxy_store" just stores backend's responses to a defined path. It's totally up to you, what to do with these files after they were stored. The "proxy_cache" alone doesn't do anything. But with other proxy_cache_* directives, you can setup a file cache with key, life time, etc. wbr, Valentin V. Bartenev From ilan at time4learning.com Sun Nov 13 21:38:53 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Sun, 13 Nov 2011 16:38:53 -0500 Subject: Match all requests In-Reply-To: <20111113164404.GA96350@nginx.com> References: <20111113164404.GA96350@nginx.com> Message-ID: Igor, Thanks, but the code you provided still causes the domain.com/index.phpfile to be downloaded instead of the maintenance page to show up? On Sun, Nov 13, 2011 at 11:44 AM, Igor Sysoev wrote: > On Sun, Nov 13, 2011 at 10:57:53AM -0500, Ilan Berkner wrote: > > I have this location configuration: > > > > location / > > { > > index maintenance.htm; > > error_page 404 = maintenance.htm; > > log_not_found off; > > } > > > > which I thought captures all requests, however, entering "/index.php" for > > example, causes the file to be downloaded instead of going to the > > "maintenance.htm" file. How can I capture all requests? > > location / { > try_files /maintenance.htm =404; > } > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... 
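To make the distinction Maxim and Valentin describe above concrete, a minimal sketch follows; the paths, the "demo" zone name and the 127.0.0.1:8081 backend are placeholders invented for illustration, not taken from this thread, and the fragment is only a sketch of the idea, not a drop-in configuration.

    # inside the http{} block
    proxy_cache_path  /var/cache/nginx  levels=1:2  keys_zone=demo:32m
                      max_size=1g  inactive=60m;

    server {
        listen 8080;

        # proxy_cache: nginx itself looks the response up before proxy_pass,
        # honours proxy_cache_valid lifetimes and evicts old entries
        location /cached/ {
            proxy_cache        demo;
            proxy_cache_valid  200 302 10m;
            proxy_pass         http://127.0.0.1:8081;
        }

        # proxy_store: responses are merely mirrored to disk; serving the
        # stored copy (and deleting it later) is left to the configuration
        location /stored/ {
            root       /var/www/mirror;
            try_files  $uri @fetch;
        }

        location @fetch {
            proxy_pass          http://127.0.0.1:8081;
            proxy_store         /var/www/mirror$uri;
            proxy_store_access  user:rw group:r all:r;
        }
    }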
URL: From jabberuser at gmail.com Sun Nov 13 21:46:53 2011 From: jabberuser at gmail.com (Piotr Karbowski) Date: Sun, 13 Nov 2011 22:46:53 +0100 Subject: Match all requests In-Reply-To: References: <20111113164404.GA96350@nginx.com> Message-ID: <4EC03ACD.7080500@gmail.com> On 13.11.2011 22:38, Ilan Berkner wrote: > Thanks, but the code you provided still causes the > domain.com/index.phpfile to be downloaded instead of the maintenance > page to show up? Because you use location .php$ for php, I presume so you need add there something like 'try_files /maintenance.htm $uri =404; There = to the php's location block. You may need read http://wiki.nginx.org/NginxHttpCoreModule#location -- Piotr. From nginx-forum at nginx.us Mon Nov 14 01:28:49 2011 From: nginx-forum at nginx.us (Long Wan) Date: Sun, 13 Nov 2011 20:28:49 -0500 Subject: nginx worker process hang,cpu load 100% In-Reply-To: References: Message-ID: <8be64fff352bfff2ab00fc62646fab46.NginxMailingListEnglish@forum.nginx.org> Hello Maxim,Thanks for your reply, I tried gdb as you tolde me , it reported something : [root at host-22 ~]# gdb /usr/local/nginx/sbin/nginx 1130 GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-23.el5) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /usr/local/nginx/sbin/nginx...done. Attaching to program: /usr/local/nginx/sbin/nginx, process 1130 Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done. [Thread debugging using libthread_db enabled] Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libcrypt.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libcrypt.so.1 Reading symbols from /lib64/libpcre.so.0...(no debugging symbols found)...done. Loaded symbols for /lib64/libpcre.so.0 Reading symbols from /lib64/libssl.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libssl.so.6 Reading symbols from /lib64/libcrypto.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libcrypto.so.6 Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /usr/lib64/libz.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libz.so.1 Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /usr/lib64/libgssapi_krb5.so.2...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libgssapi_krb5.so.2 Reading symbols from /usr/lib64/libkrb5.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libkrb5.so.3 Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libcom_err.so.2 Reading symbols from /usr/lib64/libk5crypto.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libk5crypto.so.3 Reading symbols from /usr/lib64/libkrb5support.so.0...(no debugging symbols found)...done. 
Loaded symbols for /usr/lib64/libkrb5support.so.0 Reading symbols from /lib64/libkeyutils.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libkeyutils.so.1 Reading symbols from /lib64/libresolv.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libresolv.so.2 Reading symbols from /lib64/libselinux.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libselinux.so.1 Reading symbols from /lib64/libsepol.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libsepol.so.1 Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libnss_files.so.2 ngx_http_upstream_get_round_robin_peer (pc=0x78166f0, data=) at src/http/ngx_http_upstream_round_robin.c:413 413 src/http/ngx_http_upstream_round_robin.c: No such file or directory. in src/http/ngx_http_upstream_round_robin.c (gdb) bt #0 ngx_http_upstream_get_round_robin_peer (pc=0x78166f0, data=) at src/http/ngx_http_upstream_round_robin.c:413 #1 0x000000000041a8fc in ngx_event_connect_peer (pc=0x78166f0) at src/event/ngx_event_connect.c:24 #2 0x000000000043d1e8 in ngx_http_upstream_connect (r=0x7cf6310, u=0x78166e0) at src/http/ngx_http_upstream.c:1089 #3 0x000000000043ea3a in ngx_http_upstream_init_request (r=0x7cf6310) at src/http/ngx_http_upstream.c:628 #4 0x0000000000435185 in ngx_http_read_client_request_body (r=0x7cf6310, post_handler=0x43eec0 ) at src/http/ngx_http_request_body.c:153 #5 0x0000000000456b46 in ngx_http_proxy_handler (r=0x7cf6310) at src/http/modules/ngx_http_proxy_module.c:617 #6 0x000000000042b15c in ngx_http_core_content_phase (r=0x7cf6310, ph=0x7b5a4e0) at src/http/ngx_http_core_module.c:1339 #7 0x0000000000426817 in ngx_http_core_run_phases (r=0x7cf6310) at src/http/ngx_http_core_module.c:837 #8 0x000000000042f6d6 in ngx_http_process_request (r=0x7cf6310) at src/http/ngx_http_request.c:1650 #9 0x0000000000430314 in ngx_http_process_request_line (rev=0x7c65578) at src/http/ngx_http_request.c:893 #10 0x0000000000420a04 in ngx_epoll_process_events (cycle=, timer=, flags=) at src/event/modules/ngx_epoll_module.c:635 #11 0x0000000000419bad in ngx_process_events_and_timers (cycle=0x784e770) at src/event/ngx_event.c:245 #12 0x000000000041f528 in ngx_worker_process_cycle (cycle=0x784e770, data=) at src/os/unix/ngx_process_cycle.c:800 #13 0x000000000041dc89 in ngx_spawn_process (cycle=0x784e770, proc=0x41f470 , data=0x0, name=0x4652e1 "worker process", respawn=-4) at src/os/unix/ngx_process.c:196 #14 0x000000000041eb0b in ngx_start_worker_processes (cycle=0x784e770, n=8, type=-4) at src/os/unix/ngx_process_cycle.c:360 #15 0x000000000041fea8 in ngx_master_process_cycle (cycle=0x784e770) at src/os/unix/ngx_process_cycle.c:249 #16 0x0000000000406069 in main (argc=1, argv=) at src/core/nginx.c:405 (gdb) n there is no output when type 'n', should i recompile nginx with '--with-debug' configure option ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218259,218280#msg-218280 From nginx-forum at nginx.us Mon Nov 14 01:37:58 2011 From: nginx-forum at nginx.us (zhenwei) Date: Sun, 13 Nov 2011 20:37:58 -0500 Subject: what's the difference between proxy_store and proxy_cache? In-Reply-To: <201111132050.06153.vbart@nginx.com> References: <201111132050.06153.vbart@nginx.com> Message-ID: <87d9fadc36d30fe8b02560bd1c9720d8.NginxMailingListEnglish@forum.nginx.org> thanks, and how about the performance? 
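Since both proxy_store and proxy_cache end up writing files, the performance question above largely comes down to how much of the cache the kernel can keep in memory. One concrete form of the RAM-backed idea raised in the reply that follows is to point proxy_cache_path at a tmpfs mount; the path, zone name and sizes below are invented placeholders, only a sketch:

    # inside the http{} block; files under /dev/shm live in RAM,
    # while the keys_zone metadata sits in shared memory anyway
    proxy_cache_path  /dev/shm/nginx-cache  levels=1:2
                      keys_zone=ramcache:64m  max_size=512m  inactive=10m;

    # then, in a location: proxy_cache ramcache; plus the usual
    # proxy_cache_valid / proxy_pass directives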
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218263,218281#msg-218281 From ne at vbart.ru Mon Nov 14 05:48:42 2011 From: ne at vbart.ru (Valentin V. Bartenev) Date: Mon, 14 Nov 2011 09:48:42 +0400 Subject: what's the difference between proxy_store and proxy_cache? In-Reply-To: <87d9fadc36d30fe8b02560bd1c9720d8.NginxMailingListEnglish@forum.nginx.org> References: <201111132050.06153.vbart@nginx.com> <87d9fadc36d30fe8b02560bd1c9720d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201111140948.43044.ne@vbart.ru> On Monday 14 November 2011 05:37:58 zhenwei wrote: > thanks, and how about the performance? So, if you care about disk performance and have enough RAM, probably, you may need to tune kernel disk cache or even consider to put nginx cache on "/dev/shm". Also, you can use the memcached module for caching. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon Nov 14 06:26:43 2011 From: nginx-forum at nginx.us (amastr) Date: Mon, 14 Nov 2011 01:26:43 -0500 Subject: Handle connection abort Message-ID: <040feea69e166cf13bfadea620db5a50.NginxMailingListEnglish@forum.nginx.org> Hello, I have an nginx module that receives http request, treats it and sends response to the client host. I need to handle situation when connection between server and client is aborted. How can I do it? Thanks in advance Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218285,218285#msg-218285 From kasatkinnv at gmail.com Mon Nov 14 09:14:45 2011 From: kasatkinnv at gmail.com (kasatkinnv at gmail.com) Date: Mon, 14 Nov 2011 13:14:45 +0400 Subject: Nginx and httpfs2 Message-ID: Hi, List! Has anyone successfully used httpfs2 with nginx? I'm getting Input/Output error when trying to mount file system over HTTP from nginx server. The requirements for httpfs2: "The server must be able to send byte ranges". Does nginx have support for this? The mounting over HTTP works with Apache and lighttpd but slow. Thank you! -- Nikolay Kasatkin From mdounin at mdounin.ru Mon Nov 14 10:19:40 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Nov 2011 14:19:40 +0400 Subject: nginx worker process hang,cpu load 100% In-Reply-To: <8be64fff352bfff2ab00fc62646fab46.NginxMailingListEnglish@forum.nginx.org> References: <8be64fff352bfff2ab00fc62646fab46.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111114101940.GP95664@mdounin.ru> Hello! On Sun, Nov 13, 2011 at 08:28:49PM -0500, Long Wan wrote: > Hello Maxim,Thanks for your reply, I tried gdb as you tolde me , it > reported something : [...] > (gdb) bt > #0 ngx_http_upstream_get_round_robin_peer (pc=0x78166f0, data= optimized out>) at src/http/ngx_http_upstream_round_robin.c:413 > #1 0x000000000041a8fc in ngx_event_connect_peer (pc=0x78166f0) at > src/event/ngx_event_connect.c:24 > #2 0x000000000043d1e8 in ngx_http_upstream_connect (r=0x7cf6310, > u=0x78166e0) at src/http/ngx_http_upstream.c:1089 [...] > (gdb) n > > there is no output when type 'n', should i recompile nginx with > '--with-debug' configure option ? This looks very similar to this problem, fixed in 1.1.1/1.0.7: *) Bugfix: nginx hogged CPU if all servers in an upstream were marked as "down". Are you sure you see the same problem in 1.0.9? Maxim Dounin From ssrini_vasan at hotmail.com Mon Nov 14 13:04:39 2011 From: ssrini_vasan at hotmail.com (Srinivasan Subramanian) Date: Mon, 14 Nov 2011 18:34:39 +0530 Subject: Nginx Load balancer mode for JBoss / Icefaces application Message-ID: Hello We have setup nginx as a Loadbalancer on Centos 5.2 x64. 
nginx is acting as a LB for a web application developed using Java servlets and Icefaces (1.8.2). The web application is deployed on JBoss 5.1. After configuration nginx is able to redirect the queries to the upstream servers properly. However the session information is not being passed through or is being modified. So the Icefaces servlet is repeatedly refreshing the login page every few seconds and keeps creating new sessions. Please advise on any additional settings that need to be made. The current settings are: (nginx is running on 192.168.1.137) upstream int-lb { server 192.168.1.139:8080; server 192.168.1.138:8080;} server { listen 80; server_name int-lb; #charset koi8-r; access_log /var/log/nginx/host.access.log main; error_log /var/log/nginx/host.error.log debug; root /usr/app/jboss5/server/default/deploy/admin-console.war; location / { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://int-lb; }} Thanks in advance for all assistance. Regards Srini -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Nov 14 14:37:29 2011 From: nginx-forum at nginx.us (Long Wan) Date: Mon, 14 Nov 2011 09:37:29 -0500 Subject: nginx worker process hang,cpu load 100% In-Reply-To: References: Message-ID: <7ed1abd36d44e812afca391274f9e925.NginxMailingListEnglish@forum.nginx.org> Hello,Maxim. Thanks for you help. I reproduce the problem in nginx-1.0.9, [root at host-22 ~]# /usr/local/nginx/sbin/nginx -V nginx: nginx version: nginx/1.0.9 nginx: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-51) nginx: TLS SNI support disabled nginx: configure arguments: --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-openssl-opt=enable-tlsext --with-http_sub_module --with-cc-opt=-O2 --with-cpu-opt=opteron --add-module=../ngx_cache_purge-1.4 [root at host-22 ~]# [root at host-22 ~]# [root at host-22 ~]# ps aux|grep -e CPU -e nginx USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 569 0.0 0.0 49016 6884 ? Ss 21:59 0:00 nginx: master process /usr/local/nginx/sbin/nginx www 587 97.3 0.0 49380 7556 ? R 22:00 10:16 nginx: worker process www 588 93.8 0.0 49380 7556 ? R 22:00 9:54 nginx: worker process www 614 43.1 0.0 49412 7584 ? T 22:01 4:07 nginx: worker process root 781 0.0 0.1 95984 19684 pts/0 S+ 22:06 0:00 gdb /usr/local/nginx/sbin/nginx 614 www 876 0.0 0.0 50504 8464 ? S 22:10 0:00 nginx: worker process www 877 0.0 0.0 50504 8464 ? S 22:10 0:00 nginx: worker process www 878 0.0 0.0 50504 8464 ? S 22:10 0:00 nginx: worker process www 879 0.0 0.0 50504 8660 ? S 22:10 0:00 nginx: worker process www 880 0.0 0.0 50504 8464 ? S 22:10 0:00 nginx: worker process www 881 0.0 0.0 50504 8464 ? S 22:10 0:00 nginx: worker process www 882 0.0 0.0 50504 8464 ? S 22:10 0:00 nginx: worker process www 883 0.0 0.0 50504 8464 ? S 22:10 0:00 nginx: worker process root 954 0.0 0.0 61168 788 pts/1 S+ 22:10 0:00 grep -e CPU -e nginx [root at host-22 ~]# [root at host-22 ~]# gdb /usr/local/nginx/sbin/nginx 614 GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-23.el5) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". 
For bug reporting instructions, please see: ... Reading symbols from /usr/local/nginx/sbin/nginx...done. Attaching to program: /usr/local/nginx/sbin/nginx, process 614 Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done. [Thread debugging using libthread_db enabled] Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libcrypt.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libcrypt.so.1 Reading symbols from /lib64/libpcre.so.0...(no debugging symbols found)...done. Loaded symbols for /lib64/libpcre.so.0 Reading symbols from /lib64/libssl.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libssl.so.6 Reading symbols from /lib64/libcrypto.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libcrypto.so.6 Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /usr/lib64/libz.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libz.so.1 Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /usr/lib64/libgssapi_krb5.so.2...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libgssapi_krb5.so.2 Reading symbols from /usr/lib64/libkrb5.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libkrb5.so.3 Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libcom_err.so.2 Reading symbols from /usr/lib64/libk5crypto.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libk5crypto.so.3 Reading symbols from /usr/lib64/libkrb5support.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/libkrb5support.so.0 Reading symbols from /lib64/libkeyutils.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libkeyutils.so.1 Reading symbols from /lib64/libresolv.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libresolv.so.2 Reading symbols from /lib64/libselinux.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libselinux.so.1 Reading symbols from /lib64/libsepol.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libsepol.so.1 Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done. 
Loaded symbols for /lib64/libnss_files.so.2 ngx_http_upstream_get_peer (pc=0x54ae260, data=) at src/http/ngx_http_upstream_round_robin.c:632 632 if (reset++) { (gdb) bt #0 ngx_http_upstream_get_peer (pc=0x54ae260, data=) at src/http/ngx_http_upstream_round_robin.c:632 #1 ngx_http_upstream_get_round_robin_peer (pc=0x54ae260, data=) at src/http/ngx_http_upstream_round_robin.c:425 #2 0x000000000041a99c in ngx_event_connect_peer (pc=0x54ae960) at src/event/ngx_event_connect.c:24 #3 0x000000000043d5a8 in ngx_http_upstream_connect (r=0x54c3b30, u=0x54ae250) at src/http/ngx_http_upstream.c:1103 #4 0x000000000043ee0a in ngx_http_upstream_init_request (r=0x54c3b30) at src/http/ngx_http_upstream.c:631 #5 0x00000000004354a5 in ngx_http_read_client_request_body (r=0x54c3b30, post_handler=0x43f310 ) at src/http/ngx_http_request_body.c:154 #6 0x00000000004572d6 in ngx_http_proxy_handler (r=0x54c3b30) at src/http/modules/ngx_http_proxy_module.c:617 #7 0x000000000042b47c in ngx_http_core_content_phase (r=0x54c3b30, ph=0x583ffd8) at src/http/ngx_http_core_module.c:1365 #8 0x0000000000426967 in ngx_http_core_run_phases (r=0x54c3b30) at src/http/ngx_http_core_module.c:861 #9 0x000000000042fa66 in ngx_http_process_request (r=0x54c3b30) at src/http/ngx_http_request.c:1665 #10 0x00000000004306a4 in ngx_http_process_request_line (rev=0x5843fc0) at src/http/ngx_http_request.c:911 #11 0x0000000000419e86 in ngx_event_process_posted (cycle=, posted=0x68bd88) at src/event/ngx_event_posted.c:39 #12 0x000000000041f608 in ngx_worker_process_cycle (cycle=0x560f670, data=) at src/os/unix/ngx_process_cycle.c:801 #13 0x000000000041dd69 in ngx_spawn_process (cycle=0x560f670, proc=0x41f550 , data=0x0, name=0x466581 "worker process", respawn=-4) at src/os/unix/ngx_process.c:196 #14 0x000000000041ebeb in ngx_start_worker_processes (cycle=0x560f670, n=8, type=-4) at src/os/unix/ngx_process_cycle.c:360 #15 0x000000000041ff88 in ngx_master_process_cycle (cycle=0x560f670) at src/os/unix/ngx_process_cycle.c:249 #16 0x00000000004060d9 in main (argc=1, argv=) at src/core/nginx.c:405 (gdb) n 425 rrp->current = ngx_http_upstream_get_peer(rrp->peers); (gdb) n 435 if (!(rrp->tried[n] & m)) { (gdb) n 460 if (pc->tries == 0) { (gdb) n 464 if (--i == 0) { (gdb) n 425 rrp->current = ngx_http_upstream_get_peer(rrp->peers); (gdb) n 435 if (!(rrp->tried[n] & m)) { (gdb) n 460 if (pc->tries == 0) { (gdb) n 464 if (--i == 0) { (gdb) n 425 rrp->current = ngx_http_upstream_get_peer(rrp->peers); (gdb) n 435 if (!(rrp->tried[n] & m)) { (gdb) n 460 if (pc->tries == 0) { (gdb) n 464 if (--i == 0) { (gdb) I found i made a mistake in nginx.conf, i include a virtual host configuation like this: upstream test_servers { #server 10.0.7.4:80 ; server 10.0.7.5:80 backup; #server 10.0.7.6:80 ; #server 10.0.7.7:80 ; } server { listen 80; server_name test.org ; access_log /data1/logs/$host.access.log main; location / { include proxy.conf; proxy_pass http://test_servers; } } there was only one server in upstream,which marked 'backup'. after some test,i found this is the reason. but when i test the nginx.conf syntax by using nginx -t,the result is ok. [root at host-22 ~]# /usr/local/nginx/sbin/nginx -t nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful [root at host-22 ~]# i think nginx should warn me when that situation,haha... 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218259,218315#msg-218315 From igor at sysoev.ru Mon Nov 14 15:44:03 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 14 Nov 2011 19:44:03 +0400 Subject: nginx-1.1.8 Message-ID: <20111114154403.GF33940@nginx.com> Changes with nginx 1.1.8 14 Nov 2011 *) Change: the ngx_http_limit_zone_module was renamed to the ngx_http_limit_conn_module. *) Change: the "limit_zone" directive was superseded by the "limit_conn_zone" directive with a new syntax. *) Feature: support for multiple "limit_conn" limits on the same level. *) Feature: the "image_filter_sharpen" directive. *) Bugfix: a segmentation fault might occur in a worker process if resolver got a big DNS response. Thanks to Ben Hawkes. *) Bugfix: in cache key calculation if internal MD5 implementation was used; the bug had appeared in 1.0.4. *) Bugfix: the "If-Modified-Since", "If-Range", etc. client request header lines might be passed to backend while caching; or not passed without caching if caching was enabled in another part of the configuration. *) Bugfix: the module ngx_http_mp4_module sent incorrect "Content-Length" response header line if the "start" argument was used. Thanks to Piotr Sikora. -- Igor Sysoev From mdounin at mdounin.ru Mon Nov 14 15:58:27 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Nov 2011 19:58:27 +0400 Subject: nginx worker process hang,cpu load 100% In-Reply-To: <7ed1abd36d44e812afca391274f9e925.NginxMailingListEnglish@forum.nginx.org> References: <7ed1abd36d44e812afca391274f9e925.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111114155827.GR95664@mdounin.ru> Hello! On Mon, Nov 14, 2011 at 09:37:29AM -0500, Long Wan wrote: [...] > I found i made a mistake in nginx.conf, i include a virtual host > configuation like this: > upstream test_servers { > #server 10.0.7.4:80 ; > server 10.0.7.5:80 backup; > #server 10.0.7.6:80 ; > #server 10.0.7.7:80 ; > } [...] > there was only one server in upstream,which marked 'backup'. after some > test,i found this is the reason. Yes, thank you for report. This is somewhat known issue, 'backup' handling needs attention. Maxim Dounin From nginx-forum at nginx.us Tue Nov 15 01:56:21 2011 From: nginx-forum at nginx.us (Long Wan) Date: Mon, 14 Nov 2011 20:56:21 -0500 Subject: nginx worker process hang,cpu load 100% In-Reply-To: References: Message-ID: <800ad31679e45e06363a87a924be77e6.NginxMailingListEnglish@forum.nginx.org> Do you plan to fix this issue in next release? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218259,218342#msg-218342 From nginx-forum at nginx.us Tue Nov 15 02:30:27 2011 From: nginx-forum at nginx.us (dannynoonan) Date: Mon, 14 Nov 2011 21:30:27 -0500 Subject: compile ngx_resty to statically link some libs? 
Message-ID: Hey agentzh, could you provide tips on how I could cut down this list of dynamically linked libs: [david at dev-3 ngx_openresty-1.0.8.26]$ ldd /usr/local/encap/nginx-resty-1.0.8.26/nginx/sbin/nginx linux-vdso.so.1 => (0x00007fff079fd000) libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003aac200000) libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000003e08a00000) libssl.so.10 => /usr/lib64/libssl.so.10 (0x00000033d9c00000) libdrizzle.so.0 => /usr/lib64/libdrizzle.so.0 (0x00007fd4bcdf1000) libm.so.6 => /lib64/libm.so.6 (0x0000003e12000000) libpcre.so.0 => /lib64/libpcre.so.0 (0x0000003710800000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x0000003e13400000) libdl.so.2 => /lib64/libdl.so.2 (0x0000003e11000000) libz.so.1 => /lib64/libz.so.1 (0x0000003e11c00000) libc.so.6 => /lib64/libc.so.6 (0x0000003e10c00000) /lib64/ld-linux-x86-64.so.2 (0x0000003e10800000) libfreebl3.so => /lib64/libfreebl3.so (0x0000003e08e00000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00000033d9400000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00000033d9800000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00000033d9000000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x000000374a800000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x000000374a000000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x0000003e13800000) libresolv.so.2 => /lib64/libresolv.so.2 (0x0000003e12c00000) libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003749c00000) Specifically, I'd really like to remove libdrizzle as a dependency on the target machine and just have the compile pull it in at link time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218343,218343#msg-218343 From nbubingo at gmail.com Tue Nov 15 02:47:53 2011 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Tue, 15 Nov 2011 10:47:53 +0800 Subject: Nginx Load balancer mode for JBoss / Icefaces application In-Reply-To: References: Message-ID: ?However the session information is not being passed through or is being modified.? How do you know that? I think the problem is the session is sent to be the wrong jboss server. If the Jboss uses the similar session sticky way as Tomcat/Resin, you can use my nginx_jvm_route_module: http://code.google.com/p/nginx-upstream-jvm-route/ 2011/11/14 Srinivasan Subramanian > > > Hello > > We have setup nginx as a Loadbalancer on Centos 5.2 x64. nginx is acting > as a LB for a web application developed using Java servlets and Icefaces > (1.8.2). The web application is deployed on JBoss 5.1. After > configuration nginx is able to redirect the queries to the upstream servers > properly. However the session information is not being passed through or > is being modified. So the Icefaces servlet is repeatedly refreshing the > login page every few seconds and keeps creating new sessions. > > Please advise on any additional settings that need to be made. The > current settings are: > > (nginx is running on 192.168.1.137) > > upstream int-lb { > server 192.168.1.139:8080; > server 192.168.1.138:8080; > } > > server { > listen 80; > server_name int-lb; > > #charset koi8-r; > access_log /var/log/nginx/host.access.log main; > error_log /var/log/nginx/host.error.log debug; > root /usr/app/jboss5/server/default/deploy/admin-console.war; > > location / { > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_pass http://int-lb; > } > } > > Thanks in advance for all assistance. 
> > Regards > Srini > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssrini_vasan at hotmail.com Tue Nov 15 03:18:36 2011 From: ssrini_vasan at hotmail.com (Srinivasan Subramanian) Date: Tue, 15 Nov 2011 08:48:36 +0530 Subject: Nginx Load balancer mode for JBoss / Icefaces application In-Reply-To: References: , Message-ID: Hi You were bang on! Thanks. That was the issue. For now i introduced the ip_hask; key for now in the upstream section and that fixed it. I will try your module. We want to actually also implement the HttpUpstreamFairModule to improve the distribution. Will the sticky session module that you linked to work in conjunction with that? Thanks for your timely help, greatly appreciated. Regards Date: Tue, 15 Nov 2011 10:47:53 +0800 Subject: Re: Nginx Load balancer mode for JBoss / Icefaces application From: nbubingo at gmail.com To: nginx at nginx.org ?However the session information is not being passed through or is being modified.? How do you know that? I think the problem is the session is sent to be the wrong jboss server. If the Jboss uses the similar session sticky way as Tomcat/Resin, you can use my nginx_jvm_route_module: http://code.google.com/p/nginx-upstream-jvm-route/ 2011/11/14 Srinivasan Subramanian Hello We have setup nginx as a Loadbalancer on Centos 5.2 x64. nginx is acting as a LB for a web application developed using Java servlets and Icefaces (1.8.2). The web application is deployed on JBoss 5.1. After configuration nginx is able to redirect the queries to the upstream servers properly. However the session information is not being passed through or is being modified. So the Icefaces servlet is repeatedly refreshing the login page every few seconds and keeps creating new sessions. Please advise on any additional settings that need to be made. The current settings are: (nginx is running on 192.168.1.137) upstream int-lb { server 192.168.1.139:8080; server 192.168.1.138:8080; } server { listen 80; server_name int-lb; #charset koi8-r; access_log /var/log/nginx/host.access.log main; error_log /var/log/nginx/host.error.log debug; root /usr/app/jboss5/server/default/deploy/admin-console.war; location / { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://int-lb; }} Thanks in advance for all assistance. Regards Srini _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 15 03:36:08 2011 From: nginx-forum at nginx.us (archon810) Date: Mon, 14 Nov 2011 22:36:08 -0500 Subject: Bug while using "proxy_cache_use_stale updating" In-Reply-To: References: <20090901132100.GG98063@rambler-co.ru> Message-ID: I have 0.8.54, and I just hit this issue, which resulted in the RSS feed not updating for a full day. Due to the way my server is configured, because of my cookies, I was getting straight to the feed bypassing the cache and only found the problem when using incognito mode without cookies. A server restart seemed to fix the problem for now. 
Igor, is this fixed in some version >0.8.54? Thanks, Artem Posted at Nginx Forum: http://forum.nginx.org/read.php?2,5225,218347#msg-218347 From nginx-forum at nginx.us Tue Nov 15 03:56:50 2011 From: nginx-forum at nginx.us (fengguang) Date: Mon, 14 Nov 2011 22:56:50 -0500 Subject: Green screen with New MP4 module Message-ID: When pseudo-streaming with new mp4 module, sometimes the flash player will display a green screen for a second. Anyone has the same problem? I'm sure the mp4 file was processed correctly. The metadata was inserted at the beginning of the file. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218348,218348#msg-218348 From agentzh at gmail.com Tue Nov 15 04:24:01 2011 From: agentzh at gmail.com (agentzh) Date: Tue, 15 Nov 2011 12:24:01 +0800 Subject: compile ngx_resty to statically link some libs? In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 10:30 AM, dannynoonan wrote: > Hey agentzh, could you provide tips on how I could cut down this list of > dynamically linked libs: > [snip] > > Specifically, I'd really like to remove libdrizzle as a dependency on > the target machine and just have the compile pull it in at link time. > Try ./configure --with-ld-opt="-static" ... while building ngx_openresty :) Regards, -agentzh From nginx-forum at nginx.us Tue Nov 15 05:37:43 2011 From: nginx-forum at nginx.us (zhenwei) Date: Tue, 15 Nov 2011 00:37:43 -0500 Subject: what's the difference between proxy_store and proxy_cache? In-Reply-To: <201111140948.43044.ne@vbart.ru> References: <201111140948.43044.ne@vbart.ru> Message-ID: thanks for your advise, using pseudo disk via memory or memcached are good approaches. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218263,218355#msg-218355 From nginx-forum at nginx.us Tue Nov 15 07:32:10 2011 From: nginx-forum at nginx.us (faskiri.devel) Date: Tue, 15 Nov 2011 02:32:10 -0500 Subject: Nginx fails to accept new connection if active worker crashes Message-ID: <9506f66614285b9ecfddda8261d94984.NginxMailingListEnglish@forum.nginx.org> Hi All I use nginx configured with multiple workers. I also have an nginx module that crashed due to an error when I noticed that the module crash leaves nginx in a state where it cannot accept new calls. Removing my module and killing the "active" worker (the one which seems to take the new request) with a SIGHUP again caused nginx to hang. Killing the other worker(s) seem to be working just fine. Further investigations(with nginx at debug level) showed that all threads are fine but none of the workers are getting the ngx_accept_mutex_lock. Master tries to release the ngx_accept_mutex_lock if the dead process was holding it [ https://svn.nginx.org/nginx/browser/nginx/trunk/src/os/unix/ngx_process.c?annotate=blame#L503] but doesnt look like the value is set anywhere. I have been using nginx only for a couple of months now so I am not very sure of the diagnosis, please feel free to correct. uname -a: Linux faskiri-pc 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:58:24 UTC 2010 x86_64 GNU/Linux nginx -V: nginx: nginx version: nginx/1.0.5 nginx: built by gcc 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5) nginx: configure arguments: --without-http_ssi_module --without-http_geo_module --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-pcre --with-debug I will be grateful for any advice. 
Best Regards +Fasih Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218359,218359#msg-218359 From andrew at nginx.com Tue Nov 15 09:15:50 2011 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 15 Nov 2011 13:15:50 +0400 Subject: Nginx fails to accept new connection if active worker crashes In-Reply-To: <9506f66614285b9ecfddda8261d94984.NginxMailingListEnglish@forum.nginx.org> References: <9506f66614285b9ecfddda8261d94984.NginxMailingListEnglish@forum.nginx.org> Message-ID: Fasih, On Nov 15, 2011, at 11:32 AM, faskiri.devel wrote: > Hi All > > I use nginx configured with multiple workers. I also have an nginx > module that crashed due to an error when I noticed that the module crash > leaves nginx in a state where it cannot accept new calls. > > Removing my module and killing the "active" worker (the one which seems > to take the new request) with a SIGHUP again caused nginx to hang. > Killing the other worker(s) seem to be working just fine. > > Further investigations(with nginx at debug level) showed that all > threads are fine but none of the workers are getting the > ngx_accept_mutex_lock. > > Master tries to release the ngx_accept_mutex_lock if the dead process > was holding it [ > https://svn.nginx.org/nginx/browser/nginx/trunk/src/os/unix/ngx_process.c?annotate=blame#L503] > but doesnt look like the value is set anywhere. > > I have been using nginx only for a couple of months now so I am not very > sure of the diagnosis, please feel free to correct. Thanks for spotting this one. It's kind of a known issue and we're working on a fix currently. In the meanwhile you can switch accept mutex off as a workaround (the only downside could potentially be in minor increase of CPU utilization). > uname -a: Linux faskiri-pc 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 > 14:58:24 UTC 2010 x86_64 GNU/Linux > > nginx -V: nginx: nginx version: nginx/1.0.5 nginx: built by gcc 4.4.5 > (Ubuntu/Linaro 4.4.4-14ubuntu5) nginx: configure arguments: > --without-http_ssi_module --without-http_geo_module > --without-http_fastcgi_module --without-http_uwsgi_module > --without-http_scgi_module --without-http_memcached_module > --without-mail_pop3_module --without-mail_imap_module > --without-mail_smtp_module --with-pcre --with-debug > > I will be grateful for any advice. > > Best Regards > +Fasih > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218359,218359#msg-218359 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From igor at sysoev.ru Tue Nov 15 09:42:57 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 15 Nov 2011 13:42:57 +0400 Subject: nginx-1.0.10 Message-ID: <20111115094257.GB58136@nginx.com> Changes with nginx 1.0.10 15 Nov 2011 *) Bugfix: a segmentation fault might occur in a worker process if resolver got a big DNS response. Thanks to Ben Hawkes. *) Bugfix: in cache key calculation if internal MD5 implementation was used; the bug had appeared in 1.0.4. *) Bugfix: the module ngx_http_mp4_module sent incorrect "Content-Length" response header line if the "start" argument was used. Thanks to Piotr Sikora. 
-- Igor Sysoev From andrew at nginx.com Tue Nov 15 09:50:07 2011 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 15 Nov 2011 13:50:07 +0400 Subject: DNS TTLs being ignored In-Reply-To: References: Message-ID: <72FF6524-75CF-4123-8F83-50363C25AE21@nginx.com> On Nov 3, 2011, at 1:50 PM, Andrew Alexeev wrote: > Noah, > > This fix/improvement be introduced in 1.1.8 which will come out around Nov 14. Apologies, it didn't get in either 1.1.8 (yesterday) or 1.1.10 (today). It's almost ready and would hopefully get into the next dev and stable releases in a couple of weeks. > > Hope this helps > > On Nov 3, 2011, at 1:46 PM, Noah C. wrote: > >> Thanks for the reply Andrew. Do you have any idea when it's likely to be >> generally available? This is a pretty big nuisance for us, and I'd like >> to be able to figure out if I need to look at using a new reverse proxy, >> at least for the time being. >> >> --Noah >> >> -- >> Posted via http://www.ruby-forum.com/. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Nov 15 10:35:16 2011 From: nginx-forum at nginx.us (dannynoonan) Date: Tue, 15 Nov 2011 05:35:16 -0500 Subject: compile ngx_resty to statically link some libs? In-Reply-To: References: Message-ID: <15e3a05026fcf57706fc601b2b7bb4d1.NginxMailingListEnglish@forum.nginx.org> I'd tried that, but figured I was invoking it wrong, the error it yields: checking for --with-ld-opt="-static" ... not found ./configure: error: the invalid value in --with-ld-opt="-static" failed to run command: ./configure --prefix=/data/local/nginx-resty-1.0.8.26/nginx \ Reading up on the man page for ld caused me to try --with-ld-opt="-Bstatic" which gets me past configure, but gmake still yields an ELF binary that dynamically links out to libdrizzle. 
The final gcc linking step looked like this: gcc -o objs/nginx \ objs/src/core/nginx.o \ objs/src/core/ngx_log.o \ objs/src/core/ngx_palloc.o \ objs/src/core/ngx_array.o \ objs/src/core/ngx_list.o \ objs/src/core/ngx_hash.o \ objs/src/core/ngx_buf.o \ objs/src/core/ngx_queue.o \ objs/src/core/ngx_output_chain.o \ objs/src/core/ngx_string.o \ objs/src/core/ngx_parse.o \ objs/src/core/ngx_inet.o \ objs/src/core/ngx_file.o \ objs/src/core/ngx_crc32.o \ objs/src/core/ngx_murmurhash.o \ objs/src/core/ngx_md5.o \ objs/src/core/ngx_rbtree.o \ objs/src/core/ngx_radix_tree.o \ objs/src/core/ngx_slab.o \ objs/src/core/ngx_times.o \ objs/src/core/ngx_shmtx.o \ objs/src/core/ngx_connection.o \ objs/src/core/ngx_cycle.o \ objs/src/core/ngx_spinlock.o \ objs/src/core/ngx_cpuinfo.o \ objs/src/core/ngx_conf_file.o \ objs/src/core/ngx_resolver.o \ objs/src/core/ngx_open_file_cache.o \ objs/src/core/ngx_crypt.o \ objs/src/event/ngx_event.o \ objs/src/event/ngx_event_timer.o \ objs/src/event/ngx_event_posted.o \ objs/src/event/ngx_event_busy_lock.o \ objs/src/event/ngx_event_accept.o \ objs/src/event/ngx_event_connect.o \ objs/src/event/ngx_event_pipe.o \ objs/src/os/unix/ngx_time.o \ objs/src/os/unix/ngx_errno.o \ objs/src/os/unix/ngx_alloc.o \ objs/src/os/unix/ngx_files.o \ objs/src/os/unix/ngx_socket.o \ objs/src/os/unix/ngx_recv.o \ objs/src/os/unix/ngx_readv_chain.o \ objs/src/os/unix/ngx_udp_recv.o \ objs/src/os/unix/ngx_send.o \ objs/src/os/unix/ngx_writev_chain.o \ objs/src/os/unix/ngx_channel.o \ objs/src/os/unix/ngx_shmem.o \ objs/src/os/unix/ngx_process.o \ objs/src/os/unix/ngx_daemon.o \ objs/src/os/unix/ngx_setproctitle.o \ objs/src/os/unix/ngx_posix_init.o \ objs/src/os/unix/ngx_user.o \ objs/src/os/unix/ngx_process_cycle.o \ objs/src/os/unix/ngx_linux_init.o \ objs/src/event/modules/ngx_epoll_module.o \ objs/src/os/unix/ngx_linux_sendfile_chain.o \ objs/src/event/ngx_event_openssl.o \ objs/src/core/ngx_regex.o \ objs/src/http/ngx_http.o \ objs/src/http/ngx_http_core_module.o \ objs/src/http/ngx_http_special_response.o \ objs/src/http/ngx_http_request.o \ objs/src/http/ngx_http_parse.o \ objs/src/http/ngx_http_header_filter_module.o \ objs/src/http/ngx_http_write_filter_module.o \ objs/src/http/ngx_http_copy_filter_module.o \ objs/src/http/modules/ngx_http_log_module.o \ objs/src/http/ngx_http_request_body.o \ objs/src/http/ngx_http_variables.o \ objs/src/http/ngx_http_script.o \ objs/src/http/ngx_http_upstream.o \ objs/src/http/ngx_http_upstream_round_robin.o \ objs/src/http/ngx_http_parse_time.o \ objs/src/http/modules/ngx_http_static_module.o \ objs/src/http/modules/ngx_http_index_module.o \ objs/src/http/modules/ngx_http_chunked_filter_module.o \ objs/src/http/modules/ngx_http_range_filter_module.o \ objs/src/http/modules/ngx_http_headers_filter_module.o \ objs/src/http/modules/ngx_http_not_modified_filter_module.o \ objs/src/http/ngx_http_busy_lock.o \ objs/src/http/ngx_http_file_cache.o \ objs/src/http/modules/ngx_http_gzip_filter_module.o \ objs/src/http/ngx_http_postpone_filter_module.o \ objs/src/http/modules/ngx_http_ssi_filter_module.o \ objs/src/http/modules/ngx_http_charset_filter_module.o \ objs/src/http/modules/ngx_http_userid_filter_module.o \ objs/src/http/modules/ngx_http_autoindex_module.o \ objs/src/http/modules/ngx_http_auth_basic_module.o \ objs/src/http/modules/ngx_http_access_module.o \ objs/src/http/modules/ngx_http_limit_zone_module.o \ objs/src/http/modules/ngx_http_limit_req_module.o \ objs/src/http/modules/ngx_http_geo_module.o \ 
objs/src/http/modules/ngx_http_map_module.o \ objs/src/http/modules/ngx_http_split_clients_module.o \ objs/src/http/modules/ngx_http_referer_module.o \ objs/src/http/modules/ngx_http_rewrite_module.o \ objs/src/http/modules/ngx_http_ssl_module.o \ objs/src/http/modules/ngx_http_proxy_module.o \ objs/src/http/modules/ngx_http_fastcgi_module.o \ objs/src/http/modules/ngx_http_uwsgi_module.o \ objs/src/http/modules/ngx_http_scgi_module.o \ objs/src/http/modules/ngx_http_memcached_module.o \ objs/src/http/modules/ngx_http_empty_gif_module.o \ objs/src/http/modules/ngx_http_browser_module.o \ objs/src/http/modules/ngx_http_upstream_ip_hash_module.o \ objs/addon/src/ndk.o \ objs/addon/src/ngx_http_echo_module.o \ objs/addon/src/ngx_http_echo_util.o \ objs/addon/src/ngx_http_echo_timer.o \ objs/addon/src/ngx_http_echo_var.o \ objs/addon/src/ngx_http_echo_handler.o \ objs/addon/src/ngx_http_echo_filter.o \ objs/addon/src/ngx_http_echo_sleep.o \ objs/addon/src/ngx_http_echo_location.o \ objs/addon/src/ngx_http_echo_echo.o \ objs/addon/src/ngx_http_echo_request_info.o \ objs/addon/src/ngx_http_echo_subrequest.o \ objs/addon/src/ngx_http_echo_foreach.o \ objs/addon/src/ngx_http_xss_filter_module.o \ objs/addon/src/ngx_http_xss_util.o \ objs/addon/src/ngx_http_set_base32.o \ objs/addon/src/ngx_http_set_default_value.o \ objs/addon/src/ngx_http_set_hashed_upstream.o \ objs/addon/src/ngx_http_set_quote_sql.o \ objs/addon/src/ngx_http_set_quote_json.o \ objs/addon/src/ngx_http_set_unescape_uri.o \ objs/addon/src/ngx_http_set_misc_module.o \ objs/addon/src/ngx_http_set_escape_uri.o \ objs/addon/src/ngx_http_set_hash.o \ objs/addon/src/ngx_http_set_local_today.o \ objs/addon/src/ngx_http_set_hex.o \ objs/addon/src/ngx_http_set_base64.o \ objs/addon/src/ngx_http_set_random.o \ objs/addon/src/ngx_http_set_hmac.o \ objs/addon/src/ngx_http_form_input_module.o \ objs/addon/src/ngx_http_encrypted_session_module.o \ objs/addon/src/ngx_http_encrypted_session_cipher.o \ objs/addon/src/ngx_http_drizzle_module.o \ objs/addon/src/ngx_http_drizzle_handler.o \ objs/addon/src/ngx_http_drizzle_processor.o \ objs/addon/src/ngx_http_drizzle_upstream.o \ objs/addon/src/ngx_http_drizzle_util.o \ objs/addon/src/ngx_http_drizzle_output.o \ objs/addon/src/ngx_http_drizzle_keepalive.o \ objs/addon/src/ngx_http_drizzle_quoting.o \ objs/addon/src/ngx_http_drizzle_checker.o \ objs/addon/src/ngx_http_lua_script.o \ objs/addon/src/ngx_http_lua_log.o \ objs/addon/src/ngx_http_lua_subrequest.o \ objs/addon/src/ngx_http_lua_ndk.o \ objs/addon/src/ngx_http_lua_control.o \ objs/addon/src/ngx_http_lua_time.o \ objs/addon/src/ngx_http_lua_misc.o \ objs/addon/src/ngx_http_lua_variable.o \ objs/addon/src/ngx_http_lua_string.o \ objs/addon/src/ngx_http_lua_output.o \ objs/addon/src/ngx_http_lua_headers.o \ objs/addon/src/ngx_http_lua_req_body.o \ objs/addon/src/ngx_http_lua_uri.o \ objs/addon/src/ngx_http_lua_args.o \ objs/addon/src/ngx_http_lua_ctx.o \ objs/addon/src/ngx_http_lua_regex.o \ objs/addon/src/ngx_http_lua_module.o \ objs/addon/src/ngx_http_lua_headers_out.o \ objs/addon/src/ngx_http_lua_headers_in.o \ objs/addon/src/ngx_http_lua_directive.o \ objs/addon/src/ngx_http_lua_consts.o \ objs/addon/src/ngx_http_lua_exception.o \ objs/addon/src/ngx_http_lua_util.o \ objs/addon/src/ngx_http_lua_cache.o \ objs/addon/src/ngx_http_lua_conf.o \ objs/addon/src/ngx_http_lua_contentby.o \ objs/addon/src/ngx_http_lua_rewriteby.o \ objs/addon/src/ngx_http_lua_accessby.o \ objs/addon/src/ngx_http_lua_setby.o \ 
objs/addon/src/ngx_http_lua_capturefilter.o \ objs/addon/src/ngx_http_lua_clfactory.o \ objs/addon/src/ngx_http_lua_pcrefix.o \ objs/addon/src/ngx_http_lua_headerfilterby.o \ objs/addon/src/ngx_http_lua_shdict.o \ objs/addon/src/ngx_http_headers_more_filter_module.o \ objs/addon/src/ngx_http_headers_more_headers_out.o \ objs/addon/src/ngx_http_headers_more_headers_in.o \ objs/addon/src/ngx_http_headers_more_util.o \ objs/addon/src/ngx_http_srcache_filter_module.o \ objs/addon/src/ngx_http_srcache_util.o \ objs/addon/src/ngx_http_srcache_var.o \ objs/addon/src/ngx_http_srcache_store.o \ objs/addon/src/ngx_http_srcache_fetch.o \ objs/addon/src/ngx_http_array_var_module.o \ objs/addon/src/ngx_http_array_var_util.o \ objs/addon/src/ngx_http_memc_module.o \ objs/addon/src/ngx_http_memc_request.o \ objs/addon/src/ngx_http_memc_response.o \ objs/addon/src/ngx_http_memc_util.o \ objs/addon/src/ngx_http_memc_handler.o \ objs/addon/src/ngx_http_redis2_module.o \ objs/addon/src/ngx_http_redis2_handler.o \ objs/addon/src/ngx_http_redis2_reply.o \ objs/addon/src/ngx_http_redis2_util.o \ objs/addon/upstream-keepalive-nginx-module-0.3/ngx_http_upstream_keepalive_module.o \ objs/addon/auth-request-nginx-module-0.2/ngx_http_auth_request_module.o \ objs/addon/src/ngx_http_rds_json_filter_module.o \ objs/addon/src/ngx_http_rds_json_processor.o \ objs/addon/src/ngx_http_rds_json_util.o \ objs/addon/src/ngx_http_rds_json_output.o \ objs/addon/src/ngx_http_rds_json_handler.o \ objs/addon/src/ngx_http_rds_csv_filter_module.o \ objs/addon/src/ngx_http_rds_csv_processor.o \ objs/addon/src/ngx_http_rds_csv_util.o \ objs/addon/src/ngx_http_rds_csv_output.o \ objs/ngx_modules.o \ -Bstatic -Wl,-E -lpthread -lcrypt -lssl -ldrizzle -L/home/david/src/third-party/ngx_openresty-1.0.8.26/build/lua-root/data/local/nginx-resty-1.0.8.26/lua/lib -llua -lm -lpcre -lssl -lcrypto -ldl -lz Here's the full configure line: ./configure --prefix=/data/local/nginx-resty-loko-1.0.8.26 --with-ld-opt="-Bstatic" --with-http_drizzle_module Any ideas? Is trying to statically link a never ending struggle I should give up on early? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218343,218372#msg-218372 From nginx-forum at nginx.us Tue Nov 15 11:37:22 2011 From: nginx-forum at nginx.us (faskiri.devel) Date: Tue, 15 Nov 2011 06:37:22 -0500 Subject: Nginx fails to accept new connection if active worker crashes In-Reply-To: References: Message-ID: <2cd0f23b89b14d39b23cd6f97858c8a6.NginxMailingListEnglish@forum.nginx.org> Hi Andrew Thanks for the prompt reply. As a temporary fix I had created a variable in the shared memory to track which pid is holding the mutex so that the check in [https://svn.nginx.org/nginx/browser/nginx/trunk/src/os/unix/ngx_process.c?annotate=blame#L503] works. It works fine for me, hadnt realized I could switch it off. Are we talking about putting "accept_mutex off" in the nginx.conf file? It will be great if you could explain what that will actually do, as in, how will the new requests be handed off to workers. 
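For reference, accept_mutex is a directive of the events block in nginx.conf, so switching it off is a one-line change; a minimal sketch (the worker_connections value is only illustrative, not taken from this thread):

    events {
        worker_connections  1024;
        accept_mutex        off;
    }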
Best Regards +Fasih Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218359,218373#msg-218373 From bruno.premont at restena.lu Tue Nov 15 12:29:04 2011 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Tue, 15 Nov 2011 13:29:04 +0100 Subject: nginx-1.0.10 In-Reply-To: <20111115094257.GB58136@nginx.com> References: <20111115094257.GB58136@nginx.com> Message-ID: <20111115132904.2c1c47d7@pluto.restena.lu> Hi Igor, It would be nice if you could include tarball checksums in the release announcements! Bruno On Tue, 15 Nov 2011 13:42:57 Igor Sysoev wrote: > Changes with nginx 1.0.10 15 Nov 2011 > > *) Bugfix: a segmentation fault might occur in a worker process if > resolver got a big DNS response. > Thanks to Ben Hawkes. > > *) Bugfix: in cache key calculation if internal MD5 implementation was > used; the bug had appeared in 1.0.4. > > *) Bugfix: the module ngx_http_mp4_module sent incorrect > "Content-Length" response header line if the "start" argument was > used. > Thanks to Piotr Sikora. From igor at sysoev.ru Tue Nov 15 12:50:20 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 15 Nov 2011 16:50:20 +0400 Subject: nginx-1.0.10 In-Reply-To: <20111115132904.2c1c47d7@pluto.restena.lu> References: <20111115094257.GB58136@nginx.com> <20111115132904.2c1c47d7@pluto.restena.lu> Message-ID: <20111115125019.GA63671@nginx.com> On Tue, Nov 15, 2011 at 01:29:04PM +0100, Bruno Pr?mont wrote: > Hi Igor, > > It would be nice if you could include tarball checksums in the release > announcements! Is http://nginx.org/download/nginx-1.0.10.tar.gz.asc not enough ? -- Igor Sysoev From bruno.premont at restena.lu Tue Nov 15 13:18:59 2011 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Tue, 15 Nov 2011 14:18:59 +0100 Subject: nginx-1.0.10 In-Reply-To: <20111115125019.GA63671@nginx.com> References: <20111115094257.GB58136@nginx.com> <20111115132904.2c1c47d7@pluto.restena.lu> <20111115125019.GA63671@nginx.com> Message-ID: <20111115141859.53d923ae@pluto.restena.lu> On Tue, 15 Nov 2011 16:50:20 Igor Sysoev wrote: > On Tue, Nov 15, 2011 at 01:29:04PM +0100, Bruno Pr?mont wrote: > > Hi Igor, > > > > It would be nice if you could include tarball checksums in the release > > announcements! > > Is http://nginx.org/download/nginx-1.0.10.tar.gz.asc not enough ? It's useful though not optimal (well, depends on workflow :) ). When reading announcements happens on one system but downloading and compiling happens on a different system it's easier to verify checksums on the build host and have the checksums verified via signed e-mail. Bruno From andrew at nginx.com Tue Nov 15 13:26:11 2011 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 15 Nov 2011 17:26:11 +0400 Subject: Nginx fails to accept new connection if active worker crashes In-Reply-To: <2cd0f23b89b14d39b23cd6f97858c8a6.NginxMailingListEnglish@forum.nginx.org> References: <2cd0f23b89b14d39b23cd6f97858c8a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <965A100A-3E08-4024-A2B4-188A92AB47C1@nginx.com> On Nov 15, 2011, at 3:37 PM, faskiri.devel wrote: > Hi Andrew > > Thanks for the prompt reply. > > As a temporary fix I had created a variable in the shared memory to > track which pid is holding the mutex so that the check in > [https://svn.nginx.org/nginx/browser/nginx/trunk/src/os/unix/ngx_process.c?annotate=blame#L503] > works. It works fine for me, hadnt realized I could switch it off. Are > we talking about putting "accept_mutex off" in the nginx.conf file? 
It > will be great if you could explain what that will actually do, as in, > how will the new requests be handed off to workers. Yes, accept_mutex off. What accept mutex does is trying to prevent workers from competing over accept from listening sockets (in the kernel). In other words, without accept mutex workers may try to simultaneously check for new events on sockets which may lead to a slight increase in CPU usage. Depending on your OS and the event notification mechanisms the results may vary. Actually it's quite safe to try it and we'd appreciate your feedback here! And as I mentioned, we've been working on fixing the situation with crashed workers and mutex lock-ups. > Best Regards > +Fasih > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218359,218373#msg-218373 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ilan at time4learning.com Tue Nov 15 15:57:42 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Tue, 15 Nov 2011 10:57:42 -0500 Subject: Upgrade to 1.1.8 still shows 1.1.7? Message-ID: We usually upgrade Nginx when a new version comes out. I upgraded to 1.1.8 and when I do "nginx -v" from the command line, it shows 1.1.8. When I run phpinfo() through the php-fpm service that we use (having restarted it), it still shows 1.1.7, why would that be the case? How can I confirm via an online request that Nginx is running the correct version? The server status stub does not show it. -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Nov 15 16:09:29 2011 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Nov 2011 16:09:29 +0000 Subject: Upgrade to 1.1.8 still shows 1.1.7? In-Reply-To: References: Message-ID: <20111115160929.GR27078@craic.sysops.org> On Tue, Nov 15, 2011 at 10:57:42AM -0500, Ilan Berkner wrote: Hi there, > How can I confirm via an online request that Nginx is running the correct > version? The server status stub does not show it. Probably easiest is just curl -I http://your_server/ Look for the Server: header. If you've configured that not to show, then perhaps pick one specific url that you will expose the Server: header in and use that. All the best, f -- Francis Daly francis at daoine.org From ilan at time4learning.com Tue Nov 15 16:18:25 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Tue, 15 Nov 2011 11:18:25 -0500 Subject: Upgrade to 1.1.8 still shows 1.1.7? In-Reply-To: <20111115160929.GR27078@craic.sysops.org> References: <20111115160929.GR27078@craic.sysops.org> Message-ID: I actually had to do a full restart and now it works. Previously kill -HUP masterpid worked, this time it didn't, not sure why. On Tue, Nov 15, 2011 at 11:09 AM, Francis Daly wrote: > On Tue, Nov 15, 2011 at 10:57:42AM -0500, Ilan Berkner wrote: > > Hi there, > > > How can I confirm via an online request that Nginx is running the correct > > version? The server status stub does not show it. 
> > Probably easiest is just > > curl -I http://your_server/ > > Look for the Server: header. > > If you've configured that not to show, then perhaps pick one specific > url that you will expose the Server: header in and use that. > > All the best, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehabkost at raisama.net Tue Nov 15 16:38:31 2011 From: ehabkost at raisama.net (Eduardo Habkost) Date: Tue, 15 Nov 2011 14:38:31 -0200 Subject: SIGWINCH not working on second on-the-fly binary upgrade Message-ID: Hi, I was trying to upgrade nginx from 1.0.9 to 1.0.10 on-the-fly, using the process described on the wiki[1], but it looks like it is ignoring the SIGWINCH signal I send to it. bender:~# ps ax -H | grep nginx 31603 pts/2 S+ 0:00 grep nginx 32606 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 32607 ? S 9:47 nginx: worker process 31309 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31310 ? S 0:00 nginx: worker process bender:~# kill -WINCH 32606 # 32606 is the 1.0.9 master process, 31309 is the 1.0.10 master process bender:~# ps ax -H | grep nginx 31605 pts/2 S+ 0:00 grep nginx 32606 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 32607 ? S 9:47 nginx: worker process 31309 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31310 ? S 0:00 nginx: worker process After a quick look at the code, it looks like this happened because the 1.0.9 process I am running was started during a previous 1.0.7->1.0.9 on-the-fly upgrade, and the 1.0.9 master process now thinks it is not daemonized (ngx_daemonized is not set on initialization if ngx_inherited is set). I have easily reproduced it by killing nginx completely, making a on-the-fly 1.0.10->1.0.10 executable upgrade twice. On the first try, SIGWINCH works; after the first upgrade, SIGWINCH stops working. Full shell session showing the bug is pasted at the end of my message. [1] http://wiki.nginx.org/CommandLine#Upgrading_To_a_New_Binary_On_The_Fly -- Eduardo Shell session: bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31684 pts/5 S+ 0:00 grep nginx bender:~# /usr/local/nginx/sbin/nginx bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31689 pts/5 S+ 0:00 grep nginx 31686 ? Ss 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31687 ? S 0:00 nginx: worker process bender:~# kill -USR2 31686 bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31693 pts/5 S+ 0:00 grep nginx 31686 ? Ss 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31687 ? S 0:00 nginx: worker process 31690 ? 
S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31691 ? S 0:00 nginx: worker process bender:~# kill -WINCH 31686 bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31695 pts/5 S+ 0:00 grep nginx 31686 ? Ss 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31690 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31691 ? S 0:00 nginx: worker process bender:~# echo SIGWINCH worked on the first process SIGWINCH worked on the first process bender:~# kill -QUIT 31686 bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31697 pts/5 S+ 0:00 grep nginx 31690 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31691 ? S 0:00 nginx: worker process bender:~# kill -USR2 31690 bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31705 pts/5 S+ 0:00 grep nginx 31690 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31691 ? S 0:00 nginx: worker process 31702 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31703 ? S 0:00 nginx: worker process bender:~# kill -WINCH 31690 bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31709 pts/5 S+ 0:00 grep nginx 31690 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31691 ? S 0:00 nginx: worker process 31702 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31703 ? S 0:00 nginx: worker process bender:~# kill -WINCH 31690 bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31711 pts/5 S+ 0:00 grep nginx 31690 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31691 ? S 0:00 nginx: worker process 31702 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31703 ? S 0:00 nginx: worker process bender:~# echo PID 31690 is ignoring SIGWINCH signals PID 31690 is ignoring SIGWINCH signals bender:~# kill -QUIT 31690 bender:~# ps ax -H | grep nginx 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script 31713 pts/5 S+ 0:00 grep nginx 31702 ? S 0:00 nginx: master process /usr/local/nginx/sbin/nginx 31703 ? S 0:00 nginx: worker process From ilan at time4learning.com Tue Nov 15 16:40:40 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Tue, 15 Nov 2011 11:40:40 -0500 Subject: SIGWINCH not working on second on-the-fly binary upgrade In-Reply-To: References: Message-ID: Nice find, I think I was experiencing a similar issue with upgrading on the fly from 1.1.7 to 1.1.9. On Tue, Nov 15, 2011 at 11:38 AM, Eduardo Habkost wrote: > Hi, > > I was trying to upgrade nginx from 1.0.9 to 1.0.10 on-the-fly, using > the process described on the wiki[1], but it looks like it is ignoring > the SIGWINCH signal I send to it. > > bender:~# ps ax -H | grep nginx > 31603 pts/2 S+ 0:00 grep nginx > 32606 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 32607 ? S 9:47 nginx: worker process > 31309 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31310 ? S 0:00 nginx: worker process > bender:~# kill -WINCH 32606 # 32606 is the 1.0.9 master process, 31309 > is the 1.0.10 master process > bender:~# ps ax -H | grep nginx > 31605 pts/2 S+ 0:00 grep nginx > 32606 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 32607 ? 
S 9:47 nginx: worker process > 31309 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31310 ? S 0:00 nginx: worker process > > After a quick look at the code, it looks like this happened because > the 1.0.9 process I am running was started during a previous > 1.0.7->1.0.9 on-the-fly upgrade, and the 1.0.9 master process now > thinks it is not daemonized (ngx_daemonized is not set on > initialization if ngx_inherited is set). > > I have easily reproduced it by killing nginx completely, making a > on-the-fly 1.0.10->1.0.10 executable upgrade twice. On the first try, > SIGWINCH works; after the first upgrade, SIGWINCH stops working. Full > shell session showing the bug is pasted at the end of my message. > > [1] http://wiki.nginx.org/CommandLine#Upgrading_To_a_New_Binary_On_The_Fly > > -- > Eduardo > > > Shell session: > > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31684 pts/5 S+ 0:00 grep nginx > bender:~# /usr/local/nginx/sbin/nginx > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31689 pts/5 S+ 0:00 grep nginx > 31686 ? Ss 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31687 ? S 0:00 nginx: worker process > bender:~# kill -USR2 31686 > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31693 pts/5 S+ 0:00 grep nginx > 31686 ? Ss 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31687 ? S 0:00 nginx: worker process > 31690 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31691 ? S 0:00 nginx: worker process > bender:~# kill -WINCH 31686 > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31695 pts/5 S+ 0:00 grep nginx > 31686 ? Ss 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31690 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31691 ? S 0:00 nginx: worker process > bender:~# echo SIGWINCH worked on the first process > SIGWINCH worked on the first process > bender:~# kill -QUIT 31686 > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31697 pts/5 S+ 0:00 grep nginx > 31690 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31691 ? S 0:00 nginx: worker process > bender:~# kill -USR2 31690 > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31705 pts/5 S+ 0:00 grep nginx > 31690 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31691 ? S 0:00 nginx: worker process > 31702 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31703 ? S 0:00 nginx: worker process > bender:~# kill -WINCH 31690 > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31709 pts/5 S+ 0:00 grep nginx > 31690 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31691 ? S 0:00 nginx: worker process > 31702 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31703 ? 
S 0:00 nginx: worker process > bender:~# kill -WINCH 31690 > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31711 pts/5 S+ 0:00 grep nginx > 31690 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31691 ? S 0:00 nginx: worker process > 31702 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31703 ? S 0:00 nginx: worker process > bender:~# echo PID 31690 is ignoring SIGWINCH signals > PID 31690 is ignoring SIGWINCH signals > bender:~# kill -QUIT 31690 > bender:~# ps ax -H | grep nginx > 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script > 31713 pts/5 S+ 0:00 grep nginx > 31702 ? S 0:00 nginx: master process > /usr/local/nginx/sbin/nginx > 31703 ? S 0:00 nginx: worker process > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilan at time4learning.com Tue Nov 15 16:40:48 2011 From: ilan at time4learning.com (Ilan Berkner) Date: Tue, 15 Nov 2011 11:40:48 -0500 Subject: SIGWINCH not working on second on-the-fly binary upgrade In-Reply-To: References: Message-ID: I mean 1.1.8. On Tue, Nov 15, 2011 at 11:40 AM, Ilan Berkner wrote: > Nice find, I think I was experiencing a similar issue with upgrading on > the fly from 1.1.7 to 1.1.9. > > > On Tue, Nov 15, 2011 at 11:38 AM, Eduardo Habkost wrote: > >> Hi, >> >> I was trying to upgrade nginx from 1.0.9 to 1.0.10 on-the-fly, using >> the process described on the wiki[1], but it looks like it is ignoring >> the SIGWINCH signal I send to it. >> >> bender:~# ps ax -H | grep nginx >> 31603 pts/2 S+ 0:00 grep nginx >> 32606 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 32607 ? S 9:47 nginx: worker process >> 31309 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31310 ? S 0:00 nginx: worker process >> bender:~# kill -WINCH 32606 # 32606 is the 1.0.9 master process, 31309 >> is the 1.0.10 master process >> bender:~# ps ax -H | grep nginx >> 31605 pts/2 S+ 0:00 grep nginx >> 32606 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 32607 ? S 9:47 nginx: worker process >> 31309 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31310 ? S 0:00 nginx: worker process >> >> After a quick look at the code, it looks like this happened because >> the 1.0.9 process I am running was started during a previous >> 1.0.7->1.0.9 on-the-fly upgrade, and the 1.0.9 master process now >> thinks it is not daemonized (ngx_daemonized is not set on >> initialization if ngx_inherited is set). >> >> I have easily reproduced it by killing nginx completely, making a >> on-the-fly 1.0.10->1.0.10 executable upgrade twice. On the first try, >> SIGWINCH works; after the first upgrade, SIGWINCH stops working. Full >> shell session showing the bug is pasted at the end of my message. 
>> >> [1] >> http://wiki.nginx.org/CommandLine#Upgrading_To_a_New_Binary_On_The_Fly >> >> -- >> Eduardo >> >> >> Shell session: >> >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31684 pts/5 S+ 0:00 grep nginx >> bender:~# /usr/local/nginx/sbin/nginx >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31689 pts/5 S+ 0:00 grep nginx >> 31686 ? Ss 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31687 ? S 0:00 nginx: worker process >> bender:~# kill -USR2 31686 >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31693 pts/5 S+ 0:00 grep nginx >> 31686 ? Ss 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31687 ? S 0:00 nginx: worker process >> 31690 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31691 ? S 0:00 nginx: worker process >> bender:~# kill -WINCH 31686 >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31695 pts/5 S+ 0:00 grep nginx >> 31686 ? Ss 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31690 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31691 ? S 0:00 nginx: worker process >> bender:~# echo SIGWINCH worked on the first process >> SIGWINCH worked on the first process >> bender:~# kill -QUIT 31686 >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31697 pts/5 S+ 0:00 grep nginx >> 31690 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31691 ? S 0:00 nginx: worker process >> bender:~# kill -USR2 31690 >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31705 pts/5 S+ 0:00 grep nginx >> 31690 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31691 ? S 0:00 nginx: worker process >> 31702 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31703 ? S 0:00 nginx: worker process >> bender:~# kill -WINCH 31690 >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31709 pts/5 S+ 0:00 grep nginx >> 31690 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31691 ? S 0:00 nginx: worker process >> 31702 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31703 ? S 0:00 nginx: worker process >> bender:~# kill -WINCH 31690 >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31711 pts/5 S+ 0:00 grep nginx >> 31690 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31691 ? S 0:00 nginx: worker process >> 31702 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31703 ? S 0:00 nginx: worker process >> bender:~# echo PID 31690 is ignoring SIGWINCH signals >> PID 31690 is ignoring SIGWINCH signals >> bender:~# kill -QUIT 31690 >> bender:~# ps ax -H | grep nginx >> 31678 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31679 pts/2 S+ 0:00 script -t -a /tmp/nginx.script >> 31713 pts/5 S+ 0:00 grep nginx >> 31702 ? S 0:00 nginx: master process >> /usr/local/nginx/sbin/nginx >> 31703 ? 
S 0:00 nginx: worker process >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > > Ilan Berkner > Chief Technology Officer > Time4Learning.com > > 6300 NE 1st Ave., Suite 203 > Ft. Lauderdale, FL 33334 > (954) 771-0914 > > > > > Time4Learning.com - Online interactive curriculum for home use, PreK-8th > Grade. > Time4Writing.com - Online writing tutorials for high, middle, and > elementary school students. > Time4Learning.net - A forum to chat with parents online about kids, > education, parenting and more. > spellingcity.com - Online vocabulary and spelling activities for > teachers, parents and students. > > > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.yordanov at exsisto.com Tue Nov 15 16:43:04 2011 From: b.yordanov at exsisto.com (Boyko Yordanov) Date: Tue, 15 Nov 2011 18:43:04 +0200 Subject: SIGWINCH not working on second on-the-fly binary upgrade In-Reply-To: References: Message-ID: <8A05CECC-CFB5-4862-8912-4B9FF10D7D0C@exsisto.com> Verified here as well, I am upgrading by sending QUIT to the old master process as WINCH does not work. Boyko On Nov 15, 2011, at 6:38 PM, Eduardo Habkost wrote: > Hi, > > I was trying to upgrade nginx from 1.0.9 to 1.0.10 on-the-fly, using > the process described on the wiki[1], but it looks like it is ignoring > the SIGWINCH signal I send to it. From mdounin at mdounin.ru Tue Nov 15 17:27:32 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Nov 2011 21:27:32 +0400 Subject: Upgrade to 1.1.8 still shows 1.1.7? In-Reply-To: References: <20111115160929.GR27078@craic.sysops.org> Message-ID: <20111115172731.GG95664@mdounin.ru> Hello! On Tue, Nov 15, 2011 at 11:18:25AM -0500, Ilan Berkner wrote: > I actually had to do a full restart and now it works. > > Previously kill -HUP masterpid worked, this time it didn't, not sure why. kill -HUP isn't expected to upgrade nginx binary, upgrade procedure is outlined here: http://wiki.nginx.org/CommandLine#Upgrading_To_a_New_Binary_On_The_Fly Maxim Dounin > > On Tue, Nov 15, 2011 at 11:09 AM, Francis Daly wrote: > > > On Tue, Nov 15, 2011 at 10:57:42AM -0500, Ilan Berkner wrote: > > > > Hi there, > > > > > How can I confirm via an online request that Nginx is running the correct > > > version? The server status stub does not show it. > > > > Probably easiest is just > > > > curl -I http://your_server/ > > > > Look for the Server: header. > > > > If you've configured that not to show, then perhaps pick one specific > > url that you will expose the Server: header in and use that. > > > > All the best, > > > > f > > -- > > Francis Daly francis at daoine.org > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > Ilan Berkner > Chief Technology Officer > Time4Learning.com > > 6300 NE 1st Ave., Suite 203 > Ft. 
Lauderdale, FL 33334 > (954) 771-0914 > > > > > Time4Learning.com - Online interactive curriculum for home use, PreK-8th > Grade. > Time4Writing.com - Online writing tutorials for high, middle, and > elementary school students. > Time4Learning.net - A forum to chat with parents online about kids, > education, parenting and more. > spellingcity.com - Online vocabulary and spelling activities for teachers, > parents and students. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Nov 15 17:55:47 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Nov 2011 21:55:47 +0400 Subject: SIGWINCH not working on second on-the-fly binary upgrade In-Reply-To: References: Message-ID: <20111115175547.GH95664@mdounin.ru> Hello! On Tue, Nov 15, 2011 at 02:38:31PM -0200, Eduardo Habkost wrote: > I was trying to upgrade nginx from 1.0.9 to 1.0.10 on-the-fly, using > the process described on the wiki[1], but it looks like it is ignoring > the SIGWINCH signal I send to it. [...] > After a quick look at the code, it looks like this happened because > the 1.0.9 process I am running was started during a previous > 1.0.7->1.0.9 on-the-fly upgrade, and the 1.0.9 master process now > thinks it is not daemonized (ngx_daemonized is not set on > initialization if ngx_inherited is set). > > I have easily reproduced it by killing nginx completely, making a > on-the-fly 1.0.10->1.0.10 executable upgrade twice. On the first try, > SIGWINCH works; after the first upgrade, SIGWINCH stops working. Full > shell session showing the bug is pasted at the end of my message. Yes, thank you, this is a bug introduced in 1.1.1/1.0.9. As a workaround SIGQUIT to old master process may be used (it doesn't allow to revive old master, but allow upgrade). The following patch should fix this: diff --git a/src/core/nginx.c b/src/core/nginx.c --- a/src/core/nginx.c +++ b/src/core/nginx.c @@ -374,6 +374,10 @@ main(int argc, char *const *argv) ngx_daemonized = 1; } + if (ngx_inherited) { + ngx_daemonized = 1; + } + #endif if (ngx_create_pidfile(&ccf->pid, cycle->log) != NGX_OK) { Maxim Dounin From james.lyons at gmail.com Tue Nov 15 21:58:25 2011 From: james.lyons at gmail.com (James Lyons) Date: Tue, 15 Nov 2011 13:58:25 -0800 Subject: DNS caching issue Message-ID: So we are using nginx from a few builds back as upgrading consistently has proven difficult for our ops team to keep up with where I work. I think we're on 1.0.5. We have some hosts setup to handle subrequests using proxy_pass directive. When the DNS record is changed, and the host command on the machine is reporting a *new* host ip in the dns result. That host does not see traffic like it should. We have always attributed this to caching in dns and we perform "nginx reconfigure" to reparse conf and re-read dns. This isn't working for us though and i'm not sure why. We did upgrade to 1.0.5 relatively recently and i'm wondering if we inherited a bug. Two things are odd though. When we pulled the host ip *out* of the dns, and performed nginx reconfigure, the traffic to the machine ceased. But now that our work is done, and we're trying to put it back into rotation, nginx reconfigure does not seem to be working. Doing a restart is harder, as we have to drain the machine in question. But is there any known behavior that might be the root cause of this? Far as I'm aware the TTL on the DNS is 5m. But its been hours since the change. 
-James- From nginx-forum at nginx.us Tue Nov 15 22:54:11 2011 From: nginx-forum at nginx.us (artemg) Date: Tue, 15 Nov 2011 17:54:11 -0500 Subject: abort_request callback of ngx_http_upstream_s Message-ID: As I understand, abort_request() callback of upstream (ngx_http_upstream_s) is never called now. Please correct me, if I am wrong. The place to insert this function call is in ngx_http_upstream_check_broken_connection(), right? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218412,218412#msg-218412 From brian at akins.org Tue Nov 15 23:08:02 2011 From: brian at akins.org (Brian Akins) Date: Tue, 15 Nov 2011 18:08:02 -0500 Subject: what's the difference between proxy_store and proxy_cache? In-Reply-To: <201111140948.43044.ne@vbart.ru> References: <201111132050.06153.vbart@nginx.com> <87d9fadc36d30fe8b02560bd1c9720d8.NginxMailingListEnglish@forum.nginx.org> <201111140948.43044.ne@vbart.ru> Message-ID: On Mon, Nov 14, 2011 at 12:48 AM, Valentin V. Bartenev wrote: > you may > need to tune kernel disk cache or even consider to put nginx cache on > "/dev/shm". > > I wouldn't recommened putting it in tmpfs. Just let the OS buffer cache keep it in RAM. If you happen to have some SSD's on the other hand... --Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Nov 16 00:35:59 2011 From: nginx-forum at nginx.us (artemg) Date: Tue, 15 Nov 2011 19:35:59 -0500 Subject: abort_request callback of ngx_http_upstream_s In-Reply-To: References: Message-ID: <73d987887ff2a0f4b7cf9e9e6185a011.NginxMailingListEnglish@forum.nginx.org> I have made this patch, but not sure if it is correct. diff -u nginx-1.0.6/src/http/ngx_http_upstream.c nginx-1.0.6_/src/http/ngx_http_upstream.c --- nginx-1.0.6/src/http/ngx_http_upstream.c 2011-08-29 05:56:09.000000000 -0700 +++ nginx-1.0.6_/src/http/ngx_http_upstream.c 2011-11-11 02:17:36.000000000 -0800 @@ -1048,6 +1048,8 @@ ev->eof = 1; c->error = 1; + u->abort_request(r); + if (!u->cacheable && u->peer.connection) { ngx_log_error(NGX_LOG_INFO, ev->log, err, "client closed prematurely connection, " Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218412,218415#msg-218415 From agentzh at gmail.com Wed Nov 16 01:36:40 2011 From: agentzh at gmail.com (agentzh) Date: Wed, 16 Nov 2011 09:36:40 +0800 Subject: compile ngx_resty to statically link some libs? In-Reply-To: <15e3a05026fcf57706fc601b2b7bb4d1.NginxMailingListEnglish@forum.nginx.org> References: <15e3a05026fcf57706fc601b2b7bb4d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Nov 15, 2011 at 6:35 PM, dannynoonan wrote: > > Reading up on the man page for ld caused me to try > --with-ld-opt="-Bstatic" which gets me past configure, but gmake still > yields an ELF binary that dynamically links out to libdrizzle. Could you confirm that the file libdrizzle.a indeed exists in your system? > > Any ideas? Is trying to statically link a never ending struggle I should > give up on early? > What's your gcc's version? And what does your "uname -a" say? Do you have the libdrizzle.a file in the right place? Regards, -agentzh From agentzh at gmail.com Wed Nov 16 03:31:45 2011 From: agentzh at gmail.com (agentzh) Date: Wed, 16 Nov 2011 11:31:45 +0800 Subject: [ANN] ngx_openresty 1.0.9.10 (stable) released In-Reply-To: References: Message-ID: Hello, folks! 
I'm happy to announce that the new stable release of ngx_openresty, 1.0.9.10, has just been kicked out of door: http://openresty.org/#Download This is the first stable release of ngx_openresty that is based on the Nginx core 1.0.9. Special thanks go to all our contributors and users for helping make this release happen :) Here goes the complete change log for this release, as compared to the last stable release, 1.0.8.26, released two weeks ago:
- upgraded the Nginx core to 1.0.9.
- applied the epoll_check_stale_wev patch to the Nginx 1.0.9 core. this issue affected PostgresNginxModule when connecting to a remote PostgreSQL server over a slow network. thanks @??XX .
- bugfix: nginx-1.0.9-variable_header_ignore_no_hash.patch might introduce a memory overflow issue in multi-header variables. thanks Markus Linnala.
- bugfix: fixed the error message length when the ./configure script fails.
- feature: applied a patch to add the new directive log_escape_non_ascii to prevent escaping non-ascii bytes in access log variable values. requested by @??? . It can be turned on and off, and defaults to on just as in the standard Nginx version.
- upgraded DrizzleNginxModule to 0.1.2rc4.
- bugfix: fixed issues with poll, rtsig, and select used by the Nginx event model by eliminating the poll syscall performed by libdrizzle. This also gives rise to a nice speedup (about 10% in simple cases).
- upgraded LuaNginxModule to 0.3.1rc28.
- feature: added the ngx.encode_args method to encode a Lua table into a URI query string. thanks ?? ( 0597? ).
- feature: ngx.location.capture and ngx.exec now support the same Lua args table format as in ngx.encode_args. thanks ?? (0597? ).
- bugfix: Cache-Control header modification might introduce empty-value headers when used with the standard ngx_headers module.
- feature: added the ctx option to ngx.location.capture: you can now specify a custom Lua table to pass to the subrequest as its ngx.ctx . thanks @hugozhu .
- bugfix: fixed compatibility with nginx 0.8.54. thanks @0579? .
- upgraded HeadersMoreNginxModule to 0.16rc4.
- bugfix: Cache-Control header modification might introduce empty-value headers when used with the standard ngx_headers module.
- upgraded PostgresNginxModule to 0.9rc2
- bugfix: now we log an error message when the postgres_pass target is not found at all, and return 500 in this case instead of returning an empty response.
- bugfix: we should no longer return NGX_AGAIN when the re-polling returns IO WAIT in the "connection made" state.
- feature: added some debugging outputs which can be enabled by passing the --with-debug option while building Nginx or OpenResty.
- bugfix: fixed compatibility issues with Nginx 1.1.4+: ngx_chain_update_chains now requires a pool argument.
- upgraded LuaRdsParserLibrary to 0.04.
- bugfix: fixed a serious memory leak reported by bearnard.
- upgraded XssNginxModule to 0.03rc5.
- bugfix: the callback argument value parser did not accept JavaScript identifier names starting with underscores. thanks Sam Mulube.
As always, you're welcome to report bugs and feature requests either here or directly to me :) It'll also be highly appreciated to try out the devel releases (based on the Nginx core 1.0.10+) that are coming out later ;)
OpenResty (aka. ngx_openresty) is a full-fledged web application server that bundles the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies.
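To make the new ngx.encode_args helper from the change log above a bit more concrete, here is a small sketch (the location name and the argument table are invented for illustration, not taken from the release notes):

    location /t {
        content_by_lua '
            -- encode a Lua table into a URI query string
            local args = ngx.encode_args({ foo = "bar", baz = "two words" })
            ngx.say(args)  -- prints something like: foo=bar&baz=two%20words
        ';
    }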
By taking advantage of various well-designed Nginx modules, OpenResty effectively turns the nginx server into a powerful web app server, in which the web developers can use the Lua programming language to script various existing nginx C modules and Lua modules and construct extremely high-performance web applications that are capable of handling 10K+ connections. OpenResty aims to run your server-side web app completely in the Nginx server, leveraging Nginx's event model to do non-blocking I/O not only with the HTTP clients, but also with remote backends like MySQL, PostgreSQL, Memcached, and Redis. You can find more details on the homepage of ngx_openresty here: http://openresty.org Have fun! -agentzh -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Wed Nov 16 08:23:45 2011 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 16 Nov 2011 12:23:45 +0400 Subject: DNS caching issue In-Reply-To: References: Message-ID: <39FCCA69-38BE-485C-B1B0-11BFE0B68069@nginx.com> James, On Nov 16, 2011, at 1:58 AM, James Lyons wrote: > So we are using nginx from a few builds back as upgrading consistently > has proven difficult for our ops team to keep up with where I work. I > think we're on 1.0.5. > > We have some hosts setup to handle subrequests using proxy_pass > directive. When the DNS record is changed, and the host command on > the machine is reporting a *new* host ip in the dns result. That host > does not see traffic like it should. > > We have always attributed this to caching in dns and we perform "nginx > reconfigure" to reparse conf and re-read dns. This isn't working for > us though and i'm not sure why. We did upgrade to 1.0.5 relatively > recently and i'm wondering if we inherited a bug. Two things are odd > though. > > When we pulled the host ip *out* of the dns, and performed nginx > reconfigure, the traffic to the machine ceased. But now that our work > is done, and we're trying to put it back into rotation, nginx > reconfigure does not seem to be working. > > Doing a restart is harder, as we have to drain the machine in > question. But is there any known behavior that might be the root > cause of this? Far as I'm aware the TTL on the DNS is 5m. But its > been hours since the change. Can you please show the relevant portion of your proxy configuration? > -James- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Nov 16 08:40:07 2011 From: nginx-forum at nginx.us (zhenwei) Date: Wed, 16 Nov 2011 03:40:07 -0500 Subject: what's the difference between proxy_store and proxy_cache? In-Reply-To: References: Message-ID: Quite agree with you that SSDs seem the right way to get good random write/read performance, especially as we're serving thousands of websites on each Nginx instance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218263,218424#msg-218424 From nginx-forum at nginx.us Wed Nov 16 08:51:09 2011 From: nginx-forum at nginx.us (faskiri.devel) Date: Wed, 16 Nov 2011 03:51:09 -0500 Subject: Nginx fails to accept new connection if active worker crashes In-Reply-To: <965A100A-3E08-4024-A2B4-188A92AB47C1@nginx.com> References: <965A100A-3E08-4024-A2B4-188A92AB47C1@nginx.com> Message-ID: <335afd057b4626537c43149994c40d04.NginxMailingListEnglish@forum.nginx.org> Thanks for your attention! It works fine with accept_mutex off, I will run my stress test harness over the weekend to see the impact on the performance.
If there is a significant difference in performance, will surely update the thread with the same. For my understanding, I had implemented a workaround to get around this problem. Is your solution along the same line? My patch: diff --git a/service/nginxServer/nginx-1.0.5/src/event/ngx_event.c b/service/nginxServer/nginx-1.0.5/src/event/ngx_event.c index c57d37e..a6ed725 100644 --- a/service/nginxServer/nginx-1.0.5/src/event/ngx_event.c +++ b/service/nginxServer/nginx-1.0.5/src/event/ngx_event.c @@ -49,6 +49,10 @@ ngx_atomic_t *ngx_connection_counter = &connection_counter; ngx_atomic_t *ngx_accept_mutex_ptr; ngx_shmtx_t ngx_accept_mutex; +// This is shared var protected by ngx_use_accept_mutex. Access only when +// ngx_accept_mutex is held. The var stores the PID of the process currently +// holding the mutex +ngx_pid_t *ngx_accept_mutex_held_by; ngx_uint_t ngx_use_accept_mutex; ngx_uint_t ngx_accept_events; ngx_uint_t ngx_accept_mutex_held; @@ -254,6 +258,7 @@ ngx_process_events_and_timers(ngx_cycle_t *cycle) } if (ngx_accept_mutex_held) { + *ngx_accept_mutex_held_by = 0; ngx_shmtx_unlock(&ngx_accept_mutex); } @@ -526,6 +531,9 @@ ngx_event_module_init(ngx_cycle_t *cycle) { return NGX_ERROR; } + // cl = 128 bytes are available for us to use. ngx_shmtx_create uses + // ngx_atomic_t bytes to assign to mutex->lock, using the memory after that + ngx_accept_mutex_held_by = (ngx_pid_t*) (shared + sizeof(ngx_atomic_t)); ngx_connection_counter = (ngx_atomic_t *) (shared + 1 * cl); diff --git a/service/nginxServer/nginx-1.0.5/src/event/ngx_event.h b/service/nginxServer/nginx-1.0.5/src/event/ngx_event.h index 778da52..f1b06d4 100644 --- a/service/nginxServer/nginx-1.0.5/src/event/ngx_event.h +++ b/service/nginxServer/nginx-1.0.5/src/event/ngx_event.h @@ -501,6 +501,7 @@ extern ngx_atomic_t *ngx_connection_counter; extern ngx_atomic_t *ngx_accept_mutex_ptr; extern ngx_shmtx_t ngx_accept_mutex; +extern ngx_pid_t *ngx_accept_mutex_held_by; extern ngx_uint_t ngx_use_accept_mutex; extern ngx_uint_t ngx_accept_events; extern ngx_uint_t ngx_accept_mutex_held; diff --git a/service/nginxServer/nginx-1.0.5/src/event/ngx_event_accept.c b/service/nginxServer/nginx-1.0.5/src/event/ngx_event_accept.c index 2355d1b..feb4568 100644 --- a/service/nginxServer/nginx-1.0.5/src/event/ngx_event_accept.c +++ b/service/nginxServer/nginx-1.0.5/src/event/ngx_event_accept.c @@ -298,6 +298,10 @@ ngx_trylock_accept_mutex(ngx_cycle_t *cycle) ngx_log_debug0(NGX_LOG_DEBUG_EVENT, cycle->log, 0, "accept mutex locked"); + *ngx_accept_mutex_held_by = ngx_pid; + + // If the mutex was already held by me and we are using RTSIG_EVENT, no + // need to enable accept_events if (ngx_accept_mutex_held && ngx_accept_events == 0 && !(ngx_event_flags & NGX_USE_RTSIG_EVENT)) @@ -306,6 +310,8 @@ ngx_trylock_accept_mutex(ngx_cycle_t *cycle) } if (ngx_enable_accept_events(cycle) == NGX_ERROR) { + // No one is holding the mutex now + *ngx_accept_mutex_held_by = 0; ngx_shmtx_unlock(&ngx_accept_mutex); return NGX_ERROR; } @@ -317,8 +323,9 @@ ngx_trylock_accept_mutex(ngx_cycle_t *cycle) } ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, - "accept mutex lock failed: %ui", ngx_accept_mutex_held); + "accept mutex lock failed: held by: %ui", *ngx_accept_mutex_held_by); + // If I held it earlier, but not anymore (ngx_trylock_accept_mutex failed) if (ngx_accept_mutex_held) { if (ngx_disable_accept_events(cycle) == NGX_ERROR) { return NGX_ERROR; diff --git a/service/nginxServer/nginx-1.0.5/src/os/unix/ngx_process.c 
b/service/nginxServer/nginx-1.0.5/src/os/unix/ngx_process.c index 6055587..b66d4b3 100644 --- a/service/nginxServer/nginx-1.0.5/src/os/unix/ngx_process.c +++ b/service/nginxServer/nginx-1.0.5/src/os/unix/ngx_process.c @@ -492,17 +492,18 @@ ngx_process_get_status(void) } - if (ngx_accept_mutex_ptr) { - - /* - * unlock the accept mutex if the abnormally exited process - * held it - */ - - ngx_atomic_cmp_set(ngx_accept_mutex_ptr, pid, 0); + // If the accept mutex is held by the abnormally exited process + // Note: If the process holding this has died, the mutex cannot be + // acquired by someone else, in which case, ngx_accept_mutex_held is + // free to be accessed + if (ngx_accept_mutex_held_by != NULL && pid == *ngx_accept_mutex_held_by) { + ngx_log_error(NGX_LOG_INFO, ngx_cycle->log, 0, + "PID %P held the accept mutex. Releasing", pid); + // Reset the value before unlocking + *ngx_accept_mutex_held_by = 0; + ngx_shmtx_unlock(&ngx_accept_mutex); } - one = 1; process = "unknown process"; -- 1.7.1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218359,218425#msg-218425 From kworthington at gmail.com Wed Nov 16 11:30:35 2011 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 16 Nov 2011 06:30:35 -0500 Subject: nginx-1.0.10 In-Reply-To: <20111115141859.53d923ae@pluto.restena.lu> References: <20111115094257.GB58136@nginx.com> <20111115132904.2c1c47d7@pluto.restena.lu> <20111115125019.GA63671@nginx.com> <20111115141859.53d923ae@pluto.restena.lu> Message-ID: Hello Nginx Users, Just released: Nginx 1.0.10 For Windows http://goo.gl/5GcCE (32-bit and 64-bit) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Official Windows binaries are at nginx.org Thank you, Kevin -- Kevin Worthington kworthington ~at] gmail [dot} .C0M~ http://www.kevinworthington.com/ On Tue, Nov 15, 2011 at 8:18 AM, Bruno Pr?mont wrote: > On Tue, 15 Nov 2011 16:50:20 Igor Sysoev wrote: >> On Tue, Nov 15, 2011 at 01:29:04PM +0100, Bruno Pr?mont wrote: >> > Hi Igor, >> > >> > It would be nice if you could include tarball checksums in the release >> > announcements! >> >> Is http://nginx.org/download/nginx-1.0.10.tar.gz.asc not enough ? > > It's useful though not optimal (well, depends on workflow :) ). > > When reading announcements happens on one system but downloading > and compiling happens on a different system it's easier to verify > checksums on the build host and have the checksums verified via signed > e-mail. > > Bruno > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From andrew at nginx.com Wed Nov 16 14:00:12 2011 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 16 Nov 2011 18:00:12 +0400 Subject: DNS TTLs being ignored In-Reply-To: <72FF6524-75CF-4123-8F83-50363C25AE21@nginx.com> References: <72FF6524-75CF-4123-8F83-50363C25AE21@nginx.com> Message-ID: On Nov 15, 2011, at 1:50 PM, Andrew Alexeev wrote: > On Nov 3, 2011, at 1:50 PM, Andrew Alexeev wrote: > >> Noah, >> >> This fix/improvement be introduced in 1.1.8 which will come out around Nov 14. > > Apologies, it didn't get in either 1.1.8 (yesterday) or 1.1.10 (today). It's almost ready and would hopefully get into the next dev and stable releases in a couple of weeks. Jfyi, it went committed today http://mailman.nginx.org/pipermail/nginx-devel/2011-November/001466.html http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver and will be included in 1.1.9. 
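For anyone else bitten by this, the usual pattern once the resolver is in place is to put the upstream host name into a variable, which makes nginx resolve it at request time (honouring the DNS TTL or the valid= override) instead of once at configuration load. A hedged sketch, where the resolver address, the host name and the valid= value are purely illustrative:

    resolver 127.0.0.1 valid=300s;

    location / {
        set $backend "backend.example.com";  # hypothetical upstream host
        proxy_pass http://$backend;
    }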
>> Hope this helps >> >> On Nov 3, 2011, at 1:46 PM, Noah C. wrote: >> >>> Thanks for the reply Andrew. Do you have any idea when it's likely to be >>> generally available? This is a pretty big nuisance for us, and I'd like >>> to be able to figure out if I need to look at using a new reverse proxy, >>> at least for the time being. >>> >>> --Noah >>> >>> -- >>> Posted via http://www.ruby-forum.com/. >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx From magicbearmo at gmail.com Wed Nov 16 16:34:16 2011 From: magicbearmo at gmail.com (MagicBear) Date: Thu, 17 Nov 2011 00:34:16 +0800 Subject: [Module]Nginx Regexp Plugins Message-ID: This plugin in base on ngx_cache_purge-1.4 P.S: This plugin current will blocking web request, now is only for test. I cannot found a method to make nginx run a new thread. Usage: location ~ ^/regex_purge(/.*) { proxy_cache_batchpurge cache_zone $1$is_args$args; } download: http://m-b.cc/share/ngx_batch_purge-0.1.tgz -- MagicBear From piotr.sikora at frickle.com Wed Nov 16 17:32:11 2011 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Wed, 16 Nov 2011 18:32:11 +0100 Subject: [Module]Nginx Regexp Plugins In-Reply-To: References: Message-ID: <138D0318626C487BB0CAE851942DB3D5@Desktop> Hey, > P.S: This plugin current will blocking web request, now is only for test. I know you've got good intentions, but this module will kill your box. > I cannot found a method to make nginx run a new thread. Just fork() the worker process. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From appa at perusio.net Wed Nov 16 17:57:32 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 16 Nov 2011 17:57:32 +0000 Subject: nginx-1.0.10 In-Reply-To: <20111115141859.53d923ae@pluto.restena.lu> References: <20111115094257.GB58136@nginx.com> <20111115132904.2c1c47d7@pluto.restena.lu> <20111115125019.GA63671@nginx.com> <20111115141859.53d923ae@pluto.restena.lu> Message-ID: <87zkfwj6gj.wl%appa@perusio.net> On 15 Nov 2011 13h18 WET, bruno.premont at restena.lu wrote: It's quite easy to create a script that downloads the source, the sig and verifies it with GPG. Here's my take on it: https://github.com/perusio/nginx-get-source Feel free to use it or fork it. --- appa From magicbearmo at gmail.com Wed Nov 16 18:47:31 2011 From: magicbearmo at gmail.com (Bear Magic) Date: Thu, 17 Nov 2011 02:47:31 +0800 Subject: [Module]Nginx Regexp Plugins In-Reply-To: <138D0318626C487BB0CAE851942DB3D5@Desktop> References: <138D0318626C487BB0CAE851942DB3D5@Desktop> Message-ID: <3134703292502901380@unknownmsgid> I have tried for this, but when done the first time, it will have some error for next. I Have thinking for spawn a process at startup or using nginx cache manager process to do this. Piotr Sikora ? 2011-11-17 1:32 ??? > Hey, > >> P.S: This plugin current will blocking web request, now is only for test. > > I know you've got good intentions, but this module will kill your box. > >> I cannot found a method to make nginx run a new thread. > > Just fork() the worker process. 
> > Best regards, > Piotr Sikora < piotr.sikora at frickle.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Nov 17 01:04:30 2011 From: nginx-forum at nginx.us (sisif) Date: Wed, 16 Nov 2011 20:04:30 -0500 Subject: MP4 - start time is out mp4 stts samples Message-ID: <3c2ccb0e721cdc80d2dd4bf09ec014ea.NginxMailingListEnglish@forum.nginx.org> Hello, About MP4 module, my files are around 100-200MB, 30-50minutes videos. When I try to seek more then 14-20min from start, I got "video not found" and this error in nginx log: 2011/11/17 01:17:46 [error] 15463#0: *19 start time is out mp4 stts samples in "/home/host/download/4040292549.mp4 How can I fix this, anyone have this error and know something about ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218451,218451#msg-218451 From igor at sysoev.ru Thu Nov 17 05:06:03 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 17 Nov 2011 09:06:03 +0400 Subject: MP4 - start time is out mp4 stts samples In-Reply-To: <3c2ccb0e721cdc80d2dd4bf09ec014ea.NginxMailingListEnglish@forum.nginx.org> References: <3c2ccb0e721cdc80d2dd4bf09ec014ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111117050603.GA10385@nginx.com> On Wed, Nov 16, 2011 at 08:04:30PM -0500, sisif wrote: > Hello, > About MP4 module, my files are around 100-200MB, 30-50minutes videos. > When I try to seek more then 14-20min from start, I got "video not > found" and this error in nginx log: > 2011/11/17 01:17:46 [error] 15463#0: *19 start time is out mp4 stts > samples in "/home/host/download/4040292549.mp4 > > How can I fix this, anyone have this error and know something about ? Could you show "nginx -V" output ? -- Igor Sysoev From benlancaster at holler.co.uk Thu Nov 17 10:11:38 2011 From: benlancaster at holler.co.uk (Ben Lancaster) Date: Thu, 17 Nov 2011 10:11:38 +0000 Subject: Nginx (PPA stable) periodically returning headers then hanging connection when serving from fcgi cache Message-ID: List, I'm experiencing some problems with FastCGI cache and nginx/1.0.4 from stable PPA on Ubuntu Lucid. In short, it seems that sometimes the FastCGI cache is getting corrupted somehow - Nginx will serve sane headers from cache, but then the connection will seem to hang, with no body returned until the request is timed out by the client (apparently it'll hang indefinitely). For example, headers will appear good: HTTP/1.1 200 OK Server: nginx X-Cache-Status: HIT Cache-Control: public, max_age=300 Content-Type: text/html; charset=utf-8 Date: Thu, 17 Nov 2011 09:58:26 GMT Expires: Thu, 17 Nov 2011 10:02:46 Etag: 611a2b5dcde004cf68ffd56345584d40 Connection: close Last-Modified: Thu, 17 Nov 2011 09:57:46 Transfer-Encoding: Identity ?but then the connection sits there without returning the body. Once nginx returns one "bad" response (as described above), all subsequent requests for the same (cached) resource have the same problem. Other cached resources seem to work as normal, and have experienced it twice in the past 24 hours. The only resolution I've found so far is to junk my cache folder and bounce the nginx service. 
Here's what my vhost config looks like: server { listen 80 default; server_name example.com; server_tokens off; root /home/user/example.com/web; index index.php; access_log /dev/null; error_log /var/log/nginx/error.log; location / { if (-f $request_filename) { expires 3h; break; } rewrite ^(.*) /index.php last; } location ~ (.*\.php)($|/) { set $script $1; set $path_info ""; if ($uri ~ "^(.+\.php)(/.+)") { set $script $1; set $path_info $2; } fastcgi_pass 127.0.0.1:9000; include fastcgi_params; fastcgi_read_timeout 180; fastcgi_param PATH_INFO $path_info; fastcgi_param SCRIPT_FILENAME $document_root$script; fastcgi_param SCRIPT_NAME $script; fastcgi_pass_header Set-Cookie; fastcgi_cache_methods GET HEAD; fastcgi_cache fcgi-cache; fastcgi_cache_key canimationlive$request_uri; fastcgi_cache_valid 200 1h; fastcgi_cache_min_uses 1; fastcgi_cache_use_stale error timeout http_500 updating; add_header X-Cache-Status $upstream_cache_status; } } ## --end Here's what the fcgi-cache definition looks like: fastcgi_cache_path /var/www/cache levels=1:2 keys_zone=fcgi-cache:10m max_size=512m inactive=28d; I've just upgraded to nginx 1.0.9 (also from PPA), and noticed in the changelog for versions 1.0.4 and 1.0.5 a "Bugfix: "stalled cache updating" alert" - is this the same problem? If so, the above may well be null and void. Thanks in advance, Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Nov 17 10:38:00 2011 From: nginx-forum at nginx.us (AlexXF) Date: Thu, 17 Nov 2011 05:38:00 -0500 Subject: SSL problem failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) In-Reply-To: <7227a5cde138f26c5e24e1f0f94a6485.NginxMailingListEnglish@forum.nginx.org> References: <4D5D45A3.2030209@feurix.com> <7227a5cde138f26c5e24e1f0f94a6485.NginxMailingListEnglish@forum.nginx.org> Message-ID: <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> I've got this problem also, but solved! There is a two files that gandi sent to you: site.crt site-bundle.crt Use site.crt instead of site-bundle.crt. Nginx requires certificate for exactly site only. So it is not require to use chain (bundle) certificate file. Enjoy! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,176163,218470#msg-218470 From mdounin at mdounin.ru Thu Nov 17 10:40:42 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Nov 2011 14:40:42 +0400 Subject: SSL problem failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) In-Reply-To: <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> References: <4D5D45A3.2030209@feurix.com> <7227a5cde138f26c5e24e1f0f94a6485.NginxMailingListEnglish@forum.nginx.org> <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111117104042.GO95664@mdounin.ru> Hello! On Thu, Nov 17, 2011 at 05:38:00AM -0500, AlexXF wrote: > I've got this problem also, but solved! > > There is a two files that gandi sent to you: > site.crt > site-bundle.crt > > Use site.crt instead of site-bundle.crt. Nginx requires certificate for > exactly site only. So it is not require to use chain (bundle) > certificate file. Both site certificate and bundle should be used. 
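In practice that means concatenating the server certificate first and the intermediate bundle after it into one file, then pointing ssl_certificate at the combined file while ssl_certificate_key keeps pointing at the existing private key. A minimal sketch, assuming the Gandi-issued files are called site.crt (server certificate only) and site-bundle.crt (intermediate certificates only) and live under /etc/nginx/ssl; adjust the names and paths to your own setup:

    # combined file created beforehand with:
    #   cat site.crt site-bundle.crt > site-chained.crt
    server {
        listen              443 ssl;
        server_name         example.com;
        ssl_certificate     /etc/nginx/ssl/site-chained.crt;
        ssl_certificate_key /etc/nginx/ssl/site.key;
    }

If the two files are concatenated in the wrong order, the private key is checked against the intermediate certificate rather than the server certificate, which is one way to end up with the "key values mismatch" error from the subject line.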
See here for details: http://nginx.org/en/docs/http/configuring_https_servers.html#chains Maxim Dounin From nginx-forum at nginx.us Thu Nov 17 10:44:32 2011 From: nginx-forum at nginx.us (AlexXF) Date: Thu, 17 Nov 2011 05:44:32 -0500 Subject: SSL problem failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) In-Reply-To: <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> References: <4D5D45A3.2030209@feurix.com> <7227a5cde138f26c5e24e1f0f94a6485.NginxMailingListEnglish@forum.nginx.org> <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Anyway - it works after i've made that changes in nginx.conf file. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,176163,218472#msg-218472 From nginx-forum at nginx.us Thu Nov 17 10:53:05 2011 From: nginx-forum at nginx.us (AlexXF) Date: Thu, 17 Nov 2011 05:53:05 -0500 Subject: SSL problem failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) In-Reply-To: References: <4D5D45A3.2030209@feurix.com> <7227a5cde138f26c5e24e1f0f94a6485.NginxMailingListEnglish@forum.nginx.org> <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9da9434af94e62f895db53f84093291c.NginxMailingListEnglish@forum.nginx.org> Upd. It works for concatenated cert files also. Looks like topic starter forgot to concatenate cert files before. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,176163,218473#msg-218473 From igor at sysoev.ru Thu Nov 17 11:36:17 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 17 Nov 2011 15:36:17 +0400 Subject: SSL problem failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) In-Reply-To: <9da9434af94e62f895db53f84093291c.NginxMailingListEnglish@forum.nginx.org> References: <4D5D45A3.2030209@feurix.com> <7227a5cde138f26c5e24e1f0f94a6485.NginxMailingListEnglish@forum.nginx.org> <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> <9da9434af94e62f895db53f84093291c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111117113617.GA18146@nginx.com> On Thu, Nov 17, 2011 at 05:53:05AM -0500, AlexXF wrote: > Upd. It works for concatenated cert files also. > > Looks like topic starter forgot to concatenate cert files before. He might concatenate them in the wrong order or might use only site-bundle.crt. -- Igor Sysoev From igor at sysoev.ru Thu Nov 17 11:36:51 2011 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 17 Nov 2011 15:36:51 +0400 Subject: SSL problem failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) In-Reply-To: References: <4D5D45A3.2030209@feurix.com> <7227a5cde138f26c5e24e1f0f94a6485.NginxMailingListEnglish@forum.nginx.org> <328aa20b68d6f060d65706cda96e86dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111117113651.GB18146@nginx.com> On Thu, Nov 17, 2011 at 05:44:32AM -0500, AlexXF wrote: > Anyway - it works after i've made that changes in nginx.conf file. Browsers usually store intermediate certificates which they receive and which are signed by trusted authorities, so actively used browsers may already have the required intermediate certificates and may not complain about a certificate sent without a chained bundle. 
-- Igor Sysoev From brian at akins.org Thu Nov 17 12:33:49 2011 From: brian at akins.org (Brian Akins) Date: Thu, 17 Nov 2011 07:33:49 -0500 Subject: [ANN] ngx_openresty 1.0.9.10 (stable) released In-Reply-To: References: Message-ID: <03D3F526-CD29-4110-8F9B-A096178BF11D@akins.org> On Nov 15, 2011, at 10:31 PM, agentzh wrote: > > I'm happy to announce that the new stable release of ngx_openresty, 1.0.9.10, has just been kicked out of door: > Once again, thanks for doing this! --Brian From magicbearmo at gmail.com Thu Nov 17 13:22:01 2011 From: magicbearmo at gmail.com (MagicBear) Date: Thu, 17 Nov 2011 21:22:01 +0800 Subject: [MOD] proxy_cache regexp batch purge modules 0.2 Message-ID: Download ===== http://m-b.cc/share/ngx_batch_purge-0.2.tar.gz About ===== nginx_cache_batchpurge is 'nginx' module which adds regex batch purge content from 'proxy' caches, and is base on `ngx_cache_purge`. Notice ===== This module is use only for batch purge, if you want to purge a single object, please install ngx_cache_purge module, this module can work with that module exists. Install ===== ./configure --add-module=../ngx_batch_purge-0.2 Configuration directives ======================== proxy_cache_purge ----------------- * **syntax**: `proxy_cache_batchpurge zone_name key` * **default**: `none` * **context**: `location` Sets area and key used for purging selected pages from `proxy`'s cache. Sample configuration ==================== http { proxy_cache_path /tmp/cache keys_zone=tmpcache:10m; server { location / { proxy_pass http://127.0.0.1:8000; proxy_cache tmpcache; proxy_cache_key $uri$is_args$args; } location ~ /purge(/.*) { allow 127.0.0.1; deny all; proxy_cache_purge tmpcache $1$is_args$args; } location ~ /regex_purge(.*) { allow 127.0.0.1; deny all; proxy_cache_batchpurge tmpcache $1$is_args$args; } } } -- MagicBear From nginx-forum at nginx.us Thu Nov 17 14:25:13 2011 From: nginx-forum at nginx.us (sisif) Date: Thu, 17 Nov 2011 09:25:13 -0500 Subject: MP4 - start time is out mp4 stts samples In-Reply-To: <20111117050603.GA10385@nginx.com> References: <20111117050603.GA10385@nginx.com> Message-ID: <5e238ac1a229e14931b6ea72c107997c.NginxMailingListEnglish@forum.nginx.org> nginx version: nginx/1.1.8 built by gcc 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) TLS SNI support enabled configure arguments: --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --add-module=/root/nginx-accesskey-2.0.3 --with-http_ssl_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218451,218477#msg-218477 From magicbearmo at gmail.com Thu Nov 17 15:57:10 2011 From: magicbearmo at gmail.com (MagicBear) Date: Thu, 17 Nov 2011 23:57:10 +0800 Subject: [MOD] proxy_cache regexp batch purge modules 0.2 In-Reply-To: References: Message-ID: because nginx is not threadsafe, it will cause segment failure for this version. Bad news. Need to continue improve. 2011/11/17 MagicBear : > Download > ===== > http://m-b.cc/share/ngx_batch_purge-0.2.tar.gz > > > About > ===== > nginx_cache_batchpurge is 'nginx' module which adds regex batch purge content > from 'proxy' caches, and is base on `ngx_cache_purge`. 
> > > Notice > ===== > This module is use only for batch purge, if you want to purge a single object, > please install ngx_cache_purge module, this module can work with that module > exists. > > > Install > ===== > ./configure --add-module=../ngx_batch_purge-0.2 > > > Configuration directives > ======================== > proxy_cache_purge > ----------------- > * **syntax**: `proxy_cache_batchpurge zone_name key` > * **default**: `none` > * **context**: `location` > > Sets area and key used for purging selected pages from `proxy`'s cache. > > > > Sample configuration > ==================== > ? ?http { > ? ? ? ?proxy_cache_path ?/tmp/cache ?keys_zone=tmpcache:10m; > > ? ? ? ?server { > ? ? ? ? ? ?location / { > ? ? ? ? ? ? ? ?proxy_pass ? ? ? ? http://127.0.0.1:8000; > ? ? ? ? ? ? ? ?proxy_cache ? ? ? ?tmpcache; > ? ? ? ? ? ? ? ?proxy_cache_key ? ?$uri$is_args$args; > ? ? ? ? ? ?} > > ? ? ? ? ? ?location ~ /purge(/.*) { > ? ? ? ? ? ? ? ?allow ? ? ? ? ? ? ?127.0.0.1; > ? ? ? ? ? ? ? ?deny ? ? ? ? ? ? ? all; > ? ? ? ? ? ? ? ?proxy_cache_purge ?tmpcache $1$is_args$args; > ? ? ? ? ? ?} > > ? ? ? ? ? ?location ~ /regex_purge(.*) { > ? ? ? ? ? ? ? ?allow ? ? ? ? ? ? ?127.0.0.1; > ? ? ? ? ? ? ? ?deny ? ? ? ? ? ? ? all; > ? ? ? ? ? ? ? ?proxy_cache_batchpurge ?tmpcache $1$is_args$args; > ? ? ? ? ? ?} > ? ? ? ?} > ? ?} > > > > > -- > MagicBear > -- MagicBear From dieterknopf at googlemail.com Thu Nov 17 16:50:05 2011 From: dieterknopf at googlemail.com (Dieter Knopf) Date: Thu, 17 Nov 2011 17:50:05 +0100 Subject: Bad perfomance with nginx and php-fpm Message-ID: Hello, i'm trying to install nginx with php-fpm. It works, but it's not really fast and i have problems with multiple connections. Versions: nginx: nginx version: nginx/1.0.10 PHP 5.3.8-1~dotdeb.2 (fpm-fcgi) (built: Aug 25 2011 13:36:54) Configs: VHost: [...] location ~ \.php$ { fastcgi_index index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME /web$fastcgi_script_name; fastcgi_pass unix:/tmp/foo.socket; } [...] php5-fpm: [...] listen = '/tmp/foo.socket' user = foo group = foo pm = dynamic pm.max_children = 10 pm.start_servers = 2 pm.min_spare_servers = 1 pm.max_spare_servers = 3 chroot = /var/www/foo/ chdir = /web/ [...] fastcgi_params: [...] fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; [...] I installed a fresh Wordpress without any plugins on the page and tried ApacheBench with -n 1000 -c 10 on it: Time taken for tests: 25.860 seconds Complete requests: 1000 Failed requests: 875 (Connect: 0, Receive: 0, Length: 875, Exceptions: 0) The same page on a standard apache setup spawns 10x php5-cgi: Time taken for tests: 104.929 seconds Complete requests: 1000 Failed requests: 0 Not sure what's wrong :-( Thanks Dieter From frumentius at gmail.com Thu Nov 17 17:46:13 2011 From: frumentius at gmail.com (Joe) Date: Fri, 18 Nov 2011 00:46:13 +0700 Subject: Bad perfomance with nginx and php-fpm In-Reply-To: References: Message-ID: How about the logs file? Regards, Joe On Thu, Nov 17, 2011 at 11:50 PM, Dieter Knopf wrote: > Hello, > > i'm trying to install nginx with php-fpm. It works, but it's not > really fast and i have problems with multiple connections. > > Versions: > nginx: nginx version: nginx/1.0.10 > PHP 5.3.8-1~dotdeb.2 (fpm-fcgi) (built: Aug 25 2011 13:36:54) > > Configs: > VHost: > [...] 
> location ~ \.php$ { > fastcgi_index index.php; > include /etc/nginx/fastcgi_params; > fastcgi_param SCRIPT_FILENAME /web$fastcgi_script_name; > fastcgi_pass unix:/tmp/foo.socket; > } > [...] > > php5-fpm: > [...] > listen = '/tmp/foo.socket' > user = foo > group = foo > pm = dynamic > pm.max_children = 10 > pm.start_servers = 2 > pm.min_spare_servers = 1 > pm.max_spare_servers = 3 > chroot = /var/www/foo/ > chdir = /web/ > [...] > > fastcgi_params: > [...] > fastcgi_connect_timeout 60; > fastcgi_send_timeout 180; > fastcgi_read_timeout 180; > fastcgi_buffer_size 128k; > fastcgi_buffers 4 256k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > fastcgi_intercept_errors on; > [...] > > I installed a fresh Wordpress without any plugins on the page and > tried ApacheBench with -n 1000 -c 10 on it: > > Time taken for tests: 25.860 seconds > Complete requests: 1000 > Failed requests: 875 > (Connect: 0, Receive: 0, Length: 875, Exceptions: 0) > > The same page on a standard apache setup spawns 10x php5-cgi: > > Time taken for tests: 104.929 seconds > Complete requests: 1000 > Failed requests: 0 > > > Not sure what's wrong :-( > > Thanks > > Dieter > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jerome at loyet.net Thu Nov 17 17:54:24 2011 From: jerome at loyet.net (=?ISO-8859-1?B?Suly9G1lIExveWV0?=) Date: Thu, 17 Nov 2011 18:54:24 +0100 Subject: Bad perfomance with nginx and php-fpm In-Reply-To: References: Message-ID: 2011/11/17 Dieter Knopf > > Hello, > > i'm trying to install nginx with php-fpm. It works, but it's not > really fast and i have problems with multiple connections. > > Versions: > nginx: nginx version: nginx/1.0.10 > PHP 5.3.8-1~dotdeb.2 (fpm-fcgi) (built: Aug 25 2011 13:36:54) > > Configs: > VHost: > [...] > location ~ \.php$ { > ?fastcgi_index index.php; > ?include /etc/nginx/fastcgi_params; > ?fastcgi_param ?SCRIPT_FILENAME /web$fastcgi_script_name; > ?fastcgi_pass unix:/tmp/foo.socket; > } > [...] > > php5-fpm: > [...] > listen = '/tmp/foo.socket' > user = foo > group = foo > pm = dynamic can you test by setting pm = static and pm.max_children to something a little bit hight than 10 (12 or 15). Just to ensure the problem does not come from the dynamic PM. > pm.max_children = 10 > pm.start_servers = 2 > pm.min_spare_servers = 1 > pm.max_spare_servers = 3 > chroot = /var/www/foo/ > chdir = /web/ > [...] > > fastcgi_params: > [...] > fastcgi_connect_timeout 60; > fastcgi_send_timeout 180; > fastcgi_read_timeout 180; > fastcgi_buffer_size 128k; > fastcgi_buffers 4 256k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > fastcgi_intercept_errors on; > [...] > > I installed a fresh Wordpress without any plugins on the page and > tried ApacheBench with -n 1000 -c 10 on it: > > Time taken for tests: ? 25.860 seconds > Complete requests: ? ? ?1000 > Failed requests: ? ? ? ?875 > ? (Connect: 0, Receive: 0, Length: 875, Exceptions: 0) > > The same page on a standard apache setup spawns 10x php5-cgi: > > Time taken for tests: ? 104.929 seconds > Complete requests: ? ? ?1000 > Failed requests: ? ? ? 
?0 > > > Not sure what's wrong :-( > > Thanks > > Dieter > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From dieterknopf at googlemail.com Thu Nov 17 21:59:15 2011 From: dieterknopf at googlemail.com (Dieter Knopf) Date: Thu, 17 Nov 2011 22:59:15 +0100 Subject: Bad perfomance with nginx and php-fpm In-Reply-To: References: Message-ID: 2011/11/17 Joe : > How about the logs file? I tried a new benchmark with the old settings: Time taken for tests: 24.939 seconds Complete requests: 1000 Failed requests: 861 (Connect: 0, Receive: 0, Length: 861, Exceptions: 0) In the logfile from php5-fpm i see 1000 requests (everyone with status 200): [...] - - 17/Nov/2011:21:55:40 +0000GET /index.php200 /www/index.php 174.837 20224 102.96% - - 17/Nov/2011:21:55:40 +0000GET /index.php200 /www/index.php 175.963 20224 96.61% - - 17/Nov/2011:21:55:40 +0000GET /index.php200 /www/index.php 191.407 20224 83.59% - - 17/Nov/2011:21:55:40 +0000GET /index.php200 /www/index.php 212.139 20224 75.42% [...] In the nginx-access.log the same: [...] 46.252.24.80 - - [17/Nov/2011:22:55:37 +0100] "GET / HTTP/1.0" 200 5935 "-" "ApacheBench/2.3" 46.252.24.80 - - [17/Nov/2011:22:55:37 +0100] "GET / HTTP/1.0" 200 5931 "-" "ApacheBench/2.3" 46.252.24.80 - - [17/Nov/2011:22:55:37 +0100] "GET / HTTP/1.0" 200 5933 "-" "ApacheBench/2.3" 46.252.24.80 - - [17/Nov/2011:22:55:37 +0100] "GET / HTTP/1.0" 200 5931 "-" "ApacheBench/2.3" [...] Thanks Dieter From dieterknopf at googlemail.com Thu Nov 17 22:02:04 2011 From: dieterknopf at googlemail.com (Dieter Knopf) Date: Thu, 17 Nov 2011 23:02:04 +0100 Subject: Bad perfomance with nginx and php-fpm In-Reply-To: References: Message-ID: 2011/11/17 J?r?me Loyet : > can you test by setting pm = static and pm.max_children to something a > little bit hight than 10 (12 or 15). > Just to ensure the problem does not come from the dynamic PM. Sure. I just tested it with 20 children: Time taken for tests: 24.580 seconds Complete requests: 1000 Failed requests: 859 (Connect: 0, Receive: 0, Length: 859, Exceptions: 0) The same result :-( This sould be far better with 20 php daemons, it must be another problem, like a timeout or something like that? Thanks Dieter From nginx-forum at nginx.us Thu Nov 17 22:33:58 2011 From: nginx-forum at nginx.us (dannynoonan) Date: Thu, 17 Nov 2011 17:33:58 -0500 Subject: compile ngx_resty to statically link some libs? In-Reply-To: References: Message-ID: I replied to this, but it seems my reply got lost. I had some responses and follow-up questions, but I'll wait a bit to see if this BB will just post it after a few hours. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218343,218520#msg-218520 From nginx-forum at nginx.us Fri Nov 18 02:20:28 2011 From: nginx-forum at nginx.us (dannynoonan) Date: Thu, 17 Nov 2011 21:20:28 -0500 Subject: compile ngx_resty to statically link some libs? In-Reply-To: References: Message-ID: This forum can't keep up w/ my posts. Anyway, I figured out how to build a libdrizzle.a, now I need to figure out how to tell the ngx resty congfigure or gmake steps to slurp it in. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218343,218534#msg-218534 From nginx-forum at nginx.us Fri Nov 18 02:52:57 2011 From: nginx-forum at nginx.us (liuzhida) Date: Thu, 17 Nov 2011 21:52:57 -0500 Subject: How do proxy_module response buffering options work? 
In-Reply-To: <20110424212203.GO56867@mdounin.ru> References: <20110424212203.GO56867@mdounin.ru> Message-ID: <96ec2ea88638c78172c823d078b1e63f.NginxMailingListEnglish@forum.nginx.org> > Additionally, there is proxy_max_temp_file_size, > which controls how > much data may be written to disk. Once temp file > size becomes > bigger - nginx pauses reading data from upstream > until data from > temporary file is sent to client. do you mean if a response size is larger than all the proxy buffers, after some part of the response been written into buffers, the rest of data which can't be written into buffers will be written into temp_file? for every single response? the proxy_max_temp_file_size are global or per request? for example the upstream response is 2000 bytes. Nginx is configured with 4 buffers, each 100 bytes in size. is that nginx deal with this single response with written the 400 bytes of response into buffer and written the 1600 bytes into temp_file? and after the buffer is completely send, Nginx will read the rest of response data from the temp_file and written into buffer? Thx liuzhida Posted at Nginx Forum: http://forum.nginx.org/read.php?2,193347,218535#msg-218535 From nginx-forum at nginx.us Fri Nov 18 03:06:40 2011 From: nginx-forum at nginx.us (bigplum) Date: Thu, 17 Nov 2011 22:06:40 -0500 Subject: How to set $ into a variable? Message-ID: <0c871be943c329ca4015f2c36d8b6b48.NginxMailingListEnglish@forum.nginx.org> nginx.conf: location / { .... set $var "$uri=200k" .... and run "curl localhost/" The module use ngx_http_get_indexed_variable() for $var will get the string "/=200k", but I want the value is "$uri=200k". So any escape character supported in nginx.conf? THANKS. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218536,218536#msg-218536 From agentzh at gmail.com Fri Nov 18 03:07:23 2011 From: agentzh at gmail.com (agentzh) Date: Fri, 18 Nov 2011 11:07:23 +0800 Subject: compile ngx_resty to statically link some libs? In-Reply-To: References: Message-ID: On Fri, Nov 18, 2011 at 10:20 AM, dannynoonan wrote: > This forum can't keep up w/ my posts. > > Anyway, I figured out how to build a libdrizzle.a, now I need to figure > out how to tell the ngx resty congfigure or gmake steps to slurp it in. > When you have libdrizzle.a in the right place, then passing proper --with-ld-opt option for your system (like -static on my system) to the configure script should be sufficient :) Regards, -agentzh From edho at myconan.net Fri Nov 18 03:13:29 2011 From: edho at myconan.net (Edho Arief) Date: Fri, 18 Nov 2011 10:13:29 +0700 Subject: Bad perfomance with nginx and php-fpm In-Reply-To: References: Message-ID: On Fri, Nov 18, 2011 at 5:02 AM, Dieter Knopf wrote: > > The same result :-( > > This sould be far better with 20 php daemons, it must be another > problem, like a timeout or something like that? 
> Here's my result on my own site: [root at eustia ~]# ab -c 10 -n 1000 http://animebsd.net/ This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking animebsd.net (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Completed 600 requests Completed 700 requests Completed 800 requests Completed 900 requests Completed 1000 requests Finished 1000 requests Server Software: nginx/1.1.8 Server Hostname: animebsd.net Server Port: 80 Document Path: / Document Length: 54635 bytes Concurrency Level: 10 Time taken for tests: 261.211 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 54893000 bytes HTML transferred: 54635000 bytes Requests per second: 3.83 [#/sec] (mean) Time per request: 2612.112 [ms] (mean) Time per request: 261.211 [ms] (mean, across all concurrent requests) Transfer rate: 205.22 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 54 61 134.0 54 3053 Processing: 1260 2545 304.0 2515 3884 Waiting: 456 1534 219.6 1510 2673 Total: 1315 2606 328.8 2570 5409 Percentage of the requests served within a certain time (ms) 50% 2570 66% 2605 75% 2652 80% 2690 90% 2978 95% 3187 98% 3270 99% 3902 100% 5409 (longest request) Not exactly fast (cpu constrained - kvm instance of 1 core/thread e3-1270 cpu, 512MB ram, debian linux 6) but certainly completed all requests without fail. 15 php5-cgi instances ran through supervisord in tcp mode. There are quite a bit plugins installed, too (but no wp-cache of some kind). No tweaking at nginx side, some timeout and limit increase on php side. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From jerome at loyet.net Fri Nov 18 07:33:32 2011 From: jerome at loyet.net (=?ISO-8859-1?B?Suly9G1lIExveWV0?=) Date: Fri, 18 Nov 2011 08:33:32 +0100 Subject: Bad perfomance with nginx and php-fpm In-Reply-To: References: Message-ID: 2011/11/17 Dieter Knopf : > 2011/11/17 J?r?me Loyet : >> can you test by setting pm = static and pm.max_children to something a >> little bit hight than 10 (12 or 15). >> Just to ensure the problem does not come from the dynamic PM. > > Sure. I just tested it with 20 children: > Time taken for tests: ? 24.580 seconds > Complete requests: ? ? ?1000 > Failed requests: ? ? ? ?859 > ? (Connect: 0, Receive: 0, Length: 859, Exceptions: 0) > > The same result :-( > > This sould be far better with 20 php daemons, it must be another > problem, like a timeout or something like that? what page are you requesting ? Start by requesting a very simple PHP page (). It's a good start to figure out what's going on. 
Here is what I have with a very short and simple page (20 static children on FPM PHP 5.3 trunk and nginx 1.1.4) fat at dev:~/web$ ab -c 10 -n 1000 http://xxxxxxxxxx/pwd.php Server Software: nginx/1.1.4 Server Hostname: xxxxxx Server Port: 80 Document Path: /pwd.php Document Length: 22 bytes Concurrency Level: 10 Time taken for tests: 0.649 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 175000 bytes HTML transferred: 22000 bytes Requests per second: 1539.96 [#/sec] (mean) Time per request: 6.494 [ms] (mean) Time per request: 0.649 [ms] (mean, across all concurrent requests) Transfer rate: 263.18 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.4 0 5 Processing: 0 6 2.6 6 37 Waiting: 0 6 2.6 6 37 Total: 1 6 2.9 6 42 Percentage of the requests served within a certain time (ms) 50% 6 66% 6 75% 7 80% 7 90% 7 95% 7 98% 12 99% 19 100% 42 (longest request) Then increase the size the PHP page returns (ex: 64k --> ) fat at dev:~/web$ ab -c 10 -n 1000 http://xxxxxxx/size.php Server Software: nginx/1.1.4 Server Hostname: xxxxxxx Server Port: 80 Document Path: /size.php Document Length: 65536 bytes Concurrency Level: 10 Time taken for tests: 1.242 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 65689000 bytes HTML transferred: 65536000 bytes Requests per second: 805.17 [#/sec] (mean) Time per request: 12.420 [ms] (mean) Time per request: 1.242 [ms] (mean, across all concurrent requests) Transfer rate: 51651.05 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.4 0 5 Processing: 1 12 2.9 12 34 Waiting: 1 12 2.9 12 32 Total: 1 12 3.0 12 37 Percentage of the requests served within a certain time (ms) 50% 12 66% 13 75% 14 80% 14 90% 16 95% 17 98% 18 99% 20 100% 37 (longest request) Then increase the complexity (size, CPU, memory, fork, database requests, ...) > > Thanks > > Dieter > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From ru at nginx.com Fri Nov 18 08:53:46 2011 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 18 Nov 2011 12:53:46 +0400 Subject: Upgrade to 1.1.8 still shows 1.1.7? In-Reply-To: <20111115172731.GG95664@mdounin.ru> References: <20111115160929.GR27078@craic.sysops.org> <20111115172731.GG95664@mdounin.ru> Message-ID: <20111118085346.GB29345@lo0.su> On Tue, Nov 15, 2011 at 09:27:32PM +0400, Maxim Dounin wrote: > Hello! > > On Tue, Nov 15, 2011 at 11:18:25AM -0500, Ilan Berkner wrote: > > > I actually had to do a full restart and now it works. > > > > Previously kill -HUP masterpid worked, this time it didn't, not sure why. > > kill -HUP isn't expected to upgrade nginx binary, upgrade > procedure is outlined here: > > http://wiki.nginx.org/CommandLine#Upgrading_To_a_New_Binary_On_The_Fly Please also make yourself familiar with the "Controlling nginx" article that has just appeared on the website: http://nginx.org/en/docs/control.html From nginx-forum at nginx.us Fri Nov 18 11:55:40 2011 From: nginx-forum at nginx.us (janedenone) Date: Fri, 18 Nov 2011 06:55:40 -0500 Subject: Using ab to benchmark nginx: Connection reset by peer (54) Message-ID: I recently updated to nginx 1.0.8 and tried to benchmark performance for cached dynamic pages (initially served by a Django app via proxy_pass) and for static pages. 
In both cases, nginx will not serve more than 3 or 4 requests (even without concurrent connections), so ab almost immediately reports: Benchmarking testsite.static (be patient)...apr_socket_recv: Connection reset by peer (54) It is only when choosing a maximum of 4 (or fewer) requests that ab finishes successfully. Why is that? I tried increasing the number of worker processes (no luck), but I assume that nginx should be capable of serving more than 4 requests without tweaking any configuration variable. Could it be that I accidentally triggered some sort of DOS protection mechanism? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218554,218554#msg-218554 From magicbearmo at gmail.com Fri Nov 18 13:58:37 2011 From: magicbearmo at gmail.com (MagicBear) Date: Fri, 18 Nov 2011 21:58:37 +0800 Subject: How to set $ into a variable? In-Reply-To: <0c871be943c329ca4015f2c36d8b6b48.NginxMailingListEnglish@forum.nginx.org> References: <0c871be943c329ca4015f2c36d8b6b48.NginxMailingListEnglish@forum.nginx.org> Message-ID: may be try set $var "\$uri=200k" 2011/11/18 bigplum : > nginx.conf: > ?location / { > ? ?.... > ? ?set $var "$uri=200k" > ? ?.... > > and run "curl localhost/" > > The module use ngx_http_get_indexed_variable() for $var will get the > string "/=200k", but I want the value is "$uri=200k". > > So any escape character supported in nginx.conf? > > THANKS. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218536,218536#msg-218536 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- MagicBear From mdounin at mdounin.ru Fri Nov 18 14:00:18 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Nov 2011 18:00:18 +0400 Subject: How do proxy_module response buffering options work? In-Reply-To: <96ec2ea88638c78172c823d078b1e63f.NginxMailingListEnglish@forum.nginx.org> References: <20110424212203.GO56867@mdounin.ru> <96ec2ea88638c78172c823d078b1e63f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111118140018.GB95664@mdounin.ru> Hello! On Thu, Nov 17, 2011 at 09:52:57PM -0500, liuzhida wrote: > > Additionally, there is proxy_max_temp_file_size, > > which controls how > > much data may be written to disk. Once temp file > > size becomes > > bigger - nginx pauses reading data from upstream > > until data from > > temporary file is sent to client. > > do you mean if a response size is larger than all the proxy buffers, > after some part of the response been written into buffers, the rest of > data which can't be written into buffers will be written into temp_file? Yes, as long as it's not possible to write data to client fast enough. > for every single response? the proxy_max_temp_file_size are global or > per request? Per request. It limits maximum size of a temporary file used to buffer a request. > for example the upstream response is 2000 bytes. Nginx is configured > with 4 buffers, each 100 bytes in size. is that nginx deal with this > single response with written the 400 bytes of response into buffer and > written the 1600 bytes into temp_file? Roughly yes. (There are some nuances though, as you can't write data to file without using in-memory buffers, and that's why some memory buffers will be reserved for reading response from upstream and writing it to disk. And again, this all assumes it's not possible to write anything to client, while 2000 bytes usually just fit into socket buffer and will be passed to kernel immediately even if client isn't reading at all.) 
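To make the buffering described above concrete, here is a rough configuration sketch; the upstream name "backend", the location, and all sizes are illustrative only, not recommendations:

    location / {
        proxy_pass                 http://backend;
        proxy_buffering            on;
        proxy_buffer_size          4k;      # buffer for the first part of the response (headers)
        proxy_buffers              8 4k;    # in-memory buffers for the response body
        proxy_busy_buffers_size    8k;      # limit on buffers that may be busy sending to the client
        proxy_max_temp_file_size   1024m;   # per-request cap on the temporary file
        proxy_temp_file_write_size 8k;      # amount written to the temporary file at a time
    }

Whatever does not fit into the memory buffers (and cannot yet be sent to the client) is spooled to a temporary file up to proxy_max_temp_file_size; with "proxy_buffering off" nginx instead passes data to the client synchronously as it arrives from the upstream.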
> and after the buffer is > completely send, Nginx will read the rest of response data from the > temp_file and written into buffer? It will follow usual procedure to send file-backed data (the same one which is used for static files), i.e either it will use sendfile() or read data to output_buffers and send them. Maxim Dounin From mdounin at mdounin.ru Fri Nov 18 14:10:32 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Nov 2011 18:10:32 +0400 Subject: Using ab to benchmark nginx: Connection reset by peer (54) In-Reply-To: References: Message-ID: <20111118141032.GC95664@mdounin.ru> Hello! On Fri, Nov 18, 2011 at 06:55:40AM -0500, janedenone wrote: > I recently updated to nginx 1.0.8 and tried to benchmark performance for > cached dynamic pages (initially served by a Django app via proxy_pass) > and for static pages. In both cases, nginx will not serve more than 3 or > 4 requests (even without concurrent connections), so ab almost > immediately reports: > > Benchmarking testsite.static (be patient)...apr_socket_recv: Connection > reset by peer (54) > > It is only when choosing a maximum of 4 (or fewer) requests that ab > finishes successfully. > > Why is that? I tried increasing the number of worker processes (no > luck), but I assume that nginx should be capable of serving more than 4 > requests without tweaking any configuration variable. Could it be that I > accidentally triggered some sort of DOS protection mechanism? By default there are no anti-DoS protections are activated in nginx. If you've activated some - it may be the reason. Looking into error may shed some light on the issue. Alternatively, it may be some other limits, e.g. your firewall ones. Maxim Dounin From mdounin at mdounin.ru Fri Nov 18 14:16:12 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Nov 2011 18:16:12 +0400 Subject: How to set $ into a variable? In-Reply-To: <0c871be943c329ca4015f2c36d8b6b48.NginxMailingListEnglish@forum.nginx.org> References: <0c871be943c329ca4015f2c36d8b6b48.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111118141612.GD95664@mdounin.ru> Hello! On Thu, Nov 17, 2011 at 10:06:40PM -0500, bigplum wrote: > nginx.conf: > location / { > .... > set $var "$uri=200k" > .... > > and run "curl localhost/" > > The module use ngx_http_get_indexed_variable() for $var will get the > string "/=200k", but I want the value is "$uri=200k". > > So any escape character supported in nginx.conf? Right now there are no way to escape $ in arguments which support variables. This is a bug. Possible workaround is to use some variable defined with module which doesn't support variables, e.g. you may do so with geo: geo $dollar { default "$"; } ... set $var "${dollar}uri=200k" ... Maxim Dounin From lists at ruby-forum.com Fri Nov 18 14:27:44 2011 From: lists at ruby-forum.com (Noah C.) Date: Fri, 18 Nov 2011 15:27:44 +0100 Subject: DNS TTLs being ignored In-Reply-To: References: Message-ID: <21d1c4e19dc65e2ae861c2506d937a30@ruby-forum.com> That's great news. Thank you very much. I'll be sure to get hold of it as soon as 1.1.9 is released. --Noah -- Posted via http://www.ruby-forum.com/. 
From nginx-forum at nginx.us Fri Nov 18 14:39:05 2011
From: nginx-forum at nginx.us (janedenone)
Date: Fri, 18 Nov 2011 09:39:05 -0500
Subject: Conflict: index and try_files
Message-ID: <12b16bcca767bd5986409e54ad6bad9c.NginxMailingListEnglish@forum.nginx.org>

In the following simple configuration

server {
    server_name testsite.static;
    root /some/path/;
    index blog.html;
    try_files $uri $uri.html =404;
}

the try_files directive seems to interfere with the index directive: I
always get 404 for http://testsite.static/. If I remove the fallback
part of try_files, the same request results in an error:

*990 rewrite or internal redirection cycle while internal redirect to
"/.html.html.html....

The file blog.html is present in /some/path/, so I really do not
understand why the file is not found/served for the above URL.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218562,218562#msg-218562

From nginx-forum at nginx.us Fri Nov 18 14:41:40 2011
From: nginx-forum at nginx.us (janedenone)
Date: Fri, 18 Nov 2011 09:41:40 -0500
Subject: Using ab to benchmark nginx: Connection reset by peer (54)
In-Reply-To: <20111118141032.GC95664@mdounin.ru>
References: <20111118141032.GC95664@mdounin.ru>
Message-ID:

Thanks - there is no firewall (the test is run on my local machine), and
the error.log shows nothing. I increased the log level, only to see
notifications about exiting worker and cache processes.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218554,218563#msg-218563

From igor at sysoev.ru Fri Nov 18 14:42:02 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Fri, 18 Nov 2011 18:42:02 +0400
Subject: Conflict: index and try_files
In-Reply-To: <12b16bcca767bd5986409e54ad6bad9c.NginxMailingListEnglish@forum.nginx.org>
References: <12b16bcca767bd5986409e54ad6bad9c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20111118144202.GA47997@nginx.com>

On Fri, Nov 18, 2011 at 09:39:05AM -0500, janedenone wrote:
> In the following simple configuration
>
> server {
>     server_name testsite.static;
>     root /some/path/;
>     index blog.html;
>     try_files $uri $uri.html =404;
> }
>
> the try_files directive seems to interfere with the index directive: I
> always get 404 for http://testsite.static/. If I remove the fallback
> part of try_files, the same request results in an error:
>
> *990 rewrite or internal redirection cycle while internal redirect to
> "/.html.html.html....
>
> The file blog.html is present in /some/path/, so I really do not
> understand why the file is not found/served for the above URL.

try_files $uri $uri/ $uri.html =404;

--
Igor Sysoev

From nginx-forum at nginx.us Fri Nov 18 14:46:45 2011
From: nginx-forum at nginx.us (janedenone)
Date: Fri, 18 Nov 2011 09:46:45 -0500
Subject: Conflict: index and try_files
In-Reply-To: <20111118144202.GA47997@nginx.com>
References: <20111118144202.GA47997@nginx.com>
Message-ID: <0ebad7d446696b257a008755595ce0cf.NginxMailingListEnglish@forum.nginx.org>

Excellent, this works. Thanks a lot.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218562,218567#msg-218567

From mdounin at mdounin.ru Fri Nov 18 14:50:27 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 18 Nov 2011 18:50:27 +0400
Subject: Conflict: index and try_files
In-Reply-To: <12b16bcca767bd5986409e54ad6bad9c.NginxMailingListEnglish@forum.nginx.org>
References: <12b16bcca767bd5986409e54ad6bad9c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20111118145027.GG95664@mdounin.ru>

Hello!
On Fri, Nov 18, 2011 at 09:39:05AM -0500, janedenone wrote: > In the following simple configuration > > server { > server_name testsite.static; > root /some/path/; > index blog.html; > try_files $uri $uri.html =404; > } > > the try_files directive seems to interfere with the index directive: I > always get 404 for http://testsite.static/. If you want try_files to test directories as well (and to allow index directive to work), you have to explicitly specify "/" at the end of name. I.e. use something like this: try_files $uri/ $uri $uri.html =404; > If I remove the fallback > part of try_files, the same request results in an error: If you remove "=404" from the above try_files, it will become try_files $uri $uri.html; i.e. if no file $uri found, it will redirect to $uri.html (/.html) in your case. In it's turn it won't be found and will again redirect to $uri.html (/.html.html now). The > *990 rewrite or internal redirection cycle while internal redirect to > "/.html.html.html.... is expected. > The file blog.html is present in /some/path/, so I really do not > understand why the file is not found/served for the above URL. That's because it tests for files, and rejects directories by default. Test for directories must by explicitly requested. See here for more information: http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files Maxim Dounin From agentzh at gmail.com Fri Nov 18 15:25:56 2011 From: agentzh at gmail.com (agentzh) Date: Fri, 18 Nov 2011 23:25:56 +0800 Subject: Using ab to benchmark nginx: Connection reset by peer (54) In-Reply-To: References: Message-ID: On Fri, Nov 18, 2011 at 7:55 PM, janedenone wrote: > I recently updated to nginx 1.0.8 and tried to benchmark performance for > cached dynamic pages (initially served by a Django app via proxy_pass) > and for static pages. In both cases, nginx will not serve more than 3 or > 4 requests (even without concurrent connections), so ab almost > immediately reports: > > Benchmarking testsite.static (be patient)...apr_socket_recv: Connection > reset by peer (54) > What kind of system are you in? ab is known to have such issues in certain Mac OS X boxes to my knowledge :) Regards, -agentzh From nginx-forum at nginx.us Fri Nov 18 15:36:25 2011 From: nginx-forum at nginx.us (janedenone) Date: Fri, 18 Nov 2011 10:36:25 -0500 Subject: Using ab to benchmark nginx: Connection reset by peer (54) In-Reply-To: References: Message-ID: <91ffb52587e559a9a14884aa20c25e4a.NginxMailingListEnglish@forum.nginx.org> agentzh Wrote: > What kind of system are you in? ab is known to > have such issues in > certain Mac OS X boxes to my knowledge :) Well ? Mac OS X. :) Are these issues to the ab version that comes with OS X, or are they related to the OS in general? In the former case, I'd just recompile ab manually. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218554,218571#msg-218571 From agentzh at gmail.com Fri Nov 18 15:41:09 2011 From: agentzh at gmail.com (agentzh) Date: Fri, 18 Nov 2011 23:41:09 +0800 Subject: Using ab to benchmark nginx: Connection reset by peer (54) In-Reply-To: <91ffb52587e559a9a14884aa20c25e4a.NginxMailingListEnglish@forum.nginx.org> References: <91ffb52587e559a9a14884aa20c25e4a.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Fri, Nov 18, 2011 at 11:36 PM, janedenone wrote: > agentzh Wrote: >> What kind of system are you in? ab is known to >> have such issues in >> certain Mac OS X boxes to my knowledge :) > > Well ? Mac OS X. 
:) Are these issues to the ab version that comes with > OS X, or are they related to the OS in general? In the former case, I'd > just recompile ab manually. > On Mac OS X 10.6, I have to use "127.0.0.1" instead of "localhost" while running the ab command, or I'll get exactly the same error message. And for 10.7 (Lion), people say they have to apply a patch to ab to get it work. Hope this helps, -agentzh From nginx-forum at nginx.us Fri Nov 18 15:57:04 2011 From: nginx-forum at nginx.us (ceh329) Date: Fri, 18 Nov 2011 10:57:04 -0500 Subject: rewrite rule that alters content results Message-ID: <61d14ebe4c4d548478be71f47354c6cf.NginxMailingListEnglish@forum.nginx.org> Hello, Is there a rewrite statement that I can use to rewrite the html content that is being sent back to the client? Some of the content has urls in it and they have the wrong port I would like to change it as it's being sent back to the client. Thanks, Charles Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218573,218573#msg-218573 From brian at akins.org Fri Nov 18 16:38:30 2011 From: brian at akins.org (Brian Akins) Date: Fri, 18 Nov 2011 11:38:30 -0500 Subject: Filter_cache module released Message-ID: https://github.com/bakins/ngx_http_filter_cache Tested with up to nginx 1.0.6 Note: this currently doesn't work when the ctx gets reset on named locations. The quickest fix I have is to patch the request struct to add a filter_cache pointer. Filter cache caches full responses after the results of filters such as SSI and gzip. It was the first nginx module we ever wrote and it grew organically, so the code is krufty. --Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 18 16:50:53 2011 From: nginx-forum at nginx.us (dbanks) Date: Fri, 18 Nov 2011 11:50:53 -0500 Subject: gzip - unexplained side effects In-Reply-To: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> References: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4ae63147b1d2837011736899194e07a6.NginxMailingListEnglish@forum.nginx.org> I spent more time testing this particular issue. I believe that what appeared to be lost traffic is simply due to the shortened keepalives and the load balancer favoring keepalive connections over new connections. However, there still seems to be a link between gzip settings and the number of open connections (keepalives). gzip on; gzip_comp_level 1; gzip_types text/javascript text/plain application/x-javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)" #gzip_buffers 64 4k; #gzip_min_length 1100; #if it fits in one packet, no worries #gzip_http_version 1.1; I have confirmed that if any one of the three config lines at the bottom (the ones that are commented out) are present, the number of open connections drops from ~7600 to ~1300. Removing that line from the config and reloading restores normal operation. Strange. Cheers, Dean Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218044,218575#msg-218575 From mdounin at mdounin.ru Fri Nov 18 17:40:06 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Nov 2011 21:40:06 +0400 Subject: gzip - unexplained side effects In-Reply-To: <4ae63147b1d2837011736899194e07a6.NginxMailingListEnglish@forum.nginx.org> References: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> <4ae63147b1d2837011736899194e07a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20111118174006.GH95664@mdounin.ru> Hello! 
On Fri, Nov 18, 2011 at 11:50:53AM -0500, dbanks wrote: > I spent more time testing this particular issue. I believe that what > appeared to be lost traffic is simply due to the shortened keepalives > and the load balancer favoring keepalive connections over new > connections. However, there still seems to be a link between gzip > settings and the number of open connections (keepalives). > > gzip on; > gzip_comp_level 1; > gzip_types text/javascript text/plain application/x-javascript; > gzip_disable "MSIE [1-6]\.(?!.*SV1)" It looks like you've missed ";" here. This is probably a reason for all "strange" effects you observe - missing ";" causes "gzip_disable" to eat next directive in config. > #gzip_buffers 64 4k; > #gzip_min_length 1100; #if it fits in one packet, no worries > #gzip_http_version 1.1; > > I have confirmed that if any one of the three config lines at the bottom > (the ones that are commented out) are present, the number of open > connections drops from ~7600 to ~1300. Removing that line from the > config and reloading restores normal operation. Strange. Maxim Dounin From nginx-forum at nginx.us Fri Nov 18 17:44:55 2011 From: nginx-forum at nginx.us (ceh329) Date: Fri, 18 Nov 2011 12:44:55 -0500 Subject: Alter Config On Startup In-Reply-To: References: Message-ID: <8b3cfc0ca735733f4c1530570821f181.NginxMailingListEnglish@forum.nginx.org> Thanks, was not what I was hoping for but I'll build it in to a startup script. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218180,218577#msg-218577 From nginx-forum at nginx.us Fri Nov 18 19:00:44 2011 From: nginx-forum at nginx.us (dbanks) Date: Fri, 18 Nov 2011 14:00:44 -0500 Subject: gzip - unexplained side effects In-Reply-To: <4ae63147b1d2837011736899194e07a6.NginxMailingListEnglish@forum.nginx.org> References: <48ec591068896e53321f58945d5a8836.NginxMailingListEnglish@forum.nginx.org> <4ae63147b1d2837011736899194e07a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <745955042a91e67e21ea83f48d8c74db.NginxMailingListEnglish@forum.nginx.org> Hello Maxim! Mystery solved. I cannot believe that I missed that! Cheers, Dean Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218044,218580#msg-218580 From nginx-forum at nginx.us Fri Nov 18 23:31:51 2011 From: nginx-forum at nginx.us (Salem) Date: Fri, 18 Nov 2011 18:31:51 -0500 Subject: php and urls with /?xxx Message-ID: Ahoi, at the moment i use a "location \.php {}" block for opening php-files through fastcgi_pass. But what's the right way for calls like www.xxx.xxx/?xxx=xxx ? For Wordpress-Urls like /index.php/xxx/ try_files $uri $uri/ /index.php?q=$request_uri; works fine, but for /? Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,218588,218588#msg-218588 From appa at perusio.net Sat Nov 19 01:38:13 2011 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sat, 19 Nov 2011 01:38:13 +0000 Subject: php and urls with /?xxx In-Reply-To: References: Message-ID: <87vcqgriwq.wl%appa@perusio.net> On 18 Nov 2011 23h31 WET, nginx-forum at nginx.us wrote: > Ahoi, > > at the moment i use a "location \.php {}" block for opening > php-files through fastcgi_pass. But what's the right way for calls > like www.xxx.xxx/?xxx=xxx ? > > For Wordpress-Urls like /index.php/xxx/ > try_files $uri $uri/ /index.php?q=$request_uri; > works fine, but for /? Try this: ## Regular PHP processing. location ~ ^(?