From nginx-forum at nginx.us Sat Feb 1 05:01:17 2014
From: nginx-forum at nginx.us (DamienR)
Date: Sat, 01 Feb 2014 00:01:17 -0500
Subject: "Zero size buf" Error on file upload
Message-ID:

Hi,

I have an nginx server doing ffmpeg video conversions. We've noticed that if we upload a file of around 100 MB it won't upload, and nginx throws a "zero size buf" error in its log; the other logs don't capture anything.

2014/01/31 22:18:49 [alert] 27455#0: *90757882 zero size buf in output t:0 r:0 f:1 0000000000000000 0000000000000000-0000000000000000 00000000023B4E68 0-0 while sending request to upstream, client: 120.xxx.x.51, server: www.video.domain.com, request: "POST /ajax/language HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "video.domain.com", referrer: "http://video.domain.com/upload/video"

This also happens with the WordPress install on the same server:

2014/01/31 22:23:26 [alert] 27448#0: *90768658 zero size buf in output t:0 r:0 f:1 0000000000000000 0000000000000000-0000000000000000 00000000024DB4D8 0-0 while sending request to upstream, client: xxx.61.14.155, server: domain.com.au, request: "POST /wp-cron.php?doing_wp_cron=1391225006.5313498973846435546875 HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com.au"

Build:

nginx version: nginx/1.4.4
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
TLS SNI support enabled
configure arguments: --sbin-path=/usr/local/sbin --with-http_mp4_module --with-http_flv_module --with-http_gzip_static_module --with-http_gzip_static_module --with-http_ssl_module --with-http_spdy_module --with-http_secure_link_module --with-http_ssl_module

My worry is that it's a configuration error. We've had our site admin edit our config as shown below, but I see no change; we still have the large-file upload problem and the "zero size buf" error.
client_body_buffer_size 200m;
client_max_body_size 300m;
connection_pool_size 256;
client_header_timeout 10m;
client_body_timeout 10m;
send_timeout 10m;
keepalive_timeout 10;

# FastCGI
fastcgi_buffer_size 256k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_max_temp_file_size 0;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_connect_timeout 120;
fastcgi_index index.php;

Has anyone had this error? I can't find much about it here.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247069,247069#msg-247069

From nginx-forum at nginx.us Sat Feb 1 07:38:49 2014
From: nginx-forum at nginx.us (Larry)
Date: Sat, 01 Feb 2014 02:38:49 -0500
Subject: nginx -> Dns server ?
In-Reply-To:
References:
Message-ID: <6889b2a2758878ef4b8abe29325aaffa.NginxMailingListEnglish@forum.nginx.org>

Maybe this will do it: https://github.com/agentzh/lua-resty-dns

Anyone?

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247062,247071#msg-247071

From contact at jpluscplusm.com Sat Feb 1 10:48:30 2014
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sat, 1 Feb 2014 10:48:30 +0000
Subject: nginx -> Dns server ?
In-Reply-To: <3d3d98b1ab254094cb2a753909110440.NginxMailingListEnglish@forum.nginx.org>
References: <3d3d98b1ab254094cb2a753909110440.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On 31 January 2014 20:35, Larry wrote:
> I just read that nginx has a resolver.
> Will it be able to replace our powerdns, which just handles the basic DNS
> stuff? (lookup + ttl as usual)

No.

On 1 February 2014 07:38, Larry wrote:
> Maybe this will make it :
> https://github.com/agentzh/lua-resty-dns
> anyone ?

No.

From richard at kearsley.me Sat Feb 1 12:17:45 2014
From: richard at kearsley.me (Richard Kearsley)
Date: Sat, 01 Feb 2014 12:17:45 +0000
Subject: nginx -> Dns server ?
In-Reply-To:
References: <3d3d98b1ab254094cb2a753909110440.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <52ECE5E9.8010402@kearsley.me>

On 01/02/14 10:48, Jonathan Matthews wrote:
> No.
> No.

He's right, but this can make PowerDNS a little more bearable: https://github.com/fredan/luabackend

From nginx-forum at nginx.us Sat Feb 1 12:56:54 2014
From: nginx-forum at nginx.us (Larry)
Date: Sat, 01 Feb 2014 07:56:54 -0500
Subject: nginx -> Dns server ?
In-Reply-To:
References:
Message-ID:

:) Thank you all

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247062,247076#msg-247076

From nginx-forum at nginx.us Sat Feb 1 14:32:10 2014
From: nginx-forum at nginx.us (richardm)
Date: Sat, 01 Feb 2014 09:32:10 -0500
Subject: a NGINX Summit 2/25 in San Francisco
In-Reply-To: <4B6A427B-3D33-480A-BF91-59D13C5A7280@nginx.com>
References: <4B6A427B-3D33-480A-BF91-59D13C5A7280@nginx.com>
Message-ID: <6da5e63dd7915b3ab8e1a586ddbf5fb3.NginxMailingListEnglish@forum.nginx.org>

sarahnovotny Wrote:
-------------------------------------------------------
> Hello all!
>
> I'd like to invite you to join the Nginx, Inc. team for our first User
> Summit February 25th at Dogpatch Studios in San Francisco.
>
> The highlights include 2 formal presentations by the NGINX FOSS
> project and Nginx, Inc. founder, Igor Sysoev, and well known module
> developer in the NGINX ecosystem, Yichun Zhang (@agentzh). And, we're
> soliciting lightning talks from the community!
> . . . . .

Is there any chance that these talks will be captured on video and made available to those of us who cannot attend? I know that is not as good as being there, and I'd travel if I could, but it's not possible. The main speakers are so important to nginx that we'd all love to hear what they say.

Thanks.
Richard

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246971,247077#msg-247077

From lists at ruby-forum.com Sat Feb 1 19:25:43 2014
From: lists at ruby-forum.com (Anth Anth)
Date: Sat, 01 Feb 2014 20:25:43 +0100
Subject: Launching Excel in a production web server on a Mac
In-Reply-To:
References:
Message-ID:

Thanks, Scott! Awesome info.

I'm running under root because that was the easy way to bind to port 80. I'm guessing that if I run nginx under my user account I'd have to forward a higher port (8080 or whatever) to port 80 using iptables (or whatever the OS X equivalent is)?

--
Posted via http://www.ruby-forum.com/.

From scott_ribe at elevated-dev.com Sun Feb 2 00:27:32 2014
From: scott_ribe at elevated-dev.com (Scott Ribe)
Date: Sat, 1 Feb 2014 17:27:32 -0700
Subject: Launching Excel in a production web server on a Mac
In-Reply-To:
References:
Message-ID:

On Feb 1, 2014, at 12:25 PM, Anth Anth wrote:
> I'm running under root because that was the easy way to bind to port 80.
> I'm guessing that if I run nginx under my user account I'd have to forward a
> higher port (8080 or whatever) to port 80 using iptables (or whatever
> the OS X equivalent is)?

IIRC, one of your later messages made it clear that you do not, in fact, need nginx to communicate with Excel. Rather, you need your application server to do so. I forget whether you said Passenger or Unicorn, but either way *that* process needs to run as the logged-in user, and that's simpler since that process only needs to communicate via local sockets.
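[Editor's note: the OS X analogue of an iptables REDIRECT asked about above is a pf "rdr" rule (older releases used ipfw). A minimal sketch, assuming nginx listens on port 8080 as an unprivileged user and that en0 is the public interface and the anchor file path is ours to choose — all assumptions, not from the thread:]

```
# Hypothetical anchor file, e.g. /etc/pf.anchors/http-redirect:
# redirect inbound TCP port 80 on en0 to 127.0.0.1:8080
rdr pass on en0 inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080

# Load the rules and enable pf (root is needed once for pfctl,
# but no longer for nginx itself):
#   sudo pfctl -f /etc/pf.anchors/http-redirect -e
```

[With something like this in place, nginx can bind 8080 as a normal user while clients still reach it on port 80.]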
--
Scott Ribe
scott_ribe at elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice

From sarah at nginx.com Sun Feb 2 14:13:12 2014
From: sarah at nginx.com (Sarah Novotny)
Date: Sun, 2 Feb 2014 06:13:12 -0800
Subject: a NGINX Summit 2/25 in San Francisco
In-Reply-To: <6da5e63dd7915b3ab8e1a586ddbf5fb3.NginxMailingListEnglish@forum.nginx.org>
References: <4B6A427B-3D33-480A-BF91-59D13C5A7280@nginx.com> <6da5e63dd7915b3ab8e1a586ddbf5fb3.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi Richard

On Feb 1, 2014, at 6:32 AM, richardm wrote:
> sarahnovotny Wrote:
> -------------------------------------------------------
>> Hello all!
>>
>> I'd like to invite you to join the Nginx, Inc. team for our first User
>> Summit February 25th at Dogpatch Studios in San Francisco.
>>
>> The highlights include 2 formal presentations by the NGINX FOSS
>> project and Nginx, Inc. founder, Igor Sysoev, and well known module
>> developer in the NGINX ecosystem, Yichun Zhang (@agentzh). And, we're
>> soliciting lightning talks from the community!
>> . . . . .
>
> Is there any chance that these talks will be captured on video and made
> available to those of us who cannot attend? I know that is not as good as
> being there, and I'd travel if I could, but it's not possible. The main
> speakers are so important to nginx that we'd all love to hear what they say.

We are planning to make videos of the talks by Igor and Yichun. But you're correct, the experience will be more robust locally.

sarah

From strattonbrazil at gmail.com Sun Feb 2 17:14:03 2014
From: strattonbrazil at gmail.com (Josh Stratton)
Date: Sun, 2 Feb 2014 09:14:03 -0800
Subject: server blocks configured, but getting "hello world" of nginx
Message-ID:

I've followed the tutorial below to set up a couple of server blocks, but I get the "Welcome to nginx" message every time.
https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3

$ ls -l /etc/nginx/sites-available/
total 8
-rw-r--r-- 1 root root 1185 Feb 2 17:01 morebearsmore.com
-rw-r--r-- 1 root root 2744 Feb 2 17:07 strattonbrazil.com

$ ls -l /etc/nginx/sites-enabled/
total 0
lrwxrwxrwx 1 root root 44 Feb 2 17:03 morebearsmore.com -> /etc/nginx/sites-available/morebearsmore.com
lrwxrwxrwx 1 root root 45 Feb 2 16:44 strattonbrazil.com -> /etc/nginx/sites-available/strattonbrazil.com

This is the contents of one of the configs (minus the comments at the top).

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/morebearsmore.com/public_html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name morebearsmore.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }
}

I've added a "hello world" index file to that directory, too.

$ ls -l /var/www/strattonbrazil.com/public_html/index.html
-rw-r--r-- 1 root root 148 Feb 2 16:41 /var/www/strattonbrazil.com/public_html/index.html

$ cat /var/www/strattonbrazil.com/public_html/index.html
www.strattonbrazil.com

Success: You Have Set Up a Virtual Host

But again, every time I get the same welcome message. Here's the access log for hitting morebearsmore.com a few times from my web browser. I don't see any messages in the error log.

71.217.116.55 - - [02/Feb/2014:17:13:57 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/31.0.1650.63 Chrome/31.0.1650.63 Safari/537.36"
71.217.116.55 - - [02/Feb/2014:17:14:00 +0000] "-" 400 0 "-" "-"
71.217.116.55 - - [02/Feb/2014:17:14:07 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/31.0.1650.63 Chrome/31.0.1650.63 Safari/537.36"

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Mon Feb 3 13:55:34 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 3 Feb 2014 17:55:34 +0400
Subject: "Zero size buf" Error on file upload
In-Reply-To:
References:
Message-ID: <20140203135534.GM1835@mdounin.ru>

Hello!

On Sat, Feb 01, 2014 at 12:01:17AM -0500, DamienR wrote:

> Hi,
>
> I have an nginx server doing ffmpeg video conversions. We've noticed that if we
> upload a file of around 100 MB it won't upload, and nginx throws a "zero size buf"
> error in its log; the other logs don't capture anything.
>
> [...]

Please try the following:

1. Test if you see the problem in 1.5.9. At least one bug which may lead to "zero size buf in output" alerts has been fixed. While it shouldn't affect any real file uploads (it only happened on requests with a zero-sized body in some cases), testing with the latest version is a good idea anyway.

2. If spdy is actually used, try without it.

If it doesn't help, please provide the full configuration and a debug log of a failed request; see http://wiki.nginx.org/Debugging.

--
Maxim Dounin
http://nginx.org/

From vbart at nginx.com Mon Feb 3 16:53:19 2014 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Mon, 03 Feb 2014 20:53:19 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: Message-ID: <2532760.aRXVgSKg0I@vbart-laptop> On Sunday 02 February 2014 09:14:03 Josh Stratton wrote: > I've followed the tutorial below to setup a couple of server blocks, but I > get the "Welcome to nginx" message every time. > > https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3 > > $ ls -l /etc/nginx/sites-available/ > total 8 > -rw-r--r-- 1 root root 1185 Feb 2 17:01 morebearsmore.com > -rw-r--r-- 1 root root 2744 Feb 2 17:07 strattonbrazil.com > > $ ls -l /etc/nginx/sites-enabled/ > total 0 > lrwxrwxrwx 1 root root 44 Feb 2 17:03 morebearsmore.com -> > /etc/nginx/sites-available/morebearsmore.com > lrwxrwxrwx 1 root root 45 Feb 2 16:44 strattonbrazil.com -> > /etc/nginx/sites-available/strattonbrazil.com > > This is the contents of more of the configs (minus the comments at the > top). [..] What's in your nginx.conf? wbr, Valentin V. Bartenev From strattonbrazil at gmail.com Mon Feb 3 17:13:24 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 3 Feb 2014 09:13:24 -0800 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: <2532760.aRXVgSKg0I@vbart-laptop> References: <2532760.aRXVgSKg0I@vbart-laptop> Message-ID: This is my nginx.conf page, which I haven't done anything with. The /etc/nginx/conf.d/ directory on my machine is empty. 
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

On Mon, Feb 3, 2014 at 8:53 AM, Valentin V. Bartenev wrote:
> On Sunday 02 February 2014 09:14:03 Josh Stratton wrote:
> > I've followed the tutorial below to setup a couple of server blocks, but I
> > get the "Welcome to nginx" message every time.
> > > > > https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3 > > > > $ ls -l /etc/nginx/sites-available/ > > total 8 > > -rw-r--r-- 1 root root 1185 Feb 2 17:01 morebearsmore.com > > -rw-r--r-- 1 root root 2744 Feb 2 17:07 strattonbrazil.com > > > > $ ls -l /etc/nginx/sites-enabled/ > > total 0 > > lrwxrwxrwx 1 root root 44 Feb 2 17:03 morebearsmore.com -> > > /etc/nginx/sites-available/morebearsmore.com > > lrwxrwxrwx 1 root root 45 Feb 2 16:44 strattonbrazil.com -> > > /etc/nginx/sites-available/strattonbrazil.com > > > > This is the contents of more of the configs (minus the comments at the > > top). > [..] > > What's in your nginx.conf? > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 3 17:23:10 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 03 Feb 2014 21:23:10 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: <2532760.aRXVgSKg0I@vbart-laptop> Message-ID: <1673263.qDiUuRvmdB@vbart-laptop> On Monday 03 February 2014 09:13:24 Josh Stratton wrote: > This is my nginx.conf page, which I haven't done anything with. The > /etc/nginx/conf.d/ directory on my machine is empty. [..] > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > } > Ok. Did you reload nginx after the configuration was added to "sites-enabled"? wbr, Valentin V. Bartenev From vbart at nginx.com Mon Feb 3 18:01:12 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Mon, 03 Feb 2014 22:01:12 +0400 Subject: SSL behaviour with multiple server blocks for same port In-Reply-To: <7A091C14-06D1-4FFE-847B-E3710B1B1C4F@postfach.slogh.com> References: <20140123114904.GW1835@mdounin.ru> <7A091C14-06D1-4FFE-847B-E3710B1B1C4F@postfach.slogh.com> Message-ID: <2070392.kD6pay9ZMb@vbart-laptop> On Thursday 23 January 2014 15:54:58 Alex wrote: > > On Thu, Jan 23, 2014 at 11:17:42AM +0000, Pankaj Mehta wrote: > > Hi, > > > These blocks have different ssl certificates. I understand that if I enable > > SNI in nginx and the client supports it, then we have a predictable > > behaviour where nginx will use the correct ssl parameters from the server > > block corresponding to that hostname. But I have no idea which ssl config > > One thing I became painfully aware of last time is that when you use > SSL-enabled server blocks with SNI, a listen directive from one block > may overwrite the listen directive from another one. > > For example, when I have: > > server { > listen 443 ssl; > server_name www.host1.com; > > ... > } > > and > > server { > listen 443 ssl spdy; > server_name www.host2.com; > > ... > } > > Even though the listen directive for server block of www.host1.com does > not define SPDY, it accepts SPDY connections as well. In other words, if > you want to disable SPDY, you'd have to make sure that it doesn't appear > in any server block listen directive (assuming you're using SNI rather > than dedicated IPs). > > I am not sure if this behavior can be avoided. nginx advertises spdy/2 > via the NPN TLS extension. During the TLS handshake, would it be > possible to first parse the hostname the client is attempting to connect > (SNI), and only then decide whether to advertise SPDY via NPN or not > depending on the hostname's listen directive? > Sorry for the late answer. 
This behavior cannot be avoided: even if you do not advertise SPDY via NPN/ALPN for some virtual hosts but do for others, browsers will still be able to request any of them over an already established SPDY connection.

wbr, Valentin V. Bartenev

From strattonbrazil at gmail.com Mon Feb 3 18:42:43 2014
From: strattonbrazil at gmail.com (Josh Stratton)
Date: Mon, 3 Feb 2014 10:42:43 -0800
Subject: server blocks configured, but getting "hello world" of nginx
In-Reply-To:
References: <2532760.aRXVgSKg0I@vbart-laptop>
Message-ID:

I think I have everything working as expected. The only thing that's still strange to me is that when I go to the morebearsmore.com domain with "www" prefixed to it, it goes to the test html file in the other server block. I had this problem in apache, so I switched to nginx and I'm still seeing it. I tried to set up both server blocks at the same time. Why would www.morebearsmore.com go to my strattonbrazil.com directory while plain morebearsmore.com goes to the correct directory? I figured with a fresh install of nginx, I would see it "default" to one or the other. Is strattonbrazil.com just happening to be the fallback?

On Mon, Feb 3, 2014 at 9:13 AM, Josh Stratton wrote:
> This is my nginx.conf page, which I haven't done anything with. The
> /etc/nginx/conf.d/ directory on my machine is empty.
> [...]
>
> On Mon, Feb 3, 2014 at 8:53 AM, Valentin V. Bartenev wrote:
>
>> On Sunday 02 February 2014 09:14:03 Josh Stratton wrote:
>> > I've followed the tutorial below to setup a couple of server blocks, but I
>> > get the "Welcome to nginx" message every time.
>> > [...]
>> [..]
>>
>> What's in your nginx.conf?
>>
>> wbr, Valentin V. Bartenev
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From strattonbrazil at gmail.com Mon Feb 3 18:52:29 2014
From: strattonbrazil at gmail.com (Josh Stratton)
Date: Mon, 3 Feb 2014 10:52:29 -0800
Subject: server blocks configured, but getting "hello world" of nginx
In-Reply-To:
References: <2532760.aRXVgSKg0I@vbart-laptop>
Message-ID:

Nevermind. I found the answer here that fixed it. I'm redirecting away from www now. Still don't understand why it fell back to the other server block.

http://stackoverflow.com/questions/9951827/www-in-domain-not-working-in-nginx

server {
    server_name www.morebearsmore.com;
    return 301 http://morebearsmore.com$request_uri;
}

On Mon, Feb 3, 2014 at 10:42 AM, Josh Stratton wrote:
> I think I have everything working as expected. The only thing that's
> still strange to me is when I go to the morebearsmore.com domain with
> "www" prefixed to it, it goes to the test html file in the other server
> block. I had this problem in apache, so I switched to nginx and I'm still
> seeing it.
I tried to setup both server blocks at the same time. Why
> would www.morebearsmore.com go to my strattonbrazil.com directory while
> the other morebearsmore.com goes to the correct directory? I figured
> with a fresh install of nginx, I would see it "default" to one or the
> other. Is strattonbrazil.com just happening to be the fallback?
>
> On Mon, Feb 3, 2014 at 9:13 AM, Josh Stratton wrote:
>
>> This is my nginx.conf page, which I haven't done anything with. The
>> /etc/nginx/conf.d/ directory on my machine is empty.
>>
>> [...]
>>
>> On Mon, Feb 3, 2014 at 8:53 AM, Valentin V.
Bartenev wrote: >> >>> On Sunday 02 February 2014 09:14:03 Josh Stratton wrote: >>> > I've followed the tutorial below to setup a couple of server blocks, >>> but I >>> > get the "Welcome to nginx" message every time. >>> > >>> > >>> https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3 >>> > >>> > $ ls -l /etc/nginx/sites-available/ >>> > total 8 >>> > -rw-r--r-- 1 root root 1185 Feb 2 17:01 morebearsmore.com >>> > -rw-r--r-- 1 root root 2744 Feb 2 17:07 strattonbrazil.com >>> > >>> > $ ls -l /etc/nginx/sites-enabled/ >>> > total 0 >>> > lrwxrwxrwx 1 root root 44 Feb 2 17:03 morebearsmore.com -> >>> > /etc/nginx/sites-available/morebearsmore.com >>> > lrwxrwxrwx 1 root root 45 Feb 2 16:44 strattonbrazil.com -> >>> > /etc/nginx/sites-available/strattonbrazil.com >>> > >>> > This is the contents of more of the configs (minus the comments at the >>> > top). >>> [..] >>> >>> What's in your nginx.conf? >>> >>> wbr, Valentin V. Bartenev >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strattonbrazil at gmail.com Mon Feb 3 18:55:22 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 3 Feb 2014 10:55:22 -0800 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: <2532760.aRXVgSKg0I@vbart-laptop> Message-ID: That's strange. It only fixed it on my desktop. It still goes to the strattonbrazil.com site when I type in www.morebearsmore.com on my phone, which was the original problem. Is the phone doing some kind of caching? Why would this happen on a windows phone and iphone with nginx (and apache when I tried it) but not my desktop? On Mon, Feb 3, 2014 at 10:52 AM, Josh Stratton wrote: > Nevermind. I found the answer here that fixed it. 
I'm redirecting from > www now. Still don't understand why it fell back to the other server > block. > > > http://stackoverflow.com/questions/9951827/www-in-domain-not-working-in-nginx > > server { > server_name www.morebearsmore.com; > return 301 http://morebearsmore.com$request_uri; > } > > > > On Mon, Feb 3, 2014 at 10:42 AM, Josh Stratton wrote: > >> I think I have everything working as expected. The only thing that's >> still strange to me is when I go to the morebearsmore.com domain with >> "www" prefixed to it, it goes to the test html file in the other server >> block. I had this problem in apache, so I switched to nginx and I'm still >> seeing it. I tried to setup both server blocks at the same time. Why >> would www.morebearsmore.com go to my strattonbrazil.com directory while >> the other morebearsmore.com goes to the correct directory? I figured >> with a fresh install of nginx, I would see it "default" to one or the >> other. Is strattonbrazil.com just happening to be the fallback? >> >> >> On Mon, Feb 3, 2014 at 9:13 AM, Josh Stratton wrote: >> >>> This is my nginx.conf page, which I haven't done anything with. The >>> /etc/nginx/conf.d/ directory on my machine is empty. 
>>> [...]
>>>
>>> On Mon, Feb 3, 2014 at 8:53 AM, Valentin V. Bartenev wrote:
>>>
>>>> On Sunday 02 February 2014 09:14:03 Josh Stratton wrote:
>>>> > I've followed the tutorial below to setup a couple of server blocks, but I
>>>> > get the "Welcome to nginx" message every time.
>>>> > [...]
>>>> [..]
>>>>
>>>> What's in your nginx.conf?
>>>>
>>>> wbr, Valentin V. Bartenev
>>>>
>>>> _______________________________________________
>>>> nginx mailing list
>>>> nginx at nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From strattonbrazil at gmail.com Mon Feb 3 18:57:33 2014
From: strattonbrazil at gmail.com (Josh Stratton)
Date: Mon, 3 Feb 2014 10:57:33 -0800
Subject: server blocks configured, but getting "hello world" of nginx
In-Reply-To:
References: <2532760.aRXVgSKg0I@vbart-laptop>
Message-ID:

As a test, if I add a query string to see if it breaks the cache, it does work. Is this an ISP cache?

www.morebearsmore.com goes to the strattonbrazil.com server block
www.morebearsmore.com?foo=7 goes to the correct morebearsmore.com server block

On Mon, Feb 3, 2014 at 10:55 AM, Josh Stratton wrote:
> That's strange. It only fixed it on my desktop. It still goes to the
> strattonbrazil.com site when I type in www.morebearsmore.com on my phone,
> which was the original problem. Is the phone doing some kind of caching?
> Why would this happen on a windows phone and iphone with nginx (and apache
> when I tried it) but not my desktop?
> > > On Mon, Feb 3, 2014 at 10:52 AM, Josh Stratton wrote: > >> Nevermind. I found the answer here that fixed it. I'm redirecting from >> www now. Still don't understand why it fell back to the other server >> block. >> >> >> http://stackoverflow.com/questions/9951827/www-in-domain-not-working-in-nginx >> >> server { >> server_name www.morebearsmore.com; >> return 301 http://morebearsmore.com$request_uri; >> } >> >> >> >> On Mon, Feb 3, 2014 at 10:42 AM, Josh Stratton wrote: >> >>> I think I have everything working as expected. The only thing that's >>> still strange to me is when I go to the morebearsmore.com domain with >>> "www" prefixed to it, it goes to the test html file in the other server >>> block. I had this problem in apache, so I switched to nginx and I'm still >>> seeing it. I tried to setup both server blocks at the same time. Why >>> would www.morebearsmore.com go to my strattonbrazil.com directory while >>> the other morebearsmore.com goes to the correct directory? I figured >>> with a fresh install of nginx, I would see it "default" to one or the >>> other. Is strattonbrazil.com just happening to be the fallback? >>> >>> >>> On Mon, Feb 3, 2014 at 9:13 AM, Josh Stratton wrote: >>> >>>> This is my nginx.conf page, which I haven't done anything with. The >>>> /etc/nginx/conf.d/ directory on my machine is empty. 
>>>> >>>> [..] >>>> >>>> On Mon, Feb 3, 2014 at 8:53 AM, Valentin V. Bartenev wrote: >>>> >>>>> On Sunday 02 February 2014 09:14:03 Josh Stratton wrote: >>>>> > I've followed the tutorial below to setup a couple of server blocks, >>>>> but I >>>>> > get the "Welcome to nginx" message every time. >>>>> [..] >>>>> >>>>> What's in your nginx.conf? >>>>> >>>>> wbr, Valentin V. Bartenev -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 3 19:02:20 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 03 Feb 2014 23:02:20 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: Message-ID: <1443286.hK7F7P9N4q@vbart-laptop> On Monday 03 February 2014 10:57:33 Josh Stratton wrote: > As a test, if I add a querystring to see if it breaks the cache, it does > work. Is this an ISP cache? > > www.morebearsmore.com goes to strattonbrazil.com server block > www.morebearsmore.com?foo=7 goes to the correct morebearsmore.com server > block [..] More likely your browser cache. wbr, Valentin V.
Bartenev From alex at zeitgeist.se Mon Feb 3 19:03:52 2014 From: alex at zeitgeist.se (Alex) Date: Mon, 03 Feb 2014 20:03:52 +0100 Subject: SSL behaviour with multiple server blocks for same port In-Reply-To: <2070392.kD6pay9ZMb@vbart-laptop> References: <20140123114904.GW1835@mdounin.ru> <7A091C14-06D1-4FFE-847B-E3710B1B1C4F@postfach.slogh.com> <2070392.kD6pay9ZMb@vbart-laptop> Message-ID: <41FED718-0B29-491E-AE39-5B3AE8CCC574@postfach.slogh.com> On 2014-02-03 19:01, Valentin V. Bartenev wrote: > Sorry for the late answer. Hi! No problem at all! > This behavior cannot be avoided since even if you do not advertise SPDY via > NPN/ALPN for some virtual hosts, but do for another, then browsers still be > able to request any of them using an already established SPDY connection. That makes sense - thanks for the explanation. I guess it'd be easier in cases like these if nginx had some functionality to display the values of currently parsed config parameters (similar to postconf in postfix), which would allow to quickly detect possibly unwanted configurations (such as the SPDY parameter that was still present in certain NPN-based vhosts) and/or configurations that deviate from standard settings. Not that it's important... just in case we run out of ideas how to improve nginx further. ;) From alex at zeitgeist.se Mon Feb 3 19:04:51 2014 From: alex at zeitgeist.se (Alex) Date: Mon, 03 Feb 2014 20:04:51 +0100 Subject: a NGINX Summit 2/25 in San Francisco In-Reply-To: References: <4B6A427B-3D33-480A-BF91-59D13C5A7280@nginx.com> <6da5e63dd7915b3ab8e1a586ddbf5fb3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9508E609-E5B6-423D-8ED0-93DABF4D41AA@postfach.slogh.com> On 2014-02-02 15:13, Sarah Novotny wrote: Hi Sarah, > We are planning to make videos of the talks by Igor and Yichun. But, > you're correct the experience will be more robust locally. Excellent. Thanks! Alex From vbart at nginx.com Mon Feb 3 19:07:25 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Mon, 03 Feb 2014 23:07:25 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: Message-ID: <34789219.VPnigI4Tbc@vbart-laptop> On Monday 03 February 2014 10:52:29 Josh Stratton wrote: > Nevermind. I found the answer here that fixed it. I'm redirecting from > www now. Still don't understand why it fell back to the other server > block. [..] This article should shed some light: http://nginx.org/en/docs/http/server_names.html wbr, Valentin V. Bartenev From strattonbrazil at gmail.com Mon Feb 3 19:21:06 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 3 Feb 2014 11:21:06 -0800 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: <34789219.VPnigI4Tbc@vbart-laptop> References: <34789219.VPnigI4Tbc@vbart-laptop> Message-ID: How long is that cache kept? If it redirected to the other one, will it redirect on my phone indefinitely? I tried clearing my phone's settings and it still pulls up the other site's page--the old page, too, as I've changed the words. Is nginx saying this page is cacheable and thus not returning the new version because the browser uses the old one? > This article should shed some light: > http://nginx.org/en/docs/http/server_names.html Thanks for the link. That seems pretty clear, but how is nginx deriving the hostname? If I run `hostname` I get "home" back. I still don't understand why it fell back to the other one. On Mon, Feb 3, 2014 at 11:07 AM, Valentin V. Bartenev wrote: > On Monday 03 February 2014 10:52:29 Josh Stratton wrote: > > Nevermind. I found the answer here that fixed it. I'm redirecting from > > www now. Still don't understand why it fell back to the other server > > block. > [..] > > This article should shed some light: > http://nginx.org/en/docs/http/server_names.html > > wbr, Valentin V.
Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 3 19:43:36 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 03 Feb 2014 23:43:36 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: <34789219.VPnigI4Tbc@vbart-laptop> Message-ID: <1452291.A0ALDGoF0I@vbart-laptop> On Monday 03 February 2014 11:21:06 Josh Stratton wrote: > How long is that cache kept? It depends on browser if there were no cache-related headers in response.
> I tried clearly my phone's settings and it still pulls up the other site's > page--the old page, too, as I've changed the words. Is nginx saying this > page is cacheable and thus not returning the new version because the browser > uses the old one? By default nginx doesn't return any cache-related headers. If the browser has cached some page, it can show the page to user without requesting server. > > > This article should shed the light: > > http://nginx.org/en/docs/http/server_names.html > > Thanks for the link. That seems pretty clear, but how is nginx deriving > the hostname? If I run `hostname` I get "home" back. I still don't > understand why it fell back to the other one. If there is no server defined for a requested host, it falls back to that one with the "default_server" parameter in the listen directive for a specific address:port pair, or to the first one in the configuration if there is no such parameter. wbr, Valentin V. Bartenev From strattonbrazil at gmail.com Mon Feb 3 19:48:41 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 3 Feb 2014 11:48:41 -0800 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: <1452291.A0ALDGoF0I@vbart-laptop> References: <34789219.VPnigI4Tbc@vbart-laptop> <1452291.A0ALDGoF0I@vbart-laptop> Message-ID: > or to the first one in the configuration if there is no such parameter. As if all the server blocks are configured together? That sounds really strange to me, that one server block could be the default for another server block. # rgrep default_server /etc/nginx/ /etc/nginx/sites-available/strattonbrazil.com: #listen [::]:80 default_server ipv6only=on; /etc/nginx/sites-available/morebearsmore.com: #listen [::]:80 default_server ipv6only=on; On Mon, Feb 3, 2014 at 11:43 AM, Valentin V. Bartenev wrote: > On Monday 03 February 2014 11:21:06 Josh Stratton wrote: > > How long is that cache kept? > > It depends on browser if there were no cache-related headers in response. 
> > > If it redirected to the other one, will be redirect on my phone > indefinitely? > > I tried clearly my phone's settings and it still pulls up the other > site's > > page--the old page, too, as I've changed the words. Is nginx saying this > > page is cacheable and thus not returning the new version because the > browser > > uses the old one? > > By default nginx doesn't return any cache-related headers. If the browser > has > cached some page, it can show the page to user without requesting server. > > > > > > This article should shed the light: > > > http://nginx.org/en/docs/http/server_names.html > > > > Thanks for the link. That seems pretty clear, but how is nginx deriving > > the hostname? If I run `hostname` I get "home" back. I still don't > > understand why it fell back to the other one. > > If there is no server defined for a requested host, it falls back to that > one > with the "default_server" parameter in the listen directive for a specific > address:port pair, or to the first one in the configuration if there is no > such > parameter. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 3 20:12:20 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 04 Feb 2014 00:12:20 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: <1452291.A0ALDGoF0I@vbart-laptop> Message-ID: <2865941.MVtO23UYmD@vbart-laptop> On Monday 03 February 2014 11:48:41 Josh Stratton wrote: > > or to the first one in the configuration if there is no such > > parameter. > > As if all the server blocks are configured together? That sounds really > strange to me, that one server block could be the default for another > server block. 
They all are configured together since they share the same address:port pair. The only separation that server can make is based on requested host, but what if there is no matches among configured server names? Then nginx picks one based on a simple rule I already mentioned. And there is no magic in placing them in different configuration files (in fact, that "sites-available" thing is just a debian way of splitting web server configuration), they could be configured in one file as well. > > # rgrep default_server /etc/nginx/ > /etc/nginx/sites-available/strattonbrazil.com: #listen [::]:80 > default_server ipv6only=on; > /etc/nginx/sites-available/morebearsmore.com: #listen [::]:80 > default_server ipv6only=on; Note, that you have "default_server" for IPv6 only, but your listen directives for IPv4 haven't got this parameter. wbr, Valentin V. Bartenev From strattonbrazil at gmail.com Mon Feb 3 20:32:53 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 3 Feb 2014 12:32:53 -0800 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: <2865941.MVtO23UYmD@vbart-laptop> References: <1452291.A0ALDGoF0I@vbart-laptop> <2865941.MVtO23UYmD@vbart-laptop> Message-ID: Right, I actually have those lines commented out. That's the part I don't understand. For example, if I put everything in the same file (example below), neither one of them have a default_server or a wildcard. The only other option I see from the link you sent me is www.morebears.com is getting "defaulted" by nginx to the hostname. However, I don't know what about my server would default to strattonbrazil.com. Based on the below settings, I don't see how that should be happening and in my system environment I don't see anything like `hostname` that would also direct to strattonbrazil.com. server { listen 80; #listen [::]:80 default_server ipv6only=on; root /var/www/morebearsmore.com/public_html; index index.html index.htm; ... 
} server { listen 80; #listen [::]:80 default_server ipv6only=on; root /var/www/strattonbrazil.com/public_html; index index.html index.htm; ... } On Mon, Feb 3, 2014 at 12:12 PM, Valentin V. Bartenev wrote: > On Monday 03 February 2014 11:48:41 Josh Stratton wrote: > > > or to the first one in the configuration if there is no such > > > parameter. > > > > As if all the server blocks are configured together? That sounds really > > strange to me, that one server block could be the default for another > > server block. > > They all are configured together since they share the same address:port > pair. > The only separation that server can make is based on requested host, but > what > if there is no matches among configured server names? Then nginx picks one > based on a simple rule I already mentioned. > > And there is no magic in placing them in different configuration files (in > fact, > that "sites-available" thing is just a debian way of splitting web server > configuration), they could be configured in one file as well. > > > > > > # rgrep default_server /etc/nginx/ > > /etc/nginx/sites-available/strattonbrazil.com: #listen [::]:80 > > default_server ipv6only=on; > > /etc/nginx/sites-available/morebearsmore.com: #listen [::]:80 > > default_server ipv6only=on; > > Note, that you have "default_server" for IPv6 only, but your listen > directives > for IPv4 haven't got this parameter. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 3 20:42:04 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 04 Feb 2014 00:42:04 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: <2865941.MVtO23UYmD@vbart-laptop> Message-ID: <2133731.7mfixNk94i@vbart-laptop> On Monday 03 February 2014 12:32:53 Josh Stratton wrote: > Right, I actually have those lines commented out. That's the part I don't > understand. For example, if I put everything in the same file (example > below), neither one of them have a default_server or a wildcard. The only > other option I see from the link you sent me is www.morebears.com is > getting "defaulted" by nginx to the hostname. However, I don't know what > about my server would default to strattonbrazil.com. Based on the below > settings, I don't see how that should be happening and in my system > environment I don't see anything like `hostname` that would also direct to > strattonbrazil.com. [..] It's up to your DNS server configuration. Nginx knows nothing about it, it just receives requests that actually can contain arbitrary host name. wbr, Valentin V. Bartenev From strattonbrazil at gmail.com Mon Feb 3 20:59:28 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 3 Feb 2014 12:59:28 -0800 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: <2133731.7mfixNk94i@vbart-laptop> References: <2865941.MVtO23UYmD@vbart-laptop> <2133731.7mfixNk94i@vbart-laptop> Message-ID: What does the DNS server have to do with it? I thought it just translated domain names to IP addresses. I thought the browser queries the DNS, gets the IP from the domain name (which is the same for both domains--with or without www), and returns it to the browser. The browser then fires the request to that IP address with the domain inside the HTTP header, which nginx uses to determine the correct server block. On Mon, Feb 3, 2014 at 12:42 PM, Valentin V. 
Bartenev wrote: > On Monday 03 February 2014 12:32:53 Josh Stratton wrote: > > Right, I actually have those lines commented out. That's the part I > don't > > understand. For example, if I put everything in the same file (example > > below), neither one of them have a default_server or a wildcard. The > only > > other option I see from the link you sent me is www.morebears.com is > > getting "defaulted" by nginx to the hostname. However, I don't know what > > about my server would default to strattonbrazil.com. Based on the below > > settings, I don't see how that should be happening and in my system > > environment I don't see anything like `hostname` that would also direct > to > > strattonbrazil.com. > [..] > > It's up to your DNS server configuration. Nginx knows nothing about it, > it just receives requests that actually can contain arbitrary host name. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 3 22:13:32 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 04 Feb 2014 02:13:32 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: <2133731.7mfixNk94i@vbart-laptop> Message-ID: <2533336.2XJsneIFeU@vbart-laptop> On Monday 03 February 2014 12:59:28 Josh Stratton wrote: > What does the DNS server have to do with it? I thought it just translated > domain names to IP addresses. I thought the browser queries the DNS, gets > the IP from the domain name (which is the same for both domains--with or > without www), and returns it to the browser. The browser then fires the > request to that IP address with the domain inside the HTTP header, which > nginx uses to determine the correct server block. Right. 
But your configuration only had two server blocks: one with server_name "morebearsmore.com" and one with server_name "strattonbrazil.com", so there is no one for "www.morebearsmore.com", and since you didn't have "default_server" parameter in any of them, then nginx just picked up the first included in nginx.conf file. wbr, Valentin V. Bartenev From strattonbrazil at gmail.com Mon Feb 3 22:28:41 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 3 Feb 2014 14:28:41 -0800 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: <2533336.2XJsneIFeU@vbart-laptop> References: <2133731.7mfixNk94i@vbart-laptop> <2533336.2XJsneIFeU@vbart-laptop> Message-ID: Right, and that's fine. It just seems a bizarre behavior. I would have expected an nginx error or something. Thanks for all your help getting it figured out. nginx's configuration seems very intuitive in general. On Mon, Feb 3, 2014 at 2:13 PM, Valentin V. Bartenev wrote: > On Monday 03 February 2014 12:59:28 Josh Stratton wrote: > > What does the DNS server have to do with it? I thought it just > translated > > domain names to IP addresses. I thought the browser queries the DNS, > gets > > the IP from the domain name (which is the same for both domains--with or > > without www), and returns it to the browser. The browser then fires the > > request to that IP address with the domain inside the HTTP header, which > > nginx uses to determine the correct server block. > > Right. But your configuration only had two server blocks: one with > server_name "morebearsmore.com" and one with server_name " > strattonbrazil.com", > so there is no one for "www.morebearsmore.com", and since you didn't have > "default_server" parameter in any of them, then nginx just picked up the > first included in nginx.conf file. > > wbr, Valentin V. 
Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 3 22:43:34 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 04 Feb 2014 02:43:34 +0400 Subject: server blocks configured, but getting "hello world" of nginx In-Reply-To: References: <2533336.2XJsneIFeU@vbart-laptop> Message-ID: <3332268.H8CbkL1Leo@vbart-laptop> On Monday 03 February 2014 14:28:41 Josh Stratton wrote: > Right, and that's fine. It just seems a bizarre behavior. I would have > expected an nginx error or something. Thanks for all your help getting it > figured out. nginx's configuration seems very intuitive in general. Well, I tend to agree with you. I believe this behavior comes from Apache, and the original idea was to be familiar to people migrating from it (which mattered mostly about 10 years ago, when the first version of nginx was released). wbr, Valentin V. Bartenev From lists at ruby-forum.com Tue Feb 4 05:42:37 2014 From: lists at ruby-forum.com (Ne Ka) Date: Tue, 04 Feb 2014 06:42:37 +0100 Subject: Nginx proxy with websocket not returning data In-Reply-To: <20140130121233.GW1835@mdounin.ru> References: <2f428bd7afb308ed734d213343494eae@ruby-forum.com> <20140130121233.GW1835@mdounin.ru> Message-ID: <0a8f22511d845a4beb380c0ee4559951@ruby-forum.com> Hi, Nginx - 1.4.4 OS - Windows I ran it with the error log in debug mode but didn't see anything out of the ordinary. I tried attaching it but it's giving me some error. Not sure how to get the nginx -V output; the debugging mentioned was for Linux. Maxim Dounin wrote in post #1135040: > Hello! > > On Thu, Jan 30, 2014 at 12:56:42PM +0100, Ne Ka wrote: >> Hi, >> >> I am trying to configure nginx proxy with websockets I am using jetty >> server with cometd framework.
I am able to do the websocket handshake >> and send my login request to the server but I am not getting any >> response back. Can you let me know what could be wrong with my config >> file? > It would be interesting to know at least nginx version used, as > well as OS and "nginx -V" output. See > http://wiki.nginx.org/Debugging for some more hints. > Config looks fine and should work. > -- > Maxim Dounin > http://nginx.org/ -- Posted via http://www.ruby-forum.com/. From mailinglisten at simonhoenscheid.de Tue Feb 4 11:49:35 2014 From: mailinglisten at simonhoenscheid.de (mailinglisten at simonhoenscheid.de) Date: Tue, 04 Feb 2014 12:49:35 +0100 Subject: Joomla CMS on Nginx (Rewrite/Environment Problem) Message-ID: <98c2f3eeb2e6abca6e1c05cd821c9d55@simonhoenscheid.de> Hello List, I did a test migration of a Joomla CMS yesterday, and there is a bug (the system was running on Apache before): There were no Apache rewrite rules on the old server. The Joomla site uses SEF (Search Engine Friendly) URLs. OLD: http://www.example.com/pattern/article.html NEW: http://www.example.com/var/www/www.example.com/pattern/article The missing .html suffix is not a problem, but the whole root path in the URL is. Any ideas how to solve this?
here is my server.conf for this host: server_name www.example.com example.com.com; server_name_in_redirect off; access_log /var/log/nginx/www.example.com-access.log combined; error_log /var/log/nginx/www.example.com-error.log notice; root /var/www/www.example.com; location / { try_files $uri $uri/ /index.php?$args; } location ~* \.php$ { fastcgi_index index.php; fastcgi_pass 127.0.0.1:9000; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $request_filename; } } Thanks for your help Simon From mailinglisten at simonhoenscheid.de Tue Feb 4 13:15:11 2014 From: mailinglisten at simonhoenscheid.de (mailinglisten at simonhoenscheid.de) Date: Tue, 04 Feb 2014 14:15:11 +0100 Subject: Joomla CMS on Nginx (Rewrite/Environment Problem) In-Reply-To: <98c2f3eeb2e6abca6e1c05cd821c9d55@simonhoenscheid.de> References: <98c2f3eeb2e6abca6e1c05cd821c9d55@simonhoenscheid.de> Message-ID: Fixed. $live_site in joomlas config was empty, apache ignores that, nginx not Kind Regards Simon Am 2014-02-04 12:49, schrieb mailinglisten at simonhoenscheid.de: > Hello List, > > I did a testmigration of a Joomla CMS yesterday, and there is a bug > (the system was running on Apache before): > There where no Apache rewrite rules on the old server. > The Joomla Uses SEF (Search Engine Friendly URLs) > > OLD: > > http://www.example.com/pattern/article.html > > NEW: > > http://www.example.com/var/www/www.example.com/pattern/article > > The missing .html suffix is not a prob but the whole rootpath in the > url is. Any ideas how to solve this? 
> > here is my server.conf for this host: > > server_name www.example.com example.com.com; > server_name_in_redirect off; > access_log /var/log/nginx/www.example.com-access.log combined; > error_log /var/log/nginx/www.example.com-error.log notice; > root /var/www/www.example.com; > location / { > try_files $uri $uri/ /index.php?$args; > } > > > location ~* \.php$ { > fastcgi_index index.php; > fastcgi_pass 127.0.0.1:9000; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > fastcgi_param SCRIPT_NAME $request_filename; > } > } > > Thanks for your help > Simon > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Feb 4 13:45:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Feb 2014 17:45:42 +0400 Subject: nginx-1.5.10 Message-ID: <20140204134542.GC1835@mdounin.ru> Changes with nginx 1.5.10 04 Feb 2014 *) Feature: the ngx_http_spdy_module now uses SPDY 3.1 protocol. Thanks to Automattic and MaxCDN for sponsoring this work. *) Feature: the ngx_http_mp4_module now skips tracks too short for a seek requested. *) Bugfix: a segmentation fault might occur in a worker process if the $ssl_session_id variable was used in logs; the bug had appeared in 1.5.9. *) Bugfix: the $date_local and $date_gmt variables used wrong format outside of the ngx_http_ssi_filter_module. *) Bugfix: client connections might be immediately closed if deferred accept was used; the bug had appeared in 1.3.15. *) Bugfix: alerts "getsockopt(TCP_FASTOPEN) ... failed" appeared in logs during binary upgrade on Linux; the bug had appeared in 1.5.8. Thanks to Piotr Sikora. 
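The SPDY 3.1 support announced above is both a build-time and a per-listener option. A minimal hedged sketch of enabling it on nginx 1.5.10 — the server name and certificate paths are placeholders, not taken from this thread:

```nginx
# Requires an nginx binary built with --with-http_spdy_module (check nginx -V).
server {
    listen 443 ssl spdy;                       # SPDY 3.1 as of nginx 1.5.10
    server_name example.com;
    ssl_certificate     /etc/ssl/example.crt;  # placeholder path
    ssl_certificate_key /etc/ssl/example.key;  # placeholder path
}
```

Note that SPDY runs over TLS in practice, so the listener carries both the ssl and spdy parameters.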
-- Maxim Dounin http://nginx.org/en/donation.html From lists at ruby-forum.com Tue Feb 4 21:35:18 2014 From: lists at ruby-forum.com (Anth Anth) Date: Tue, 04 Feb 2014 22:35:18 +0100 Subject: Launching Excel in a production web server on a Mac In-Reply-To: References: Message-ID: Thanks again, Scott. That makes a lot of sense. I've set passenger_user_switching off and passenger_default_user to the user I want Excel launched as... still nothing though. The processes look like this: Process User ---------- PassengerWatchdog root PassengerHelpterAgent userIwant PassengerLoggingAgent nobody nginx (master) root nginx (worker) nobody Does this seem correct? -- Posted via http://www.ruby-forum.com/. From steve at greengecko.co.nz Wed Feb 5 01:31:01 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 05 Feb 2014 14:31:01 +1300 Subject: nginx-1.5.10 In-Reply-To: <20140204134542.GC1835@mdounin.ru> References: <20140204134542.GC1835@mdounin.ru> Message-ID: <1391563861.23812.160.camel@steve-new> On Tue, 2014-02-04 at 17:45 +0400, Maxim Dounin wrote: > Changes with nginx 1.5.10 04 Feb 2014 > > *) Feature: the ngx_http_spdy_module now uses SPDY 3.1 protocol. > Thanks to Automattic and MaxCDN for sponsoring this work. > > *) Feature: the ngx_http_mp4_module now skips tracks too short for a > seek requested. > > *) Bugfix: a segmentation fault might occur in a worker process if the > $ssl_session_id variable was used in logs; the bug had appeared in > 1.5.9. > > *) Bugfix: the $date_local and $date_gmt variables used wrong format > outside of the ngx_http_ssi_filter_module. > > *) Bugfix: client connections might be immediately closed if deferred > accept was used; the bug had appeared in 1.3.15. > > *) Bugfix: alerts "getsockopt(TCP_FASTOPEN) ... failed" appeared in logs > during binary upgrade on Linux; the bug had appeared in 1.5.8. > Thanks to Piotr Sikora. > > Great to hear about the new speedy stuff. 
I take it it's not getting backported into 1.4, so when's 1.6 due? Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From sarah at nginx.com Wed Feb 5 01:35:58 2014 From: sarah at nginx.com (Sarah Novotny) Date: Tue, 4 Feb 2014 17:35:58 -0800 Subject: nginx-1.5.10 In-Reply-To: <1391563861.23812.160.camel@steve-new> References: <20140204134542.GC1835@mdounin.ru> <1391563861.23812.160.camel@steve-new> Message-ID: <434B18CF-4AA7-448B-BEDB-D1AB5CF84C3F@nginx.com> Hi! On Feb 4, 2014, at 5:31 PM, Steve Holdoway wrote: > On Tue, 2014-02-04 at 17:45 +0400, Maxim Dounin wrote: >> Changes with nginx 1.5.10 04 Feb 2014 >> >> *) Feature: the ngx_http_spdy_module now uses SPDY 3.1 protocol. >> Thanks to Automattic and MaxCDN for sponsoring this work. >> >> *) Feature: the ngx_http_mp4_module now skips tracks too short for a >> seek requested. >> >> *) Bugfix: a segmentation fault might occur in a worker process if the >> $ssl_session_id variable was used in logs; the bug had appeared in >> 1.5.9. >> >> *) Bugfix: the $date_local and $date_gmt variables used wrong format >> outside of the ngx_http_ssi_filter_module. >> >> *) Bugfix: client connections might be immediately closed if deferred >> accept was used; the bug had appeared in 1.3.15. >> >> *) Bugfix: alerts "getsockopt(TCP_FASTOPEN) ... failed" appeared in logs >> during binary upgrade on Linux; the bug had appeared in 1.5.8. >> Thanks to Piotr Sikora. >> >> > Great to hear about the new speedy stuff. I take it it's not getting > backported into 1.4, so when's 1.6 due? mid-year :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 5 02:38:46 2014 From: nginx-forum at nginx.us (mevans336) Date: Tue, 04 Feb 2014 21:38:46 -0500 Subject: Upgraded to 1.5.10 - Site Still SPDY/2? 
Message-ID: <431ff89d47da4397798d63cf4058b090.NginxMailingListEnglish@forum.nginx.org> I upgraded my Nginx reverse proxy to 1.5.10 using the official Ubuntu Precise Nginx packages, but my site is still reporting SPDY/2 in Chrome. Do I need to do something more drastic than issuing a kill -HUP on the master process to load the new Nginx binary? Or am I missing something else? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247180,247180#msg-247180 From jim at ohlste.in Wed Feb 5 03:05:41 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 04 Feb 2014 22:05:41 -0500 Subject: Upgraded to 1.5.10 - Site Still SPDY/2? In-Reply-To: <431ff89d47da4397798d63cf4058b090.NginxMailingListEnglish@forum.nginx.org> References: <431ff89d47da4397798d63cf4058b090.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52F1AA85.8050000@ohlste.in> Hello, On 2/4/14, 9:38 PM, mevans336 wrote: > I upgraded my Nginx reverse proxy to 1.5.10 using the official Ubuntu > Precise Nginx packages, but my site is still reporting SPDY/2 in Chrome. Do > I need to do something more drastic than issuing a kill -HUP on the master > process to load the new Nginx binary? Or am I missing something else? HUP reloads the configuration. USR2 upgrades the binary. > -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From nginx-forum at nginx.us Wed Feb 5 03:12:43 2014 From: nginx-forum at nginx.us (mevans336) Date: Tue, 04 Feb 2014 22:12:43 -0500 Subject: Upgraded to 1.5.10 - Site Still SPDY/2? In-Reply-To: <52F1AA85.8050000@ohlste.in> References: <52F1AA85.8050000@ohlste.in> Message-ID: <79a55d76a1ce5f524c3cb2a6b52f3482.NginxMailingListEnglish@forum.nginx.org> Bingo. Now Chrome is reporting spdy/3. Thanks! 
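For readers finding this thread later: the USR2 upgrade that fixed this is a two-step procedure. A sketch, assuming the default pid file location (adjust the path to your install):

```sh
# Ask the running master to exec the new binary; the old master
# renames its pid file to nginx.pid.oldbin and a new master starts:
kill -USR2 "$(cat /var/run/nginx.pid)"

# Once the new workers are serving traffic, gracefully shut down
# the old master and its workers:
kill -QUIT "$(cat /var/run/nginx.pid.oldbin)"
```

If something goes wrong before the old master is QUIT, sending it HUP instead brings its workers back and the new master can be stopped.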
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247180,247182#msg-247182 From nginx-forum at nginx.us Wed Feb 5 06:49:59 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 05 Feb 2014 01:49:59 -0500 Subject: Are headers set in the server block inherited to all location blocks Message-ID: I am seeing strange behavior using includes. For example, if I request a javascript file (ending in .js) the headers set globally in the server block are not set. I was under the impression that if you set headers in the server block, ALL location blocks below inherit those headers. See the following: server { ... add_header Strict-Transport-Security max-age=31556926; add_header X-XSS-Protection "1; mode=block"; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; include expires.conf; ... } # expires.conf location ~* \.(?:ico|js|css|gif|jpe?g|png|xml)$ { expires 7d; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } When requesting a .js file, the Pragma and Cache-Control headers are set, but all the headers set in the base server block are not. What is the fix here? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247185,247185#msg-247185 From devnull82 at gmail.com Wed Feb 5 08:39:39 2014 From: devnull82 at gmail.com (Andrea) Date: Wed, 5 Feb 2014 09:39:39 +0100 Subject: Mailserver proxy (all protocols, webmail included) Message-ID: Hello, I'm trying to configure a full proxy for my mailservers. With full, I mean for all protocols. We have, for example, 2 mail servers: zimbra (imap,pop3,smtp,webmail) linux+roundcube (imap, pop3, smtp, webmail) Using MySQL I'm able to configure an imap, pop3, smtp proxy: when a user connects to the proxy, my nginx script checks a database to decide where to redirect the login: to the Zimbra server or to the linux server. The problem is with webmail: is there some way to redirect http to zimbra webmail or to roundcube, depending on the username?
Do I have to create a login page and then use webmail's api to redirect the login? What I want is to give a single hostname to users, for example webmail.mydomain.com, and then let nginx redirect users to the right webmail just as it does for imap, pop3 and smtp. Thanks! Andrea -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Feb 5 09:41:35 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 05 Feb 2014 13:41:35 +0400 Subject: nginx-1.5.10 In-Reply-To: <1391563861.23812.160.camel@steve-new> References: <20140204134542.GC1835@mdounin.ru> <1391563861.23812.160.camel@steve-new> Message-ID: <7494120.QIBOdQVnfA@vbart-laptop> On Wednesday 05 February 2014 14:31:01 Steve Holdoway wrote: [..] > Great to hear about the new speedy stuff. I take it it's not getting > backported into 1.4, so when's 1.6 due? Just a note for people who worry about using the mainline version in production. The fact that there is a version called "stable" doesn't imply that the other one is "unstable". We call one branch stable because we maintain internal API and feature set stability for 3rd-party developers, while mainline versions get all improvements (and even more bugfixes). wbr, Valentin V. Bartenev From francis at daoine.org Wed Feb 5 13:33:34 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Feb 2014 13:33:34 +0000 Subject: Are headers set in the server block inherited to all location blocks In-Reply-To: References: Message-ID: <20140205133334.GC3581@craic.sysops.org> On Wed, Feb 05, 2014 at 01:49:59AM -0500, justink101 wrote: Hi there, > I was under the impression that if you set headers in the > server block, ALL location blocks below inherit those headers. No. Can you say where you got that impression? Perhaps documentation can be clarified or corrected. The request is handled in one location block. Only the configuration in, or inherited into, that location block, matters.
Configuration in the location block overrides anything that might otherwise have been inherited. > location ~* \.(?:ico|js|css|gif|jpe?g|png|xml)$ { > expires 7d; > add_header Pragma public; > add_header Cache-Control "public, must-revalidate, proxy-revalidate"; "add_header" here means that no "add_header" directives are inherited -- only these two apply. > When requesting a .js file, the Pragma and Cache-Control headers are set, > but all the headers set in the base server block are not. That is working as intended. > What is the fix here? If you want configuration in the location block, put all of the configuration in the location block. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Feb 5 13:52:21 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Feb 2014 17:52:21 +0400 Subject: Are headers set in the server block inherited to all location blocks In-Reply-To: <20140205133334.GC3581@craic.sysops.org> References: <20140205133334.GC3581@craic.sysops.org> Message-ID: <20140205135220.GL1835@mdounin.ru> Hello! On Wed, Feb 05, 2014 at 01:33:34PM +0000, Francis Daly wrote: > On Wed, Feb 05, 2014 at 01:49:59AM -0500, justink101 wrote: > > Hi there, > > > I was under the impression that if you set headers in the > > server block, ALL location blocks below inherit those headers. > > No. > > Can you say where you got that impression? Perhaps documentation can be > clarified or corrected. It looks like add_header documentation doesn't have our usual clause for array-like directives, and something like this should be helpful: --- a/xml/en/docs/http/ngx_http_headers_module.xml +++ b/xml/en/docs/http/ngx_http_headers_module.xml @@ -56,6 +56,14 @@ the response code equals 200, 201, 204, A value can contain variables. + +There could be several add_header directives. +These directives are inherited from the previous level if and +only if there are no +add_header +directives defined on the current level. 
+ + -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Wed Feb 5 14:43:12 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 5 Feb 2014 15:43:12 +0100 Subject: nginx-1.5.10 In-Reply-To: <7494120.QIBOdQVnfA@vbart-laptop> References: <20140204134542.GC1835@mdounin.ru> <1391563861.23812.160.camel@steve-new> <7494120.QIBOdQVnfA@vbart-laptop> Message-ID: Hi Valentin, Thanks for that information. However, since the usual way to do things is to have 2 branches: production & development, I guess people's initial reaction is not to trust the 'mainline' branch. I think it would be highly beneficial to mention somewhere (on the downloads page?) that the mainline branch is production-ready. Production managers are traditionally scared, so repeating and reinforcing the message would help a lot. ;o) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Feb 5 15:10:03 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Feb 2014 15:10:03 +0000 Subject: Are headers set in the server block inherited to all location blocks In-Reply-To: <20140205135220.GL1835@mdounin.ru> References: <20140205133334.GC3581@craic.sysops.org> <20140205135220.GL1835@mdounin.ru> Message-ID: <20140205151003.GD3581@craic.sysops.org> On Wed, Feb 05, 2014 at 05:52:21PM +0400, Maxim Dounin wrote: > On Wed, Feb 05, 2014 at 01:33:34PM +0000, Francis Daly wrote: Hi there, > > Can you say where you got that impression? Perhaps documentation can be > > clarified or corrected. > > It looks like add_header documentation doesn't have our usual > clause for array-like directives, and something like this should > be helpful: I'm not sure where the best place for it is; but I'd suggest not putting this on every directive, but only marking the few that don't follow the common inheritance rules. And then have an obvious document which describes what the common inheritance rules are.
It would probably be something like the content linked from http://blog.martinfjordvald.com/2012/08/ but could be simplified to "inheritance is per-directive, and is all or nothing. The following are 'nothing'; the rest are 'all'. Exceptions are noted in the per-directive documentation (and possibly listed here too)." In the blog above, it indicates that "Action" directives do not inherit -- I don't know if that's a useful distinction that could be made in the documentation; it would presumably be extra work to ensure that every directives is categorised as "inherit all", "inherit none", or "exception", and I can't tell whether the benefit would be worth the cost. > +These directives are inherited from the previous level if and > +only if there are no > +add_header > +directives defined on the current level. To me, that's not special for add_header -- that's common to all inheriting directives bar the few exceptions (root/alias, allow/deny, and xslt_param/xslt_string_param in stock nginx). Maybe it is easier to put it in all directive documentation, though, if that's what people are more likely to read. I won't be the one doing the work, so I'm not in the right place to judge what work should be done :-) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Feb 5 15:55:19 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 05 Feb 2014 10:55:19 -0500 Subject: Are headers set in the server block inherited to all location blocks In-Reply-To: <20140205133334.GC3581@craic.sysops.org> References: <20140205133334.GC3581@craic.sysops.org> Message-ID: Gosh that is horrible that I have to copy and paste shared headers in the server block, to all location blocks. Is this a conscious decision? 
This makes maintainability very difficult as i have to do something like: [code] # shared_headers.conf add_header Alternate-Protocol 443:npn-spdy/3; add_header Strict-Transport-Security max-age=31556926; add_header X-XSS-Protection "1; mode=block"; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; # expires.conf location ~* \.(?:ico|css|gif|jpe?g|png|xml)$ { include shared_headers.conf; expires 7d; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } server { ... location ^~ /icons { include shared_headers.conf; add_header Access-Control-Allow-Origin *; } location ^~ /docs { include shared_headers.conf; auth_basic "Docs"; auth_basic_user_file /etc/nginx/auth/docs.htpasswd; } location ^~ /actions { include shared_headers.conf; add_header Access-Control-Allow-Origin https://www.mydomain.com; } } [/code] Or, is there a better way to do this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247185,247215#msg-247215 From mdounin at mdounin.ru Wed Feb 5 16:42:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Feb 2014 20:42:57 +0400 Subject: Are headers set in the server block inherited to all location blocks In-Reply-To: <20140205151003.GD3581@craic.sysops.org> References: <20140205133334.GC3581@craic.sysops.org> <20140205135220.GL1835@mdounin.ru> <20140205151003.GD3581@craic.sysops.org> Message-ID: <20140205164257.GR1835@mdounin.ru> Hello! On Wed, Feb 05, 2014 at 03:10:03PM +0000, Francis Daly wrote: > On Wed, Feb 05, 2014 at 05:52:21PM +0400, Maxim Dounin wrote: > > On Wed, Feb 05, 2014 at 01:33:34PM +0000, Francis Daly wrote: > > Hi there, > > > > Can you say where you got that impression? Perhaps documentation can be > > > clarified or corrected. 
> > > > It looks like add_header documentation doesn't have our usual > > clause for array-like directives, and something like this should > > be helpful: > > I'm not sure where the best place for it is; but I'd suggest not putting > this on every directive, but only marking the few that don't follow the > common inheritance rules. > > And then have an obvious document which describes what the common > inheritance rules are. Having some generic description of directive inheritance is certainly a plus (just in case, it likely should be in /syntax.xml, which currently only contains some basics about size and time units and needs to be actually written), but it's yet to find somebody who'll sign up for this work. On the other hand, writing this information explicitly for array-like directives matches our current practice and seems to help a lot. Especially keeping in mind that there are only a few array-like directives where inheritance is important, and most of them already have the clause in question. [...] -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Feb 5 17:31:32 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 05 Feb 2014 12:31:32 -0500 Subject: Setting a header inside an if block Message-ID: <30b075e03cb3cce14fbdc563d236c5d8.NginxMailingListEnglish@forum.nginx.org> I currently have: server{ ... if ($remote_user = "") { return 401; } ... } But what I really want is: server{ ... if ($remote_user = "") { add_header WWW-Authenticate 'Basic realm="mydomainhere.com"'; return 401; } ... } But nginx won't allow me to use the add_header directive inside an if block. How can I achieve this? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247218,247218#msg-247218 From byrd at redrokk.com Wed Feb 5 18:05:19 2014 From: byrd at redrokk.com (Tyler Byrd) Date: Wed, 5 Feb 2014 10:05:19 -0800 Subject: Installing SSL on Nginx Message-ID: Looking for a consultant to assist with installing an SSL certificate on an Amazon EC2 instance running NGINX, Wordpress, & PHP-FPM. Currently we have installed everything but it is causing the WP admin area to run into a continuous redirect. It is also causing our commerce cart to redirect to http://backend/checkout. If this is something you think you can help with please shoot me an email. Thanks. -- Sincerely, Tyler Byrd | President Red Rokk Interactive C: (360) 920-2462 | O: (360) 747-7401 | F: (954) 867-1177 Byrd at RedRokk.com | www.RedRokk.com This email and any files transmitted with it are confidential, are the property of Red Rokk Inc, and are intended only for the individual named. If you are not the named addressee, your access to and use or distribution of this email and its attachments is unauthorized and you may not use, disseminate, distribute, copy or make any other use of it. If you have received this email in error please notify the sender. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmm at csdoc.com Wed Feb 5 18:37:46 2014 From: gmm at csdoc.com (Gena Makhomed) Date: Wed, 05 Feb 2014 20:37:46 +0200 Subject: Setting a header inside an if block In-Reply-To: <30b075e03cb3cce14fbdc563d236c5d8.NginxMailingListEnglish@forum.nginx.org> References: <30b075e03cb3cce14fbdc563d236c5d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52F284FA.6030404@csdoc.com> On 05.02.2014 19:31, justink101 wrote: > I currently have: > > server{ > ... > if ($remote_user = "") { > return 401; > } > ... > } > > But what I really want is: > > server{ > ... > if ($remote_user = "") { > add_header WWW-Authenticate 'Basic realm="mydomainhere.com"'; > return 401; > } > ...
> } > > But nginx won't allow me to use the add_header directive inside an if block. > How can I achieve this? via http://nginx.org/en/docs/http/ngx_http_auth_request_module.html -- Best regards, Gena From nginx-forum at nginx.us Wed Feb 5 18:51:44 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 05 Feb 2014 13:51:44 -0500 Subject: Are headers set in the server block inherited to all location blocks In-Reply-To: References: <20140205133334.GC3581@craic.sysops.org> Message-ID: If its a repetitive block you could use an external file and a single include line. http://nginx.org/en/docs/ngx_core_module.html#include Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247185,247222#msg-247222 From nginx-forum at nginx.us Wed Feb 5 18:53:14 2014 From: nginx-forum at nginx.us (dwirth) Date: Wed, 05 Feb 2014 13:53:14 -0500 Subject: sudden nginx hang -- restart fails, "98: Address already in use" Message-ID: <2f8c55ed6e9eae873166070e23f23d93.NginxMailingListEnglish@forum.nginx.org> Hello, all. About an hour ago, out of the blue, my server stopped responding to webpage requests. We are using nginx + php-fpm on RHEL6. # service nginx status nginx (pid 31600) is running... 
# service nginx restart Stopping nginx: [FAILED] Starting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) nginx: [emerg] still could not bind() I killed the process and was able to restart nginx so the immediate crisis is over, but I need to know: What the hell happened? What would cause nginx to hang like this? I have googled around and I see several discussions about what to do when this happens but zilch about how to keep it from happening. - Dave Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247223,247223#msg-247223 From contact at jpluscplusm.com Wed Feb 5 19:08:24 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 5 Feb 2014 19:08:24 +0000 Subject: sudden nginx hang -- restart fails, "98: Address already in use" In-Reply-To: <2f8c55ed6e9eae873166070e23f23d93.NginxMailingListEnglish@forum.nginx.org> References: <2f8c55ed6e9eae873166070e23f23d93.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 5 February 2014 18:53, dwirth wrote: > Hello, all. > > About an hour ago, out of the blue, my server stopped responding to webpage > requests. We are using nginx + php-fpm on RHEL6. > > # service nginx status > nginx (pid 31600) is running... 
> > # service nginx restart > Stopping nginx: [FAILED] > Starting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address > already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] still could not bind() > > I killed the process and was able to restart nginx so the immediate crisis > is over, but I need to know: What the hell happened? What would cause nginx > to hang like this? I have googled around and I see several discussions about > what to do when this happens but zilch about how to keep it from happening. The underlying cause I can't help with but, in this situation, I'd always do a separate stop/stop so I could ensure the service had stopped before starting it again. It hadn't here, and that's what caused your "Address already in use" error messages. J From lists at ruby-forum.com Wed Feb 5 20:04:21 2014 From: lists at ruby-forum.com (Anth Anth) Date: Wed, 05 Feb 2014 21:04:21 +0100 Subject: Launching Excel in a production web server on a Mac In-Reply-To: References: Message-ID: <0fb1b8427a5862d0bbe9efbffa303eab@ruby-forum.com> Actually, after some more playing around I've finally got something working! 
Well, not exactly, but I'm able to communicate through OSA instead of appscript (and to be honest, I know very little about this appscript gem and it's been deprecated anyway, so it's probably for the best that I rewrite the spreadsheet manipulation code anyway.) A MILLION thank yous, Scott! You are a life saver! Oh, and Jonathan... a lesson for you: just because a question doesn't make sense /to you/ doesn't make it nonsensical. I asked a question that made sense to me, and it also happened to make sense to someone else who had encountered a similar problem, and we were able to exchange info that led to me solving my problem... weird, that almost sounds like what support forums are supposed to be like. Please keep that in mind before scolding someone for asking a question you don't quite get. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Feb 5 22:12:55 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 05 Feb 2014 17:12:55 -0500 Subject: Setting a header inside an if block In-Reply-To: <52F284FA.6030404@csdoc.com> References: <52F284FA.6030404@csdoc.com> Message-ID: I don't have the auth_request module? All I need to do, is set the WWW-Authenticate header. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247218,247227#msg-247227 From emailgrant at gmail.com Wed Feb 5 22:49:46 2014 From: emailgrant at gmail.com (Grant) Date: Wed, 5 Feb 2014 14:49:46 -0800 Subject: restrict by IP for some users Message-ID: I'd like to restrict access to a server block to authenticated users. Some of the users should be able to access it from any IP and some of the users should be blocked unless they are coming from a particular IP. How is this done in nginx? 
- Grant From jeroenooms at gmail.com Wed Feb 5 23:48:50 2014 From: jeroenooms at gmail.com (Jeroen Ooms) Date: Wed, 5 Feb 2014 15:48:50 -0800 Subject: upstream sent too big header while reading response header from upstream Message-ID: After I added some CORS headers to my API, one of the users of my nginx-based system complained about occasional errors with: upstream sent too big header while reading response header from upstream He also reported to have worked around the issue using: proxy_buffers 8 512k; proxy_buffer_size 2024k; proxy_busy_buffers_size 2024k; proxy_read_timeout 3000; However unfortunately I was unable to reproduce this problem myself. I also had a hard time figuring out what the exact problem is. Some questions: - What exactly does this error mean? Does it mean that response contained too many headers? How many is too many? - Is it wise to increase the buffer sizes as the user reported? What would be sensible defaults? From francis at daoine.org Thu Feb 6 00:07:17 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Feb 2014 00:07:17 +0000 Subject: restrict by IP for some users In-Reply-To: References: Message-ID: <20140206000717.GE3581@craic.sysops.org> On Wed, Feb 05, 2014 at 02:49:46PM -0800, Grant wrote: Hi there, > I'd like to restrict access to a server block to authenticated users. > Some of the users should be able to access it from any IP and some of > the users should be blocked unless they are coming from a particular > IP. How is this done in nginx? Perhaps something along these lines? User "a" must come from an address listed in "geo $goodip". Other users may come from anywhere. 
=== map $remote_user $userip { default 1; a $goodip; } geo $goodip { default 0; 127.0.0.0/24 1; } server { auth_basic "This Site"; auth_basic_user_file htpasswd; if ($userip = 0) { return 403; } } === f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Feb 6 00:50:35 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Feb 2014 00:50:35 +0000 Subject: Are headers set in the server block inherited to all location blocks In-Reply-To: References: <20140205133334.GC3581@craic.sysops.org> Message-ID: <20140206005035.GF3581@craic.sysops.org> On Wed, Feb 05, 2014 at 10:55:19AM -0500, justink101 wrote: Hi there, > Gosh that is horrible that I have to copy and paste shared headers in the > server block, to all location blocks. Is this a conscious decision? This It's nginx. It makes it straightforward to see what configuration applies to a specific request. Copy-paste isn't the only way to create the config file. > makes maintainability very difficult as i have to do something like: I confess I don't see how it makes maintainability difficult. If it matters, could you explain? (It may not matter, of course.) You can use the nginx directive "include" to refer to another file, as you do here; or you could use your macro-processing language of choice to turn your preferred input format into the desired nginx.conf. > server { If you use include shared_headers.conf; at server level, then you only need to repeat it in locations where you want different headers added. > location ^~ /docs { > include shared_headers.conf; > > auth_basic "Docs"; > auth_basic_user_file /etc/nginx/auth/docs.htpasswd; > } So you wouldn't need it there, for example. > location ^~ /actions { > include shared_headers.conf; > > add_header Access-Control-Allow-Origin https://www.mydomain.com; > } But you would need it there. 
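Putting the server-level include advice together, a minimal sketch of that layout (file and path names taken from the earlier post, otherwise hypothetical):

```nginx
# shared_headers.conf holds the add_header lines common to the site.
server {
    include shared_headers.conf;   # applies wherever no location adds headers

    location ^~ /docs {
        # no add_header here, so the server-level set is inherited
        auth_basic "Docs";
        auth_basic_user_file /etc/nginx/auth/docs.htpasswd;
    }

    location ^~ /actions {
        # any add_header here discards the inherited set,
        # so the shared headers must be included again
        include shared_headers.conf;
        add_header Access-Control-Allow-Origin https://www.mydomain.com;
    }
}
```

The rule of thumb: a location needs its own `include shared_headers.conf;` only when it also has its own add_header directives.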
f -- Francis Daly francis at daoine.org From ajaykemparaj at gmail.com Thu Feb 6 08:18:36 2014 From: ajaykemparaj at gmail.com (Ajay k) Date: Thu, 6 Feb 2014 13:48:36 +0530 Subject: NginxHttpRewriteModule compiled sequence Message-ID: Hi , Is there a way to print all the compiled sequences of a rewrite module as documented in http://wiki.nginx.org/NginxHttpRewriteModule This interpreter is a simple stack virtual machine. For example, the directive: location /download/ { if ($forbidden) { return 403; } if ($slow) { limit_rate 10k; } rewrite ^/(download/.*)/media/(.*)\..*$ /$1/mp3/$2.mp3 break ;} will be compiled into this sequence: variable $forbidden checking to zero recovery 403 completion of entire code variable $slow checking to zero checkings of regular expression copying "/" copying $1 copying "/mp3/" copying $2 copying ".mp3" completion of regular expression completion of entire sequence Thanks, Ajay K -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Thu Feb 6 09:06:16 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 6 Feb 2014 13:06:16 +0400 Subject: Setting a header inside an if block In-Reply-To: References: <52F284FA.6030404@csdoc.com> Message-ID: <20140206090616.GE43775@lo0.su> On Wed, Feb 05, 2014 at 05:12:55PM -0500, justink101 wrote: > I don't have the auth_request module? All I need to do, is set the > WWW-Authenticate header. So you only want to ensure that the basic authentication was attempted, but don't want to check the credentials on the nginx side? Here's how: http { map $remote_user $realm { '' mydomainhere.com; default off; } server { auth_basic $realm; auth_basic_user_file /dev/null; } } It means that HTTTP basic authentication by nginx will be disabled ("off") if $remote_user is not empty, and enabled (and return 401 with an appropriate header) otherwise. 
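A quick way to check that the map-based setup above behaves as described (hostname hypothetical):

```sh
# No credentials: $remote_user is empty, the map yields a non-empty
# realm, so nginx answers 401 with the WWW-Authenticate header:
curl -i http://mydomainhere.com/

# With any credentials supplied: $remote_user is non-empty, the realm
# maps to "off", auth is disabled, and the request passes through:
curl -i -u anyuser:anypass http://mydomainhere.com/
```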
From nginx-forum at nginx.us Thu Feb 6 10:04:17 2014 From: nginx-forum at nginx.us (rhklinux) Date: Thu, 06 Feb 2014 05:04:17 -0500 Subject: nginx IMAP hooks Message-ID: <6637561467575155dbaa8326ef4f79bc.NginxMailingListEnglish@forum.nginx.org> Hello people, I am an absolute newbie in the world of nginx. Before I start learning configuration and module writing I want to ensure that what I am trying to do can be done using nginx. I want to have an IMAP proxy setup with a module (written by me) that scans email data for some specific content. Any help appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247241,247241#msg-247241 From mdounin at mdounin.ru Thu Feb 6 12:11:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Feb 2014 16:11:25 +0400 Subject: sudden nginx hang -- restart fails, "98: Address already in use" In-Reply-To: <2f8c55ed6e9eae873166070e23f23d93.NginxMailingListEnglish@forum.nginx.org> References: <2f8c55ed6e9eae873166070e23f23d93.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140206121125.GU1835@mdounin.ru> Hello! On Wed, Feb 05, 2014 at 01:53:14PM -0500, dwirth wrote: > Hello, all. > > About an hour ago, out of the blue, my server stopped responding to webpage > requests. We are using nginx + php-fpm on RHEL6. > > # service nginx status > nginx (pid 31600) is running...
> > # service nginx restart > Stopping nginx: [FAILED] > Starting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address > already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) > nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) > nginx: [emerg] still could not bind() > > I killed the process and was able to restart nginx so the immediate crisis > is over, but I need to know: What the hell happened? What would cause nginx > to hang like this? I have googled around and I see several discussions about > what to do when this happens but zilch about how to keep it from happening. Such hangs can be caused either by bugs (either in nginx itself, or in 3rd party modules; take a look at nginx -V to find out how many 3rd party modules you have) or by some serious blocking at OS level. E.g., serving files from an NFS share may easily result in such a hang if something happens with the NFS server. It is impossible to say what happened in your case without additional information (at least "ps alx" output would be helpful, and see also http://wiki.nginx.org/Debugging). General recommendations are: 1. Make sure your nginx is up-to-date. Note that some linux distros ship quite outdated versions in their repositories, make sure to check the version against nginx.org. Current versions are 1.5.10 (mainline) and 1.4.4 (stable). 2.
Make sure you aren't using things that can easily block, like NFS or other network filesystems, or some blocking code in embedded languages like embedded perl or lua. Or, if you do use them, expect anything to die if something bad happens. 3. If you are using 3rd party modules, make sure you have good reasons to do so. 4. Examine logs for "crit", "alert", "emerg" messages. If there are any, they require investigation, especially messages about worker processes "exited on signal". -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Feb 6 12:18:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Feb 2014 16:18:57 +0400 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: References: Message-ID: <20140206121857.GV1835@mdounin.ru> Hello! On Wed, Feb 05, 2014 at 03:48:50PM -0800, Jeroen Ooms wrote: > After I added some CORS headers to my API, one of the users of my > nginx-based system complained about occasional errors with: > > upstream sent too big header while reading response header from upstream > > He also reported to have worked around the issue using: > > proxy_buffers 8 512k; > proxy_buffer_size 2024k; > proxy_busy_buffers_size 2024k; > proxy_read_timeout 3000; > > However unfortunately I was unable to reproduce this problem myself. I > also had a hard time figuring out what the exact problem is. > > Some questions: > > - What exactly does this error mean? Does it mean that response > contained too many headers? How many is too many? Response headers should fit into proxy_buffer_size, see http://nginx.org/r/proxy_buffer_size. If they don't, the error is reported. > - Is it wise to increase the buffer sizes as the user reported? What > would be sensible defaults? Certainly no. In most cases defaults used (4k on most platforms) are appropriate. If big cookies are expected to be returned by a proxied server, something like 32k or 64k will be good enough. 
If larger values are needed, it indicates a backend problem. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Feb 6 12:21:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Feb 2014 16:21:37 +0400 Subject: NginxHttpRewriteModule compiled sequence In-Reply-To: References: Message-ID: <20140206122137.GW1835@mdounin.ru> Hello! On Thu, Feb 06, 2014 at 01:48:36PM +0530, Ajay k wrote: > Is there a way to print all the compiled sequences of a rewrite module as > documented in > > http://wiki.nginx.org/NginxHttpRewriteModule No. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Feb 6 12:23:51 2014 From: nginx-forum at nginx.us (dwirth) Date: Thu, 06 Feb 2014 07:23:51 -0500 Subject: sudden nginx hang -- restart fails, "98: Address already in use" In-Reply-To: <20140206121125.GU1835@mdounin.ru> References: <20140206121125.GU1835@mdounin.ru> Message-ID: Thanks. I am fairly certain (?) at this point that NFS is the culprit. I had a lot of trouble unmounting one of my NFS directories. Eventually I resorted to rebooting, at which point it went into a permanent hang until a reboot was forced via hypervisor. Is this particular situation, where NFS causes nginx to shut down, specific to nginx? We just switched from apache to nginx at the start of the year. I didn't have NFS problems before that. I don't know if that's coincidence or not. At any rate, my takeaway from this: nginx + NFS = bad. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247223,247248#msg-247248 From mdounin at mdounin.ru Thu Feb 6 12:51:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Feb 2014 16:51:14 +0400 Subject: nginx IMAP hooks In-Reply-To: <6637561467575155dbaa8326ef4f79bc.NginxMailingListEnglish@forum.nginx.org> References: <6637561467575155dbaa8326ef4f79bc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140206125114.GY1835@mdounin.ru> Hello! 
On Thu, Feb 06, 2014 at 05:04:17AM -0500, rhklinux wrote: > Hello people, > I am an absolute newbie in the world of nginx. Before I start learning > configuration and module writing I want to ensure that what I am trying to > do can be done using nginx. I want to have an IMAP proxy setup with a module > (written by me) that scans email data for some specific content. > Any help appreciated. This isn't something that would be easy to do in nginx. More or less, you'll have to write your own IMAP proxy code. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Feb 6 14:23:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Feb 2014 18:23:58 +0400 Subject: sudden nginx hang -- restart fails, "98: Address already in use" In-Reply-To: References: <20140206121125.GU1835@mdounin.ru> Message-ID: <20140206142358.GB1835@mdounin.ru> Hello! On Thu, Feb 06, 2014 at 07:23:51AM -0500, dwirth wrote: > Thanks. I am fairly certain (?) at this point that NFS is the culprit. I had > a lot of trouble unmounting one of my NFS directories. Eventually I resorted > to rebooting, at which point it went into a permanent hang until a reboot > was forced via hypervisor. Well, if you use NFS it perfectly explains the observed behaviour. > Is this particular situation, where NFS causes nginx to shut down, specific > to nginx? We just switched from apache to nginx at the start of the year. I > didn't have NFS problems before that. I don't know if that's coincidence or > not. Basic NFS problems are the same regardless of software you use: if something goes wrong, it blocks processes trying to access NFS share. With nginx, results are usually a bit more severe than with process-based servers like Apache, because a) blocking an nginx worker process affects multiple requests, and b) blocking all nginx processes is easier as typically there are only a small number of nginx worker processes. > At any rate, my takeaway from this: nginx + NFS = bad. I would take nginx out of the equation. 
If you are going to use NFS it may be a good idea to make sure you've read and understood all the mount options NFS has. In particular, it is believed that using the "soft" option and small timeouts may help a bit. I wouldn't recommend using NFS at all though. -- Maxim Dounin http://nginx.org/ From mrj at advancedcontrols.com.au Thu Feb 6 14:32:44 2014 From: mrj at advancedcontrols.com.au (Mark James) Date: Fri, 07 Feb 2014 01:32:44 +1100 Subject: Root ignored for "location = /"? Message-ID: <52F39D0C.4040708@advancedcontrols.com.au> Hello, I want the index.html file in a particular directory to only be served when the domain's root URI is requested. Using the config server example.com; index index.html; location = / { root path/to/dir; } a request to example.com results in index.html in the Nginx default root "/html" directory being served. The same thing happens with a trailing slash on the root, or when I substitute a trailing-slash alias directive. If I use an alias directive without a trailing slash I get 403 error directory index of "path/to/dir" is forbidden. There are no problems if I instead use "location /". Can anyone suggest a reason or a resolution? Thanks. Mark From kworthington at gmail.com Thu Feb 6 14:59:10 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 6 Feb 2014 09:59:10 -0500 Subject: [nginx-announce] nginx-1.5.10 In-Reply-To: <20140204134549.GD1835@mdounin.ru> References: <20140204134549.GD1835@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.10 for Windows http://goo.gl/OCUvut (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via Twitter ( http://twitter.com/kworthington), if you prefer to receive updates that way. 
Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Feb 4, 2014 at 8:45 AM, Maxim Dounin wrote: > Changes with nginx 1.5.10 04 Feb > 2014 > > *) Feature: the ngx_http_spdy_module now uses SPDY 3.1 protocol. > Thanks to Automattic and MaxCDN for sponsoring this work. > > *) Feature: the ngx_http_mp4_module now skips tracks too short for a > seek requested. > > *) Bugfix: a segmentation fault might occur in a worker process if the > $ssl_session_id variable was used in logs; the bug had appeared in > 1.5.9. > > *) Bugfix: the $date_local and $date_gmt variables used wrong format > outside of the ngx_http_ssi_filter_module. > > *) Bugfix: client connections might be immediately closed if deferred > accept was used; the bug had appeared in 1.3.15. > > *) Bugfix: alerts "getsockopt(TCP_FASTOPEN) ... failed" appeared in > logs > during binary upgrade on Linux; the bug had appeared in 1.5.8. > Thanks to Piotr Sikora. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Feb 6 15:23:33 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 06 Feb 2014 19:23:33 +0400 Subject: Root ignored for "location = /"? In-Reply-To: <52F39D0C.4040708@advancedcontrols.com.au> References: <52F39D0C.4040708@advancedcontrols.com.au> Message-ID: <1907095.vJ2MPN7AKb@vbart-laptop> On Friday 07 February 2014 01:32:44 Mark James wrote: > Hello, > > I want the index.html file in a particular directory to only be served when the domain's root URI is requested. 
> > Using the config > > server example.com; > index index.html; > location = / { > root path/to/dir; > } > > a request to example.com results in index.html in the Nginx default root "/html" directory being served. > > The same thing happens with a trailing slash on the root, or when I substitute a trailing-slash alias directive. > > If I use an alias directive without a trailing slash I get 403 error > > directory index of "path/to/dir" is forbidden. > > There are no problems if I instead use "location /". > > Can anyone suggest a reason or a resolution? The reason is documented: http://nginx.org/r/index "It should be noted that using an index file causes an internal redirect ..." wbr, Valentin V. Bartenev From mrj at advancedcontrols.com.au Thu Feb 6 15:52:48 2014 From: mrj at advancedcontrols.com.au (Mark James) Date: Fri, 07 Feb 2014 02:52:48 +1100 Subject: Root ignored for "location = /"? In-Reply-To: <1907095.vJ2MPN7AKb@vbart-laptop> References: <52F39D0C.4040708@advancedcontrols.com.au> <1907095.vJ2MPN7AKb@vbart-laptop> Message-ID: <52F3AFD0.8020407@advancedcontrols.com.au> On 07/02/14 02:23, Valentin V. Bartenev wrote: > The reason is documented:http://nginx.org/r/index > > "It should be noted that using an index file > causes an internal redirect ..." Thanks very much for this Valentin. I've been stuck on this for a while. The solution was to replace the "location = /" block with a "location = /index.html" block. 
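[Editor's note] Mark's resolution above can be written out as a config sketch. This is hedged: `path/to/dir` and the server name are placeholders carried over from the thread, not a tested configuration.

```nginx
server {
    server_name example.com;
    index index.html;

    # "location = /" alone does not work here: the index directive causes
    # an internal redirect to /index.html, and that redirected request is
    # matched against the locations again -- so it no longer hits
    # "location = /". Matching the redirected URI explicitly picks up the
    # intended root.
    location = /index.html {
        root path/to/dir;
    }
}
```

With this, a request for "/" is handled with the server-level configuration, the index directive redirects it internally to /index.html, and that URI lands in the exact-match location with the custom root.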
From zaphod at berentweb.com Thu Feb 6 16:49:45 2014 From: zaphod at berentweb.com (Beeblebrox) Date: Thu, 6 Feb 2014 18:49:45 +0200 Subject: cache-proxy passes garbled fonts + alias problem Message-ID: Hello, * I am on FreeBSD_11-current_amd64, using nginx/1.5.8 both as intranet server:80 and caching proxy:8080 (squid-like) * I followed this example for the nginx-cache-proxy config file: http://www.goitworld.com/how-to-use-nginx-proxy-cache-replace-squid/ * My edited nginx.conf file: https://docs.google.com/document/d/1q_ikc4Urbq_MDehrFkGKl6In8kTtSaEQ9e7qqTBaT7Q/edit?usp=sharing THE ISSUES: 1. The squid-like proxy passes some sites with garbled fonts, for example: http://dha.com.tr. I suspect the problem on that site is Microsoft-related, as most sites do not have this issue. Also no issue when not using the nginx cache proxy. 2. I'm doing something wrong with alias, and it does not work. location /wp { alias /dbhttp/wordpress/; fastcgi_pass unix:/var/run/www/php-fpm.sock; } The browser does not go to the page when I type in /wp or any other alias. Thanks for your time. From jeroen.ooms at stat.ucla.edu Thu Feb 6 17:11:31 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Thu, 6 Feb 2014 09:11:31 -0800 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <20140206121857.GV1835@mdounin.ru> References: <20140206121857.GV1835@mdounin.ru> Message-ID: On Thu, Feb 6, 2014 at 4:18 AM, Maxim Dounin wrote > > Response headers should fit into proxy_buffer_size, see > http://nginx.org/r/proxy_buffer_size. If they don't, the error > is reported. In which the "size" refers to the number of characters that appear up till the blank line that separates the headers from the body in the response? It looks like it would be around 4k. So perhaps I'll increase proxy_buffer_size to 8k. Is it also necessary to increase proxy_busy_buffers_size and proxy_buffers to deal with responses with many headers? 
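[Editor's note] The "size" being asked about here — everything up to and including the blank line that ends the header section — can be measured by hand. Against a live backend one could run `curl -sD - -o /dev/null "$url" | wc -c`; the sketch below demonstrates the same count on a canned response (the header values are made up for illustration):

```shell
# Count the bytes of an HTTP response's header section (status line,
# header fields, and the CRLF pairs) -- this is what must fit into
# proxy_buffer_size.
response=$'HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nSet-Cookie: session=abc\r\n\r\nbody'

headers=${response%%$'\r\n\r\n'*}    # everything before the blank line
header_bytes=$(( ${#headers} + 4 ))  # +4 for the terminating CRLF CRLF
echo "$header_bytes"
```

If the printed number approaches the configured proxy_buffer_size (4k by default on most platforms), that is when the "too big header" error appears.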
From mdounin at mdounin.ru Thu Feb 6 17:18:19 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Feb 2014 21:18:19 +0400 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: References: <20140206121857.GV1835@mdounin.ru> Message-ID: <20140206171819.GI1835@mdounin.ru> Hello! On Thu, Feb 06, 2014 at 09:11:31AM -0800, Jeroen Ooms wrote: > On Thu, Feb 6, 2014 at 4:18 AM, Maxim Dounin wrote > > > > Response headers should fit into proxy_buffer_size, see > > http://nginx.org/r/proxy_buffer_size. If they don't, the error > > is reported. > > > In which the "size" refers to the number of characters that appear up > till the blank line that separates the headers from the body in the > response? It looks like it would be around 4k. So perhaps I'll > increase proxy_buffer_size to 8k. Yes. > Is it also necessary to increase proxy_busy_buffers_size and > proxy_buffers to deal with responses with many headers? No. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Feb 6 20:10:49 2014 From: nginx-forum at nginx.us (justink101) Date: Thu, 06 Feb 2014 15:10:49 -0500 Subject: Setting a header inside an if block In-Reply-To: <20140206090616.GE43775@lo0.su> References: <20140206090616.GE43775@lo0.su> Message-ID: <7df6c148c4b9de736bb2986526893b5c.NginxMailingListEnglish@forum.nginx.org> Thanks that worked perfectly, though I must admit, a bit of a round-a-bout solution. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247218,247263#msg-247263 From nginx-forum at nginx.us Fri Feb 7 04:39:02 2014 From: nginx-forum at nginx.us (rhklinux) Date: Thu, 06 Feb 2014 23:39:02 -0500 Subject: nginx IMAP hooks In-Reply-To: <20140206125114.GY1835@mdounin.ru> References: <20140206125114.GY1835@mdounin.ru> Message-ID: <9808c7d72f2617dbca0a51ee90317ac0.NginxMailingListEnglish@forum.nginx.org> Thanks ! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247241,247270#msg-247270 From nginx-forum at nginx.us Fri Feb 7 14:21:38 2014 From: nginx-forum at nginx.us (Peleke) Date: Fri, 07 Feb 2014 09:21:38 -0500 Subject: IPv4 & IPv6 combination Message-ID: <9dd097377ebd35a33431aac7f07e1994.NginxMailingListEnglish@forum.nginx.org> I have a virtual server with Debian Wheezy and nginx 1.4.4 installed: nginx version: nginx/1.4.4 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-file-aio --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_spdy_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/headers-more-nginx-module --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-cache-purge --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-development-kit --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-echo 
--add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/ngx-fancyindex --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-push-stream-module --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-lua --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-upload-progress --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/ngx_http_pinba_module --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/ngx_http_substitutions_filter_module --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/ngx_pagespeed --add-module=/usr/src/nginx/source/nginx-1.4.4/debian/modules/nginx-x-rid-header --with-ld-opt=-lossp-uuid Now I want to allow IPv4 AND IPv6 connections at the same time. sudo sysctl net.ipv6.bindv6only net.ipv6.bindv6only = 0 My default.conf looks like this: # --------------------------------------- # vHost default # --------------------------------------- server { ## # Deny Direct Access ## #listen 80 default_server; listen [::]:80 default_server;# ipv6only=on; server_name _; return 444; } and my domain.conf looks like this: server { ## # Basic Settings ## #listen 80; listen [::]:80; server_name domain.tld www.domain.tld; When I want to restart nginx I get peleke at vps:~$ sudo /etc/init.d/nginx restart [....] Restarting nginx: nginxnginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] still could not bind() How can I test if my domain is reachable for everyone on IPv4 AND IPv6? 
I have only a native IPv6 connection so I cannot check for IPv4. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247277,247277#msg-247277 From nginx-forum at nginx.us Fri Feb 7 14:27:02 2014 From: nginx-forum at nginx.us (=?UTF-8?Q? C=C3=A9dric ?=) Date: Fri, 07 Feb 2014 09:27:02 -0500 Subject: 100% CPU Usage Message-ID: Hi (First of all, sorry for my English, I'm French) In my Company we work with Trend Micro Deep Security v9. This solution is installed on a 2008R2 SP1 (Virtual Machine) and every week we have a crash; 2 NGINX.EXE processes use 100% CPU. I have created a scheduled task that reboots the server once per week but the problem is not resolved. NGINX version : Nginx/1.2.3 Nginx.conf : daemon off; worker_processes 1; error_log "C:/Program Files/Trend Micro/Deep Security Relay/relay/logs/error.log"; pid "C:/Program Files/Trend Micro/Deep Security Relay/relay/nginx.pid"; events { worker_connections 1024; } http { server_tokens off; default_type application/octet-stream; keepalive_timeout 60; access_log off; log_not_found off; ssl_certificate "C:/Program Files/Trend Micro/Deep Security Relay/relay/ds_relay.pem"; ssl_certificate_key "C:/Program Files/Trend Micro/Deep Security Relay/relay/ds_relay.key"; server { listen [::]:4122 ssl; listen 4122 ssl; root "C:/Program Files/Trend Micro/Deep Security Relay/relay/iau/"; location / { allow all; } location /data/ { client_body_in_file_only on; client_body_temp_path "C:/Program Files/Trend Micro/Deep Security Relay/relay/iau/upload/"; client_max_body_size 1000m; client_body_timeout 300; fastcgi_pass localhost:4123; fastcgi_param URI $uri; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param CONTENT_FILE $request_body_file; fastcgi_param CONTENT_ROOT $document_root; } location /upload/ { internal; } } } Server : 2008 R2 SP1 (Virtual Machine) 2 vCPU 8Gb memory Have you any ideas about this problem? 
Thanx in advance, Cédric Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247278,247278#msg-247278 From mdounin at mdounin.ru Fri Feb 7 15:00:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 7 Feb 2014 19:00:27 +0400 Subject: 100% CPU Usage In-Reply-To: References: Message-ID: <20140207150027.GS1835@mdounin.ru> Hello! On Fri, Feb 07, 2014 at 09:27:02AM -0500, Cédric wrote: > Hi > (First of all, sorry for my English, I'm French) > In my Company we work with Trend Micro Deep Security v9. > This solution is installed on a 2008R2 SP1 (Virtual Machine) and every week > we have a crash; 2 NGINX.EXE processes use 100% CPU. > I have created a scheduled task that reboots the server once per week but the > problem is not resolved. > > NGINX version : > > Nginx/1.2.3 First of all, I would recommend you to upgrade to nginx 1.5.10. Version 1.2.3 is just outdated. It might also be a good idea to ask Trend Micro for support if you've got nginx as a part of their product. Please also make sure you understand that nginx for Windows is in beta, has a number of known problems and limitations, and it's not really intended for production use. See http://nginx.org/en/docs/windows.html for some details. -- Maxim Dounin http://nginx.org/ From jaderhs5 at gmail.com Fri Feb 7 16:28:10 2014 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Fri, 7 Feb 2014 14:28:10 -0200 Subject: access log over nfs hanging Message-ID: Hello all. I've been using nginx as a reverse proxy and to write access.log files named with variables. I am also using open log file cache. It seems that when some processes are running in the nfs server, the share won't allow writing for some time and I noticed all nginx workers in status D and not processing requests. Are all access.log writes blocking? If my nfs server shuts down in an unexpected way, will nginx stop proxying requests to the backend or responses to the client? Thanks in advance -- att. Jader H. 
Silva -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Fri Feb 7 17:08:09 2014 From: emailgrant at gmail.com (Grant) Date: Fri, 7 Feb 2014 09:08:09 -0800 Subject: restrict by IP for some users In-Reply-To: <20140206000717.GE3581@craic.sysops.org> References: <20140206000717.GE3581@craic.sysops.org> Message-ID: >> I'd like to restrict access to a server block to authenticated users. >> Some of the users should be able to access it from any IP and some of >> the users should be blocked unless they are coming from a particular >> IP. How is this done in nginx? > > Perhaps something along these lines? > > User "a" must come from an address listed in "geo $goodip". > Other users may come from anywhere. > > === > map $remote_user $userip { > default 1; > a $goodip; > } > > geo $goodip { > default 0; > 127.0.0.0/24 1; > } > > server { > auth_basic "This Site"; > auth_basic_user_file htpasswd; > if ($userip = 0) { > return 403; > } > } Interesting solution. I never would have thought of that. I was using an alias to do this in apache. Are there performance implications of adding the geo and map modules to nginx and running that code? - Grant From emailgrant at gmail.com Fri Feb 7 18:05:32 2014 From: emailgrant at gmail.com (Grant) Date: Fri, 7 Feb 2014 10:05:32 -0800 Subject: No authentication prompt with if block Message-ID: Authentication works fine if I don't include the if block but I'd like to allow only a certain user access to this server block. I get a 403 in the browser without any prompt for authentication. auth_basic "Authentication Required"; auth_basic_user_file htpasswd; if ($remote_user != "myuser") { return 403; } What am I doing wrong? 
- Grant From nginx-forum at nginx.us Fri Feb 7 18:06:21 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Fri, 07 Feb 2014 13:06:21 -0500 Subject: Transforming nginx for Windows In-Reply-To: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is very nice. Thanks. I noticed the encrypted-session-nginx-module is missing from this build. Is there some reason you omitted it? Have you published any docs on how to build this ourselves in windows using your source code? Or is that obvious and I should RTM? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,247287#msg-247287 From nginx-forum at nginx.us Fri Feb 7 18:34:41 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 07 Feb 2014 13:34:41 -0500 Subject: Transforming nginx for Windows In-Reply-To: References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> Message-ID: tonyschwartz Wrote: ------------------------------------------------------- > This is very nice. Thanks. I noticed the > encrypted-session-nginx-module is missing from this build. Is there No reason to omit it other than no one has requested it :) I'll have a look and add it if there are no cross source/module conflicts for the next release, at the moment we're busy with integrating a select-boost api in the core. > some reason you omitted it? Have you published any docs on how to > build this ourselves in windows using your source code? Or is that > obvious and I should RTM? A Rtfm is always recommended ;) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,247288#msg-247288 From mdounin at mdounin.ru Fri Feb 7 19:09:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 7 Feb 2014 23:09:42 +0400 Subject: No authentication prompt with if block In-Reply-To: References: Message-ID: <20140207190942.GV1835@mdounin.ru> Hello! 
On Fri, Feb 07, 2014 at 10:05:32AM -0800, Grant wrote: > Authentication works fine if I don't include the if block but I'd like > to allow only a certain user access to this server block. I get a 403 > in the browser without any prompt for authentication. > > auth_basic "Authentication Required"; > auth_basic_user_file htpasswd; > if ($remote_user != "myuser") { > return 403; > } > > What am I doing wrong? Rewrite directives, including "if", are executed before access checks (and hence auth_basic). So in your configuration 403 is returned before auth_basic has a chance to ask for authentication by returning 401. Something like map $remote_user $invalid_user { default 1; "" 0; "myuser" 0; } if ($invalid_user) { return 403; } auth_basic ... should work, as it will allow empty $remote_user and auth_basic will be able to ask for authentication if credentials weren't supplied. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Feb 7 23:45:20 2014 From: nginx-forum at nginx.us (tbamise) Date: Fri, 07 Feb 2014 18:45:20 -0500 Subject: One link/area on a https site with a different SSL config? In-Reply-To: <87E64380-0C14-4AAF-BA14-7FBFECD9FE7A@sysoev.ru> References: <87E64380-0C14-4AAF-BA14-7FBFECD9FE7A@sysoev.ru> Message-ID: >> Patrick Lists wrote in post #1132735: >>> On 09-01-14 22:48, Styopa Semenukha wrote: >>>> Patrick, >>>> >>>> It's not possible, because SSL works on lower level (session layer) than HTTP >>> (application layer). >>> >>> Thank you for your feedback. That's unfortunate. I hope to see flexible >>> SSL config one day as an enhancement (if possible). >> >> It is not possible, not with nginx nor any other web server. Read up on >> how the SSL handshake and HTTP over SSL works, and it should become >> clear. >It is actually possible, at least Apache can do this with SSL renegotiation. >But nginx currently does not support this. 
Expanding on this question, is it possible to use a different set of certs for the client side and another set for the upstream server side? Right now I can define a server block with ssl and specify the ssl certificates and specify the https protocol for proxy_pass for a location. But both connections end up using the same certificates specified with $ssl_certificate. How can I specify different certificates for the client side connection and upstream side connection? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246208,247293#msg-247293 From siefke_listen at web.de Sat Feb 8 02:27:17 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Sat, 8 Feb 2014 03:27:17 +0100 Subject: SSL Question/Error Message-ID: <20140208032717.81de52f06cd530955187b3c9@web.de> Hello, I have set up my vhost with SSL and created a certificate on cacert.org. I used the tutorial at: https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/ I have no problems in the error log, none in the access log, all OK; only when I call my blog with Chrome or Firefox the CSS files will not be loaded. Is that a browser error or is it the SSL configuration? Without SSL everything runs perfectly; with SSL the blog CSS does not load. I use the following in the vhost config: listen 443 ssl spdy; plus the certificate and key, and these two lines in the global config: ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; Does someone have an idea? Thank you for help & Nice Day Silvio From siefke_listen at web.de Sat Feb 8 02:28:48 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Sat, 8 Feb 2014 03:28:48 +0100 Subject: SSL Question/Error In-Reply-To: <20140208032717.81de52f06cd530955187b3c9@web.de> References: <20140208032717.81de52f06cd530955187b3c9@web.de> Message-ID: <20140208032848.58a09b76bc8336de285d80c5@web.de> Hello, sorry, I forgot the website. 
http://silviosiefke.com/blog https://silviosiefke.com/blog Thank you for help & Nice Day Silvio From ajaykemparaj at gmail.com Sat Feb 8 02:39:55 2014 From: ajaykemparaj at gmail.com (Ajay k) Date: Sat, 8 Feb 2014 08:09:55 +0530 Subject: NginxHttpRewriteModule compiled sequence In-Reply-To: <20140206122137.GW1835@mdounin.ru> References: <20140206122137.GW1835@mdounin.ru> Message-ID: Thanks Maxim Dounin On Thu, Feb 6, 2014 at 5:51 PM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 06, 2014 at 01:48:36PM +0530, Ajay k wrote: > > > Is there a way to print all the compiled sequences of a rewrite module as > > documented in > > > > http://wiki.nginx.org/NginxHttpRewriteModule > > No. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks, Ajay K -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sat Feb 8 02:45:14 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 08 Feb 2014 06:45:14 +0400 Subject: SSL Question/Error In-Reply-To: <20140208032848.58a09b76bc8336de285d80c5@web.de> References: <20140208032717.81de52f06cd530955187b3c9@web.de> <20140208032848.58a09b76bc8336de285d80c5@web.de> Message-ID: <1742554.RXT6pHguXH@vbart-laptop> On Saturday 08 February 2014 03:28:48 Silvio Siefke wrote: > Hello, > > sorry i forget the website. > > http://silviosiefke.com/blog > https://silviosiefke.com/blog > > Thank you for help & Nice Day > Silvio > Browser clearly says what is the problem: "Firefox has blocked content that isn't secure". Indeed, all page resources on your site are linked using absolute URIs with the "http" scheme: wbr, Valentin V. 
Bartenev From siefke_listen at web.de Sat Feb 8 03:00:43 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Sat, 8 Feb 2014 04:00:43 +0100 Subject: SSL Question/Error In-Reply-To: <1742554.RXT6pHguXH@vbart-laptop> References: <20140208032717.81de52f06cd530955187b3c9@web.de> <20140208032848.58a09b76bc8336de285d80c5@web.de> <1742554.RXT6pHguXH@vbart-laptop> Message-ID: <20140208040043.29159a535b5a45cf73d059ba@web.de> Hello, On Sat, 08 Feb 2014 06:45:14 +0400 "Valentin V. Bartenev" wrote: > Browser clearly says what is the problem: > "Firefox has blocked content that isn't secure". yes hhh, that's true. The easy answer is never the one you see :) But why does it happen on the blog, while the start page works? The same http links are in the head? Thank you for help & Nice Day Silvio From vbart at nginx.com Sat Feb 8 03:03:33 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 08 Feb 2014 07:03:33 +0400 Subject: SSL Question/Error In-Reply-To: <20140208040043.29159a535b5a45cf73d059ba@web.de> References: <20140208032717.81de52f06cd530955187b3c9@web.de> <1742554.RXT6pHguXH@vbart-laptop> <20140208040043.29159a535b5a45cf73d059ba@web.de> Message-ID: <1729604.KbkTgS9aG0@vbart-laptop> On Saturday 08 February 2014 04:00:43 Silvio Siefke wrote: > Hello, > > On Sat, 08 Feb 2014 06:45:14 +0400 "Valentin V. Bartenev" > wrote: > > > Browser clearly says what is the problem: > > "Firefox has blocked content that isn't secure". > > yes hhh, that's true. The easy answer is never the one you see :) But why does it happen > on the blog, while the start page works? The same http links are in the head? > Not the same: on your start page the link to CSS is relative. wbr, Valentin V. 
Bartenev From siefke_listen at web.de Sat Feb 8 03:21:07 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Sat, 8 Feb 2014 04:21:07 +0100 Subject: SSL Question/Error In-Reply-To: <1729604.KbkTgS9aG0@vbart-laptop> References: <20140208032717.81de52f06cd530955187b3c9@web.de> <1742554.RXT6pHguXH@vbart-laptop> <20140208040043.29159a535b5a45cf73d059ba@web.de> <1729604.KbkTgS9aG0@vbart-laptop> Message-ID: <20140208042107.3582db694428e768e33ccc9a@web.de> Hello, On Sat, 08 Feb 2014 07:03:33 +0400 "Valentin V. Bartenev" wrote: > Not the same, on your start page link to CSS is relative: > Ah okay the css files. I think it mean html5shiv and human.txt too, but now has links change too https and has rewrite. Now should work. Thank you for help & Nice Day Silvio From emailgrant at gmail.com Sat Feb 8 16:43:53 2014 From: emailgrant at gmail.com (Grant) Date: Sat, 8 Feb 2014 08:43:53 -0800 Subject: No authentication prompt with if block In-Reply-To: <20140207190942.GV1835@mdounin.ru> References: <20140207190942.GV1835@mdounin.ru> Message-ID: >> Authentication works fine if I don't include the if block but I'd like >> to allow only a certain user access to this server block. I get a 403 >> in the browser without any prompt for authentication. >> >> auth_basic "Authentication Required"; >> auth_basic_user_file htpasswd; >> if ($remote_user != "myuser") { >> return 403; >> } >> >> What am I doing wrong? > > Rewrite directives, including "if", are executed before access > checks (and hence auth_basic). So in your cofiguration 403 is > returned before auth_basic has a chance to ask for authentication > by returning 401. > > Something like > > map $remote_user $invalid_user { > default 1; > "" 0; > "myuser" 0; > } > > if ($invalid_user) { > return 403; > } > > auth_basic ... > > should work, as it will allow empty $remote_user and auth_basic > will be able to ask for authentication if credentials wasn't > supplied. That works great, thank you. 
Does adding 'map' slow the server down much? - Grant From mdounin at mdounin.ru Sat Feb 8 21:05:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 9 Feb 2014 01:05:28 +0400 Subject: No authentication prompt with if block In-Reply-To: References: <20140207190942.GV1835@mdounin.ru> Message-ID: <20140208210528.GX1835@mdounin.ru> Hello! On Sat, Feb 08, 2014 at 08:43:53AM -0800, Grant wrote: > >> Authentication works fine if I don't include the if block but I'd like > >> to allow only a certain user access to this server block. I get a 403 > >> in the browser without any prompt for authentication. > >> > >> auth_basic "Authentication Required"; > >> auth_basic_user_file htpasswd; > >> if ($remote_user != "myuser") { > >> return 403; > >> } > >> > >> What am I doing wrong? > > > > Rewrite directives, including "if", are executed before access > > checks (and hence auth_basic). So in your cofiguration 403 is > > returned before auth_basic has a chance to ask for authentication > > by returning 401. > > > > Something like > > > > map $remote_user $invalid_user { > > default 1; > > "" 0; > > "myuser" 0; > > } > > > > if ($invalid_user) { > > return 403; > > } > > > > auth_basic ... > > > > should work, as it will allow empty $remote_user and auth_basic > > will be able to ask for authentication if credentials wasn't > > supplied. > > That works great, thank you. Does adding 'map' slow the server down much? No, not at all. In contrast, using maps is usually faster than any other method to do conditional checks. See docs at http://nginx.org/r/map, in particular this note: : Since variables are evaluated only when they are used, the mere : declaration even of a large number of ?map? variables does not add : any extra costs to request processing. 
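Pulling the fragments above together, a complete sketch of the recommended setup (the user name, realm, and htpasswd file name are the thread's own examples):

```nginx
# Evaluated lazily, only when $invalid_user is first used.
map $remote_user $invalid_user {
    default  1;   # any other authenticated user is rejected
    ""       0;   # empty: let auth_basic challenge with a 401 first
    "myuser" 0;   # the one permitted user
}

server {
    listen 80;

    location / {
        # "if" runs in the rewrite phase, before auth_basic, so it must
        # only reject users that have already supplied credentials.
        if ($invalid_user) {
            return 403;
        }

        auth_basic           "Authentication Required";
        auth_basic_user_file htpasswd;
    }
}
```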
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Feb 9 00:39:12 2014 From: nginx-forum at nginx.us (tbamise) Date: Sat, 08 Feb 2014 19:39:12 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx Message-ID: Is it possible to use a different set of certs for the client side and another set for the upstream server side? My use case is to have different sets of local ssl certs on Nginx. A key/cert pair for communicating with clients and another set for communicating with the upstream proxy. Right now I can define a server module with ssl and specify the ssl certificates and specify a https protocol for proxy_pass for a location. But both client and upstream connections end up using the same certificates specified with $ssl_certificate. How can I specify different certificates for the client side connection and upstream side connection? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247305#msg-247305 From mdounin at mdounin.ru Sun Feb 9 21:12:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Feb 2014 01:12:37 +0400 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: References: Message-ID: <20140209211236.GZ1835@mdounin.ru> Hello! On Sat, Feb 08, 2014 at 07:39:12PM -0500, tbamise wrote: > Is it possible to use a different set of certs for the client side and > another set for the upstream server side? > > My use case is to have different sets of local ssl certs on Nginx. A > key/cert pair for communicating with clients and another set for > communicating with the upstream proxy. > > Right now I can define a server module with ssl and specify the ssl > certificates and specify a https protocol for proxy_pass for a location. But > both client and upstream connections end up using the same certificates > specified with $ssl_certificate. How can I specify different certificates > for the client side connection and upstream side connection? 
Connections to upstream servers don't use any client certificates. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Feb 9 22:15:10 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 09 Feb 2014 17:15:10 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: References: Message-ID: tbamise Wrote: ------------------------------------------------------- > Is it possible to use a different set of certs for the client side and > another set for the upstream server side? Use a tunnel like stunnel to encrypt upstreams, which supports client certs. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247310#msg-247310 From nginx-forum at nginx.us Mon Feb 10 00:13:55 2014 From: nginx-forum at nginx.us (tbamise) Date: Sun, 09 Feb 2014 19:13:55 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <20140209211236.GZ1835@mdounin.ru> References: <20140209211236.GZ1835@mdounin.ru> Message-ID: <0859e77f6f623e6e1fe71684d75c84b8.NginxMailingListEnglish@forum.nginx.org> > > Connections to upstream servers don't use any client certificates. > Yes I agree. The connection to the upstream server uses the nginx server certificates specified by $ssl_certificate(_key). Basically I want to use: for downstream to client - a.cert & a.cert.key for connection to clients for upstream to upstream servers - b.cert & b.cert.key for connection to upstream servers. 
The https & server modules of Nginx only allow you to specify a single cert pair via $ssl_certificate(_key) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247311#msg-247311 From nginx-forum at nginx.us Mon Feb 10 00:21:30 2014 From: nginx-forum at nginx.us (tbamise) Date: Sun, 09 Feb 2014 19:21:30 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: References: Message-ID: <8bd627c29b413ac2ee599aa482a3b103.NginxMailingListEnglish@forum.nginx.org> itpp2012 Wrote: ------------------------------------------------------- > tbamise Wrote: > ------------------------------------------------------- > > Is it possible to use a different set of certs for the client side > and > > another set for the upstream server side? > > Use a tunnel like stunnel to encrypt upstreams, which supports client > certs. I've heard that stunnel does not scale very well. I'm looking at managing a lot of simultaneous ssl connections, hence using Nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247312#msg-247312 From nginx-forum at nginx.us Mon Feb 10 00:24:19 2014 From: nginx-forum at nginx.us (tbamise) Date: Sun, 09 Feb 2014 19:24:19 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <0859e77f6f623e6e1fe71684d75c84b8.NginxMailingListEnglish@forum.nginx.org> References: <20140209211236.GZ1835@mdounin.ru> <0859e77f6f623e6e1fe71684d75c84b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52ff482ff09605cb74046980763e3c36.NginxMailingListEnglish@forum.nginx.org> tbamise Wrote: ------------------------------------------------------- > > > > Connections to upstream servers don't use any client certificates. > > > > Yes I agree. The connection to the upstream server uses the nginx > server certificates specified by $ssl_certificate(_key).
> Basically I want to use: > for downstream to client - a.cert & a.cert.key for connection to > clients > for upstream to upstream servers - b.cert & b.cert.key for connection > to upstream servers. > > The https & server modules of Nginx only allow you to specify a single > cert pair via $ssl_certificate(_key) For lack of better words, I'm looking to terminate the client ssl connection at Nginx and establish a new ssl connection with the upstream server without modifying the hypertext transport protocol. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247313#msg-247313 From nginx-forum at nginx.us Mon Feb 10 08:13:06 2014 From: nginx-forum at nginx.us (Cédric) Date: Mon, 10 Feb 2014 03:13:06 -0500 Subject: 100% CPU Usage In-Reply-To: <20140207150027.GS1835@mdounin.ru> References: <20140207150027.GS1835@mdounin.ru> Message-ID: Thanks for your answer, I will try to see with Trend. Have a nice day Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247278,247316#msg-247316 From nginx-forum at nginx.us Mon Feb 10 08:35:07 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 10 Feb 2014 03:35:07 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <8bd627c29b413ac2ee599aa482a3b103.NginxMailingListEnglish@forum.nginx.org> References: <8bd627c29b413ac2ee599aa482a3b103.NginxMailingListEnglish@forum.nginx.org> Message-ID: <19162ef197a9a14c38b28f83b6d9a7f0.NginxMailingListEnglish@forum.nginx.org> > I've heard that stunnel does not scale very well. I'm looking at > managing a lot of simultaneous ssl connections hence using Nginx. You can load balance them, even create a pool for one worker with Lua and expand them as needed.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247317#msg-247317 From mdounin at mdounin.ru Mon Feb 10 09:11:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Feb 2014 13:11:50 +0400 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <0859e77f6f623e6e1fe71684d75c84b8.NginxMailingListEnglish@forum.nginx.org> References: <20140209211236.GZ1835@mdounin.ru> <0859e77f6f623e6e1fe71684d75c84b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140210091150.GA1835@mdounin.ru> Hello! On Sun, Feb 09, 2014 at 07:13:55PM -0500, tbamise wrote: > > > > Connections to upstream servers don't use any client certificates. > > > > Yes I agree. The connection to the upstream server uses the nginx server > certificates specified by $ssl_certificate(_key). It looks like you didn't understand my answer. Again: connections to upstream servers don't use any client certificates. That is, no certificates are used by nginx when connecting to upstream servers. > Basically I want to use: > for downstream to client - a.cert & a.cert.key for connection to clients > for upstream to upstream servers - b.cert & b.cert.key for connection to > upstream servers. > > The https & server modules of Nginx only allow you to specify a single cert > pair via $ssl_certificate(_key) The only thing you can specify is ssl_client_certificate (and ssl_client_certificate_key), and it is used only in connections with clients. SSL support in the proxy module is rather rudimentary and it doesn't support using client certificates. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Feb 10 09:55:00 2014 From: nginx-forum at nginx.us (parnican) Date: Mon, 10 Feb 2014 04:55:00 -0500 Subject: ASP.NET pages with nginx Message-ID: <56b9921288138483cc3c597f4adb3008.NginxMailingListEnglish@forum.nginx.org> Hi all, I would like to use my aspx pages with a Raspberry Pi and nginx, but it seems to be no easy goal...
I have tried almost all "tutorials" on the web but I cannot find a solution for my issue. With lots of experiments I was able to reach a point where I cannot move forward.

No Application Found ,Unable to find a matching application for request: ,Host bernolak.dyndns.info:8080 ,Port 8080 ,Request Path /Default.aspx ,Physical Path /var/www/demo/Default.aspx

I'm not sure where the problem is: is this error message (500) generated by nginx or by the mono "framework"? Any help is highly appreciated. Thanks, Peter

sudo apt-get install nginx
sudo apt-get install mono-complete
sudo apt-get install mono-fastcgi-server4

server {
    listen 8080;
    server_name bernolak.dyndns.info;
    #root /var/www/demo;
    access_log /var/log/nginx/bernolak.access.log;
    error_log /var/log/nginx/bernolak.error.log;
    location / {
        root /var/www/demo;
        index index.html index.htm default.aspx Default.aspx;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
        fastcgi_buffer_size 4K;
        fastcgi_buffers 64 4k;
    }
}

Added to /etc/nginx/fastcgi_params:
fastcgi_param PATH_INFO "";
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

Verified the configuration and the reload was successful:
sudo nginx -t && sudo service nginx reload

Started the mono server by running this command:
sudo fastcgi-mono-server4 /applications=/bernolak.dyndns.info:8080:/:/var/www/demo/ /socket=tcp:127.0.0.1:9000 /logfile=/var/log/mono/fastcgi.log /printlog=True &

When I reload the URL I can see the "No Application Found" error mentioned above.
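Worth noting: the "No Application Found" page is produced by the mono FastCGI server itself, not by nginx, so nginx is clearly reaching the backend; the backend just fails to match the request's host/port against its /applications list. A speculative, untested variant is to register the application with a bare virtual path so the match no longer depends on the exact Host value:

```shell
# Hypothetical alternative invocation: map "/" to the physical path
# without tying the application to a specific hostname:port pair.
sudo fastcgi-mono-server4 \
    /applications=/:/var/www/demo/ \
    /socket=tcp:127.0.0.1:9000 \
    /logfile=/var/log/mono/fastcgi.log /printlog=True &
```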
Multiple screenshots with this issue are here http://www.raspberrypi.org/forum/viewtopic.php?f=66&t=68858&sid=addea653dc910687553f6957aae1add5 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247323#msg-247323 From luky-37 at hotmail.com Mon Feb 10 10:06:48 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 10 Feb 2014 11:06:48 +0100 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <20140210091150.GA1835@mdounin.ru> References: <20140209211236.GZ1835@mdounin.ru>, <0859e77f6f623e6e1fe71684d75c84b8.NginxMailingListEnglish@forum.nginx.org>, <20140210091150.GA1835@mdounin.ru> Message-ID: Hi, >> Yes I agree. The connection to the upstream server uses the nginx server >> certificates specified by $ssl_certificate(_key). > > It looks like you didn't understand my answer. Again: connections > to upstream servers don't use any client certificates. That is, > no certificates are used by nginx when connecting to upstream > servers. Take a look at haproxy, it can use client certificates when connecting to backend servers [1]. Regards, Lukas [1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-crt From nginx-forum at nginx.us Mon Feb 10 10:55:35 2014 From: nginx-forum at nginx.us (rubenarslan) Date: Mon, 10 Feb 2014 05:55:35 -0500 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <20140206171819.GI1835@mdounin.ru> References: <20140206171819.GI1835@mdounin.ru> Message-ID: <6f7eb680d7a229ab5d91cfed3dc66434.NginxMailingListEnglish@forum.nginx.org> Hi Maxim & Jeroen, I'm the user Jeroen mentioned. I'm sorry for only being to produce sporadic errors earlier, I now made a test case which reliably produces the error, both on our server and Jeroen's server (so it's hopefully not just my amateur status with nginx). 
Of course, the ridiculously large proxy settings were only chosen in desperation, but I can now report that no increase of proxy buffer settings solves the problem at all (or shifts the treshold at which the error messages occur). So I think it's safe to say the error message upstream sent too big header while reading response header from upstream is misleading (unless the CORS headers that Jeroen added are somehow inflated through the POST request, wouldn't know why that would be). I can also now rule out that's its due to the headers sent or received. I receive only the following headers when the 502s occur HTTP/1.1 100 Continue HTTP/1.1 502 Bad Gateway Server: nginx/1.4.4 Date: Mon, 10 Feb 2014 10:46:57 GMT Content-Type: text/html Content-Length: 172 Connection: keep-alive And I sent only these POST /ocpu/library/base/R/identity HTTP/1.1 Host: ourhost.xx Accept: */* Content-Length: 3545 Content-Type: application/x-www-form-urlencoded Expect: 100-continue I do send a large request, 2241 characters (inlined JSON-ified data), but that is not near the upper limit of POST request as I know them. Here's the issue on the openCPU github, where I uploaded my test cases (sorry for it being a bit primitively done, I'm in a bit of a tight spot time-wise here and just an amateur): https://github.com/jeroenooms/opencpu/issues/76 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247230,247327#msg-247327 From al-nginx at none.at Mon Feb 10 11:06:26 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 10 Feb 2014 12:06:26 +0100 Subject: high Traffic setup problem, module status don't deliver data Message-ID: <4014345467c8e05baa091ec6841a57cb@none.at> Dear list member. currently we have a huge traffic come up. ~500 r/s http://download.none.at/nginx_request-day.png ~3.5K active connections http://download.none.at/port_www-day.png http://download.none.at/nginx_combined.png The Peaks are the raw values from module status. 
~1.1g b/s traffic http://download.none.at/if_eth2-day.png http://download.none.at/tcp-day.png I have tried to setup the machine for this traffic but it looks to me that was not successfully. HW: 24 CPUs Memory: 49381124k/52166656k available (7176k kernel code, 1897452k absent, 888080k reserved, 6067k data, 1016k init) OS: lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.4 LTS Release: 12.04 Codename: precise Nginx: /home/nginx/server/sbin/nginx -V nginx version: nginx/1.4.4 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled configure arguments: --prefix=/home/nginx/server --with-debug --without-http_uwsgi_module --without-http_scgi_module --without-http_empty_gif_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_ssl_module --user=nginx --group=www-data --with-file-aio --without-http_ssi_module --with-http_secure_link_module --with-http_sub_module --with-http_spdy_module Conf: http://download.none.at/my_nginx.conf When I activate the aio, nginx and xfs crashes, that's why aio is not active. In one include file we have the following. #### location ~ recent { add_header Cache-Control "no-cache"; } #### sysctl -a http://download.none.at/sysctl.txt lsmod: http://download.none.at/lsmod.txt dmesg: http://download.none.at/dmesg.txt On this machine also runs a postgresql and php-fpm but the current traffic is from delivering of pictures from the file system. /dev/mapper/pada2_vg-pada2_lv on /home/ type xfs (rw,noatime,nodiratime,attr2,inode64,noquota) Thanks for help. 
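For what it's worth, an immediate RST at this packet rate is a classic symptom of listen-queue overflow at the kernel level rather than an nginx bug. A hedged sketch of the knobs involved (the values are purely illustrative, not a recommendation for this box):

```nginx
# Kernel side, e.g. in /etc/sysctl.conf (check overflow counters with
# "netstat -s | grep -i listen" before and after):
#   net.core.somaxconn = 4096
#   net.ipv4.tcp_max_syn_backlog = 4096
#
# nginx side: the listen backlog defaults to 511 on Linux, which caps
# whatever somaxconn allows for this particular socket.
server {
    listen 80 backlog=4096;
}
```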
Best regards Aleks From black.fledermaus at arcor.de Mon Feb 10 11:13:02 2014 From: black.fledermaus at arcor.de (basti) Date: Mon, 10 Feb 2014 12:13:02 +0100 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <6f7eb680d7a229ab5d91cfed3dc66434.NginxMailingListEnglish@forum.nginx.org> References: <20140206171819.GI1835@mdounin.ru> <6f7eb680d7a229ab5d91cfed3dc66434.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52F8B43E.1050001@arcor.de> Hello, try this http://stackoverflow.com/questions/13894386/upstream-too-big-nginx-codeigniter Regards, Basti On 10.02.2014 11:55, rubenarslan wrote: > Hi Maxim & Jeroen, > > I'm the user Jeroen mentioned. I'm sorry for only being to produce sporadic > errors earlier, I now made a test case which reliably produces > the error, both on our server and Jeroen's server (so it's hopefully not > just my amateur status with nginx). > > Of course, the ridiculously large proxy settings were only chosen in > desperation, but I can now report that no increase of proxy buffer > settings solves the problem at all (or shifts the treshold at which the > error messages occur). So I think it's safe to say the error message > upstream sent too big header while reading response header from upstream > is misleading (unless the CORS headers that Jeroen added are somehow > inflated through the POST request, wouldn't know why that would be). > > > I can also now rule out that's its due to the headers sent or received. 
> > I receive only the following headers when the 502s occur > > HTTP/1.1 100 Continue > > HTTP/1.1 502 Bad Gateway > Server: nginx/1.4.4 > Date: Mon, 10 Feb 2014 10:46:57 GMT > Content-Type: text/html > Content-Length: 172 > Connection: keep-alive > > And I sent only these > > POST /ocpu/library/base/R/identity HTTP/1.1 > Host: ourhost.xx > Accept: */* > Content-Length: 3545 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > > > I do send a large request, 2241 characters (inlined JSON-ified data), but > that is not near the upper limit of POST request as I know them. > > Here's the issue on the openCPU github, where I uploaded my test cases > (sorry for it being a bit primitively done, I'm in a bit of a tight spot > time-wise here and just an amateur): > https://github.com/jeroenooms/opencpu/issues/76 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247230,247327#msg-247327 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Feb 10 12:15:39 2014 From: nginx-forum at nginx.us (tbamise) Date: Mon, 10 Feb 2014 07:15:39 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <19162ef197a9a14c38b28f83b6d9a7f0.NginxMailingListEnglish@forum.nginx.org> References: <8bd627c29b413ac2ee599aa482a3b103.NginxMailingListEnglish@forum.nginx.org> <19162ef197a9a14c38b28f83b6d9a7f0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7cc81773d02500665e6970a42fdb60ce.NginxMailingListEnglish@forum.nginx.org> itpp2012 Wrote: ------------------------------------------------------- > > I've heard that stunned does not scale very well. I'm looking at > > managing a lot of simultaneous ssl connections hence using Nginx. > > You can loadbalance them, even create a pool for one worker with Lua > and expand them as needed. Thanks! 
I'll try this Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247331#msg-247331 From nginx-forum at nginx.us Mon Feb 10 12:25:31 2014 From: nginx-forum at nginx.us (tbamise) Date: Mon, 10 Feb 2014 07:25:31 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <20140210091150.GA1835@mdounin.ru> References: <20140210091150.GA1835@mdounin.ru> Message-ID: Hello! > The only thing you can specify is ssl_client_certificate (and > ssl_client_certificate_key), and it is used only in connections > with clients. > Following Nginx docs (http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate) you can specify ssl_certificate_key and ssl_certificate files in an nginx conf file which specifies the files with the certificate in PEM format for the given virtual server. The ssl_client_certificate configuration refers to the CA certificate used to verify clients. I'll rephrase the question. I'm interested in server certificates (not client). The ssl_certificate_key file is used as a private key for the server to decrypt ssl connections for clients. I'm looking to configure another key for encrypting ssl connections from the nginx server to the upstream server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247332#msg-247332 From nginx-forum at nginx.us Mon Feb 10 12:50:35 2014 From: nginx-forum at nginx.us (rubenarslan) Date: Mon, 10 Feb 2014 07:50:35 -0500 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <6f7eb680d7a229ab5d91cfed3dc66434.NginxMailingListEnglish@forum.nginx.org> References: <20140206171819.GI1835@mdounin.ru> <6f7eb680d7a229ab5d91cfed3dc66434.NginxMailingListEnglish@forum.nginx.org> Message-ID: <40b3f7d2f40dcd38268b687ec496075e.NginxMailingListEnglish@forum.nginx.org> Hi, after some further testing I discovered that I had the order in which various nginx config files are called wrong.
Because location {} isn't merged, but overridden, my directives never 'took'. Setting proxy_buffer_size 8k; kept the errors from occurring. As I wrote on Github https://github.com/jeroenooms/opencpu/issues/76 , it still seems like the error message was misleading here, because the headers sent were identical (except for the exact Content-Length) and the headers received were pretty much the same as well. There are no cookies at all involved here. Thanks for the advice and nginx! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247230,247334#msg-247334 From nginx-forum at nginx.us Mon Feb 10 12:56:22 2014 From: nginx-forum at nginx.us (rubenarslan) Date: Mon, 10 Feb 2014 07:56:22 -0500 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <52F8B43E.1050001@arcor.de> References: <52F8B43E.1050001@arcor.de> Message-ID: <3c7cdc260c2a866a032ca3f30f5718df.NginxMailingListEnglish@forum.nginx.org> Hi Basti, thanks, I found the SO post myself. I had not set up the directives properly, so thought the fix didn't work. It does now. I also think they described a different problem, as in my case no cookies were sent, headers were fairly small and two requests with pretty much identical headers sent/received had different results (only 502ed). I'm very interested in learning what exactly may have caused the message in my case, so as to know the boundaries in which my requests will work. I now set the buffer_size to 8K and I don't see failures with Content-Lengths that go far beyond that (though maybe compression has to be considered?). 
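The pitfall described above can be summarized in config form: proxy_* buffer directives only apply if they live in, or are inherited by, the location block that actually handles the proxied request; a more specific location that redefines them silently shadows the outer settings. A minimal sketch, with a hypothetical upstream name:

```nginx
location /ocpu/ {
    proxy_pass http://opencpu_backend;  # hypothetical backend name

    # Size of the buffer used for the first part of the upstream
    # response, i.e. the response headers. Must take effect for THIS
    # location; a sibling or nested location that sets its own proxy_*
    # directives will not inherit it.
    proxy_buffer_size 8k;  # the value that stopped the 502s in this thread
}
```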
Best regards, Ruben Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247230,247335#msg-247335 From mdounin at mdounin.ru Mon Feb 10 13:15:03 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Feb 2014 17:15:03 +0400 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <3c7cdc260c2a866a032ca3f30f5718df.NginxMailingListEnglish@forum.nginx.org> References: <52F8B43E.1050001@arcor.de> <3c7cdc260c2a866a032ca3f30f5718df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140210131503.GG1835@mdounin.ru> Hello! On Mon, Feb 10, 2014 at 07:56:22AM -0500, rubenarslan wrote: > Hi Basti, > > thanks, I found the SO post myself. I had not set up the directives > properly, so thought the fix didn't work. It does now. I also think they > described a different problem, as in my case no cookies were sent, headers > were fairly small and two requests with pretty much identical headers > sent/received had different results (only 502ed). > > I'm very interested in learning what exactly may have caused the message in > my case, so as to know the boundaries in which my requests will work. I now > set the buffer_size to 8K and I don't see failures with Content-Lengths that > go far beyond that (though maybe compression has to be considered?). Another possible cause may be use of $request_body in proxy_cache_key. Cache header, including cache key, is placed into proxy buffer if caching is enabled, and effectively reduces proxy_buffer_size available to read response headers. 
Assuming you are using configs like this one: https://github.com/jeroenooms/opencpu-deb/blob/master/opencpu-cache/nginx/opencpu-ocpu.conf it is likely the cause, as the config includes the following lines: proxy_cache_methods POST; proxy_cache_key "$request_method$request_uri$request_body"; -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Feb 10 13:59:43 2014 From: nginx-forum at nginx.us (rubenarslan) Date: Mon, 10 Feb 2014 08:59:43 -0500 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <20140210131503.GG1835@mdounin.ru> References: <20140210131503.GG1835@mdounin.ru> Message-ID: <4b7c86acf9ef434cca7ad297ac778826.NginxMailingListEnglish@forum.nginx.org> Yes, that's quite probably it! Then I guess it's on Jerome to weigh in, I don't know the exact reasoning for doing so, I would have thought it would be more appropriate to hash the request body in the cache key, but maybe that's not possibly using nginx? OpenCPU allows for request bodies up to 500M by default, what kind of settings would be necessary to make that (or something more reasonable like 50M) play with the buffer? I'm guessing the buffer doesn't have to be as large as the maximal request, right? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247230,247352#msg-247352 From luky-37 at hotmail.com Mon Feb 10 14:44:13 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 10 Feb 2014 15:44:13 +0100 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: References: <20140210091150.GA1835@mdounin.ru>, Message-ID: Hi, > I'll rephrase the question. I'm interested in server certificates (not > client). The ssl_certificate_key file is used as a private key for the > server to decrypt ssl connections for clients. I'm looking to configure > another key for encrypting ssl connections from niginx server to upstream > server. Thats the point exactly. 
You don't need a key to encrypt ssl connections from nginx to upstream https servers, EXCEPT if you are using client certificates. So either you want to specify the CA file to verify the upstream server's certificate and you do not use client certificates (no pem file, no key), OR you are using client certificates, which is why you need a certificate + key on the nginx side to connect to upstream https. So what exactly are you trying to achieve? From contact at jpluscplusm.com Mon Feb 10 15:26:26 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 10 Feb 2014 16:26:26 +0100 Subject: ASP.NET pages with nginx In-Reply-To: <56b9921288138483cc3c597f4adb3008.NginxMailingListEnglish@forum.nginx.org> References: <56b9921288138483cc3c597f4adb3008.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 10 February 2014 10:55, parnican wrote: > Hi all, > i would like to use my aspx pages with raspberry and nginx but it seem to be > not an easy goal... > I have tried almost all "tutorials" on the web but i can not find solution > for my issue. With lots of experiments i was able to reach a point where i > can not move forward. > > No Application Found > ,Unable to find a matching application for request: > ,Host bernolak.dyndns.info:8080 I suspect this is a problem ^^^^^^^^^^ - and it *may* be /your/ problem. You're obviously hitting the mono app server, so the problem is just one of getting mono and nginx to agree on what they're talking about. Figure out how to make nginx *not* pass through the ":8080" in the Host header, and you may find yourself further on the path to a working setup.
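One way to act on that suggestion is to override the Host-related FastCGI parameters so the port is stripped before mono sees them. This is a speculative sketch — which parameter mono actually matches on depends on the fastcgi-mono-server version:

```nginx
location / {
    fastcgi_pass 127.0.0.1:9000;
    include      /etc/nginx/fastcgi_params;

    # $host is the Host header WITHOUT the port ($http_host keeps it).
    # Overriding these makes the backend see "bernolak.dyndns.info"
    # instead of "bernolak.dyndns.info:8080".
    fastcgi_param HTTP_HOST   $host;
    fastcgi_param SERVER_NAME $host;
}
```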
HTH, J From nginx-forum at nginx.us Mon Feb 10 15:27:31 2014 From: nginx-forum at nginx.us (kustodian) Date: Mon, 10 Feb 2014 10:27:31 -0500 Subject: 404 not showing up in error logs (for PHP files) In-Reply-To: <20120107113709.GQ67687@mdounin.ru> References: <20120107113709.GQ67687@mdounin.ru> Message-ID: <86835cd369762b43752f17a023e26ffb.NginxMailingListEnglish@forum.nginx.org> We solved this problem like this: try_files $uri $uri-404; This would first check if the php file exists, and if not it would try to open that same php file but with "-404" appended to it, so the user would still get a 404 status code, but it would also log what file was called. I hope this helps. P.S. Sorry for replying to this topic 2 years later, but this is one of the topics that come out when you search for this issue and I think it is useful that there is an easy solution. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,220849,247357#msg-247357 From nginx-forum at nginx.us Mon Feb 10 15:35:13 2014 From: nginx-forum at nginx.us (parnican) Date: Mon, 10 Feb 2014 10:35:13 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: <781ae34779f9ac61620006ca0da13eed.NginxMailingListEnglish@forum.nginx.org> Hi Jonathan, thanks for the reply. I really apologize, but I have no clue what you mean by ^^^^^^^^^^ ? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247359#msg-247359 From contact at jpluscplusm.com Mon Feb 10 15:37:24 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 10 Feb 2014 16:37:24 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <4014345467c8e05baa091ec6841a57cb@none.at> References: <4014345467c8e05baa091ec6841a57cb@none.at> Message-ID: On 10 February 2014 12:06, Aleksandar Lazic wrote: > Thanks for help. Aleksandar - I can't work out what you need help with. There aren't any questions (or question marks!)
in your email :-) I can't see your problem at first or second glance; I'm sure others will, but I'm quite slow. Could you spell the problem out (what you observe; what you expect to observe; what's changed; how you're testing)? J From nginx-forum at nginx.us Mon Feb 10 16:12:11 2014 From: nginx-forum at nginx.us (parnican) Date: Mon, 10 Feb 2014 11:12:11 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: <73b3edd9212d32b92ffd2d2b6d64ed78.NginxMailingListEnglish@forum.nginx.org> I have tried "experiments" with the following parameters but no change... not sure this is the way i should go... Any suggestion how to make nginx *not* pass through the ":8080"? proxy_pass_request_headers off; proxy_pass_request_body off; proxy_redirect off; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247362#msg-247362 From al-nginx at none.at Mon Feb 10 16:41:47 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 10 Feb 2014 17:41:47 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: References: <4014345467c8e05baa091ec6841a57cb@none.at> Message-ID: Hi Jonathan. Sorry to be unclear; thanks for the answer and questions. On 10-02-2014 16:37, Jonathan Matthews wrote: > On 10 February 2014 12:06, Aleksandar Lazic wrote: >> Thanks for help. > > Aleksandar - I can't work out what you need help with. There aren't > any questions (or question marks!) in your email :-) > > I can't see your problem at first or second glance; I'm sure others > will, but I'm quite slow. Could you spell the problem out (what you > observe; what you expect to observe; what's changed; how you're > testing)? I run nginx on the described HW & OS. I use https://github.com/munin-monitoring/contrib/blob/master/plugins/nginx/nginx-combined to get the statistics from the stub_status module. The call from nginx-combined runs on the same machine as the nginx server.
Due to this fact we have no external network traffic, just an ip alias call on eth2. Every time when I have more than ~400 r/s we get no data from the status-request; this request rate means ~20k packets/second. I use netfilter with fail2ban, but not the connection tracking module! I have now seen on the tcpdump that I get a 'RST' packet almost immediately after a request when the 'no answer from server' comes. I think this could be a kernel-network issue, not a nginx issue. The question is: please can you help me to find the reason for the immediate 'RST' answer. I hope my question is clearer now. Thanks for reading and patience. From jacklinkers at gmail.com Mon Feb 10 16:46:08 2014 From: jacklinkers at gmail.com (jack linkers) Date: Mon, 10 Feb 2014 17:46:08 +0100 Subject: missing /etc/nginx/sites-available Message-ID: Hello, I'm new to linux and webservers, but I have a brain and a C# background. I installed ngx_pagespeed with nginx on a fresh ubuntu 13.10 following this tutorial : https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source I installed everything under the root directory, successfully built the package and started nginx : root at xxxxx:~# service nginx configtest nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful root at xxxxx:~# curl -I -p http://localhost|grep X-Page-Speed % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 X-Page-Speed: 1.7.30.3-3721 I have no nginx folder under /etc, therefore I can't edit /etc/nginx/sites-available/file_to_edit I believe this folder comes with the apt-get install nginx package by default But how do I edit my .vhost from a source build please ? Thanks ! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From strattonbrazil at gmail.com Mon Feb 10 16:50:55 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 10 Feb 2014 08:50:55 -0800 Subject: missing /etc/nginx/sites-available In-Reply-To: References: Message-ID: I'm new to nginx myself, but I'm guessing it's because you're not using the package manager to install nginx. The install location is configured by the package. Following the build steps in your link... sed -e "s|%%PREFIX%%|/usr/local/nginx|" \ -e "s|%%PID_PATH%%|/usr/local/nginx/logs/nginx.pid|" \ -e "s|%%CONF_PATH%%|/usr/local/nginx/conf/nginx.conf|" \ -e "s|%%ERROR_LOG_PATH%%|/usr/local/nginx/logs/error.log|" \ < man/nginx.8 > objs/nginx.8 make[1]: Leaving directory `/tmp/nginx-1.4.4' Do you see something in /usr/local/nginx? If not, try searching directly for the nginx.conf file. On Mon, Feb 10, 2014 at 8:46 AM, jack linkers wrote: > Hello, > > I'm new to linux and webservers, but I have a brain and C# background. > I installed ngx_pagespeed with nginx on a fresh ubuntu 13.10 following > this tutorial : > https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source > I installed everything under root directory, successfully builded the > package and started nginx : > > root at xxxxx:~# service nginx configtest > nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok > nginx: configuration file /usr/local/nginx/conf/nginx.conf test is > successful > > root at xxxxx:~# curl -I -p http://localhost|grep X-Page-Speed > % Total % Received % Xferd Average Speed Time Time Time Current > Dload Upload Total Spent Left Speed > 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 > X-Page-Speed: 1.7.30.3-3721 > > I have no nginx folder under /etc, therefore I can't edit > /etc/nginx/sites-available/file_to_edit > > I believe this folder comes with the package apt-get install nginx by > default > But how do I edit my .vhost from source build please ? > > Thanks !
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Feb 10 17:08:39 2014 From: nginx-forum at nginx.us (parnican) Date: Mon, 10 Feb 2014 12:08:39 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: I have played with this command: sudo fastcgi-mono-server4 /applications=/bernolak.dyndns.info:8080:/:/var/www/demo/ /socket=tcp:127.0.0.1:9000 /logfile=/var/log/mono/fastcgi.log /printlog=True & Is it possible that I don't have VPath:realpath configured correctly? If not, what is my VPath? I only have :realpath there. FATAL UNHANDLED EXCEPTION: System.ArgumentException: Should be something like [[hostname:]port:]VPath:realpath Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247366#msg-247366 From jeroen.ooms at stat.ucla.edu Mon Feb 10 17:45:50 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Mon, 10 Feb 2014 09:45:50 -0800 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: <20140210131503.GG1835@mdounin.ru> References: <52F8B43E.1050001@arcor.de> <3c7cdc260c2a866a032ca3f30f5718df.NginxMailingListEnglish@forum.nginx.org> <20140210131503.GG1835@mdounin.ru> Message-ID: On Mon, Feb 10, 2014 at 5:15 AM, Maxim Dounin wrote: > it is likely the cause, as the config includes the following lines: > > proxy_cache_methods POST; > proxy_cache_key "$request_method$request_uri$request_body" > Yikes, I was not aware that the cache key gets stored into the buffers as well. Is this mentioned in the manual anywhere? So we need to set proxy_buffer_size to a value greater than the sum of client_body_buffer_size + header size? Or alternatively, is there a way to use a fixed-length hash of the request body in the proxy_cache_key?
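One possible way to get a fixed-length hash into the cache key, following the embedded-perl route that comes up later in this thread, is a perl_set variable. This is an untested sketch: it assumes nginx is built with ngx_http_perl_module, and it assumes the body fits in memory at the time the variable is evaluated (large bodies may be spooled to a temp file, in which case $r->request_body is empty):

```nginx
# Sketch: derive a fixed-length hash of the request body for the cache key.
# Requires nginx built with --with-http_perl_module (an assumption here).
perl_set $request_body_md5 '
    sub {
        my $r = shift;
        use Digest::MD5 qw(md5_hex);
        # request_body is only populated once the body has been read,
        # and only when it fits in memory (client_body_buffer_size).
        return md5_hex($r->request_body || "");
    }
';

proxy_cache_key "$request_method$request_uri$request_body_md5";
```

This keeps the key (and hence the cache header written into proxy_buffer_size) at a fixed size regardless of the body length.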
From jacklinkers at gmail.com Mon Feb 10 18:15:46 2014 From: jacklinkers at gmail.com (jack linkers) Date: Mon, 10 Feb 2014 19:15:46 +0100 Subject: missing /etc/nginx/sites-available In-Reply-To: References: Message-ID: Hi Josh, Yes, indeed I see inside /usr/local/nginx : client_body_temp conf fastcgi_temp html logs proxy_temp sbin scgi_temp uwsgi_temp But I don't understand what is the default.vhost file in it. If I change : include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; by : /usr/local/nginx/conf/*.conf; /usr/local/nginx/html/*; I get this message : nginx: [emerg] "worker_processes" directive is not allowed here in /usr/local/nginx/conf/nginx.conf:3 nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed Kinda lost here ... 2014-02-10 17:50 GMT+01:00 Josh Stratton : > I'm new to nginx myself, but I'm guessing because you're not using the > package manager to install nginx. The install location in configured by > the package. Following the build steps in your link... > > sed -e "s|%%PREFIX%%|/usr/local/nginx|" \ > -e "s|%%PID_PATH%%|/usr/local/nginx/logs/nginx.pid|" \ > -e "s|%%CONF_PATH%%|/usr/local/nginx/conf/nginx.conf|" \ > -e "s|%%ERROR_LOG_PATH%%|/usr/local/nginx/logs/error.log|" \ > < man/nginx.8 > objs/nginx.8 > make[1]: Leaving directory `/tmp/nginx-1.4.4' > > Do you see something in /usr/local/nginx? If not, try searching directly > for the nginx.conf file. > > > On Mon, Feb 10, 2014 at 8:46 AM, jack linkers wrote: > >> Hello, >> >> I'm new to linux and webservers, but I have a brain and C# background. 
>> I installed ngx_pagespeed with nginx on a fresh ubuntu 13.10 following >> this tutorial : >> https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source >> I installed everything under root directory, successfully builded the >> package and started nginx : >> >> root at xxxxx:~# service nginx configtest >> nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is >> ok >> nginx: configuration file /usr/local/nginx/conf/nginx.conf test is >> successful >> >> root at xxxxx:~# curl -I -p http://localhost|grep X-Page-Speed >> % Total % Received % Xferd Average Speed Time Time Time Current >> Dload Upload Total Spent Left Speed >> 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 >> X-Page-Speed: 1.7.30.3-3721 >> >> I have no nginx folder under /etc, therefore I can't edit >> /etc/nginx/sites-available/file_to_edit >> >> I believe this folder comes with the package apt-get install nginx by >> default >> But how do I edit my .vhost from source build please ? >> >> Thanks ! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strattonbrazil at gmail.com Mon Feb 10 18:26:46 2014 From: strattonbrazil at gmail.com (Josh Stratton) Date: Mon, 10 Feb 2014 10:26:46 -0800 Subject: missing /etc/nginx/sites-available In-Reply-To: References: Message-ID: Hrmm, again I've only been using nginx a few days, but you may want to start googling around. Maybe this link will help? 
http://stackoverflow.com/questions/15208135/nginx-configuration-error On Mon, Feb 10, 2014 at 10:15 AM, jack linkers wrote: > Hi Josh, > > Yes, indeed I see inside /usr/local/nginx : > > client_body_temp > conf > fastcgi_temp > html > logs > proxy_temp > sbin > scgi_temp > uwsgi_temp > > But I don't understand what is the default.vhost file in it. > If I change : > > > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > > > by : > > /usr/local/nginx/conf/*.conf; > /usr/local/nginx/html/*; > > I get this message : > > nginx: [emerg] "worker_processes" directive is not allowed here in /usr/local/nginx/conf/nginx.conf:3 > nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed > > Kinda lost here ... > > > 2014-02-10 17:50 GMT+01:00 Josh Stratton : > > I'm new to nginx myself, but I'm guessing because you're not using the >> package manager to install nginx. The install location in configured by >> the package. Following the build steps in your link... >> >> sed -e "s|%%PREFIX%%|/usr/local/nginx|" \ >> -e "s|%%PID_PATH%%|/usr/local/nginx/logs/nginx.pid|" \ >> -e "s|%%CONF_PATH%%|/usr/local/nginx/conf/nginx.conf|" \ >> -e "s|%%ERROR_LOG_PATH%%|/usr/local/nginx/logs/error.log|" \ >> < man/nginx.8 > objs/nginx.8 >> make[1]: Leaving directory `/tmp/nginx-1.4.4' >> >> Do you see something in /usr/local/nginx? If not, try searching directly >> for the nginx.conf file. >> >> >> On Mon, Feb 10, 2014 at 8:46 AM, jack linkers wrote: >> >>> Hello, >>> >>> I'm new to linux and webservers, but I have a brain and C# background. 
>>> I installed ngx_pagespeed with nginx on a fresh ubuntu 13.10 following >>> this tutorial : >>> https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source >>> I installed everything under root directory, successfully builded the >>> package and started nginx : >>> >>> root at xxxxx:~# service nginx configtest >>> nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is >>> ok >>> nginx: configuration file /usr/local/nginx/conf/nginx.conf test is >>> successful >>> >>> root at xxxxx:~# curl -I -p http://localhost|grep X-Page-Speed >>> % Total % Received % Xferd Average Speed Time Time Time Current >>> Dload Upload Total Spent Left Speed >>> 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 >>> X-Page-Speed: 1.7.30.3-3721 >>> >>> I have no nginx folder under /etc, therefore I can't edit >>> /etc/nginx/sites-available/file_to_edit >>> >>> I believe this folder comes with the package apt-get install nginx by >>> default >>> But how do I edit my .vhost from source build please ? >>> >>> Thanks ! >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Mon Feb 10 20:34:25 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 11 Feb 2014 09:34:25 +1300 Subject: missing /etc/nginx/sites-available In-Reply-To: References: Message-ID: <1392064465.14628.135.camel@steve-new> On Mon, 2014-02-10 at 17:46 +0100, jack linkers wrote: > Hello, > > I'm new to linux and webservers, but I have a brain and C# background. 
> I installed ngx_pagespeed with nginx on a fresh ubuntu 13.10 following > this > tutorial :https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source > I installed everything under root directory, successfully builded the > package and started nginx : > > root at xxxxx:~# service nginx configtest > nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax > is ok > nginx: configuration file /usr/local/nginx/conf/nginx.conf test is > successful > > root at xxxxx:~# curl -I -p http://localhost|grep X-Page-Speed > % Total % Received % Xferd Average Speed Time Time Time Current > Dload Upload Total Spent Left Speed > 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 > X-Page-Speed: 1.7.30.3-3721 > > I have no nginx folder under /etc, therefore I can't > edit /etc/nginx/sites-available/file_to_edit > > I believe this folder comes with the package apt-get install nginx by > default > But how do I edit my .vhost from source build please ? > > Thanks ! the sites-available / sites-enabled pair of directories is not something that comes by default - a number of package managers have used it to mimic the apache method. You have installed into /usr/local/nginx, with your config stored in conf below that, so any includes in nginx.conf are relative to /usr/local/nginx/conf, unless you put an absolute path in there. look at the end of the nginx.conf file, and you'll see include conf.d/*.conf; or maybe include /usr/local/nginx/conf/conf.d/*.conf; This is the folder where you put your vhosts. If you want to do it the apache way, add include sites-enabled/* and put your config files there. Alternatively, build your nginx to mimic the default install. I use this script to build up with pagespeed... 
you may want to play with it, as I'm using Amazon linux / 1.5.10 / openssl that supports ECDHE ciphers and a few other odds and sods ( the starting point is the output from nginx -V from the version that comes with your distro ) --8<-- ./configure --prefix=/etc/nginx \ --sbin-path=/usr/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --pid-path=/var/run/nginx.pid \ --lock-path=/var/run/nginx.lock \ --http-client-body-temp-path=/var/cache/nginx/client_temp \ --http-proxy-temp-path=/var/cache/nginx/proxy_temp \ --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \ --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \ --http-scgi-temp-path=/var/cache/nginx/scgi_temp \ --user=nginx \ --group=nginx \ --with-openssl-opt="enable-ec_nistp_64_gcc_128" \ --with-http_ssl_module \ --with-http_spdy_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_sub_module \ --with-http_gunzip_module \ --with-http_gzip_static_module \ --with-http_random_index_module \ --with-http_secure_link_module \ --with-http_stub_status_module \ --with-http_dav_module \ --with-http_xslt_module \ --with-mail \ --with-mail_ssl_module \ --with-file-aio \ --with-debug \ --with-sha1=/usr/include/openssl \ --with-md5=/usr/include/openssl \ --add-module=../ngx_pagespeed-1.7.30.3-beta \ '--with-cc-opt=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic ' --8<-- hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From jeroen.ooms at stat.ucla.edu Mon Feb 10 22:17:30 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Mon, 10 Feb 2014 14:17:30 -0800 Subject: upstream_response_time Message-ID: I am using add_header x-responsetime $upstream_response_time; to report response times of the back-end to the client. 
I was expecting to see the back-end response time (e.g. 0.500 for half a second), however the headers that I am getting contain an epoch timestamp, e.g: x-responsetime: 1392070197.589 What am I doing wrong? From nginx-forum at nginx.us Tue Feb 11 05:32:43 2014 From: nginx-forum at nginx.us (kate_r) Date: Tue, 11 Feb 2014 00:32:43 -0500 Subject: Protecting URIs with OAuth2 Message-ID: <944688d13fe8e433c66d5da535d69f6f.NginxMailingListEnglish@forum.nginx.org> Hi Does anyone know how to protect a URI with OAuth authentication? The upstream server is already capable of issuing new tokens, but I'm hoping that nginx can check the access token for certain URIs. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247374,247374#msg-247374 From ru at nginx.com Tue Feb 11 09:04:59 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 11 Feb 2014 13:04:59 +0400 Subject: upstream_response_time In-Reply-To: References: Message-ID: <20140211090459.GA77081@lo0.su> On Mon, Feb 10, 2014 at 02:17:30PM -0800, Jeroen Ooms wrote: > I am using > > add_header x-responsetime $upstream_response_time; > > to report response times of the back-end to the client. I was > expecting to see the back-end response time (e.g. 0.500 for half a > second), however the headers that I am getting contain an epoch > timestamp, e.g: > > x-responsetime: 1392070197.589 > > What am I doing wrong? http://mailman.nginx.org/pipermail/nginx/2012-May/033630.html From miaohonghit at gmail.com Tue Feb 11 09:25:16 2014 From: miaohonghit at gmail.com (Harold.Miao) Date: Tue, 11 Feb 2014 17:25:16 +0800 Subject: [rewrite] replace the __ to % Message-ID: hi all, I have a problem with a rewrite: location ~ (\.ts)$ { } request: http://127.0.0.1/__ce__a3__c7__e9__b5__fd__d5__bd(1080P__bb__ad__d6__ca).ts I need to replace __ with %: http://127.0.0.1/%ce%a3%c7%e9%b5%fd%d5%bd(1080P%bb%ad%d6%ca).ts How to design the "rewrite" cmd?
THX -- Best Regards, Harold Miao -------------- next part -------------- An HTML attachment was scrubbed... URL: From citrin at citrin.ru Tue Feb 11 09:30:25 2014 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 11 Feb 2014 13:30:25 +0400 Subject: acess log over nfs hanging In-Reply-To: References: Message-ID: <52F9EDB1.5030203@citrin.ru> On 02/07/14 20:28, Jader H. Silva wrote: > It seems that when some processes are running in the nfs server, the share won't > allow writing for some time and I noticed all nginx workers in status D and not > processing requests. In general it is a bad idea to write logs over NFS instead of to a local disk. If you need a central log store across many servers: - write logs locally - rotate as often as needed (SIGUSR1) - copy logs to a central log server (rsync is handy for this, but other methods are possible) > Are all access.log writes blocking? yes, blocking > If my nfs server shutdown in an unexpected > way, will nginx stop proxying requests to the backend or responses to the client? yes, it will stop From gmm at csdoc.com Tue Feb 11 09:40:32 2014 From: gmm at csdoc.com (Gena Makhomed) Date: Tue, 11 Feb 2014 11:40:32 +0200 Subject: upstream_response_time In-Reply-To: <20140211090459.GA77081@lo0.su> References: <20140211090459.GA77081@lo0.su> Message-ID: <52F9F010.8000600@csdoc.com> On 11.02.2014 11:04, Ruslan Ermilov wrote: >> I am using >> >> add_header x-responsetime $upstream_response_time; >> >> to report response times of the back-end to the client. I was >> expecting to see the back-end response time (e.g. 0.500 for half a >> second), however the headers that I am getting contain an epoch >> timestamp, e.g: >> >> x-responsetime: 1392070197.589 >> >> What am I doing wrong?
> > http://mailman.nginx.org/pipermail/nginx/2012-May/033630.html Maybe it would be better to set the default value of the $upstream_response_time variable to a URL on the nginx.org site pointing at the faq/documentation, something like this: =================================================================== The $upstream_response_time is only meaningful once response is fully got from upstream, and this happens after response headers are got (and sent to client). That is, you basically can't use $upstream_response_time in add_header, only in logs. =================================================================== ? or, if only a number is allowed, some fail-safe value, for example -1 or 0. -- Best regards, Gena From mdounin at mdounin.ru Tue Feb 11 10:55:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 14:55:01 +0400 Subject: upstream sent too big header while reading response header from upstream In-Reply-To: References: <52F8B43E.1050001@arcor.de> <3c7cdc260c2a866a032ca3f30f5718df.NginxMailingListEnglish@forum.nginx.org> <20140210131503.GG1835@mdounin.ru> Message-ID: <20140211105501.GJ1835@mdounin.ru> Hello! On Mon, Feb 10, 2014 at 09:45:50AM -0800, Jeroen Ooms wrote: > On Mon, Feb 10, 2014 at 5:15 AM, Maxim Dounin wrote: > > it is likely the cause, as the config includes the following lines: > > > > proxy_cache_methods POST; > > proxy_cache_key "$request_method$request_uri$request_body" > > > Yikes I was not aware that the cache key gets stored into the buffers > as well. Is this mentioned in the manual anywhere? Likely no. It's mostly a proxy cache implementation detail - it needs a way to write a cache header (which includes the key) to a cache file, and it places it in the buffer just before the response headers from the upstream. > So we need to set proxy_buffer_size to a value greater than the sum of > client_body_buffer_size + header size? If you use $request_body in the cache key - yes. And don't forget to add the other variables in proxy_cache_key.
> Or alternatively, is there a > way that to use a fixed length hash of the request body in the > proxy_cache_key? As of now hashes may be calculated using, e.g., the embedded perl or 3rd party modules. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Feb 11 11:14:14 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 11 Feb 2014 15:14:14 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: References: <4014345467c8e05baa091ec6841a57cb@none.at> Message-ID: <2106685.d44CGIv2HN@vbart-laptop> On Monday 10 February 2014 17:41:47 Aleksandar Lazic wrote: > Hi Jonathan. > > Sorry to be unclear, thanks for answer and question. > > Am 10-02-2014 16:37, schrieb Jonathan Matthews: > > On 10 February 2014 12:06, Aleksandar Lazic wrote: > >> Thanks for help. > > > > Aleksandar - I can't work out what you need help with. There aren't > > any questions (or question marks!) in your email :-) > > > > I can't see your problem at first or second glance; I'm sure others > > will, but I'm quite slow. Could you spell the problem out (what you > > observe; what you expect to observe; what's changed; how you're > > testing)? > > I run nginx on the described HW & OS. > > I use > > https://github.com/munin-monitoring/contrib/blob/master/plugins/nginx/nginx-combined > > to get the statistics from stub_status_module. > > The call from nginx-combined_ runs on the same machine as the > nginx server. > > Due to this fact we have no external network traffic, just an ip alias > call on eth2. > > Every time when I have more then ~400 r/s we get no data from the > status-request, this request rate means ~20k Packets/Second. > I use netfilter with fail2ban, but not the connection tracking module! Do you see the issue without fail2ban? > > I have now seen on the tcpdump that I get a 'RST' Package quite > immediately after a request when the 'no answer from server' cames. > > I think this could be a kernel-network issue not a nginx issue. 
> > The question is: > Please can you help me to find the reason for the immediately 'RST' > answer. > > I hope my question is more clear now. > > Thanks for reading and patience. > You haven't shown your server level configuration. Do you use deferred accept? wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Feb 11 11:15:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 15:15:00 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: References: <4014345467c8e05baa091ec6841a57cb@none.at> Message-ID: <20140211111500.GK1835@mdounin.ru> Hello! On Mon, Feb 10, 2014 at 05:41:47PM +0100, Aleksandar Lazic wrote: [...] > Every time when I have more then ~400 r/s we get no data from the > status-request, this request rate means ~20k Packets/Second. > I use netfilter with fail2ban, but not the connection tracking module! > > I have now seen on the tcpdump that I get a 'RST' Package quite immediately > after a request when the 'no answer from server' cames. > > I think this could be a kernel-network issue not a nginx issue. > > The question is: > Please can you help me to find the reason for the immediately 'RST' answer. Listen queue overflow? On modern Linuxes, it should be possible to check some listen queue numbers with "ss -nlt" / "netstat -nlt" (on BSD, detailed information is available with "netstat -Lan"), and the number of overflows that happened in the past should be in the "netstat -s" stats. To tune the listen queue size used by nginx, use the "backlog" parameter of the listen directive. Note that system limits like tcp_max_syn_backlog and somaxconn also require tuning. If a stateful firewall is used, this can also be a result of "out of states" conditions; check your firewall stats.
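The backlog tuning described above might look like this (illustrative values only; the right numbers depend on the traffic, and the effective queue size is also capped by the kernel's net.core.somaxconn):

```nginx
# In the server block: enlarge this socket's listen queue.
# 4096 is an illustrative value, not a recommendation.
listen 80 default_server backlog=4096;
```

Together with this, net.core.somaxconn and net.ipv4.tcp_max_syn_backlog would need to be raised via sysctl to at least matching values, or the larger backlog is silently truncated.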
-- Maxim Dounin http://nginx.org/ From al-nginx at none.at Tue Feb 11 11:34:59 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 11 Feb 2014 12:34:59 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <2106685.d44CGIv2HN@vbart-laptop> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> Message-ID: <3f79edcb2f093bfecb06810670185cb0@none.at> On 11-02-2014 12:14, Valentin V. Bartenev wrote: > On Monday 10 February 2014 17:41:47 Aleksandar Lazic wrote: [snipp] >> Every time when I have more then ~400 r/s we get no data from the >> status-request, this request rate means ~20k Packets/Second. >> I use netfilter with fail2ban, but not the connection tracking module! > > Do you see the issue without fail2ban? I haven't tried the setup without it. >> I have now seen on the tcpdump that I get a 'RST' Package quite >> immediately after a request when the 'no answer from server' cames. >> >> I think this could be a kernel-network issue not a nginx issue.
> >> Every time when I have more then ~400 r/s we get no data from the >> status-request, this request rate means ~20k Packets/Second. >> I use netfilter with fail2ban, but not the connection tracking module! >> >> I have now seen on the tcpdump that I get a 'RST' Package quite >> immediately >> after a request when the 'no answer from server' cames. >> >> I think this could be a kernel-network issue not a nginx issue. >> >> The question is: >> Please can you help me to find the reason for the immediately 'RST' >> answer. > > Listen queue overflow? > > On modern Linux'es, it should be possible to check some listen > queue numbers with "ss -nlt" / "netstat -nlt" (on BSD, detailed > information is available with "netstat -Lan"), and number of > overflows happended in past should be in "netstat -s" stats. To > tune listen queue size used by nginx, use "backlog" parameter of > the listen directive. Note that system limits like > tcp_max_syn_backlog and somaxconn also require tuning. root at ns61620:~# ss -nlt|egrep 'Sta|' State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 :80 *:* sysctl -a|egrep 'somaxconn|tcp_max_syn' net.core.somaxconn = 4069 net.ipv4.tcp_max_syn_backlog = 8192 I have not add "backlog" to the listen directive. Do you have some suggestions about useful values for that amount of traffic? > If stateful firewall is used, this also can be a result of "out of > states" conditions, check your firewall stats. I don't use connection track module. Aleks From vbart at nginx.com Tue Feb 11 11:45:48 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 11 Feb 2014 15:45:48 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <3f79edcb2f093bfecb06810670185cb0@none.at> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> <3f79edcb2f093bfecb06810670185cb0@none.at> Message-ID: <3318413.g5RFtGjWvB@vbart-laptop> On Tuesday 11 February 2014 12:34:59 Aleksandar Lazic wrote: [..] > > > > You haven't shown your server level configuration. > > Do you use deferred accept? > > yes > > listen :80 deferred default_server; > Ok. Two other guesses: you have tcp_syncookies disabled, and tcp_abort_on_overflow enabled? Please note, that with deferred accept enabled it is very easy to have tcp_max_syn_backlog overflowed, especially with nginx prior to 1.5.10. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Feb 11 11:48:09 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 15:48:09 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <3f79edcb2f093bfecb06810670185cb0@none.at> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> <3f79edcb2f093bfecb06810670185cb0@none.at> Message-ID: <20140211114809.GN1835@mdounin.ru> Hello! On Tue, Feb 11, 2014 at 12:34:59PM +0100, Aleksandar Lazic wrote: > > > Am 11-02-2014 12:14, schrieb Valentin V. Bartenev: > >On Monday 10 February 2014 17:41:47 Aleksandar Lazic wrote: > > [snipp] > > >>Every time when I have more then ~400 r/s we get no data from the > >>status-request, this request rate means ~20k Packets/Second. > >>I use netfilter with fail2ban, but not the connection tracking module! > > > >Do you see the issue without fail2ban? > > I haven't tried the setup with out. > > >>I have now seen on the tcpdump that I get a 'RST' Package quite > >>immediately after a request when the 'no answer from server' cames. > >> > >>I think this could be a kernel-network issue not a nginx issue. 
> >> > >>The question is: > >>Please can you help me to find the reason for the immediately 'RST' > >>answer. > >> > >>I hope my question is more clear now. > >> > >>Thanks for reading and patience. > >> > > > >You haven't shown your server level configuration. > >Do you use deferred accept? > > yes > > listen :80 deferred default_server; Try switching it off, there could be a problem if kernel decides to switch to syncookies, see this ticket for details: http://trac.nginx.org/nginx/ticket/353 (The problem is fixed in 1.5.10, and 1.4.5 will have the fix, too.) -- Maxim Dounin http://nginx.org/ From al-nginx at none.at Tue Feb 11 12:00:52 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 11 Feb 2014 13:00:52 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <3318413.g5RFtGjWvB@vbart-laptop> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> <3f79edcb2f093bfecb06810670185cb0@none.at> <3318413.g5RFtGjWvB@vbart-laptop> Message-ID: Am 11-02-2014 12:45, schrieb Valentin V. Bartenev: > On Tuesday 11 February 2014 12:34:59 Aleksandar Lazic wrote: > [..] >> > >> > You haven't shown your server level configuration. >> > Do you use deferred accept? >> >> yes >> >> listen :80 deferred default_server; >> > > Ok. Two other guesses: you have tcp_syncookies disabled, > and tcp_abort_on_overflow enabled? > > Please note, that with deferred accept enabled it is very > easy to have tcp_max_syn_backlog overflowed, especially > with nginx prior to 1.5.10. 
sysctl -a|egrep 'tcp_syncookies|tcp_abort_on_overflow' net.ipv4.tcp_abort_on_overflow = 0 net.ipv4.tcp_syncookies = 1 download.none.at # egrep 'tcp_syncookies|tcp_abort_on_overflow' sysctl.txt net.ipv4.tcp_abort_on_overflow = 0 net.ipv4.tcp_syncookies = 1 From al-nginx at none.at Tue Feb 11 12:10:59 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 11 Feb 2014 13:10:59 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <20140211114809.GN1835@mdounin.ru> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> <3f79edcb2f093bfecb06810670185cb0@none.at> <20140211114809.GN1835@mdounin.ru> Message-ID: Am 11-02-2014 12:48, schrieb Maxim Dounin: > Hello! > > On Tue, Feb 11, 2014 at 12:34:59PM +0100, Aleksandar Lazic wrote: > [snipp] >> >You haven't shown your server level configuration. >> >Do you use deferred accept? >> >> yes >> >> listen :80 deferred default_server; > > Try switching it off, there could be a problem if kernel decides > to switch to syncookies, see this ticket for details: > > http://trac.nginx.org/nginx/ticket/353 > > (The problem is fixed in 1.5.10, and 1.4.5 will have the fix, > too.) Ok thanks. I have now removed deferred and added backlog=1024 Should I add deferred again when I update to 1.4.5? From nginx-forum at nginx.us Tue Feb 11 12:17:35 2014 From: nginx-forum at nginx.us (Gwyneth Llewelyn) Date: Tue, 11 Feb 2014 07:17:35 -0500 Subject: "Idiomatic" Gallery3 configuration In-Reply-To: <20130901115127.GA28186@mail.incertum.net> References: <20130901115127.GA28186@mail.incertum.net> Message-ID: As far as I can tell, this looks good to me, and it's better to use rewrites than "if", which is what (sadly) the Gallery3 wiki still shows. My current issue is that album thumbnails, which use an URL ending in [i].album.jpg?...[/i] (a dot before the album name, a query with a question mark after .jpg) doesn't seem to be caught by these rules and throws a 403. 
I wonder why, because [b]rewrite ^/var/(albums|thumbs|resizes)/(.*)$ /file_proxy/$2 last;[/b] should catch it. Maybe it needs another rewrite rule, e.g. [b]rewrite ^/var/(albums|thumbs|resizes)/(.*)?(.*)$ /file_proxy/$2?$3 last;[/b] I haven't tested it out, though. I'm still very shaky with nginx configuration! Thanks for posting this. I'm glad to see that there are plenty of people using Gallery3 with nginx. It makes a lot of sense, since images can be accessed directly by nginx and served immediately without the need to go through the PHP processor... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242413,247391#msg-247391 From contact at jpluscplusm.com Tue Feb 11 12:25:44 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 11 Feb 2014 13:25:44 +0100 Subject: ASP.NET pages with nginx In-Reply-To: <73b3edd9212d32b92ffd2d2b6d64ed78.NginxMailingListEnglish@forum.nginx.org> References: <73b3edd9212d32b92ffd2d2b6d64ed78.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 10 Feb 2014 17:12, "parnican" wrote: > > I have tried "experiments" with following parameters but no change..not sure > this is the way i should go... Any suggestion how to make nginx *not* pass > through the ":8080? How about proxy_set_header? J -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 11 12:28:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 16:28:45 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> <3f79edcb2f093bfecb06810670185cb0@none.at> <20140211114809.GN1835@mdounin.ru> Message-ID: <20140211122845.GP1835@mdounin.ru> Hello! On Tue, Feb 11, 2014 at 01:10:59PM +0100, Aleksandar Lazic wrote: > > Am 11-02-2014 12:48, schrieb Maxim Dounin: > >Hello! 
> > > >On Tue, Feb 11, 2014 at 12:34:59PM +0100, Aleksandar Lazic wrote: > > > > [snipp] > > >>>You haven't shown your server level configuration. > >>>Do you use deferred accept? > >> > >>yes > >> > >>listen :80 deferred default_server; > > > >Try switching it off, there could be a problem if kernel decides > >to switch to syncookies, see this ticket for details: > > > >http://trac.nginx.org/nginx/ticket/353 > > > >(The problem is fixed in 1.5.10, and 1.4.5 will have the fix, > >too.) > > Ok thanks. > I have now removed deferred and added backlog=1024 Does it actually solve the problem? It also would be intresting to know what exactly did it - removing deferred or adding backlog? (As for backlog size, I usually set it to something big enough to accomodate about 1 or 2 seconds of expected peek connection rate. That is, 1024 is good enough for about 500 connections per second. But with deferred on Linux, it looks like deferred connection are sitting in the same queue as normal ones, and this may drastically change things.) > Should I add deferred again when I update to 1.4.5? It should be safe, though I don't recommend it unless it's beneficial in your setup. -- Maxim Dounin http://nginx.org/ From contact at jpluscplusm.com Tue Feb 11 12:32:31 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 11 Feb 2014 13:32:31 +0100 Subject: Protecting URIs with OAuth2 In-Reply-To: <944688d13fe8e433c66d5da535d69f6f.NginxMailingListEnglish@forum.nginx.org> References: <944688d13fe8e433c66d5da535d69f6f.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 11 Feb 2014 06:33, "kate_r" wrote: > > Hi > > Does anyone know how to protect an URI with OAuth authentication? the > upstream sever is already capable of issuing new tokens, but I'm hoping that > nginx can check the access token for certain URIs. In my experience, you can easily use nginx to pass the request to an auth-only app which then tells nginx from where to serve the success/failure response. 
I haven't seen a uncomplicated way of getting nginx itself to do the auth entirely. I suppose you could write an oauth implementation in lua/perl/etc and embed it in nginx, but I'd personally argue that would be a mistake in most architectures. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 11 12:44:14 2014 From: nginx-forum at nginx.us (parnican) Date: Tue, 11 Feb 2014 07:44:14 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: <72956bb4bafd08902c647e43b6182c26.NginxMailingListEnglish@forum.nginx.org> Just did some experiments with following settings: proxy_set_header X-Real-IP $remote_addr; proxy_pass_header X-Accel-Redirect; No change:( ...its time to give up or any ides? Also tried: proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment; proxy_cache_bypass $http_pragma $http_authorization; proxy_set_header X-Real-IP $remote_addr; proxy_pass_header X-Accel-Redirect; #proxy_pass_request_headers off; #proxy_pass http://localhost:9000; #proxy_pass_request_body off; #proxy_redirect off; proxy_buffering on; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247397#msg-247397 From al-nginx at none.at Tue Feb 11 13:14:14 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 11 Feb 2014 14:14:14 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <20140211122845.GP1835@mdounin.ru> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> <3f79edcb2f093bfecb06810670185cb0@none.at> <20140211114809.GN1835@mdounin.ru> <20140211122845.GP1835@mdounin.ru> Message-ID: <1a6fa3e15d893729bbc7553e08c7de5c@none.at> Hi. Am 11-02-2014 13:28, schrieb Maxim Dounin: > Hello! > > On Tue, Feb 11, 2014 at 01:10:59PM +0100, Aleksandar Lazic wrote: > >> >> Am 11-02-2014 12:48, schrieb Maxim Dounin: >> >Hello! 
>> > >> >On Tue, Feb 11, 2014 at 12:34:59PM +0100, Aleksandar Lazic wrote: >> > >> >> [snipp] >> >> >>>You haven't shown your server level configuration. >> >>>Do you use deferred accept? >> >> >> >>yes >> >> >> >>listen :80 deferred default_server; >> > >> >Try switching it off, there could be a problem if kernel decides >> >to switch to syncookies, see this ticket for details: >> > >> >http://trac.nginx.org/nginx/ticket/353 >> > >> >(The problem is fixed in 1.5.10, and 1.4.5 will have the fix, >> >too.) >> >> Ok thanks. >> I have now removed deferred and added backlog=1024 > > Does it actually solve the problem? It also would be intresting > to know what exactly did it - removing deferred or adding backlog? Due to the fact that we have mostly in the morning our highest traffic I can tell you this tomorrow. I still search for a useable load testing setup. Currently I have one server (1 GB) with https://github.com/wg/wrk I will try today https://bitbucket.org/yarosla/httpress/wiki/Home https://code.google.com/p/httperf/ But if anybody have a suggestion for distributed load testing service I'm open eared. What I also have seen is that. netstat -s|egrep 'listen queue|SYNs to LISTEN sockets dropped' 131753 times the listen queue of a socket overflowed 18195732 SYNs to LISTEN sockets dropped a second later. netstat -s|egrep 'listen queue|SYNs to LISTEN sockets dropped' 131753 times the listen queue of a socket overflowed 18195743 SYNs to LISTEN sockets dropped What could be this the reason? > (As for backlog size, I usually set it to something big enough to > accomodate about 1 or 2 seconds of expected peek connection rate. > That is, 1024 is good enough for about 500 connections per second. > But with deferred on Linux, it looks like deferred connection are > sitting in the same queue as normal ones, and this may drastically > change things.) OK. That means with deferred I should double or divide the listening value? 
>> Should I add deferred again when I update to 1.4.5? > > It should be safe, though I don't recommend it unless it's > beneficial in your setup. OK. Thanks for now I will not activate it for the current setup. From contact at jpluscplusm.com Tue Feb 11 13:52:39 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 11 Feb 2014 14:52:39 +0100 Subject: ASP.NET pages with nginx In-Reply-To: <72956bb4bafd08902c647e43b6182c26.NginxMailingListEnglish@forum.nginx.org> References: <72956bb4bafd08902c647e43b6182c26.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 11 Feb 2014 13:44, "parnican" wrote: > > Just did some experiments with following settings: > proxy_set_header X-Real-IP $remote_addr; > proxy_pass_header X-Accel-Redirect; > > No change:( ...its time to give up or any ides? How about using it to set the header that contains the "wrong" setting - the Host header. > Also tried: > proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment; > proxy_cache_bypass $http_pragma $http_authorization; > > proxy_set_header X-Real-IP $remote_addr; > proxy_pass_header X-Accel-Redirect; > > #proxy_pass_request_headers off; > #proxy_pass http://localhost:9000; > #proxy_pass_request_body off; > #proxy_redirect off; > proxy_buffering on; > } > } > You appear to be throwing crap at a wall and seeing what sticks. I suggest that you do the absolutely most simple thing that you can get to work (a Hello world mono app and nginx just passing requests through) and *only* change things as you /need/ to, one at a time, so you can see what change breaks things. Just my 2 cents :-) J -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Feb 11 13:58:32 2014 From: nginx-forum at nginx.us (parnican) Date: Tue, 11 Feb 2014 08:58:32 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: <45e2abdbf56e5b1f1720fce22e6133b0.NginxMailingListEnglish@forum.nginx.org> You are right, now its that phase..throwing crap at a wall and seeing what sticks ;-) WinForm app, better to say console app, hello world.exe is working. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247403#msg-247403 From mdounin at mdounin.ru Tue Feb 11 14:06:38 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 18:06:38 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <1a6fa3e15d893729bbc7553e08c7de5c@none.at> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2106685.d44CGIv2HN@vbart-laptop> <3f79edcb2f093bfecb06810670185cb0@none.at> <20140211114809.GN1835@mdounin.ru> <20140211122845.GP1835@mdounin.ru> <1a6fa3e15d893729bbc7553e08c7de5c@none.at> Message-ID: <20140211140638.GT1835@mdounin.ru> Hello! On Tue, Feb 11, 2014 at 02:14:14PM +0100, Aleksandar Lazic wrote: > Hi. > > Am 11-02-2014 13:28, schrieb Maxim Dounin: > >Hello! > > > >On Tue, Feb 11, 2014 at 01:10:59PM +0100, Aleksandar Lazic wrote: > > > >> > >>Am 11-02-2014 12:48, schrieb Maxim Dounin: > >>>Hello! > >>> > >>>On Tue, Feb 11, 2014 at 12:34:59PM +0100, Aleksandar Lazic wrote: > >>> > >> > >>[snipp] > >> > >>>>>You haven't shown your server level configuration. > >>>>>Do you use deferred accept? > >>>> > >>>>yes > >>>> > >>>>listen :80 deferred default_server; > >>> > >>>Try switching it off, there could be a problem if kernel decides > >>>to switch to syncookies, see this ticket for details: > >>> > >>>http://trac.nginx.org/nginx/ticket/353 > >>> > >>>(The problem is fixed in 1.5.10, and 1.4.5 will have the fix, > >>>too.) > >> > >>Ok thanks. > >>I have now removed deferred and added backlog=1024 > > > >Does it actually solve the problem? 
It also would be intresting > >to know what exactly did it - removing deferred or adding backlog? > > Due to the fact that we have mostly in the morning our highest traffic I can > tell you this tomorrow. > I still search for a useable load testing setup. > Currently I have one server (1 GB) with > > https://github.com/wg/wrk > > I will try today > > https://bitbucket.org/yarosla/httpress/wiki/Home > https://code.google.com/p/httperf/ > > But if anybody have a suggestion for distributed load testing service I'm > open eared. I personally prefer http_load, mostly for historical reasons. We also use wrk here, which is quite good as well. > What I also have seen is that. > > netstat -s|egrep 'listen queue|SYNs to LISTEN sockets dropped' > 131753 times the listen queue of a socket overflowed > 18195732 SYNs to LISTEN sockets dropped > > a second later. > > netstat -s|egrep 'listen queue|SYNs to LISTEN sockets dropped' > 131753 times the listen queue of a socket overflowed > 18195743 SYNs to LISTEN sockets dropped > > What could be this the reason? I'm not really a Linux expert as I prefer FreeBSD, but number suggests there are no listen queue overflows during the second, but there are other SYN drops. Looking into the code - there are lots of possible reasons, including various allocation failures and/or other edge cases. It needs additional investigation to tell what goes on. > >(As for backlog size, I usually set it to something big enough to > >accomodate about 1 or 2 seconds of expected peek connection rate. > >That is, 1024 is good enough for about 500 connections per second. > >But with deferred on Linux, it looks like deferred connection are > >sitting in the same queue as normal ones, and this may drastically > >change things.) > > OK. That means with deferred I should double or divide the listening value? With deferred, it looks like all (potential) deferred connection should be added to the value. And it's very hard to tell how many there will be. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Feb 11 14:09:49 2014 From: nginx-forum at nginx.us (parnican) Date: Tue, 11 Feb 2014 09:09:49 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: This didn't work. proxy_set_header Host $proxy_host; next try proxy_set_header Host $http_host:8080; added #proxy_set_header Connection close; //or location / { root /var/www/demo; index index.html index.htm default.aspx Default.aspx; proxy_set_header Host $proxy_host; #proxy_set_header Connection close; include /etc/nginx/fastcgi_params; fastcgi_index Default.aspx; fastcgi_pass 127.0.0.1:9000; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247405#msg-247405 From mdounin at mdounin.ru Tue Feb 11 14:10:51 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 18:10:51 +0400 Subject: nginx-1.4.5 Message-ID: <20140211141051.GU1835@mdounin.ru> Changes with nginx 1.4.5 11 Feb 2014 *) Bugfix: the $ssl_session_id variable contained full session serialized instead of just a session id. Thanks to Ivan Risti?. *) Bugfix: client connections might be immediately closed if deferred accept was used; the bug had appeared in 1.3.15. *) Bugfix: alerts "zero size buf in output" might appear in logs while proxying; the bug had appeared in 1.3.9. *) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_spdy_module was used. *) Bugfix: proxied WebSocket connections might hang right after handshake if the select, poll, or /dev/poll methods were used. *) Bugfix: a timeout might occur while reading client request body in an SSL connection using chunked transfer encoding. *) Bugfix: memory leak in nginx/Windows. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Tue Feb 11 14:16:48 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 11 Feb 2014 18:16:48 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <20140211140638.GT1835@mdounin.ru> References: <4014345467c8e05baa091ec6841a57cb@none.at> <1a6fa3e15d893729bbc7553e08c7de5c@none.at> <20140211140638.GT1835@mdounin.ru> Message-ID: <3279357.BQ2JAi0TlL@vbart-laptop> On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: [..] > > >(As for backlog size, I usually set it to something big enough to > > >accomodate about 1 or 2 seconds of expected peek connection rate. > > >That is, 1024 is good enough for about 500 connections per second. > > >But with deferred on Linux, it looks like deferred connection are > > >sitting in the same queue as normal ones, and this may drastically > > >change things.) > > > > OK. That means with deferred I should double or divide the listening value? > > With deferred, it looks like all (potential) deferred connection > should be added to the value. And it's very hard to tell how many > there will be. > Deferred connections stay in syn backlog which controlled by tcp_max_syn_backlog. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Feb 11 14:59:52 2014 From: nginx-forum at nginx.us (mevans336) Date: Tue, 11 Feb 2014 09:59:52 -0500 Subject: Sometimes SPDY/2, Sometimes SPDY/3.1? Message-ID: Hello Everyone, We have been running SPDY/2 for months and months without issue and recently upgraded to 1.5.10 for SPDY/3.1 support. However, we are having an issue where sometimes our site reports SPDY/2 and sometimes it reports SPDY/3.1 in Chrome's net-internals and the Chrome spdy extension. We use Nginx in reverse proxy mode and have 4 servers blocks - 2 for HTTP which redirect to 2 for HTTPS (a production block and a dev block). Both blocks experience the same behavior. 
Here is an HTTP and HTTPS block: # # WWW # upstream www_servers { ip_hash; server 10.x.x.x:8080 max_fails=3 fail_timeout=30s; server 10.x.x.x:8080 max_fails=3 fail_timeout=30s; } # Directives to enable "front end" SSL # Disable weak SSLv2 and only accept SSLv3 and TLSv1 server { listen 10.x.x.x:80; server_name mysite.com; server_name www.mysite.com; location /feed { add_header X-Frame-Options SAMEORIGIN; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_next_upstream error timeout invalid_header; # proxy_http_version 1.1; proxy_pass http://www_servers; proxy_intercept_errors on; # error_page 504 @errors; } location / { add_header X-Frame-Options SAMEORIGIN; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_next_upstream error timeout invalid_header; proxy_intercept_errors on; # error_page 504 @errors; rewrite ^ https://$server_name$request_uri permanent; } } server { listen 10.x.x.x:443 default_server spdy ssl; ssl on; ssl_certificate /path/to/my/ssl.crt; ssl_certificate_key /path/to/my/ssl/ssl.key; ssl_protocols TLSv1.2 TLSv1.1 TLSv1 SSLv3; ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-RC4-SHA:ECDHE-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:RC4-SHA; ssl_prefer_server_ciphers on; server_name mysite.com; server_name www.mysite.com; location / { add_header X-Frame-Options SAMEORIGIN; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_next_upstream error timeout invalid_header; proxy_intercept_errors on; # error_page 504 @errors; # proxy_http_version 1.1; proxy_pass http://www_servers; } } Can anyone provide any 
guidance on why this may be happening? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247412,247412#msg-247412 From contact at jpluscplusm.com Tue Feb 11 15:04:55 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 11 Feb 2014 16:04:55 +0100 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: On 11 Feb 2014 15:09, "parnican" wrote: > > This didn't work. > proxy_set_header Host $proxy_host; > next try > proxy_set_header Host $http_host:8080; Tell your app to expect "bernolak.dyndns.info", without the port suffix. Tell nginx to set the Host header to "bernolak.dyndns.info", without the port suffix. I'll tap out at this point, however, as any further discussion is likely to involve mono integration and that's not something I particularly enjoy. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 11 15:06:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 19:06:37 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <3279357.BQ2JAi0TlL@vbart-laptop> References: <4014345467c8e05baa091ec6841a57cb@none.at> <1a6fa3e15d893729bbc7553e08c7de5c@none.at> <20140211140638.GT1835@mdounin.ru> <3279357.BQ2JAi0TlL@vbart-laptop> Message-ID: <20140211150637.GY1835@mdounin.ru> Hello! On Tue, Feb 11, 2014 at 06:16:48PM +0400, Valentin V. Bartenev wrote: > On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: > [..] > > > >(As for backlog size, I usually set it to something big enough to > > > >accomodate about 1 or 2 seconds of expected peek connection rate. > > > >That is, 1024 is good enough for about 500 connections per second. > > > >But with deferred on Linux, it looks like deferred connection are > > > >sitting in the same queue as normal ones, and this may drastically > > > >change things.) > > > > > > OK. That means with deferred I should double or divide the listening value? 
> > > > With deferred, it looks like all (potential) deferred connection > > should be added to the value. And it's very hard to tell how many > > there will be. > > > > Deferred connections stay in syn backlog which controlled > by tcp_max_syn_backlog. You mean they aren't counted to listen socket's backlog, right? Is there any way to see how many such connections are queued then? -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Feb 11 15:17:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 19:17:06 +0400 Subject: Sometimes SPDY/2, Sometimes SPDY/3.1? In-Reply-To: References: Message-ID: <20140211151706.GZ1835@mdounin.ru> Hello! On Tue, Feb 11, 2014 at 09:59:52AM -0500, mevans336 wrote: > Hello Everyone, > > We have been running SPDY/2 for months and months without issue and recently > upgraded to 1.5.10 for SPDY/3.1 support. However, we are having an issue > where sometimes our site reports SPDY/2 and sometimes it reports SPDY/3.1 in > Chrome's net-internals and the Chrome spdy extension. We use Nginx in > reverse proxy mode and have 4 servers blocks - 2 for HTTP which redirect to > 2 for HTTPS (a production block and a dev block). Both blocks experience the > same behavior. Either you have more than one server, or you've forgot to stop old nginx master process during upgrade. http://nginx.org/en/docs/control.html#upgrade -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Feb 11 15:28:43 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 11 Feb 2014 19:28:43 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <20140211150637.GY1835@mdounin.ru> References: <4014345467c8e05baa091ec6841a57cb@none.at> <3279357.BQ2JAi0TlL@vbart-laptop> <20140211150637.GY1835@mdounin.ru> Message-ID: <2113053.BD4bn4vYtf@vbart-laptop> On Tuesday 11 February 2014 19:06:37 Maxim Dounin wrote: > Hello! > > On Tue, Feb 11, 2014 at 06:16:48PM +0400, Valentin V. 
Bartenev wrote: > > > On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: > > [..] > > > > >(As for backlog size, I usually set it to something big enough to > > > > >accomodate about 1 or 2 seconds of expected peek connection rate. > > > > >That is, 1024 is good enough for about 500 connections per second. > > > > >But with deferred on Linux, it looks like deferred connection are > > > > >sitting in the same queue as normal ones, and this may drastically > > > > >change things.) > > > > > > > > OK. That means with deferred I should double or divide the listening value? > > > > > > With deferred, it looks like all (potential) deferred connection > > > should be added to the value. And it's very hard to tell how many > > > there will be. > > > > > > > Deferred connections stay in syn backlog which controlled > > by tcp_max_syn_backlog. > > You mean they aren't counted to listen socket's backlog, right? Yes. > Is there any way to see how many such connections are queued then? They all stay in SYN_RECV state. If I understand right, something like this will show: # netstat -n | grep SYN_RECV | grep :80 | wc -l wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Feb 11 15:41:55 2014 From: nginx-forum at nginx.us (parnican) Date: Tue, 11 Feb 2014 10:41:55 -0500 Subject: ASP.NET pages with nginx In-Reply-To: References: Message-ID: You have all my respect! Hello World! Got an aspx page running on nginx!!! THANK YOU! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247323,247417#msg-247417 From nginx-forum at nginx.us Tue Feb 11 15:44:10 2014 From: nginx-forum at nginx.us (mevans336) Date: Tue, 11 Feb 2014 10:44:10 -0500 Subject: Sometimes SPDY/2, Sometimes SPDY/3.1? In-Reply-To: <20140211151706.GZ1835@mdounin.ru> References: <20140211151706.GZ1835@mdounin.ru> Message-ID: Bingo, I issued a -USR2 but a ps shows both the old and new master processes listening. Thanks Maxim. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247412,247418#msg-247418 From al-nginx at none.at Tue Feb 11 15:44:57 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 11 Feb 2014 16:44:57 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <2113053.BD4bn4vYtf@vbart-laptop> References: <4014345467c8e05baa091ec6841a57cb@none.at> <3279357.BQ2JAi0TlL@vbart-laptop> <20140211150637.GY1835@mdounin.ru> <2113053.BD4bn4vYtf@vbart-laptop> Message-ID: <6e3a9c811569cf1849e9272a68e3c2a9@none.at> Am 11-02-2014 16:28, schrieb Valentin V. Bartenev: > On Tuesday 11 February 2014 19:06:37 Maxim Dounin wrote: >> Hello! >> >> On Tue, Feb 11, 2014 at 06:16:48PM +0400, Valentin V. Bartenev wrote: >> >> > On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: >> > [..] >> > > > >(As for backlog size, I usually set it to something big enough to >> > > > >accomodate about 1 or 2 seconds of expected peek connection rate. >> > > > >That is, 1024 is good enough for about 500 connections per second. >> > > > >But with deferred on Linux, it looks like deferred connection are >> > > > >sitting in the same queue as normal ones, and this may drastically >> > > > >change things.) >> > > > >> > > > OK. That means with deferred I should double or divide the listening value? >> > > >> > > With deferred, it looks like all (potential) deferred connection >> > > should be added to the value. And it's very hard to tell how many >> > > there will be. >> > > >> > >> > Deferred connections stay in syn backlog which controlled >> > by tcp_max_syn_backlog. >> >> You mean they aren't counted to listen socket's backlog, right? > > Yes. > >> Is there any way to see how many such connections are queued then? > > They all stay in SYN_RECV state. If I understand right, something > like this will show: > > # netstat -n | grep SYN_RECV | grep :80 | wc -l We have a average from ~200 here. Is this good/bad/not worth ? 
From mdounin at mdounin.ru Tue Feb 11 16:29:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Feb 2014 20:29:25 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <2113053.BD4bn4vYtf@vbart-laptop> References: <4014345467c8e05baa091ec6841a57cb@none.at> <3279357.BQ2JAi0TlL@vbart-laptop> <20140211150637.GY1835@mdounin.ru> <2113053.BD4bn4vYtf@vbart-laptop> Message-ID: <20140211162925.GB1835@mdounin.ru> Hello! On Tue, Feb 11, 2014 at 07:28:43PM +0400, Valentin V. Bartenev wrote: > On Tuesday 11 February 2014 19:06:37 Maxim Dounin wrote: > > Hello! > > > > On Tue, Feb 11, 2014 at 06:16:48PM +0400, Valentin V. Bartenev wrote: > > > > > On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: > > > [..] > > > > > >(As for backlog size, I usually set it to something big enough to > > > > > >accomodate about 1 or 2 seconds of expected peek connection rate. > > > > > >That is, 1024 is good enough for about 500 connections per second. > > > > > >But with deferred on Linux, it looks like deferred connection are > > > > > >sitting in the same queue as normal ones, and this may drastically > > > > > >change things.) > > > > > > > > > > OK. That means with deferred I should double or divide the listening value? > > > > > > > > With deferred, it looks like all (potential) deferred connection > > > > should be added to the value. And it's very hard to tell how many > > > > there will be. > > > > > > > > > > Deferred connections stay in syn backlog which controlled > > > by tcp_max_syn_backlog. > > > > You mean they aren't counted to listen socket's backlog, right? > > Yes. > > > Is there any way to see how many such connections are queued then? > > They all stay in SYN_RECV state. If I understand right, something > like this will show: > > # netstat -n | grep SYN_RECV | grep :80 | wc -l How these connections are different from ones in real SYN_RECV state then? 
I.e., how one is expected to distinguish them from connections not yet passed 3-way handshake? -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Feb 11 17:22:48 2014 From: nginx-forum at nginx.us (ct323i) Date: Tue, 11 Feb 2014 12:22:48 -0500 Subject: Nginx returning 414 even when large_client_header_buffers is set In-Reply-To: <8bd0fd9b781422036139358eae2ea463.NginxMailingListEnglish@forum.nginx.org> References: <20120411072023.GC13466@mdounin.ru> <8bd0fd9b781422036139358eae2ea463.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5bc29339f7da23e67aa16c78851e44c2.NginxMailingListEnglish@forum.nginx.org> Hi spacerobot, I am encountering a very similar problem with my nginx/unicorn server with an 11k URI, getting error "HTTP/1.1 414 Request-URI Too Long". We have also modified the nginx.conf http context to include: client_header_buffer_size 32k; large_client_header_buffers 16 512k; Which in theory should be more than sufficient to handle this URI... Could you please give some more information on what you've modified with Unicorn to allow a larger URI? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,225093,247422#msg-247422 From vbart at nginx.com Tue Feb 11 17:26:58 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 11 Feb 2014 21:26:58 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <20140211162925.GB1835@mdounin.ru> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2113053.BD4bn4vYtf@vbart-laptop> <20140211162925.GB1835@mdounin.ru> Message-ID: <3736039.GxRQJiPVNo@vbart-laptop> On Tuesday 11 February 2014 20:29:25 Maxim Dounin wrote: > Hello! > > On Tue, Feb 11, 2014 at 07:28:43PM +0400, Valentin V. Bartenev wrote: > > > On Tuesday 11 February 2014 19:06:37 Maxim Dounin wrote: > > > Hello! > > > > > > On Tue, Feb 11, 2014 at 06:16:48PM +0400, Valentin V. Bartenev wrote: > > > > > > > On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: > > > > [..]
> > > > > > >(As for backlog size, I usually set it to something big enough to > > > > > > >accomodate about 1 or 2 seconds of expected peek connection rate. > > > > > > >That is, 1024 is good enough for about 500 connections per second. > > > > > > >But with deferred on Linux, it looks like deferred connection are > > > > > > >sitting in the same queue as normal ones, and this may drastically > > > > > > >change things.) > > > > > > > > > > > > OK. That means with deferred I should double or divide the listening value? > > > > > > > > > > With deferred, it looks like all (potential) deferred connection > > > > > should be added to the value. And it's very hard to tell how many > > > > > there will be. > > > > > > > > > > > > > Deferred connections stay in syn backlog which controlled > > > > by tcp_max_syn_backlog. > > > > > > You mean they aren't counted to listen socket's backlog, right? > > > > Yes. > > > > > Is there any way to see how many such connections are queued then? > > > > They all stay in SYN_RECV state. If I understand right, something > > like this will show: > > > > # netstat -n | grep SYN_RECV | grep :80 | wc -l > > How these connections are different from ones in real SYN_RECV > state then? I.e., how one is expected to distinguish them from > connections not yet passed 3-way handhake? > AFAIK, there is no way to distinguish them. wbr, Valentin V. Bartenev From david.birdsong at gmail.com Tue Feb 11 17:33:50 2014 From: david.birdsong at gmail.com (David Birdsong) Date: Tue, 11 Feb 2014 09:33:50 -0800 Subject: acess log over nfs hanging In-Reply-To: <52F9EDB1.5030203@citrin.ru> References: <52F9EDB1.5030203@citrin.ru> Message-ID: On Tue, Feb 11, 2014 at 1:30 AM, Anton Yuzhaninov wrote: > On 02/07/14 20:28, Jader H. Silva wrote: > >> It seems that when some processes are running in the nfs server, the >> share won't >> allow writing for some time and I noticed all nginx workers in status D >> and not >> processing requests. 
>> > > I general it is a bad idea to write logs over NFS instead local HDD. > > If you need central log store across many servers: > - write logs locally > - rotate as often as need (SIGUSR1) > - copy logs to cental log server (rsync is handy for this, but other > methods are possible) yeah, try out: http://hekad.readthedocs.org/ it's like syslog but can write to pretty much any backend store. > > > Are all access.log writes blocking? >> > > yes, blocking > > > If my nfs server shutdown in an unexpected >> way, will nginx stop proxying requests to the backend or responses to the >> client? >> > > yes, will stop > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Feb 11 17:56:28 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 11 Feb 2014 21:56:28 +0400 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <6e3a9c811569cf1849e9272a68e3c2a9@none.at> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2113053.BD4bn4vYtf@vbart-laptop> <6e3a9c811569cf1849e9272a68e3c2a9@none.at> Message-ID: <7106524.JSvWgWkos6@vbart-laptop> On Tuesday 11 February 2014 16:44:57 Aleksandar Lazic wrote: > > Am 11-02-2014 16:28, schrieb Valentin V. Bartenev: > > On Tuesday 11 February 2014 19:06:37 Maxim Dounin wrote: > >> Hello! > >> > >> On Tue, Feb 11, 2014 at 06:16:48PM +0400, Valentin V. Bartenev wrote: > >> > >> > On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: > >> > [..] > >> > > > >(As for backlog size, I usually set it to something big enough to > >> > > > >accomodate about 1 or 2 seconds of expected peek connection rate. > >> > > > >That is, 1024 is good enough for about 500 connections per second. 
> >> > > > >But with deferred on Linux, it looks like deferred connection are > >> > > > >sitting in the same queue as normal ones, and this may drastically > >> > > > >change things.) > >> > > > > >> > > > OK. That means with deferred I should double or divide the > >> > > > listening value? > >> > > > >> > > With deferred, it looks like all (potential) deferred connection > >> > > should be added to the value. And it's very hard to tell how many > >> > > there will be. > >> > > > >> > > >> > Deferred connections stay in syn backlog which controlled > >> > by tcp_max_syn_backlog. > >> > >> You mean they aren't counted to listen socket's backlog, right? > > > > Yes. > > > >> Is there any way to see how many such connections are queued then? > > > > They all stay in SYN_RECV state. If I understand right, something > > like this will show: > > > > # netstat -n | grep SYN_RECV | grep :80 | wc -l > > We have a average from ~200 here. > > Is this good/bad/not worth ? > Not a problem at all, till it's noticeable lower than tcp_max_syn_backlog. wbr, Valentin V. Bartenev From al-nginx at none.at Tue Feb 11 19:26:18 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 11 Feb 2014 20:26:18 +0100 Subject: high Traffic setup problem, module status don't deliver data In-Reply-To: <7106524.JSvWgWkos6@vbart-laptop> References: <4014345467c8e05baa091ec6841a57cb@none.at> <2113053.BD4bn4vYtf@vbart-laptop> <6e3a9c811569cf1849e9272a68e3c2a9@none.at> <7106524.JSvWgWkos6@vbart-laptop> Message-ID: Am 11-02-2014 18:56, schrieb Valentin V. Bartenev: > On Tuesday 11 February 2014 16:44:57 Aleksandar Lazic wrote: >> >> Am 11-02-2014 16:28, schrieb Valentin V. Bartenev: >> > On Tuesday 11 February 2014 19:06:37 Maxim Dounin wrote: >> >> Hello! >> >> >> >> On Tue, Feb 11, 2014 at 06:16:48PM +0400, Valentin V. Bartenev wrote: >> >> >> >> > On Tuesday 11 February 2014 18:06:38 Maxim Dounin wrote: >> >> > [..] 
>> >> > > > >(As for backlog size, I usually set it to something big enough to >> >> > > > >accomodate about 1 or 2 seconds of expected peek connection rate. >> >> > > > >That is, 1024 is good enough for about 500 connections per second. >> >> > > > >But with deferred on Linux, it looks like deferred connection are >> >> > > > >sitting in the same queue as normal ones, and this may drastically >> >> > > > >change things.) >> >> > > > >> >> > > > OK. That means with deferred I should double or divide the >> >> > > > listening value? >> >> > > >> >> > > With deferred, it looks like all (potential) deferred connection >> >> > > should be added to the value. And it's very hard to tell how many >> >> > > there will be. >> >> > > >> >> > >> >> > Deferred connections stay in syn backlog which controlled >> >> > by tcp_max_syn_backlog. >> >> >> >> You mean they aren't counted to listen socket's backlog, right? >> > >> > Yes. >> > >> >> Is there any way to see how many such connections are queued then? >> > >> > They all stay in SYN_RECV state. If I understand right, something >> > like this will show: >> > >> > # netstat -n | grep SYN_RECV | grep :80 | wc -l >> >> We have a average from ~200 here. >> >> Is this good/bad/not worth ? >> > > Not a problem at all, till it's noticeable lower than > tcp_max_syn_backlog. Ok thanks From nginx-forum at nginx.us Tue Feb 11 22:50:16 2014 From: nginx-forum at nginx.us (Amit Dixit) Date: Tue, 11 Feb 2014 17:50:16 -0500 Subject: TCP -TLS Redirection In-Reply-To: References: Message-ID: <391c99ddb21b4c7711b4a9d07bd235a4.NginxMailingListEnglish@forum.nginx.org> Exploring on Ngnix.... 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245248,247430#msg-247430 From nginx-forum at nginx.us Tue Feb 11 22:51:28 2014 From: nginx-forum at nginx.us (tbamise) Date: Tue, 11 Feb 2014 17:51:28 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: References: Message-ID: <8fad7b10c8f42bf336980d8aa29753a9.NginxMailingListEnglish@forum.nginx.org> > > you are using client certificates, which is way you need a certificate > + key > on the nginx side to connect to upstream https. > I am using client certificates on nginx side to connect to upstream https. Issues is when I turn on client verification on upstream server, nginx doesn't provide the client certificates. Any ideas why? Thanks much appreciated! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247431#msg-247431 From luky-37 at hotmail.com Tue Feb 11 22:58:02 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 11 Feb 2014 23:58:02 +0100 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: <8fad7b10c8f42bf336980d8aa29753a9.NginxMailingListEnglish@forum.nginx.org> References: , <8fad7b10c8f42bf336980d8aa29753a9.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I am using client certificates on nginx side to connect to upstream https. > Issues is when I turn on client verification on upstream server, nginx > doesn't provide the client certificates. > > Any ideas why? Please read Maxim's responses. From nginx-forum at nginx.us Tue Feb 11 23:05:56 2014 From: nginx-forum at nginx.us (tbamise) Date: Tue, 11 Feb 2014 18:05:56 -0500 Subject: Proxy to upstream HTTPS server *with different* keys/certs in nginx In-Reply-To: References: Message-ID: <1f3b9bff45c06dd9e0753ccd0730b29f.NginxMailingListEnglish@forum.nginx.org> Thanks Lukas! Guess I have to patch Nginx to use client certificates with upstream servers. Any suggestion as to a good place to start? 
I'm looking to nix_http_upstream.c and gnx_event_openssl.c Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247305,247433#msg-247433 From luky-37 at hotmail.com Tue Feb 11 23:12:15 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 12 Feb 2014 00:12:15 +0100 Subject: TCP -TLS Redirection In-Reply-To: References: Message-ID: Hi, > I want to do a tcp to tls proxy. we need to communicate to apple server > via tls (tcp over ssl). our server does not have internet access so we > need to use a proxy server that has internet access which can > > * either accept the tcp communication and do a tls communication with > apns. in this case our server just need to send data over tcp to proxy > server without any SSL. > > * our server can send data over tls, if proxy server can do a transparent > redirection. > > we have tried nginx, it is able to do tcp to tcp redirection but nginx is > not allowing ssl directive to be specified in the upstream block of tcp > configuration. I think haproxy 1.5 is more suited to do this kind of configurations, not sure what nginx can do out of the box. > if proxy server can do a transparent redirection. There is no such thing as "redirection" in TCP. What you mean is transparent proxying. Regards, Lukas From perusio at gmail.com Wed Feb 12 01:07:50 2014 From: perusio at gmail.com (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 12 Feb 2014 02:07:50 +0100 Subject: Path components interpretation by nginx. Message-ID: Hello, While doing an audit for a client I came across an URL of the from: http://host/foobar;arg=quux?q=en/somewhere&a=1&b=2 Now doing something like: location /test-args { return 200 "u: $uri\nq: $query_string\na: $args\n"; } This returns as the value of $uri the string foobar;arg=quux, i.e., the first parameter arg=quux is not being interpreted as an argument but as part of the URI. 
This is confirmed by changing the location to be exact using = /test-args in which case nginx cannot find a configuration for handling the request. Now if I understand correctly section 3.3 of the RFC http://tools.ietf.org/html/rfc3986#section-3.3 The path may consist of a sequence of path segments separated by a single slash "/" character. Within a path segment, the characters "/", ";", "=", and "?" are reserved. Each path segment may include a sequence of parameters, indicated by the semicolon ";" character. The parameters are not significant to the parsing of relative references. Which means that the above URL is perfectly legal with arg being considered a parameter. Shouldn't nginx interpret arg=quux as an argument and not part of the URI in order to fully support the RFC in question? Thank you, ----appa -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Wed Feb 12 01:43:51 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 12 Feb 2014 14:43:51 +1300 Subject: Path components interpretation by nginx. In-Reply-To: References: Message-ID: <1392169431.14628.216.camel@steve-new> 3.3 Path... End of para 1. "The path is terminated by the first question mark ("?") or number sign ("#") character, or by the end of the URI." although I think most web servers add & to ?. Steve On Wed, 2014-02-12 at 02:07 +0100, Ant?nio P. P. Almeida wrote: > Hello, > > > While doing an audit for a client I came across an URL of the from: > > > http://host/foobar;arg=quux?q=en/somewhere&a=1&b=2 > > > Now doing something like: > > > location /test-args { > return 200 "u: $uri\nq: $query_string\na: $args\n"; > } > > > This returns as the value of $uri the string foobar;arg=quux, i.e., > the first parameter arg=quux is not being interpreted as an argument > but as part of the URI. 
> > > This is confirmed by changing the location to be exact using > = /test-args in which case nginx cannot find a configuration for > handling the request. > > > Now if I understand correctly section 3.3 of the > RFC http://tools.ietf.org/html/rfc3986#section-3.3 > > > The path may consist of a sequence of path segments > separated by a > single slash "/" character. Within a path segment, the > characters > "/", ";", "=", and "?" are reserved. Each path segment may > include a > sequence of parameters, indicated by the semicolon ";" > character. > The parameters are not significant to the parsing of > relative > references. > > > Which means that the above URL is perfectly legal with arg being > considered a parameter. > > > Shouldn't nginx interpret arg=quux as an argument and not part of the > URI in order to fully support the RFC in question? > > > Thank you, > ----appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From steve at greengecko.co.nz Wed Feb 12 03:29:26 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 12 Feb 2014 16:29:26 +1300 Subject: security risks... ( a bit OT ) Message-ID: <1392175766.14628.233.camel@steve-new> Hi folks, I'm just about to build up a server to take over about 100 smallish drupal sites. I've found that it is better to share a lot of resources between sites if possible, rather than allocate small slices to each. I am talking about the php-fpm backend that I'll be using here, not nginx directly. What security implications do you see when sharing a php backend across some / all of the sites - I will also be using APC. Apologies for asking it here, but here is so much noise on the php sites I know, and I find that there's a far more knowledgeable bunch here! 
Thanks for your forbearance in advance! Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Wed Feb 12 05:11:22 2014 From: nginx-forum at nginx.us (MasterMind) Date: Wed, 12 Feb 2014 00:11:22 -0500 Subject: Images Aren't Displaying When Perl Interpreter Is Enabled In-Reply-To: <20140124085841.GD19804@craic.sysops.org> References: <20140124085841.GD19804@craic.sysops.org> Message-ID: <52c0e7c9d249c7180df1424adb839bdc.NginxMailingListEnglish@forum.nginx.org> Oops :X https://stats.site.com/icon/other/vv.png # Block Image Hotlinking location /icon/ { valid_referers none blocked stats.site.com; if ($invalid_referer) { return 403; } I thought maybe the image hotlinking part broke it, so i removed it and images still dont display. I just tried copying an image to the root directory and image still doesnt display. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246500,247440#msg-247440 From lists at ruby-forum.com Wed Feb 12 07:51:13 2014 From: lists at ruby-forum.com (Jiang Web) Date: Wed, 12 Feb 2014 08:51:13 +0100 Subject: proxy_pass & getServerPort problems Message-ID: <02e45754f4d005d467c8325f35d738e5@ruby-forum.com> When I use the nginx upstream and proxy_pass to Reverse Proxy the request. the configuration is: upstream w3new_cls { server szxap205-in.huawei.com:9090; server szxap206-in.huawei.com:9090; } server { listen 80; server_name w3.huawei.com; location /NetWeb/ { proxy_pass http://w3new_cls/NetWeb/; proxy_redirect off; proxy_set_header Host $host:80; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } when I access the URL: http://w3.huawei.com/NetWeb and the application use the <%request.getServerPort%> to get the server port is 9090, Not the port 80. But I want to get the port is w3.huawei.com:80, How can I to resovle the problems. Thanks for Help. 
-- Posted via http://www.ruby-forum.com/. From artemrts at ukr.net Wed Feb 12 09:13:43 2014 From: artemrts at ukr.net (wishmaster) Date: Wed, 12 Feb 2014 11:13:43 +0200 Subject: security risks... ( a bit OT ) In-Reply-To: <1392175766.14628.233.camel@steve-new> References: <1392175766.14628.233.camel@steve-new> Message-ID: <1392195976.961719623.xfhba9pb@frv34.fwdcdn.com> Hi, Steve. I use a lot of sites, like you: joomla, opencart, etc. I have 2 security principals: - virtualization. I use FreeBSD "light" jails + vnet. Each CMS in own jail, e.g. joomla-jail with all sites written on joomla cms, opencart-jail and so on. - php pools. Each site in own pool with right access to sockets. One nginx instance per-jail. Cheers, w --- Original message --- From: "Steve Holdoway" Date: 12 February 2014, 05:29:11 > Hi folks, > > I'm just about to build up a server to take over about 100 smallish > drupal sites. I've found that it is better to share a lot of resources > between sites if possible, rather than allocate small slices to each. > > I am talking about the php-fpm backend that I'll be using here, not > nginx directly. What security implications do you see when sharing a php > backend across some / all of the sites - I will also be using APC. > > Apologies for asking it here, but here is so much noise on the php sites > I know, and I find that there's a far more knowledgeable bunch here! > > Thanks for your forbearance in advance! 
> > Steve > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Wed Feb 12 09:41:50 2014 From: nginx-forum at nginx.us (gaspy) Date: Wed, 12 Feb 2014 04:41:50 -0500 Subject: How to disable PHP output buffering Message-ID: <31caa33411b71cdfd167bdd3ed36a087.NginxMailingListEnglish@forum.nginx.org> Hi, I know this has been asked before, but I could not find a definitive answer. I tried different solutions, nothing worked. I have a PHP script that has to do time intensive operations and provide a status update from time to time. No way around it. I built a sample PHP script: I have output_buffering = Off in php.ini In nginx I have location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; gzip off; proxy_buffering off; fastcgi_keep_conn on; fastcgi_buffers 128 1k; # up to 1k + 128 * 1k fastcgi_max_temp_file_size 0; fastcgi_buffer_size 1k; fastcgi_buffering off; } (yeah, I put everything and the kitchen sink) Server is Ubuntu 13.04, nginx 1.5.9, php 5.4.9 Any ideas? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247451,247451#msg-247451 From mdounin at mdounin.ru Wed Feb 12 10:51:29 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Feb 2014 14:51:29 +0400 Subject: Path components interpretation by nginx. In-Reply-To: References: Message-ID: <20140212105129.GF38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 02:07:50AM +0100, Ant?nio P. P. 
Almeida wrote: > Hello, > > While doing an audit for a client I came across an URL of the from: > > http://host/foobar;arg=quux?q=en/somewhere&a=1&b=2 > > Now doing something like: > > location /test-args { > return 200 "u: $uri\nq: $query_string\na: $args\n"; > } > > This returns as the value of $uri the string foobar;arg=quux, i.e., the > first parameter arg=quux is not being interpreted as an argument but as > part of the URI. > > This is confirmed by changing the location to be exact using = /test-args > in which case nginx cannot find a configuration for handling the request. > > Now if I understand correctly section 3.3 of the RFC > http://tools.ietf.org/html/rfc3986#section-3.3 > > The path may consist of a sequence of path segments separated by a > single slash "/" character. Within a path segment, the characters > "/", ";", "=", and "?" are reserved. Each path segment may include a > sequence of parameters, indicated by the semicolon ";" character. > The parameters are not significant to the parsing of relative > references. > > > Which means that the above URL is perfectly legal with arg being considered > a parameter. > > Shouldn't nginx interpret arg=quux as an argument and not part of the URI > in order to fully support the RFC in question? I don't see any incompatibilities with RFC in current nginx behaviour. Parameters aren't significant to the parsing of relative references, much like RFC states - i.e., "../foo" from both "/bar;param/bazz" and "/bar/bazz" will result in the same URI. Parameters are not query string though. Note that semantically parameters are for a path segment, and something like "/foo;v=1.1/bar;v=1.2/bazz" indicates a reference to version 1.1 of foo, and version 1.2 of bar. Representing parameters as a part of the query string will be just wrong. Current nginx behaviour is to treat parameters as a part of a path segment, which is believed to be compliant behaviour. 
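The split described above (path segment parameters stay in the path; only text after "?" becomes the query string) can be sketched with plain shell parameter expansion, using the URL from this thread. This is a simplification that assumes a single ";" parameter in a single path segment, which is all this example needs:

```shell
# The URL from this thread, split the way nginx splits it.
url='/foobar;arg=quux?q=en/somewhere&a=1&b=2'

uri=${url%%\?*}      # what nginx reports as $uri:  /foobar;arg=quux
args=${url#*\?}      # what nginx reports as $args: q=en/somewhere&a=1&b=2

# The path segment parameter remains inside the path; stripping it is
# left to the application (simplified: one ";" in one segment).
path=${uri%%\;*}     # /foobar
params=${uri#*\;}    # arg=quux

printf '%s\n%s\n%s\n%s\n' "$uri" "$args" "$path" "$params"
```

As the variable names show, the ";arg=quux" part never reaches $args, which matches the behaviour observed with the return 200 test location.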
-- Maxim Dounin http://nginx.org/ From r1ch+nginx at teamliquid.net Wed Feb 12 11:03:15 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 12 Feb 2014 12:03:15 +0100 Subject: How to disable PHP output buffering In-Reply-To: <31caa33411b71cdfd167bdd3ed36a087.NginxMailingListEnglish@forum.nginx.org> References: <31caa33411b71cdfd167bdd3ed36a087.NginxMailingListEnglish@forum.nginx.org> Message-ID: Did you check postpone_output? http://nginx.org/en/docs/http/ngx_http_core_module.html#postpone_output On Wed, Feb 12, 2014 at 10:41 AM, gaspy wrote: > Hi, > > I know this has been asked before, but I could not find a definitive > answer. > I tried different solutions, nothing worked. > > I have a PHP script that has to do time intensive operations and provide a > status update from time to time. No way around it. > I built a sample PHP script: > @ini_set('zlib.output_compression',0); > @ini_set('implicit_flush',1); > @ob_end_clean(); > ob_end_flush(); > $i=0; > while($i<10) > { > echo "$i\n"; > ob_flush(); > flush(); > sleep(1); > $i++; > } > ?> > > I have output_buffering = Off in php.ini > > In nginx I have > location ~ \.php$ > { > try_files $uri =404; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > > gzip off; > proxy_buffering off; > fastcgi_keep_conn on; > fastcgi_buffers 128 1k; # up to 1k + 128 * > 1k > fastcgi_max_temp_file_size 0; > fastcgi_buffer_size 1k; > fastcgi_buffering off; > } > (yeah, I put everything and the kitchen sink) > > Server is Ubuntu 13.04, nginx 1.5.9, php 5.4.9 > > Any ideas? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,247451,247451#msg-247451 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From appa at perusio.net Wed Feb 12 13:16:29 2014 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 12 Feb 2014 14:16:29 +0100 Subject: Path components interpretation by nginx. In-Reply-To: <20140212105129.GF38830@mdounin.ru> References: <20140212105129.GF38830@mdounin.ru> Message-ID: Hello Maxim, Thank you. In fact since I never saw this type of URI before on an API I thought that trying to use the path segment parameters as a query string argument was borderline RFC compliant. The original API I was referring to uses the parameter as an argument since they pass a session token as a parameter: /api;jsessionid=somehash?t=1&q=2 obviously they have some issues with their API well beyond merely being non RFC 3986 compliant :) Thanks, ----appa On Wed, Feb 12, 2014 at 11:51 AM, Maxim Dounin wrote: > Hello! > > On Wed, Feb 12, 2014 at 02:07:50AM +0100, Ant?nio P. P. Almeida wrote: > > > Hello, > > > > While doing an audit for a client I came across an URL of the from: > > > > http://host/foobar;arg=quux?q=en/somewhere&a=1&b=2 > > > > Now doing something like: > > > > location /test-args { > > return 200 "u: $uri\nq: $query_string\na: $args\n"; > > } > > > > This returns as the value of $uri the string foobar;arg=quux, i.e., the > > first parameter arg=quux is not being interpreted as an argument but as > > part of the URI. > > > > This is confirmed by changing the location to be exact using = /test-args > > in which case nginx cannot find a configuration for handling the request. > > > > Now if I understand correctly section 3.3 of the RFC > > http://tools.ietf.org/html/rfc3986#section-3.3 > > > > The path may consist of a sequence of path segments separated by a > > single slash "/" character. Within a path segment, the characters > > "/", ";", "=", and "?" are reserved. Each path segment may include a > > sequence of parameters, indicated by the semicolon ";" character. 
> > The parameters are not significant to the parsing of relative > > references. > > > > > > Which means that the above URL is perfectly legal with arg being > considered > > a parameter. > > > > Shouldn't nginx interpret arg=quux as an argument and not part of the URI > > in order to fully support the RFC in question? > > I don't see any incompatibilities with RFC in current nginx > behaviour. Parameters aren't significant to the parsing of > relative references, much like RFC states - i.e., "../foo" from > both "/bar;param/bazz" and "/bar/bazz" will result in the same > URI. > > Parameters are not query string though. Note that semantically > parameters are for a path segment, and something like > "/foo;v=1.1/bar;v=1.2/bazz" indicates a reference to version 1.1 > of foo, and version 1.2 of bar. Representing parameters as a part > of the query string will be just wrong. > > Current nginx behaviour is to treat parameters as a part of a path > segment, which is believed to be compliant behaviour. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Wed Feb 12 14:28:44 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 12 Feb 2014 09:28:44 -0500 Subject: nginx-1.4.5 In-Reply-To: <20140211141051.GU1835@mdounin.ru> References: <20140211141051.GU1835@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.4.5 for Windows http://goo.gl/8l0JhQ (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via Twitter ( http://twitter.com/kworthington), if you prefer to receive updates that way. 
Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Feb 11, 2014 at 9:10 AM, Maxim Dounin wrote: > Changes with nginx 1.4.5 11 Feb > 2014 > > *) Bugfix: the $ssl_session_id variable contained full session > serialized instead of just a session id. > Thanks to Ivan Risti?. > > *) Bugfix: client connections might be immediately closed if deferred > accept was used; the bug had appeared in 1.3.15. > > *) Bugfix: alerts "zero size buf in output" might appear in logs while > proxying; the bug had appeared in 1.3.9. > > *) Bugfix: a segmentation fault might occur in a worker process if the > ngx_http_spdy_module was used. > > *) Bugfix: proxied WebSocket connections might hang right after > handshake if the select, poll, or /dev/poll methods were used. > > *) Bugfix: a timeout might occur while reading client request body in > an > SSL connection using chunked transfer encoding. > > *) Bugfix: memory leak in nginx/Windows. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 12 14:51:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Feb 2014 18:51:07 +0400 Subject: Path components interpretation by nginx. In-Reply-To: References: <20140212105129.GF38830@mdounin.ru> Message-ID: <20140212145107.GH38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 02:16:29PM +0100, Ant?nio P. P. Almeida wrote: > Hello Maxim, > > Thank you. In fact since I never saw this type of URI before on an API I > thought that > trying to use the path segment parameters as a query string argument was > borderline > RFC compliant. 
> > The original API I was referring to uses the parameter as an argument since > they pass a session token > as a parameter: > > /api;jsessionid=somehash?t=1&q=2 > > obviously they have some issues with their API well beyond merely being non > RFC 3986 > compliant :) I don't think that passing session id as a path segment parameter is wrong per se. One can think of it as ""api" path segment, version for a session specified", and it should work as long as it's properly handled by the server which implements the API. But it may be non-trivial to work with such URIs in various software, including nginx, as path segment parameters support is usually quite limited. -- Maxim Dounin http://nginx.org/ From jacklinkers at gmail.com Wed Feb 12 14:57:17 2014 From: jacklinkers at gmail.com (jack linkers) Date: Wed, 12 Feb 2014 15:57:17 +0100 Subject: Fwd: .conf files In-Reply-To: References: <48145124-c33c-484e-96f3-d6d39a6f064b@googlegroups.com> Message-ID: > > Hello there, > > Does anybody can check my .conf files and maybe comment suggest good ones > for prestashop please ? > > Attached you'll find my domain .conf && my nginx.conf files > > Thanks in advance for time / help ! > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part --------------

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # ngx_pagespeed module settings
    ##

    pagespeed on;
    pagespeed FileCachePath /var/ngx_pagespeed_cache;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_types application/ecmascript;
    gzip_types application/javascript;
    gzip_types application/json;
    gzip_types application/pdf;
    gzip_types application/postscript;
    gzip_types application/x-javascript;
    gzip_types image/svg+xml;
    gzip_types text/css;
    gzip_types text/csv;
    gzip_types text/javascript;
    gzip_types text/plain;
    gzip_types text/xml;
    gzip_http_version 1.1;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}

-------------- next part -------------- A non-text attachment was scrubbed... Name: carte-graphique-gamer.com.vhost Type: application/octet-stream Size: 6058 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Feb 12 15:36:39 2014 From: nginx-forum at nginx.us (kate_r) Date: Wed, 12 Feb 2014 10:36:39 -0500 Subject: Delete temp upload files at the end of the request? Message-ID: <8a352bc16d64630ba8742c0c4116d741.NginxMailingListEnglish@forum.nginx.org> Hi With the upload module, is the uploaded file supposed to be removed automatically at the end of the request? That is, the file that has just been uploaded and stored in upload_store. I think I saw some behaviour of the files disappearing at the end of a request, but now I don't get that any more. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247462,247462#msg-247462 From shahzaib.cb at gmail.com Wed Feb 12 17:06:29 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 12 Feb 2014 22:06:29 +0500 Subject: Nginx crash !! Message-ID: Hello, We're using nginx (1.4.2) as a reverse proxy in front of Apache. It looks like nginx keeps crashing. Are the following alerts in the nginx error_log normal?
2014/02/12 11:57:12 [alert] 6076#0: ignore long locked inactive cache entry f88e22ec45c2a0e03dd1284a6b4fb582, count:1
2014/02/12 11:57:38 [alert] 6076#0: ignore long locked inactive cache entry 78b8719be7ee228ea85a21b029bf9c21, count:1
2014/02/12 11:57:59 [alert] 6076#0: ignore long locked inactive cache entry 9b2717bd19ce4d441a8de010a6dbec6d, count:1
2014/02/12 11:58:21 [alert] 6076#0: ignore long locked inactive cache entry d80f5354bb4877bd13357e90e2b56fe2, count:1
2014/02/12 11:58:28 [alert] 6076#0: ignore long locked inactive cache entry e25fc78ee0add71ddf674996f680a0cc, count:1
2014/02/12 11:58:44 [alert] 6076#0: ignore long locked inactive cache entry f2cb4b66a9e41e7d845f8db52363df95, count:1
2014/02/12 11:58:52 [alert] 6076#0: ignore long locked inactive cache entry 0ee1db877902f1c673931c50dbed5974, count:1
2014/02/12 11:58:55 [alert] 6076#0: ignore long locked inactive cache entry cfe763b22b8b89d17d9ff9960218a9de, count:1
2014/02/12 12:01:25 [alert] 6076#0: ignore long locked inactive cache entry 50fe256a9035f2b0545e8fbdea9218e3, count:1
2014/02/12 12:02:19 [alert] 6076#0: ignore long locked inactive cache entry 4dce1edc6c7c34b765801dacc8d24cff, count:1

regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 12 17:16:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Feb 2014 21:16:56 +0400 Subject: Nginx crash !! In-Reply-To: References: Message-ID: <20140212171656.GL38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 10:06:29PM +0500, shahzaib shahzaib wrote: > Hello, > > We're using nginx (1.4.2) as reverse proxy in front of apache. Looks > like nginx keeps on crashing. Is following alerts in nginx error_log are > normal ? > > 2014/02/12 11:57:12 [alert] 6076#0: ignore long locked inactive cache entry > f88e22ec45c2a0e03dd1284a6b4fb582, count:1 No, they are not, but they are somewhat expected if nginx workers have previously crashed.
You should find the cause of crashes you see, and fix it. Start from upgrading to latest nginx version and compiling without 3rd party modules. See here for more hints: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Wed Feb 12 17:25:58 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 12 Feb 2014 22:25:58 +0500 Subject: Nginx crash !! In-Reply-To: <20140212171656.GL38830@mdounin.ru> References: <20140212171656.GL38830@mdounin.ru> Message-ID: Hello, Thanks for replying, i have upgraded nginx to 1.4.4 using following guide but still facing the same error. http://nginxcp.com/installation-instruction/ Regards. Shahzaib On Wed, Feb 12, 2014 at 10:16 PM, Maxim Dounin wrote: > Hello! > > On Wed, Feb 12, 2014 at 10:06:29PM +0500, shahzaib shahzaib wrote: > > > Hello, > > > > We're using nginx (1.4.2) as reverse proxy in front of apache. > Looks > > like nginx keeps on crashing. Is following alerts in nginx error_log are > > normal ? > > > > 2014/02/12 11:57:12 [alert] 6076#0: ignore long locked inactive cache > entry > > f88e22ec45c2a0e03dd1284a6b4fb582, count:1 > > No, they are not, but they are somewhat expected if nginx workers > have previously crashed. You should find the cause of crashes you > see, and fix it. Start from upgrading to latest nginx version and > compiling without 3rd party modules. > > See here for more hints: > > http://wiki.nginx.org/Debugging > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Feb 12 17:37:59 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 12 Feb 2014 22:37:59 +0500 Subject: Nginx crash !! 
In-Reply-To: References: <20140212171656.GL38830@mdounin.ru> Message-ID: Hello maxim, I've enabled debug option and found the following error with nginx : cache loader process exited with code 0 Any hint regarding crash ? Regards. Shahzaib On Wed, Feb 12, 2014 at 10:25 PM, shahzaib shahzaib wrote: > Hello, > > Thanks for replying, i have upgraded nginx to 1.4.4 using > following guide but still facing the same error. > > http://nginxcp.com/installation-instruction/ > > Regards. > Shahzaib > > > > On Wed, Feb 12, 2014 at 10:16 PM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Feb 12, 2014 at 10:06:29PM +0500, shahzaib shahzaib wrote: >> >> > Hello, >> > >> > We're using nginx (1.4.2) as reverse proxy in front of apache. >> Looks >> > like nginx keeps on crashing. Is following alerts in nginx error_log are >> > normal ? >> > >> > 2014/02/12 11:57:12 [alert] 6076#0: ignore long locked inactive cache >> entry >> > f88e22ec45c2a0e03dd1284a6b4fb582, count:1 >> >> No, they are not, but they are somewhat expected if nginx workers >> have previously crashed. You should find the cause of crashes you >> see, and fix it. Start from upgrading to latest nginx version and >> compiling without 3rd party modules. >> >> See here for more hints: >> >> http://wiki.nginx.org/Debugging >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Feb 12 17:38:35 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 12 Feb 2014 21:38:35 +0400 Subject: How to disable PHP output buffering In-Reply-To: References: <31caa33411b71cdfd167bdd3ed36a087.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2157933.hWLEkgeSWO@vbart-laptop> On Wednesday 12 February 2014 12:03:15 Richard Stanway wrote: > Did you check postpone_output? 
> > http://nginx.org/en/docs/http/ngx_http_core_module.html#postpone_output [..] It doesn't matter since fastcgi_buffering switched off. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Feb 12 17:51:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Feb 2014 21:51:50 +0400 Subject: Nginx crash !! In-Reply-To: References: <20140212171656.GL38830@mdounin.ru> Message-ID: <20140212175150.GN38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 10:25:58PM +0500, shahzaib shahzaib wrote: > Thanks for replying, i have upgraded nginx to 1.4.4 using following > guide but still facing the same error. > > http://nginxcp.com/installation-instruction/ I would recommend upgrading to latest version, as available from the official download page: http://nginx.org/en/download.html Anyway, it looks like version you've installed have no major modifications and should work, and I don't think your problems are related to changes in nginx 1.4.5. So please follow hints here: http://wiki.nginx.org/Debugging#Asking_for_help -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Feb 12 17:52:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Feb 2014 21:52:57 +0400 Subject: Nginx crash !! In-Reply-To: References: <20140212171656.GL38830@mdounin.ru> Message-ID: <20140212175257.GO38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 10:37:59PM +0500, shahzaib shahzaib wrote: > Hello maxim, > > I've enabled debug option and found the following error with > nginx : > > cache loader process exited with code 0 > > Any hint regarding crash ? This is not an error, it's just a normal exit of a cache loader process. 
-- Maxim Dounin http://nginx.org/ From j.andolini at manualsparadise.com Wed Feb 12 19:39:41 2014 From: j.andolini at manualsparadise.com (Jack Andolini) Date: Wed, 12 Feb 2014 14:39:41 -0500 Subject: After 1 minute, I get this error: "connect() to 127.0.0.1:8080 failed (99: Cannot assign requested address) while connecting to upstream" Message-ID: Hi, First of all, my environment:

- About 1.6 GB RAM, which doesn't seem to be a bottleneck because I'm actually barely using it.
- CPU fast enough (I guess)
- Ubuntu 12.04 (32 bits, probably that's irrelevant here)
- My users make requests using port 80 (actually not specifying the port) to call a service I'm running on my server.
- Nginx 1.4.3 receives the requests, and then forwards them to Tomcat 7.0.33
- Tomcat 7.0.33 is running on port 8080.

My website/service has always been running fine. I'm making stress tests in order to see if I can handle about 1000 queries per second, and I'm getting this error message in Nginx's log:

2014/02/12 09:59:42 [crit] 806#0: *595361 connect() to 127.0.0.1:8080 failed (99: Cannot assign requested address) while connecting to upstream, client: 58.81.5.31, server: api.acme.com, request: "GET /iplocate/locate?key=UZ6FD8747F76VZ&ip=61.157.194.46 HTTP/1.1", upstream: "http://127.0.0.1:8080/iplocate/locate?key=UZ6FD8747F76VZ&ip=61.157.194.46", host: "services.acme.com"

My users are getting a "BAD GATEWAY" error status, which they receive in their Java clients as an exception. My interpretation is that Nginx is suddenly unable to communicate with Tomcat, so it delivers an HTTP error status code "BAD GATEWAY". It starts running fine, but after 1-2 minutes I start getting this error. Obviously I'm running out of some kind of resource (ports?).
If I wait for a few minutes to let the system "rest", then it works again for a while (1 minute maybe) and then it fails again. If I run this command (which I suspect is relevant):

sysctl net.ipv4.ip_local_port_range

I get this response, which I think is standard in Ubuntu (I haven't messed with it):

root at ip-10-41-156-142:~# sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 61000

I have read some postings about configuring ports in order to get rid of this error message, but I don't know if that is my problem. Could somebody please help me? Brian

============================== NGINX CONFIGURATION FOLLOWS ===========

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    limit_req_status 429;
    map $arg_capacity $1X_key {
        ~*^1X$ $http_x_forwarded_for;
        default "";
    }
    limit_req_zone $1X_key zone=1X:1m rate=180r/m;

    ##
    # Basic Settings
    ##

    client_max_body_size 0m;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    log_format formato_especial '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_real_ip" "$http_x_forwarded_for"';
    access_log off;
    error_log /var/log/nginx/error.log;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;

    server {
        limit_req zone=1X burst=300;
        limit_req_log_level error;
        listen 80;
        server_name api.acme.com https-api.acme.com services.acme.com https-services.acme.com;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://127.0.0.1:8080/;
        }
    }
}

-------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Feb 12 20:06:08 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 12 Feb 2014 20:06:08 +0000 Subject: Delete temp upload files at the end of the request?
In-Reply-To: <8a352bc16d64630ba8742c0c4116d741.NginxMailingListEnglish@forum.nginx.org> References: <8a352bc16d64630ba8742c0c4116d741.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140212200608.GB24015@craic.sysops.org> On Wed, Feb 12, 2014 at 10:36:39AM -0500, kate_r wrote: Hi there, > With the upload module, is the uploaded file supposed to be removed > automatically at the end of the request? That is, the file that has just > been uploaded and stored in upload_store. I think I saw some behaviour of > the files disappearing at the end of a request, but now I don't get that any > more. There's possibly a directive to determine whether the file is automatically removed or not. What does the documentation say? Is this the one you are using: http://wiki.nginx.org/HttpUploadModule ? f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Feb 12 20:09:20 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 12 Feb 2014 20:09:20 +0000 Subject: proxy_pass & getServerPort problems In-Reply-To: <02e45754f4d005d467c8325f35d738e5@ruby-forum.com> References: <02e45754f4d005d467c8325f35d738e5@ruby-forum.com> Message-ID: <20140212200920.GC24015@craic.sysops.org> On Wed, Feb 12, 2014 at 08:51:13AM +0100, Jiang Web wrote: Hi there, > location /NetWeb/ { > proxy_pass http://w3new_cls/NetWeb/; > proxy_redirect off; > proxy_set_header Host $host:80; > when I access the URL: http://w3.huawei.com/NetWeb and the application > use the <%request.getServerPort%> to get the server port is 9090, Not > the port 80. > But I want to get the port is w3.huawei.com:80, How can I to resovle the > problems. getServerPort is not an nginx thing, and nginx probably cannot affect what it reports. What do you do with the result of getServerPort? Would using something like getHeader("Host") be adequate instead? 
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Feb 12 20:37:11 2014 From: nginx-forum at nginx.us (gaspy) Date: Wed, 12 Feb 2014 15:37:11 -0500 Subject: How to disable PHP output buffering In-Reply-To: <2157933.hWLEkgeSWO@vbart-laptop> References: <2157933.hWLEkgeSWO@vbart-laptop> Message-ID: Well, I tried with postpone_output off anyway, no joy. I verified that gzip is actually off. I'm out of ideas.... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247451,247479#msg-247479 From perusio at gmail.com Wed Feb 12 21:36:09 2014 From: perusio at gmail.com (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 12 Feb 2014 22:36:09 +0100 Subject: How to disable PHP output buffering In-Reply-To: References: <2157933.hWLEkgeSWO@vbart-laptop> Message-ID: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_buffering Set: fastcgi_buffering off; and you're done. ----appa On Wed, Feb 12, 2014 at 9:37 PM, gaspy wrote: > Well, I tried with postpone_output off anyway, no joy. > > I verified that gzip is actually off. I'm out of ideas.... > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,247451,247479#msg-247479 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 12 21:45:05 2014 From: nginx-forum at nginx.us (erankor2) Date: Wed, 12 Feb 2014 16:45:05 -0500 Subject: Using memcached from an Nginx module Message-ID: <96ce6ae73240f552058d25d439a09548.NginxMailingListEnglish@forum.nginx.org> Hi All, I want to develop an Nginx HTTP module that gets several values from memcache, performs some processing on them and returns the result to the client. I want all memcache operations to be performed asynchronously without blocking the worker process for maximum scalability. 
For this reason, I can't use libmemcached, and I'm looking for something that is built on Nginx. Is anyone familiar with a piece of code that does that ? Some more details: As I understand, it is theoretically possible to use the existing memcache upstream module for this, using subrequests. For example, to get a value I could start a subrequest for it and then handle the result when the subrequest completes. But this approach sounds a bit awkward, and it has the unnecessary overhead of processing all these dummy HTTP requests. What I would like ideally, is a module that exposes a function like - memcache_get(keyname, callback, context), where the callback is called either with the value if successful or an error code. I started writing such a module using the http_upstream module as reference, but found that it takes quite a bit of code to get it done, with lots of edge cases to handle (like timeouts in each one of the states) Thank you ! Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247481,247481#msg-247481 From david.birdsong at gmail.com Wed Feb 12 22:08:26 2014 From: david.birdsong at gmail.com (David Birdsong) Date: Wed, 12 Feb 2014 14:08:26 -0800 Subject: Using memcached from an Nginx module In-Reply-To: <96ce6ae73240f552058d25d439a09548.NginxMailingListEnglish@forum.nginx.org> References: <96ce6ae73240f552058d25d439a09548.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Feb 12, 2014 at 1:45 PM, erankor2 wrote: > Hi All, > > I want to develop an Nginx HTTP module that gets several values from > memcache, performs some processing on them and returns the result to the > client. I want all memcache operations to be performed asynchronously > without blocking the worker process for maximum scalability. For this > reason, I can't use libmemcached, and I'm looking for something that is > built on Nginx. > Is anyone familiar with a piece of code that does that ? 
> > Some more details: > As I understand, it is theoretically possible to use the existing memcache > upstream module for this, using subrequests. For example, to get a value I > could start a subrequest for it and then handle the result when the > subrequest completes. But this approach sounds a bit awkward, and it has > the > unnecessary overhead of processing all these dummy HTTP requests. > What I would like ideally, is a module that exposes a function like - > memcache_get(keyname, callback, context), where the callback is called > either with the value if successful or an error code. I started writing > such > a module using the http_upstream module as reference, but found that it > takes quite a bit of code to get it done, with lots of edge cases to handle > (like timeouts in each one of the states) > > yeah, use the http://openresty.org/ framework which includes: https://github.com/agentzh/lua-resty-memcached it couldn't be more perfect fit for what you're asking. > Thank you ! > > Eran > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,247481,247481#msg-247481 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 12 22:29:15 2014 From: nginx-forum at nginx.us (erankor2) Date: Wed, 12 Feb 2014 17:29:15 -0500 Subject: Using memcached from an Nginx module In-Reply-To: References: Message-ID: <089b032fcadbce75179eec407c380da3.NginxMailingListEnglish@forum.nginx.org> Thank you for your reply. But, actually, I am looking for a native C module... 
Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247481,247483#msg-247483 From emailgrant at gmail.com Wed Feb 12 23:23:05 2014 From: emailgrant at gmail.com (Grant) Date: Wed, 12 Feb 2014 15:23:05 -0800 Subject: fastcgi & index Message-ID: I've found that if I don't specify:

index index.html index.htm index.php;

in the server blocks where I use fastcgi, I can get a 403 due to the forbidden directory index. I would have thought 'fastcgi_index index.php;' would take care of that. If this is the expected behavior, should the index directive be added to the fastcgi wiki? http://wiki.nginx.org/HttpFastcgiModule - Grant From emailgrant at gmail.com Wed Feb 12 23:26:21 2014 From: emailgrant at gmail.com (Grant) Date: Wed, 12 Feb 2014 15:26:21 -0800 Subject: minimal fastcgi config for 1 file? Message-ID: Is it OK to use a minimal fastcgi configuration for a single file like this:

location ~ ^/piwik/piwik.php$ {
    fastcgi_pass unix:/run/php-fpm.socket;
    include fastcgi_params;
}

- Grant From reallfqq-nginx at yahoo.fr Wed Feb 12 23:53:15 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 13 Feb 2014 00:53:15 +0100 Subject: How to disable PHP output buffering In-Reply-To: References: <2157933.hWLEkgeSWO@vbart-laptop> Message-ID: Don't forget to take into account browser buffering: depending on which one you are using, it waits for a certain amount of data before displaying anything. To convince yourself of that, listen to the incoming network traffic to check that data is arriving at the client. That's a limit over which you cannot do much. To ensure configuration of the PHP part is done correctly, you can dump the communication between nginx and PHP. With all that, you should be able to reach some conclusions. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lists at ruby-forum.com Thu Feb 13 01:00:48 2014 From: lists at ruby-forum.com (Jiang Web) Date: Thu, 13 Feb 2014 02:00:48 +0100 Subject: proxy_pass & getServerPort problems In-Reply-To: <20140212200920.GC24015@craic.sysops.org> References: <02e45754f4d005d467c8325f35d738e5@ruby-forum.com> <20140212200920.GC24015@craic.sysops.org> Message-ID: Francis Daly wrote in post #1136476: > On Wed, Feb 12, 2014 at 08:51:13AM +0100, Jiang Web wrote: > > Hi there, > >> location /NetWeb/ { >> proxy_pass http://w3new_cls/NetWeb/; >> proxy_redirect off; >> proxy_set_header Host $host:80; > >> when I access the URL: http://w3.huawei.com/NetWeb and the application >> use the <%request.getServerPort%> to get the server port is 9090, Not >> the port 80. >> But I want to get the port is w3.huawei.com:80, How can I to resovle the >> problems. > > getServerPort is not an nginx thing, and nginx probably cannot affect > what it reports. > > What do you do with the result of getServerPort? > > Would using something like getHeader("Host") be adequate instead? > > f > -- > Francis Daly francis at daoine.org When I access the application http://w3.huawei.com/NetWeb. the application will 302(relocation) to https://login.huawei.com/login?redirect=xxxxxxxxx to make the user login the system(the http://login.huawei.com/login is our sso system). The xxxxxxx in the URI is the last request url(we think is http://w3.huawei.com/NetWeb), and we get the last request url(in java program) using <%request.getRequestURL%>, but the result is xxxxxxxx=http://w3.huawei.com:9090/NetWeb not the http://w3.huawei.com/NetWeb. We test it and found that: <%request.getServerName%> is w3.huawei.com <%request.getServerPort%> is 9090 <%request.getURI%> is /NetWeb so we think that when we use the nginx as a reverse proxy to proxy the request, the request.getServerPort is 9090 not 80. 
If we use the apache as a reverse proxy to proxy the request adding parameter "ProxyPreserveHost on", the request.getServerPort is 80. Thanks for help and reply. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Feb 13 05:54:54 2014 From: nginx-forum at nginx.us (kate_r) Date: Thu, 13 Feb 2014 00:54:54 -0500 Subject: Delete temp upload files at the end of the request? In-Reply-To: <20140212200608.GB24015@craic.sysops.org> References: <20140212200608.GB24015@craic.sysops.org> Message-ID: <4c257abd079a843922f2cfcb0f77edbd.NginxMailingListEnglish@forum.nginx.org> Yes, this is the one I use - but I can't find it in the documentation. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247462,247490#msg-247490 From nginx-forum at nginx.us Thu Feb 13 05:59:39 2014 From: nginx-forum at nginx.us (kate_r) Date: Thu, 13 Feb 2014 00:59:39 -0500 Subject: Delete temp upload files at the end of the request? In-Reply-To: <4c257abd079a843922f2cfcb0f77edbd.NginxMailingListEnglish@forum.nginx.org> References: <20140212200608.GB24015@craic.sysops.org> <4c257abd079a843922f2cfcb0f77edbd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2489793bed55b2ccfe992cdd097deb43.NginxMailingListEnglish@forum.nginx.org> I guess upload_cleanup 200 is relevant here. I'll give that a try. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247462,247491#msg-247491 From nginx-forum at nginx.us Thu Feb 13 06:19:21 2014 From: nginx-forum at nginx.us (gaspy) Date: Thu, 13 Feb 2014 01:19:21 -0500 Subject: How to disable PHP output buffering In-Reply-To: References: Message-ID: I already have fastcgi_buffering off (it was in my original email). 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247451,247492#msg-247492 From nginx-forum at nginx.us Thu Feb 13 06:20:18 2014 From: nginx-forum at nginx.us (gaspy) Date: Thu, 13 Feb 2014 01:20:18 -0500 Subject: How to disable PHP output buffering In-Reply-To: References: Message-ID: <41559863ab5dd92d46a978e5c97c9b1b.NginxMailingListEnglish@forum.nginx.org> "To ensure configuration of the PHP part is done correctly, you can dump communication between nginx and PHP." Now that sounds interesting. How can I do this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247451,247493#msg-247493 From artemrts at ukr.net Thu Feb 13 06:52:36 2014 From: artemrts at ukr.net (wishmaster) Date: Thu, 13 Feb 2014 08:52:36 +0200 Subject: How to disable PHP output buffering In-Reply-To: <41559863ab5dd92d46a978e5c97c9b1b.NginxMailingListEnglish@forum.nginx.org> References: <41559863ab5dd92d46a978e5c97c9b1b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1392274284.136885012.d3surapq@frv34.fwdcdn.com> --- Original message --- From: "gaspy" Date: 13 February 2014, 08:20:21 > "To ensure configuration of the PHP part is done correctly, you can dump > communication between nginx and PHP." > > Now that sounds interesting. How can I do this? > Use php process listening on inet/inet6 socket instead UNIX-socket and tcpdump... From francis at daoine.org Thu Feb 13 10:21:00 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 13 Feb 2014 10:21:00 +0000 Subject: proxy_pass & getServerPort problems In-Reply-To: References: <02e45754f4d005d467c8325f35d738e5@ruby-forum.com> <20140212200920.GC24015@craic.sysops.org> Message-ID: <20140213102100.GE24015@craic.sysops.org> On Thu, Feb 13, 2014 at 02:00:48AM +0100, Jiang Web wrote: > Francis Daly wrote in post #1136476: Hi there, > > getServerPort is not an nginx thing, and nginx probably cannot affect > > what it reports. > > > > What do you do with the result of getServerPort? 
> we get the last request url(in java > program) using <%request.getRequestURL%>, but the result is > xxxxxxxx=http://w3.huawei.com:9090/NetWeb not the > http://w3.huawei.com/NetWeb. We test it and found that: > <%request.getServerName%> is w3.huawei.com > <%request.getServerPort%> is 9090 > <%request.getURI%> is /NetWeb > so we think that when we use the nginx as a reverse proxy to proxy the > request, the request.getServerPort is 9090 not 80. If we use the apache > as a reverse proxy to proxy the request adding parameter > "ProxyPreserveHost on", the request.getServerPort is 80. The nginx equivalent to "ProxyPreserveHost on" is probably "proxy_set_header Host $http_host". It might be worth trying that. If that doesn't work, then perhaps tcpdump the traffic between apache and the backend when it works, and between nginx and the backend when it fails, and spot the difference. The relevant question is probably: what does your java getServerPort (or getRequestURL) do to find its answer? Once you know that, you can see whether it is possible for nginx to influence it. (Since apache did influence it, presumably nginx can too.) f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu Feb 13 12:34:30 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Feb 2014 16:34:30 +0400 Subject: fastcgi & index In-Reply-To: References: Message-ID: <20140213123429.GT38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 03:23:05PM -0800, Grant wrote: > I've found that if I don't specify: > > index index.html index.htm index.php; > > in the server blocks where I use fastcgi, I can get a 403 due to the > forbidden directory index. I would have thought 'fastcgi_index > index.php;' would take care of that. If this is the expected > behavior, should the index directive be added to the fastcgi wiki? This is the expected and documented behaviour. 
The "fastcgi_index" directive is to instruct a fastcgi backend which file to use if a request with a URI ending with "/" is passed to the backend. That is, it makes sense in a configuration like this:

location / {
    fastcgi_pass localhost:9000;
    fastcgi_index index.php;
    include fastcgi.conf;
}

It doesn't make sense in configurations with only *.php files passed to fastcgi backends though. E.g., in a configuration like this it doesn't make sense and should be removed:

location ~ \.php$ {
    fastcgi_pass localhost:9000;
    # wrong: fastcgi_index doesn't make sense here
    fastcgi_index index.php;
    include fastcgi.conf;
}

In this case, normal index processing applies. It is explained in detail in an introduction article here: http://nginx.org/en/docs/http/request_processing.html#simple_php_site_configuration -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Feb 13 12:42:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Feb 2014 16:42:02 +0400 Subject: minimal fastcgi config for 1 file? In-Reply-To: References: Message-ID: <20140213124202.GU38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 03:26:21PM -0800, Grant wrote: > Is it OK to use a minimal fastcgi configuration for a single file like this: > > location ~ ^/piwik/piwik.php$ { It doesn't make sense to use a regular expression here. Instead, use an exact match location: location = /piwik/piwik.php { > fastcgi_pass unix:/run/php-fpm.socket; > include fastcgi_params; The "fastcgi_params" file as shipped with nginx doesn't set the SCRIPT_FILENAME parameter, and php will likely be unable to handle such a request. Use "include fastcgi.conf;" instead if you don't need SCRIPT_FILENAME customization.
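Putting the two suggestions above together — an exact-match location instead of a regex, and "fastcgi.conf" so that SCRIPT_FILENAME is set — the minimal single-file configuration would look something like the following sketch; the socket path and URI are the ones from Grant's original post, not a recommendation:

```nginx
# Exact-match location: checked before regex locations and
# matches only this one URI, so no regular expression is needed.
location = /piwik/piwik.php {
    fastcgi_pass unix:/run/php-fpm.socket;
    # fastcgi.conf, as shipped with nginx, is fastcgi_params plus a
    # SCRIPT_FILENAME definition, which PHP needs to locate the script.
    include fastcgi.conf;
}
```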
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Feb 13 12:51:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Feb 2014 16:51:58 +0400 Subject: After 1 minute, I get this error: "connect() to 127.0.0.1:8080 failed (99: Cannot assign requested address) while connecting to upstream" In-Reply-To: References: Message-ID: <20140213125158.GV38830@mdounin.ru> Hello! On Wed, Feb 12, 2014 at 02:39:41PM -0500, Jack Andolini wrote: > Hi, > > First of all, my environment: > > - About 1.6 GB RAM, which doesn't seem to be a bottleneck because actually > I'm barely using it. > - CPU fast enough (I guess) > - Ubuntu 12.0.4 (32 bits, probably thats irrelevant here) > - My users make requests using port 80 (actually not specifyng the port) to > call a service I'm running on my server. > - Nginx 1.4.3 receives the requests, and then derives them to Tomcat 7.0.33 > - Tomcat 7.0.33 is running on port 8080. > > My website/service has been always running fine. I'm making stress tests in > order to see if I can handle about 1000 queries per second, and I'm getting > this error message in Nginx's log: > > > 2014/02/12 09:59:42 [crit] 806#0: *595361 connect() to > 127.0.0.1:8080failed (99: Cannot assign requested address) while > connecting to upstream, > client: 58.81.5.31, server: api.acme.com, request: "GET > /iplocate/locate?key=UZ6FD8747F76VZ&ip=61.157.194.46 HTTP/1.1", upstream: " > http://127.0.0.1:8080/iplocate/locate?key=UZ6FD8747F76VZ&ip=61.157.194.46", > host: "services.acme.com" > > My users are geting a "BAD GATEWAY" error status, which they are getting in > their Java clients with an exception. My interpretation is that Nginx is > suddenly unable to communicate with Tomcat, so it delivers an HTTP error > status code "BAD GATEWAY". > > It starts running fine, but after 1-2 minutes I start geting this error. > Obviously I'm running out of some kind of resource (ports?). 
> If I wait for
> a few minutes to let the system "rest", then it works again for a while (1
> minute maybe) and then it fails again.
>
> If I run this command (which I suspect is relevant):
>
> sysctl net.ipv4.ip_local_port_range
>
> I get this response, which I think its standard in Ubuntu (I haven't messed
> with it):
>
> root at ip-10-41-156-142:~# sysctl net.ipv4.ip_local_port_range
> net.ipv4.ip_local_port_range = 32768 61000
>
> I have read some postings about configuring ports in order to get rid of
> this error message, but I don't know if that is my problem.

You've run out of local ports due to sockets in TIME-WAIT state.
There is more than one possible solution, including using a lower
MSL, using a bigger local port range, using keepalive connections,
using unix sockets to connect to backends, and so on. The simplest
solution would be to enable TIME-WAIT socket reuse, with
net.ipv4.tcp_tw_reuse (or net.ipv4.tcp_tw_recycle):

# sysctl net.ipv4.tcp_tw_reuse=1

--
Maxim Dounin
http://nginx.org/

From appa at perusio.net Thu Feb 13 13:09:34 2014
From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=)
Date: Thu, 13 Feb 2014 14:09:34 +0100
Subject: fastcgi & index
In-Reply-To: <20140213123429.GT38830@mdounin.ru>
References: <20140213123429.GT38830@mdounin.ru>
Message-ID:

This type of configuration is insecure since there's no whitelisting
of the PHP scripts to be processed.

----appa

On Thu, Feb 13, 2014 at 1:34 PM, Maxim Dounin wrote:

> Hello!
>
> On Wed, Feb 12, 2014 at 03:23:05PM -0800, Grant wrote:
>
> > I've found that if I don't specify:
> >
> > index index.html index.htm index.php;
> >
> > in the server blocks where I use fastcgi, I can get a 403 due to the
> > forbidden directory index. I would have thought 'fastcgi_index
> > index.php;' would take care of that. If this is the expected
> > behavior, should the index directive be added to the fastcgi wiki?
>
> This is the expected and documented behaviour.
> > The "fastcgi_index" directive is to instruct a fastcgi backend > which file to use if a request with an URI ending with "/" is > passed to the backend. That is, it makes sense in a configuration > like this: > > location / { > fastcgi_pass localhost:9000; > fastcgi_index index.php; > include fastcgi.conf; > } > > It doesn't make sense in configurations with only *.php file > passed to fastcgi backends though. E.g., in a configuration like > this it doesn't make sense and should be removed: > > location ~ \.php$ { > fastcgi_pass localhost:9000; > # wrong: fastcgi_index doesn't make sense here > fastcgi_index index.php; > include fastcgi.conf; > } > > In this case, normal index processing applies. It is explained in > details in an introduction article here: > > > http://nginx.org/en/docs/http/request_processing.html#simple_php_site_configuration > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Feb 13 13:24:10 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 13 Feb 2014 18:24:10 +0500 Subject: Nginx crash !! In-Reply-To: <20140212175257.GO38830@mdounin.ru> References: <20140212171656.GL38830@mdounin.ru> <20140212175257.GO38830@mdounin.ru> Message-ID: Hello Maxim, I switched back to old nginxcp script which is running on other server without issue and looks like error is vanished now. Nginx reverse proxy in front of apache is working fine now. Nginx version is 1.2.7 Will let you know in case of issue. Regards. Shahzaib On Wed, Feb 12, 2014 at 10:52 PM, Maxim Dounin wrote: > Hello! 
>
> On Wed, Feb 12, 2014 at 10:37:59PM +0500, shahzaib shahzaib wrote:
>
> > Hello maxim,
> >
> > I've enabled debug option and found the following error with
> > nginx :
> >
> > cache loader process exited with code 0
> >
> > Any hint regarding crash ?
>
> This is not an error, it's just a normal exit of a cache loader
> process.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Thu Feb 13 13:29:10 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 13 Feb 2014 17:29:10 +0400
Subject: fastcgi & index
In-Reply-To:
References: <20140213123429.GT38830@mdounin.ru>
Message-ID: <20140213132910.GX38830@mdounin.ru>

Hello!

On Thu, Feb 13, 2014 at 02:09:34PM +0100, António P. P. Almeida wrote:

> This type of configuration is insecure since there's no whitelisting of the
> PHP scripts to be processed.

You mean "location / { fastcgi_pass ... }"? This type of
configuration assumes that any files under "/" are php scripts,
and it's ok to execute them.

Obviously it won't be secure if you allow untrusted parties to put
files there. But the problem is what you allow, not the
configuration per se.

--
Maxim Dounin
http://nginx.org/

From appa at perusio.net Thu Feb 13 13:47:35 2014
From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=)
Date: Thu, 13 Feb 2014 14:47:35 +0100
Subject: fastcgi & index
In-Reply-To: <20140213132910.GX38830@mdounin.ru>
References: <20140213123429.GT38830@mdounin.ru> <20140213132910.GX38830@mdounin.ru>
Message-ID:

No, I mean the \.php regex-based one. It's just that it opens the
door to a lot of problems by allowing all .php scripts to be
processed.
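[One way to sketch the whitelisting idea in nginx — the entry-point names below are purely illustrative, not from this thread — is to enumerate the scripts that may run with exact-match locations instead of a catch-all regex:]

```nginx
# Only known entry points reach the PHP backend; an uploaded
# owned.php matches neither location and is never executed.
location = /index.php {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi.conf;
}

location = /admin.php {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi.conf;
}
```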
Furthermore it's even mentioned on the wiki Pitfalls page: http://wiki.nginx.org/Pitfalls#Passing_Uncontrolled_Requests_to_PHP ----appa On Thu, Feb 13, 2014 at 2:29 PM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 13, 2014 at 02:09:34PM +0100, Ant?nio P. P. Almeida wrote: > > > This type of configuration is insecure since there's no whitelisting of > the > > PHP scripts to be processed. > > You mean "location / { fastcgi_pass ... }"? This type of > configuration assumes that any files under "/" are php scripts, > and it's ok to execute them. > > Obviously it won't be secure if you allow utrusted parties to put > files there. But the problem is what you allow, not the > configuration per se. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Thu Feb 13 13:50:18 2014 From: emailgrant at gmail.com (Grant) Date: Thu, 13 Feb 2014 05:50:18 -0800 Subject: minimal fastcgi config for 1 file? In-Reply-To: <20140213124202.GU38830@mdounin.ru> References: <20140213124202.GU38830@mdounin.ru> Message-ID: >> Is it OK to use a minimal fastcgi configuration for a single file like this: >> >> location ~ ^/piwik/piwik.php$ { > > It doesn't make sense to use regular expression here. Instead, > use exact match location: > > location = /piwik/piwik.php { I'm only using one instance of location = or ^~ and that is on a very frequently used location. Should I keep it that way for performance? The idea is that nginx won't have to evaluate all of the other locations when the frequently used location is accessed. >> fastcgi_pass unix:/run/php-fpm.socket; >> include fastcgi_params; > > The "fastcgi_params" file as shipped with nginx doesn't set the > SCRIPT_FILENAME parameter, and php will likely unable to handle > such a request. 
Use "include fastcgi.conf;" instead if you don't > need SCRIPT_FILENAME customization. I noticed my distro doesn't include any of the following in fastcgi_params and only the first of these in fastcgi.conf: SCRIPT_FILENAME PATH_INFO PATH_TRANSLATED They are all included in fastcgi_params in the example here: http://wiki.nginx.org/PHPFcgiExample Should they all be added to fastcgi_params? - Grant From appa at perusio.net Thu Feb 13 13:53:25 2014 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 13 Feb 2014 14:53:25 +0100 Subject: Path components interpretation by nginx. In-Reply-To: <20140212145107.GH38830@mdounin.ru> References: <20140212105129.GF38830@mdounin.ru> <20140212145107.GH38830@mdounin.ru> Message-ID: This means that if relying solely on nginx we need multiple regexes to extract the parameters (we need to match on both the unescaped and escaped characters) or using Lua we can unescape and do string processing using the Lua libraries to extract the parameters. Correct? ----appa On Wed, Feb 12, 2014 at 3:51 PM, Maxim Dounin wrote: > Hello! > > On Wed, Feb 12, 2014 at 02:16:29PM +0100, Ant?nio P. P. Almeida wrote: > > > Hello Maxim, > > > > Thank you. In fact since I never saw this type of URI before on an API I > > thought that > > trying to use the path segment parameters as a query string argument was > > borderline > > RFC compliant. > > > > The original API I was referring to uses the parameter as an argument > since > > they pass a session token > > as a parameter: > > > > /api;jsessionid=somehash?t=1&q=2 > > > > obviously they have some issues with their API well beyond merely being > non > > RFC 3986 > > compliant :) > > I don't think that passing session id as a path segment parameter > is wrong per se. One can think of it as ""api" path segment, > version for a session specified", and it should work as long as > it's properly handled by the server which implements the API. 
But > it may be non-trivial to work with such URIs in various software, > including nginx, as path segment parameters support is usually > quite limited. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Thu Feb 13 13:59:43 2014 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 13 Feb 2014 14:59:43 +0100 Subject: minimal fastcgi config for 1 file? In-Reply-To: References: <20140213124202.GU38830@mdounin.ru> Message-ID: If you want to run Piwik in a subdirectory of the webroot, then check this post on how to run Piwik and Drupal together. The idea is the same for running Piwik with any other PHP based application. https://groups.drupal.org/node/407348#comment-1012438 ----appa On Thu, Feb 13, 2014 at 2:50 PM, Grant wrote: > >> Is it OK to use a minimal fastcgi configuration for a single file like > this: > >> > >> location ~ ^/piwik/piwik.php$ { > > > > It doesn't make sense to use regular expression here. Instead, > > use exact match location: > > > > location = /piwik/piwik.php { > > I'm only using one instance of location = or ^~ and that is on a very > frequently used location. Should I keep it that way for performance? > The idea is that nginx won't have to evaluate all of the other > locations when the frequently used location is accessed. > > >> fastcgi_pass unix:/run/php-fpm.socket; > >> include fastcgi_params; > > > > The "fastcgi_params" file as shipped with nginx doesn't set the > > SCRIPT_FILENAME parameter, and php will likely unable to handle > > such a request. Use "include fastcgi.conf;" instead if you don't > > need SCRIPT_FILENAME customization. 
> > I noticed my distro doesn't include any of the following in > fastcgi_params and only the first of these in fastcgi.conf: > > SCRIPT_FILENAME > PATH_INFO > PATH_TRANSLATED > > They are all included in fastcgi_params in the example here: > > http://wiki.nginx.org/PHPFcgiExample > > Should they all be added to fastcgi_params? > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Thu Feb 13 14:12:58 2014 From: emailgrant at gmail.com (Grant) Date: Thu, 13 Feb 2014 06:12:58 -0800 Subject: fastcgi & index In-Reply-To: <20140213123429.GT38830@mdounin.ru> References: <20140213123429.GT38830@mdounin.ru> Message-ID: > The "fastcgi_index" directive is to instruct a fastcgi backend > which file to use if a request with an URI ending with "/" is > passed to the backend. That is, it makes sense in a configuration > like this: > > location / { > fastcgi_pass localhost:9000; > fastcgi_index index.php; > include fastcgi.conf; > } > > It doesn't make sense in configurations with only *.php file > passed to fastcgi backends though. E.g., in a configuration like > this it doesn't make sense and should be removed: > > location ~ \.php$ { > fastcgi_pass localhost:9000; > # wrong: fastcgi_index doesn't make sense here > fastcgi_index index.php; > include fastcgi.conf; > } In that case, should it be removed from the example here: http://wiki.nginx.org/PHPFcgiExample - Grant From mdounin at mdounin.ru Thu Feb 13 14:14:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Feb 2014 18:14:11 +0400 Subject: fastcgi & index In-Reply-To: References: <20140213123429.GT38830@mdounin.ru> <20140213132910.GX38830@mdounin.ru> Message-ID: <20140213141411.GY38830@mdounin.ru> Hello! On Thu, Feb 13, 2014 at 02:47:35PM +0100, Ant?nio P. P. 
Almeida wrote:

> No I mean the \.php regex based one.

So now you probably know why top-posting is discouraged. ;)

> It's just that it opens the door to a lot of problems by allowing all .php
> scripts to be
> processed.
>
> Furthermore it's even mentioned on the wiki Pitfalls page:
> http://wiki.nginx.org/Pitfalls#Passing_Uncontrolled_Requests_to_PHP

The trivial and correct fix for the problem mentioned on the wiki is
to properly configure php, with cgi.fix_pathinfo=0.

I would also recommend not allowing php at all under the locations
where you allow untrusted parties to put files - or, rather, only
allow php under locations where untrusted parties are not allowed
to put files, by properly isolating the \.php$ location.

But again, there is nothing wrong with the configuration per se.

--
Maxim Dounin
http://nginx.org/

From emailgrant at gmail.com Thu Feb 13 14:18:07 2014
From: emailgrant at gmail.com (Grant)
Date: Thu, 13 Feb 2014 06:18:07 -0800
Subject: fastcgi & index
In-Reply-To: <20140213141411.GY38830@mdounin.ru>
References: <20140213123429.GT38830@mdounin.ru> <20140213132910.GX38830@mdounin.ru> <20140213141411.GY38830@mdounin.ru>
Message-ID:

>> No I mean the \.php regex based one.
>
> So now you probably know why top-posting is discouraged. ;)
>
>> It's just that it opens the door to a lot of problems by allowing all .php
>> scripts to be
>> processed.
>>
>> Furthermore it's even mentioned on the wiki Pitfalls page:
>> http://wiki.nginx.org/Pitfalls#Passing_Uncontrolled_Requests_to_PHP
>
> The trivial and correct fix for the problem mentioned on the wiki is
> to properly configure php, with cgi.fix_pathinfo=0.
>
> I would also recommend not allowing php at all under the locations
> where you allow untrusted parties to put files - or, rather, only
> allow php under locations where untrusted parties are not allowed
> to put files, by properly isolating the \.php$ location.
>
> But again, there is nothing wrong with the configuration per se.
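[A minimal sketch of that isolation; the /uploads/ path is a hypothetical upload directory, not taken from this thread:]

```nginx
# Regex locations are tried in order of appearance, so this block
# must come first: nothing under /uploads/ is ever passed to the
# PHP backend, even if its name ends in .php.
location ~ ^/uploads/.*\.php$ {
    return 403;
}

location ~ \.php$ {
    fastcgi_pass localhost:9000;
    include fastcgi.conf;
}
```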
Is the example from the wiki a good one to use? location ~ [^/]\.php(/|$) { http://wiki.nginx.org/PHPFcgiExample - Grant From mdounin at mdounin.ru Thu Feb 13 14:20:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Feb 2014 18:20:42 +0400 Subject: fastcgi & index In-Reply-To: References: <20140213123429.GT38830@mdounin.ru> Message-ID: <20140213142042.GZ38830@mdounin.ru> Hello! On Thu, Feb 13, 2014 at 06:12:58AM -0800, Grant wrote: > > The "fastcgi_index" directive is to instruct a fastcgi backend > > which file to use if a request with an URI ending with "/" is > > passed to the backend. That is, it makes sense in a configuration > > like this: > > > > location / { > > fastcgi_pass localhost:9000; > > fastcgi_index index.php; > > include fastcgi.conf; > > } > > > > It doesn't make sense in configurations with only *.php file > > passed to fastcgi backends though. E.g., in a configuration like > > this it doesn't make sense and should be removed: > > > > location ~ \.php$ { > > fastcgi_pass localhost:9000; > > # wrong: fastcgi_index doesn't make sense here > > fastcgi_index index.php; > > include fastcgi.conf; > > } > > In that case, should it be removed from the example here: > > http://wiki.nginx.org/PHPFcgiExample Yes, feel free to do so. -- Maxim Dounin http://nginx.org/ From emailgrant at gmail.com Thu Feb 13 14:26:44 2014 From: emailgrant at gmail.com (Grant) Date: Thu, 13 Feb 2014 06:26:44 -0800 Subject: fastcgi & index In-Reply-To: <20140213141411.GY38830@mdounin.ru> References: <20140213123429.GT38830@mdounin.ru> <20140213132910.GX38830@mdounin.ru> <20140213141411.GY38830@mdounin.ru> Message-ID: > Trivial and correct fix for the problem mentioned on the wiki is > to properly configure php, with cgi.fix_pathinfo=0. I didn't realize the PHP config should be changed for nginx. Are there other important changes to make besides 'cgi.fix_pathinfo=0'? 
- Grant From mdounin at mdounin.ru Thu Feb 13 14:29:13 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Feb 2014 18:29:13 +0400 Subject: minimal fastcgi config for 1 file? In-Reply-To: References: <20140213124202.GU38830@mdounin.ru> Message-ID: <20140213142913.GA38830@mdounin.ru> Hello! On Thu, Feb 13, 2014 at 05:50:18AM -0800, Grant wrote: > >> Is it OK to use a minimal fastcgi configuration for a single file like this: > >> > >> location ~ ^/piwik/piwik.php$ { > > > > It doesn't make sense to use regular expression here. Instead, > > use exact match location: > > > > location = /piwik/piwik.php { > > I'm only using one instance of location = or ^~ and that is on a very > frequently used location. Should I keep it that way for performance? > The idea is that nginx won't have to evaluate all of the other > locations when the frequently used location is accessed. The "location =" is most effective and also clearest way to specify locations if you want to test for an exact match. > >> fastcgi_pass unix:/run/php-fpm.socket; > >> include fastcgi_params; > > > > The "fastcgi_params" file as shipped with nginx doesn't set the > > SCRIPT_FILENAME parameter, and php will likely unable to handle > > such a request. Use "include fastcgi.conf;" instead if you don't > > need SCRIPT_FILENAME customization. > > I noticed my distro doesn't include any of the following in > fastcgi_params and only the first of these in fastcgi.conf: > > SCRIPT_FILENAME > PATH_INFO > PATH_TRANSLATED > > They are all included in fastcgi_params in the example here: > > http://wiki.nginx.org/PHPFcgiExample > > Should they all be added to fastcgi_params? No. The idea is that fastcgi_params include basic parameters, and usable in configurations like: location / { fastcgi_pass ... 
fastcgi_param SCRIPT_FILENAME /path/to/script.php;
    fastcgi_param PATH_INFO $uri;
    include fastcgi_params;
}

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Thu Feb 13 15:12:08 2014
From: nginx-forum at nginx.us (Larry)
Date: Thu, 13 Feb 2014 10:12:08 -0500
Subject: Mmm.. Subrequests anyone ?
Message-ID:

Hello!

I am not sure that I understood this sentence from
http://www.aosabook.org/en/nginx.html :

"However, nginx goes further - not only can filters perform multiple
subrequests and combine the outputs into a single response, but
subrequests can also be nested and hierarchical"

It is pretty clear by itself - that is not the question - but how can
one "stack" a response from multiple different files for one request
by the client?

Does it enable requesting fileA and getting fileA + fileB + fileC?

If true, it is a good way to reduce the req/s over the network.

Any code example (even the most basic)?

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247521,247521#msg-247521

From r at roze.lv Thu Feb 13 15:50:59 2014
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 13 Feb 2014 17:50:59 +0200
Subject: Mmm.. Subrequests anyone ?
In-Reply-To:
References:
Message-ID: <2FDEEC0998F446B29389DE964C81B15E@MasterPC>

> Does it enable requesting fileA and getting fileA + fileB + fileC?
>
> Any code example (even the most basic)?

https://github.com/agentzh/echo-nginx-module#readme
http://wiki.nginx.org/HttpEchoModule

rr

From emailgrant at gmail.com Thu Feb 13 16:07:23 2014
From: emailgrant at gmail.com (Grant)
Date: Thu, 13 Feb 2014 08:07:23 -0800
Subject: minimal fastcgi config for 1 file?
In-Reply-To: <20140213142913.GA38830@mdounin.ru>
References: <20140213124202.GU38830@mdounin.ru> <20140213142913.GA38830@mdounin.ru>
Message-ID:

>> I noticed my distro doesn't include any of the following in
>> fastcgi_params and only the first of these in fastcgi.conf:
>>
>> SCRIPT_FILENAME
>> PATH_INFO
>> PATH_TRANSLATED
>>
>> They are all included in fastcgi_params in the example here:
>>
>> http://wiki.nginx.org/PHPFcgiExample
>>
>> Should they all be added to fastcgi_params?
>
> No. The idea is that fastcgi_params include basic parameters, and
> usable in configurations like:
>
> location / {
>     fastcgi_pass ...
>     fastcgi_param SCRIPT_FILENAME /path/to/script.php;
>     fastcgi_param PATH_INFO $uri;
>     include fastcgi_params;
> }

Should the wiki example be switched from fastcgi_params to fastcgi.conf:

http://wiki.nginx.org/PHPFcgiExample

Also, PATH_INFO and PATH_TRANSLATED appear in the wiki but don't
appear in the shipped files. Should they be removed from the wiki?

- Grant

From emailgrant at gmail.com Thu Feb 13 16:44:34 2014
From: emailgrant at gmail.com (Grant)
Date: Thu, 13 Feb 2014 08:44:34 -0800
Subject: Passing Uncontrolled Requests to PHP
Message-ID:

Does the wiki example mitigate the "Passing Uncontrolled Requests to PHP" risk?

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }

        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }

http://wiki.nginx.org/PHPFcgiExample

http://wiki.nginx.org/Pitfalls#Passing_Uncontrolled_Requests_to_PHP

If not, I'd like to update it.

- Grant

From appa at perusio.net Thu Feb 13 16:51:03 2014
From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=)
Date: Thu, 13 Feb 2014 17:51:03 +0100
Subject: Passing Uncontrolled Requests to PHP
In-Reply-To:
References:
Message-ID:

No, you're just addressing the cgi.fix_pathinfo issue.
If I manage to upload a file called owned.php I can execute it because you don't whitelist the scripts that can be executed. ----appa On Thu, Feb 13, 2014 at 5:44 PM, Grant wrote: > Does the wiki example mitigate the "Passing Uncontrolled Requests to PHP" > risk? > > location ~ [^/]\.php(/|$) { > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > if (!-f $document_root$fastcgi_script_name) { > return 404; > } > > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > include fastcgi_params; > } > > http://wiki.nginx.org/PHPFcgiExample > > http://wiki.nginx.org/Pitfalls#Passing_Uncontrolled_Requests_to_PHP > > If not, I'd like to update it. > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Thu Feb 13 16:57:58 2014 From: emailgrant at gmail.com (Grant) Date: Thu, 13 Feb 2014 08:57:58 -0800 Subject: Passing Uncontrolled Requests to PHP In-Reply-To: References: Message-ID: > No you're just addressing the cgi_fixpathinfo issue. If I manage to upload a > file called owned.php > I can execute it because you don't whitelist the scripts that can be > executed. So disabling PHP execution in user upload directories and using the location block from the wiki should mitigate this risk? - Grant >> Does the wiki example mitigate the "Passing Uncontrolled Requests to PHP" >> risk? >> >> location ~ [^/]\.php(/|$) { >> fastcgi_split_path_info ^(.+?\.php)(/.*)$; >> if (!-f $document_root$fastcgi_script_name) { >> return 404; >> } >> >> fastcgi_pass 127.0.0.1:9000; >> fastcgi_index index.php; >> include fastcgi_params; >> } >> >> http://wiki.nginx.org/PHPFcgiExample >> >> http://wiki.nginx.org/Pitfalls#Passing_Uncontrolled_Requests_to_PHP >> >> If not, I'd like to update it. 
>> >> - Grant From guilherme.e at gmail.com Thu Feb 13 17:42:22 2014 From: guilherme.e at gmail.com (Guilherme) Date: Thu, 13 Feb 2014 15:42:22 -0200 Subject: acess log over nfs hanging In-Reply-To: References: <52F9EDB1.5030203@citrin.ru> Message-ID: Anton, I already had the same issue logging to NFS, but I'm curious about why nginx hang in some nfs failures. Log phase is the last, if there is no post action, so why nginx stop responding in some NFS failures? Do you think that I can ease the situation tunning nfs client config, such as timeout and retrans ? tks On Tue, Feb 11, 2014 at 3:33 PM, David Birdsong wrote: > > > > On Tue, Feb 11, 2014 at 1:30 AM, Anton Yuzhaninov wrote: > >> On 02/07/14 20:28, Jader H. Silva wrote: >> >>> It seems that when some processes are running in the nfs server, the >>> share won't >>> allow writing for some time and I noticed all nginx workers in status D >>> and not >>> processing requests. >>> >> >> I general it is a bad idea to write logs over NFS instead local HDD. >> >> If you need central log store across many servers: >> - write logs locally >> - rotate as often as need (SIGUSR1) >> - copy logs to cental log server (rsync is handy for this, but other >> methods are possible) > > > yeah, try out: http://hekad.readthedocs.org/ > > it's like syslog but can write to pretty much any backend store. > > >> >> >> Are all access.log writes blocking? >>> >> >> yes, blocking >> >> >> If my nfs server shutdown in an unexpected >>> way, will nginx stop proxying requests to the backend or responses to >>> the client? >>> >> >> yes, will stop >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hillb at yosemite.edu Thu Feb 13 18:43:08 2014 From: hillb at yosemite.edu (Brian Hill) Date: Thu, 13 Feb 2014 18:43:08 +0000 Subject: Proxy pass location inheritance Message-ID: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> Hello, we are using NGINX to serve a combination of local and proxied content coming from both an Apache server (mostly PHP content) and IIS 7.5 (a handful of third party .Net applications). The proxy is working properly for the pages themselves, but we wanted set up a separate location block for the "static" files (js, images, etc) to use different caching rules. In theory, each of the static file location blocks should be serving from the location specified in its parent location block, but instead ALL image requests are being routed to the root block. Server A: Contains the root site and all sorts of images. Server B: Contains applications in specific folders, and each folder has local images. A simplified version of our server block: upstream server_a {server 10.64.1.10:80;} upstream server_b {server 10.64.1.20:80;} server { listen 80; server_name www.site.edu; #some irrelevant proxy, cache, and header code goes here # root location location / { proxy_cache_valid 200 301 302 304 10m; #content changes regularly proxy_cache_use_stale error timeout updating; expires 60m; proxy_pass http://server_a; #this is the location for "static" content in the root. It is being called for ALL static files of these types location ~* \.(css|js|png|jpe?g|gif)$ { proxy_cache_valid 200 301 302 304 30d; expires 30d; proxy_pass http://server_a; } } #.net locations on second server location ~* /(app1|app2|app3|app4) { proxy_cache_valid 0s; #no caching in these folders proxy_pass http://server_b; #location for static content in these folders. This is not working. location ~* \.(css|js|png|jpe?g|gif)$ { proxy_cache_valid 200 301 302 304 30d; expires 30d; proxy_pass http://server_b; } } } Three of the four conditions are working properly. 
A request for www.site.edu/index.php gets sent to 10.64.1.10:80/index.php
A request for www.site.edu/image1.gif gets sent to 10.64.1.10:80/image1.gif
A request for www.site.edu/app1/default.aspx gets sent to 10.64.1.20:80/app1/default.aspx

But the last condition is not working properly. A request for
www.site.edu/app1/image2.gif should be sent to
10.64.1.20:80/app1/image2.gif. Instead, it's being routed to
10.64.1.10:80/app1/image2.gif, which is an invalid location. So it
appears that the first server location block is catching ALL of the
requests for the static files.

Anyone have any idea what I'm doing wrong?

BH

From nginx-forum at nginx.us Thu Feb 13 19:23:00 2014
From: nginx-forum at nginx.us (offthedeepnd)
Date: Thu, 13 Feb 2014 14:23:00 -0500
Subject: 500 error when posting, no message in error logs
Message-ID:

Hi All,

I'm running nginx 1.4.1 on OpenBSD 5.4 stable along with php and
php-fpm version 5.3.27 and mysql 5.1.70 on two systems. I'm trying to
install piwigo-2.6.1 and running into an issue on one of the systems,
as indicated by the subject.

When I access the site initially it takes me to the setup screen as
expected on both systems. I enter the data required and click on
"start installation". On one system it installs perfectly in less than
a second; on the second system it returns a page that simply says
"server error". If I click on "more info" it says:

"The website encountered an error while retrieving
http://piwigo.domain.com/install.php?language=en_US. It may be down
for maintenance or configured incorrectly. Reload this webpage. Press
the reload button to resubmit the data needed to load the page. Error
code: 500"

In the access log I see this:

10.0.0.10 - - [13/Feb/2014:12:50:46 -0500] "POST /install.php?language=en_US HTTP/1.1" 500 5 "http://piwigo.domain.com/install.php?language=en_US" "Mozilla/5.0 (X11; OpenBSD amd64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.45 Safari/537.36" "-"

No other information, nothing in the error_log.
I have run diff's on the entire /etc/nginx directory and the only thing that is different between the two systems is the server name in the sites-available/ files. I also diffed the php-fpm.conf files and they are identical, php.ini are identical as well. I would have consulted the piwigo group, but since it's running perfectly on, actually 2 other systems with the same OS/software/config combinations, it seems like there must be something i'm missing on the nginx/php side. I'm banging my head here, any help would be appreciated. I have included my configuration files below. TIA, Aaron working system: nginx.conf: # cat /etc/nginx/nginx.conf user www; worker_processes 1; events { worker_connections 1024; } http { server_tokens off; include mime.types; default_type application/octet-stream; index index.html index.htm index.php; error_log /var/www/logs/error.log debug; keepalive_timeout 65; gzip on; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; include /etc/nginx/sites-enabled/*; } php-fpm.conf: # cat /etc/php-fpm.conf error_log = /var/log/php-fpm.error.log ; alert, error, warning, notice, debug log_level = error ; log_level = debug [www] user = www group = www listen = 127.0.0.1:9000 listen.owner = www listen.group = www listen.mode = 0666 listen.allowed_clients = 127.0.0.1 pm = dynamic pm.max_children = 5 pm.start_servers = 2 pm.min_spare_servers = 1 pm.max_spare_servers = 3 php_admin_value[max_execution_time] = 600 php_admin_value[max_input_time] = 600 piwigo.domain.com: cat piwigo26.domain.com server { listen 10.0.4.16:80; server_name piwigo26.domain.com; root /var/www/piwigo; error_log /var/www/logs/piwigo_error_log; access_log /var/www/logs/piwigo_access_log main; client_max_body_size 5172M; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME 
$document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include fastcgi_params; } location ~ /\.ht { deny all; } } NON-working system: nginx.conf: # cat /etc/nginx/nginx.conf user www; worker_processes 1; events { worker_connections 1024; } http { server_tokens off; include mime.types; default_type application/octet-stream; index index.html index.htm index.php; error_log /var/www/logs/error.log debug; keepalive_timeout 65; gzip on; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; include /etc/nginx/sites-enabled/*; } php-fpm.conf: # cat /etc/php-fpm.conf error_log = /var/log/php-fpm.error.log ; alert, error, warning, notice, debug log_level = error ; log_level = debug [www] user = www group = www listen = 127.0.0.1:9000 listen.owner = www listen.group = www listen.mode = 0666 listen.allowed_clients = 127.0.0.1 pm = dynamic pm.max_children = 5 pm.start_servers = 2 pm.min_spare_servers = 1 pm.max_spare_servers = 3 php_admin_value[max_execution_time] = 600 php_admin_value[max_input_time] = 600 sites-available/piwigo.domain.com: cat workhorse.piwigo.domain.com server { listen 172.17.37.77:80; server_name piwigo.domain.com; root /var/www/piwigo; error_log /var/www/logs/piwigo_error_log debug; access_log /var/www/logs/piwigo_access_log main; client_max_body_size 5172M; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include fastcgi_params; } location ~ /\.ht { deny all; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247530,247530#msg-247530 From agentzh at gmail.com Thu Feb 13 19:54:10 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 13 Feb 2014 11:54:10 -0800 Subject: Mmm.. Subrequests anyone ? 
In-Reply-To: <2FDEEC0998F446B29389DE964C81B15E@MasterPC> References: <2FDEEC0998F446B29389DE964C81B15E@MasterPC> Message-ID: Hello! On Thu, Feb 13, 2014 at 7:50 AM, Reinis Rozitis wrote: >> Does it enables to request fileA and be able to get fileA + fileB + file C >> ? >> Any code example (even the most basic) ? > > https://github.com/agentzh/echo-nginx-module#readme > http://wiki.nginx.org/HttpEchoModule > These two links are just two copies of the same document :) Another example is the ngx.location.capture_multi() Lua API provided by the ngx_lua module: https://github.com/chaoslawful/lua-nginx-module#ngxlocationcapture_multi Regards, -agentzh From nginx-forum at nginx.us Thu Feb 13 20:57:03 2014 From: nginx-forum at nginx.us (michelem) Date: Thu, 13 Feb 2014 15:57:03 -0500 Subject: Serving static files statically with GET, everything else to backend Message-ID: <4bafeea94dd1adad7faa71dbc72e23ad.NginxMailingListEnglish@forum.nginx.org> Hello folks, Maybe this will save some time to someone. I have a setup where I serve a web application as follows: * server A with nginx handles directly as much static content as possible * only requests for URLs requesting dynamic processing go to server B hosting the application server This improves performance and keeps the presentational content online if the application server fails. nginx' try_files takes you 95% there: just put on A a filesystem hierarchy containing the static contents. The missing 5% is how to serve pages containing forms posting to themselves. This means the same URL gets served statically upon GETs or HEADs, and dynamically otherwise. 
Here's the gist of a working setup for this (sorry if the forum messes up formatting): server { root /var/www/www.foobar.com/; # put files for static content here location / { error_page 405 = @appsrv; # remove if() and @appsrv will get as $uri the last try of try_files if ($request_method != GET) { return 405; } try_files $uri $uri/ $uri/index.html @appsrv; } # pass everything to FastCGI location @appsrv { fastcgi_pass 173.244.206.26:9003; } } cheers michele Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247532,247532#msg-247532 From nginx-forum at nginx.us Thu Feb 13 21:08:14 2014 From: nginx-forum at nginx.us (michelem) Date: Thu, 13 Feb 2014 16:08:14 -0500 Subject: sending POSTs to backend In-Reply-To: <3edbd7f9c9b71dd6cb7fe56005e1e651.NginxMailingListEnglish@forum.nginx.org> References: <3edbd7f9c9b71dd6cb7fe56005e1e651.NginxMailingListEnglish@forum.nginx.org> Message-ID: For the archives, I finally used the solution I document here: http://forum.nginx.org/read.php?2,247532,247532#msg-247532 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242944,247533#msg-247533 From nginx-forum at nginx.us Thu Feb 13 22:34:44 2014 From: nginx-forum at nginx.us (eN_Joy) Date: Thu, 13 Feb 2014 17:34:44 -0500 Subject: correct usage of proxy_set_header? Message-ID: <8f97c797f25953c15bd4718da77dd80b.NginxMailingListEnglish@forum.nginx.org> Hello all! I have configured a location that acts like a transparent proxying cache: location /get { set $hostx ""; set $addrs ""; if ( $uri ~ "^/get/http./+([^/]+)/(.+)$") { set $hostx $1; set $addrs $2; } resolver 8.8.8.8; proxy_set_header Referer " "; proxy_pass http://$hostx/$addrs; proxy_redirect off; access_log /var/log/nginx/get_access.log; } Basically when I browse to: http://mysite.com/get/http://foo.com/bar/some.html it'll get: http://foo.com/bar/some.html The code indeed works that way, except that the directive `proxy_set_header referer " "` is ignored. 
No matter what value I supply there (in the code above, empty), my get_access.log will always show the referer being the original refering page? What was wrong in my config? Thanks, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247536,247536#msg-247536 From nginx-forum at nginx.us Fri Feb 14 09:24:39 2014 From: nginx-forum at nginx.us (dinghm) Date: Fri, 14 Feb 2014 04:24:39 -0500 Subject: Nginx 400 error Message-ID: <0a3d5d861c879f14feedb721e3895b93.NginxMailingListEnglish@forum.nginx.org> I use nginx proxy tomcat, but there have been many 400 error, about 10%. But do not use nginx, tomcat can normally handle this part of the request. This is my nginx configuration : user nobody; worker_processes 1; error_log logs/error.log; pid logs/nginx.pid; events { # use [ kqueue | rtsig | epoll | /dev/poll | select | poll ]; use epoll; # web server : max_clients = worker_processes * worker_connections # proxy server : max_clients = worker_processes * worker_connections / 4 worker_connections 8192; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" |body: $request_body' ' $request_time' ' $host'; #access_log logs/access.log main; sendfile on; tcp_nopush on; keepalive_timeout 90; gzip on; gzip_comp_level 5; gzip_http_version 1.0; gzip_min_length 1024; gzip_buffers 4 8k; gzip_types text/plain application/x-javascript text/css application/xml; client_header_buffer_size 32k; large_client_header_buffers 4 32k; #client_max_body_size 2M; #client_body_buffer_size 512k; fastcgi_intercept_errors on; fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_buffer_size 128k; proxy_ignore_client_abort on; #proxy_buffer_size 32k; #proxy_buffers 4 32k; # The following includes are specified for virtual hosts include 
vhosts/*.conf; } upstream download { server 127.0.0.1:8001 weight=1; } server { listen 80; server_name 210.209.116.176 *.newlife-sh.com; access_log logs/download.log main; location /{ proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header REMOTE-HOST $remote_addr; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 50m; client_body_buffer_size 1024k; proxy_intercept_errors on; error_page 400 error; proxy_connect_timeout 90; proxy_send_timeout 120; proxy_read_timeout 300; proxy_buffer_size 4m; proxy_buffers 8 1024k; proxy_busy_buffers_size 4m; proxy_temp_file_write_size 16m; proxy_next_upstream error timeout invalid_header http_500 http_503 http_404; proxy_max_temp_file_size 128m; proxy_pass http://download; } location /nginxstatus { access_log on; allow all; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247540,247540#msg-247540 From nginx-forum at nginx.us Fri Feb 14 09:49:21 2014 From: nginx-forum at nginx.us (dinghm) Date: Fri, 14 Feb 2014 04:49:21 -0500 Subject: Nginx 400 error In-Reply-To: <0a3d5d861c879f14feedb721e3895b93.NginxMailingListEnglish@forum.nginx.org> References: <0a3d5d861c879f14feedb721e3895b93.NginxMailingListEnglish@forum.nginx.org> Message-ID: upstream download { server 127.0.0.1:8001 weight=1; } server { listen 80; server_name 210.209.116.176 *.newlife-sh.com; access_log logs/download.log main; location /{ proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header REMOTE-HOST $remote_addr; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 50m; client_body_buffer_size 1024k; proxy_connect_timeout 90; proxy_send_timeout 120; proxy_read_timeout 300; proxy_buffer_size 4m; proxy_buffers 8 1024k; proxy_busy_buffers_size 4m; proxy_temp_file_write_size 16m; proxy_next_upstream error timeout invalid_header http_500 http_503 http_404; proxy_max_temp_file_size 128m; proxy_pass http://download; } location 
/nginxstatus { access_log on; allow all; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247540,247541#msg-247541 From nginx-forum at nginx.us Fri Feb 14 10:09:28 2014 From: nginx-forum at nginx.us (gaspy) Date: Fri, 14 Feb 2014 05:09:28 -0500 Subject: How to disable PHP output buffering In-Reply-To: <1392274284.136885012.d3surapq@frv34.fwdcdn.com> References: <1392274284.136885012.d3surapq@frv34.fwdcdn.com> Message-ID: <4d55c4a1bc3cddebe6701d6d3c9ade0a.NginxMailingListEnglish@forum.nginx.org> OK, so I modified nginx and php5-fpm to talk on port 9000 and used tcpdump to see the traffic. It looks like it worked as packets arrived at 1 second intervals (the sleep(1) in my code). However in browser it was still the same. After more testing, it turns out there's something in my computer configuration that's causing it. Not sure yet if it's the antivirus or something else, but trying on different computer worked, as it did using lynx on the server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247451,247544#msg-247544 From igor at sysoev.ru Fri Feb 14 10:13:19 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 14 Feb 2014 14:13:19 +0400 Subject: 500 error when posting, no message in error logs In-Reply-To: References: Message-ID: On Feb 13, 2014, at 23:23 , offthedeepnd wrote: > Hi All, > > I'm running nginx 1.4.1 on OpenBSD 5.4 stable along with php and php-fpm > version 5.3.27 and mysql 5.1.70 on two systems. I'm trying to install > piwigo-2.6.1 and running into an issue on of of the systems as indicated by > the subject. > > When I access the site initially it takes me to the setup screen as expected > on both systems. I enter in the data required and click on "start > installation". 
On one system it installs perfectly in less than a second, > in the second system it returns a page that simply says "server error" if i > click on more info it says: > > "The website encountered an error while retrieving > http://piwigo.domain.com/install.php?language=en_US. It may be down for > maintenance or configured incorrectly. > > Reload this webpage. > Press the reload button to resubmit the data needed to load the page. > > Error code: 500" > > In the access log I see this: > > 10.0.0.10 - - [13/Feb/2014:12:50:46 -0500] "POST /install.php?language=en_US > HTTP/1.1" 500 5 "http://piwigo.domain.com/install.php?language=en_US" > "Mozilla/5.0 (X11; OpenBSD amd64) AppleWebKit/537.36 (KHTML, like Gecko) > Chrome/28.0.1500.45 Safari/537.36" "-" > > No other information, nothing in the error_log. Try to log $upstream_status to see if this 500 error is returned by backend. -- Igor Sysoev http://nginx.com From mdounin at mdounin.ru Fri Feb 14 11:48:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Feb 2014 15:48:42 +0400 Subject: acess log over nfs hanging In-Reply-To: References: <52F9EDB1.5030203@citrin.ru> Message-ID: <20140214114842.GB81431@mdounin.ru> Hello! On Thu, Feb 13, 2014 at 03:42:22PM -0200, Guilherme wrote: > Anton, > > I already had the same issue logging to NFS, but I'm curious about why > nginx hang in some nfs failures. Log phase is the last, if there is no post > action, so why nginx stop responding in some NFS failures? Because NFS failures by default result in infinite blocking of any process trying to access NFS shares. And nginx worker processes will be blocked as well and won't be able to handle requests anymore. > Do you think > that I can ease the situation tunning nfs client config, such as timeout > and retrans ? Some hints can be found here: http://mailman.nginx.org/pipermail/nginx/2014-February/042107.html Using "soft" mount option and small timeouts may help a bit. 
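To make the "soft" suggestion concrete, here is an illustrative sketch; the export path, mountpoint, and values are hypothetical and should be tuned for the environment, and this is not a configuration taken from the thread:

```shell
# /etc/fstab entry for a "soft" NFS mount: operations fail with an I/O error
# instead of blocking forever when the NFS server becomes unreachable.
# timeo is in tenths of a second; retrans is the retry count before the
# soft failure is reported back to the calling process (e.g. an nginx worker).
loghost:/export/logs  /var/log/remote  nfs  soft,timeo=30,retrans=2  0  0

# Equivalent one-off mount command (same hypothetical export and mountpoint):
# mount -t nfs -o soft,timeo=30,retrans=2 loghost:/export/logs /var/log/remote
```

With a "hard" mount (the default), a worker writing an access log to the share can hang indefinitely, which matches the behavior described above.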
But I wouldn't recommend using NFS if you want a service which is expected to run independently from your NFS server. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Feb 14 12:04:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Feb 2014 16:04:37 +0400 Subject: correct usage of proxy_set_header? In-Reply-To: <8f97c797f25953c15bd4718da77dd80b.NginxMailingListEnglish@forum.nginx.org> References: <8f97c797f25953c15bd4718da77dd80b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140214120437.GC81431@mdounin.ru> Hello! On Thu, Feb 13, 2014 at 05:34:44PM -0500, eN_Joy wrote: > Hello all! > I have configured a location that acts list a transparent proxying cache: > > > > location /get > { > set $hostx ""; > set $addrs ""; > if ( $uri ~ "^/get/http./+([^/]+)/(.+)$") { > set $hostx $1; > set $addrs $2; > } > resolver 8.8.8.8; > proxy_set_header Referer " "; > proxy_pass http://$hostx/$addrs; > proxy_redirect off; > access_log /var/log/nginx/get_access.log; > } > > Basically when i browser to: > > http://mysite.com/get/http://foo.com/bar/some.html > > it'll get: > http://foo.com/bar/some.html > > The code indeed works that way, except that the directive `proxy_set_header > referer " "` is ignored. No matter what value I supply there (in the code > above, empty), my get_access.log will always show the referer being the > original refering page? The proxy_set_header directive controls headers sent to a proxied server. It has nothing to do with what's logged to the access log on your server. Try looking into logs of the proxied server instead. 
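A small illustrative sketch of this point (a fragment, not the poster's full config): the local access log records $http_referer as received from the client, which proxy_set_header never alters; proxy_set_header only changes what goes upstream, and an empty value suppresses the header there entirely:

```nginx
location /get {
    # " " sends a Referer header whose value is a single space upstream;
    # an empty value ("") would stop the Referer header being passed at all.
    proxy_set_header Referer "";
    proxy_pass http://$hostx/$addrs;

    # $http_referer in any log_format used here is the header as *received
    # from the client*; only the proxied server's own logs show what nginx
    # actually sent to it.
    access_log /var/log/nginx/get_access.log;
}
```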
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Feb 14 12:19:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Feb 2014 16:19:50 +0400 Subject: Proxy pass location inheritance In-Reply-To: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> Message-ID: <20140214121950.GD81431@mdounin.ru> Hello! On Thu, Feb 13, 2014 at 06:43:08PM +0000, Brian Hill wrote: > Hello, we are using NGINX to serve a combination of local and > proxied content coming from both an Apache server (mostly PHP > content) and IIS 7.5 (a handful of third party .Net > applications). The proxy is working properly for the pages > themselves, but we wanted set up a separate location block for > the "static" files (js, images, etc) to use different caching > rules. In theory, each of the static file location blocks should > be serving from the location specified in its parent location > block, but instead ALL image requests are being routed to the > root block. [...] > Three of the four conditions are working properly. > A request for www.site.edu/index.php gets sent to 10.64.1.10:80/index.php > A request for www.site.edu/image1.gif gets sent to 10.64.1.10:80/default.gif > A request for www.site.edu/app1/default.aspx gets sent to 10.64.1.20:80/app1/default.aspx > > But the last condition is not working properly. > A request for www.site.edu/app1/image2.gif should be sent to 10.64.1.20:80/app1/image2.gif. > Instead, it's being routed to 10.64.1.10:80/app1/image2.gif, which is an invalid location. > > So it appears that the first server location block is catching > ALL of the requests for the static files. Anyone have any idea > what I'm doing wrong? Simplified: location / { location ~ regex1 { # regex inside / } } location ~ regex2 { # regex } The question is: where a request matching regex1 and regex2 will be handled? The answer is - in "location ~ regex1". 
Locations given by regular expressions within a matching prefix location are tested before other locations given by regular expressions. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Feb 14 13:12:21 2014 From: nginx-forum at nginx.us (Larry) Date: Fri, 14 Feb 2014 08:12:21 -0500 Subject: Mmm.. Subrequests anyone ? In-Reply-To: References: Message-ID: Many thanks I will dig into it :) See you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247521,247554#msg-247554 From nginx-forum at nginx.us Fri Feb 14 14:47:48 2014 From: nginx-forum at nginx.us (sebor) Date: Fri, 14 Feb 2014 09:47:48 -0500 Subject: Compile error with OpenSSL source on Solaris 11 sparc Message-ID: <1c33eb8c4f6893dc49b2f3c69f207b5a.NginxMailingListEnglish@forum.nginx.org> Hi. My system: uname -a: SunOS web-srv 5.11 11.1 sun4v sparc sun4v My configure line: ./configure --prefix=/opt/nginx --group=webservd --user=webservd --with-cc-opt='-I ../pcre-8.34' --with-cpu-opt=sparc64 --with-pcre=../pcre-8.34 --with-zlib=../zlib-1.2.8 --with-http_sub_module --with-http_gzip_static_module --with-http_ssl_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=../openssl-1.0.1f --with-http_dav_module --with-http_flv_module --with-cc=/opt/solarisstudio12.3/bin/cc --with-cc-opt=-m64 --with-ld-opt=-m64 It produces a lot of warnings: cc: Warning: -xarch=v8plus is deprecated, use -m32 -xarch=sparc instead Closer to the end: ld: warning: file ../openssl-1.0.1f/.openssl/lib/libssl.a(s23_meth.o): wrong ELF class: ELFCLASS32 ld: warning: file ../openssl-1.0.1f/.openssl/lib/libcrypto.a(cryptlib.o): wrong ELF class: ELFCLASS32 And finally: ld: fatal: symbol referencing errors. 
No output written to objs/nginx *** Error code 2 make: Fatal error: Command failed for target `objs/nginx' Current working directory /export/home/user/nginx-1.5.10 *** Error code 1 make: Fatal error: Command failed for target `build If I compile without the OpenSSL source (i.e. remove --with-openssl=../openssl-1.0.1f from configure), then the compilation succeeds. The Solaris 11 system has OpenSSL 1.0.0j (10 May 2012). Any ideas? May need to fix the OpenSSL source for a normal precompile by nginx? I can provide additional information (in the form of files objs/Makefile, objs/autoconf.err etc.), if necessary. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247561,247561#msg-247561 From nginx-forum at nginx.us Fri Feb 14 16:07:21 2014 From: nginx-forum at nginx.us (verofei) Date: Fri, 14 Feb 2014 11:07:21 -0500 Subject: Nginx not passing http request to backend server Message-ID: <8d719142df95857e35a5f77b69348b34.NginxMailingListEnglish@forum.nginx.org> Hi, I have configured Nginx as a reverse-proxy to pass http requests to backend servers. When I try to see the domains they all show me the default page of Nginx. The config file is in /etc/nginx/conf.d These are the entries that I have for one of the domains server { listen 80; server_name test.verofei.local; location / { proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://test.verofei.local/; } } So how can I configure nginx to act as a frontend reverse proxy for our backend webservers? Thanx, Fei Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247568,247568#msg-247568 From nginx-forum at nginx.us Fri Feb 14 21:39:23 2014 From: nginx-forum at nginx.us (atarob) Date: Fri, 14 Feb 2014 16:39:23 -0500 Subject: headers_in_hash Message-ID: <5ae184c4347b8f3cc761020253cb4fea.NginxMailingListEnglish@forum.nginx.org> Creating a module, I want to read in from config desired http header fields. 
Then, during config still, I want to get the struct offset for the fields that have dedicated pointers in the header_in struct. It seems that when I access headers_in_hash from the main config, it is uninitialized. I can see in the code that there is ngx_http_init_headers_in_hash(ngx_conf_t *cf, ngx_http_core_main_conf_t *cmcf) in ngx_http.c. It seems to be called when the main conf is being generated though I am not certain yet. Where and when exactly is headers_in_hash initialized? If I wanted to read from it during ngx_http_X_merge_loc_conf(), what would I need to do? Or am I supposed to do it at some point later? Thanks in advance, Ata Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247572,247572#msg-247572 From kev at inburke.com Fri Feb 14 21:50:23 2014 From: kev at inburke.com (Kevin Burke) Date: Fri, 14 Feb 2014 13:50:23 -0800 Subject: Retrying proxy_pass on connection failure or similar Message-ID: Hello, I was wondering if proxy_pass has any ability to retry ECONNREFUSED, or a connection timeout, or similar? It looks like proxy_next_upstream is fairly close to what I want, though I would rather not configure a second server; I just want the initial request to be more robust. Similarly the upstream directive is close to what I want; you can just add the same server name multiple times to allow for multiple retries. However, this gets more complicated if you are passing in a backend URI via a variable (say, with X-Accel-Redirect). Would be nice to have a `proxy_connection_retries` or similar. -- Kevin Burke phone: 925.271.7005 | kev.inburke.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Feb 14 21:59:26 2014 From: nginx-forum at nginx.us (atarob) Date: Fri, 14 Feb 2014 16:59:26 -0500 Subject: inlining Message-ID: Looking through the codebase, I see a lot of very short helper like functions that are defined in .c files with prototypes in .h files. 
This means that the compiler cannot inline them outside of that .c file. Am I wrong? How is that not a performance hit? Ata. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247575,247575#msg-247575 From nginx-forum at nginx.us Fri Feb 14 22:11:37 2014 From: nginx-forum at nginx.us (jove4015) Date: Fri, 14 Feb 2014 17:11:37 -0500 Subject: Proxy cache passes GET instead of HEAD to upstream Message-ID: <871cfa40a9251fbb6b2103ee6b7404d8.NginxMailingListEnglish@forum.nginx.org> I'm trying to figure out how to get Nginx to proxy cache HEAD requests as HEAD requests and I can't find any info on google. Basically, I have an upstream server in another datacenter, far away, that I am proxying requests to. I'm caching those requests, so the next time the request comes in, we don't have to go to the other datacenter to get the file. Sometimes, the application needs to check the existence of a file, but doesn't actually need to read the file, so we wanted to set it up to make a HEAD request for the file. When nginx gets the HEAD request, and forwards to the upstream server, it sends the upstream server a GET request instead. This means long delays for the application as it downloads all the data for all the files it is checking (some 50MB of files), when it only needs to know if the files are there. I do notice that if I turn off the proxy cache, then the HEAD requests get sent through to the upstream as HEAD requests, like one would expect. However, turning off the cache defeats the whole purpose of setting this up. Is there any way to set up the proxy to the upstream so that it proxies HEAD requests as HEAD requests, and GET requests as normal, and still caches responses as well? 
Ideally it would cache all GET responses, check against the cache for HEAD requests, and proxy HEAD requests for objects not in the cache as HEAD requests (thereby not caching the response, or only caching the HEAD response as a separate object - ideally the latter but I could live with the former). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247577,247577#msg-247577 From vbart at nginx.com Fri Feb 14 22:23:26 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 15 Feb 2014 02:23:26 +0400 Subject: Proxy cache passes GET instead of HEAD to upstream In-Reply-To: <871cfa40a9251fbb6b2103ee6b7404d8.NginxMailingListEnglish@forum.nginx.org> References: <871cfa40a9251fbb6b2103ee6b7404d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4788345.6Zf7HEts4H@vbart-laptop> On Friday 14 February 2014 17:11:37 jove4015 wrote: [..] > Is there any way to set up the proxy to the upstream so that it proxies HEAD > requests as HEAD requests, and GET requests as normal, and still caches > responses as well? Ideally it would cache all GET responses, check against > the cache for HEAD requests, and proxy HEAD requests for objects not in the > cache as HEAD requests (thereby not caching the response, or only caching > the HEAD response as a separate object - ideally the latter but I could live > with the former). > You can set: proxy_cache_methods GET; in this case HEAD requests will bypass cache. wbr, Valentin V. Bartenev From semenukha at gmail.com Fri Feb 14 22:50:44 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Fri, 14 Feb 2014 17:50:44 -0500 Subject: inlining In-Reply-To: References: Message-ID: <10242732.hzlcHAHcZS@tornado> I suppose they will be inlined at -O2 level: http://linux.die.net/man/1/gcc -finline-small-functions Integrate functions into their callers when their body is smaller than expected function call code (so overall size of program gets smaller). 
The compiler heuristically decides which functions are simple enough to be worth integrating in this way. Enabled at level -O2. On Friday, February 14, 2014 04:59:26 PM atarob wrote: > Looking through the codebase, I see a lot of very short helper like > functions that are defined in .c files with prototypes in .h files. This > means that the compiler cannot inline them outside of that .c file. Am I > wrong? How is that not a performance hit? > > Ata. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247575,247575#msg-247575 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Best regards, Styopa Semenukha. From nginx-forum at nginx.us Fri Feb 14 23:24:20 2014 From: nginx-forum at nginx.us (atarob) Date: Fri, 14 Feb 2014 18:24:20 -0500 Subject: inlining In-Reply-To: <10242732.hzlcHAHcZS@tornado> References: <10242732.hzlcHAHcZS@tornado> Message-ID: <0c3f40ee1f96dad66d8bbc03775c5f86.NginxMailingListEnglish@forum.nginx.org> > On Friday, February 14, 2014 04:59:26 PM atarob wrote: > > Looking through the codebase, I see a lot of very short helper like > > functions that are defined in .c files with prototypes in .h files. > This > > means that the compiler cannot inline them outside of that .c file. > > Am I > > wrong? How is that not a performance hit? > I suppose they will be inlined at -O2 level: > http://linux.die.net/man/1/gcc > Think about it. -O2 is a compiler flag. Optimization happens at compile time, NOT at link time. If the function is defined in another .c file, how can the compiler, which compiles one .c file at a time, possibly have access to the definition, let alone inline it? That's why we put inline functions that are supposed to be called by other .c files in a .h file. Am I wrong? Ata. 
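As a sketch of the header-based approach being described (the `clamp` helper is a hypothetical example, not a function from the nginx source): a short helper defined `static inline` in a header is visible in full to every .c file that includes it, so each translation unit's compiler invocation can inline the call without any link-time machinery:

```c
/* util.h (conceptual): the full definition lives in the header, so every
 * translation unit that includes it sees the body and can inline calls.
 * "static" gives each .c file its own private copy, avoiding duplicate
 * symbol errors when several translation units include the header. */
static inline int
clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}
```

If the definition instead lived only in one .c file with just a prototype in the header, cross-file inlining would require whole-program/link-time optimization (e.g. gcc's -flto), which is the other route mentioned in this thread.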
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247575,247582#msg-247582 From mehta.pankaj at gmail.com Sat Feb 15 00:00:24 2014 From: mehta.pankaj at gmail.com (Pankaj Mehta) Date: Sat, 15 Feb 2014 00:00:24 +0000 Subject: inlining In-Reply-To: <0c3f40ee1f96dad66d8bbc03775c5f86.NginxMailingListEnglish@forum.nginx.org> References: <10242732.hzlcHAHcZS@tornado> <0c3f40ee1f96dad66d8bbc03775c5f86.NginxMailingListEnglish@forum.nginx.org> Message-ID: These should be covered during the link time optimisations. Look here for gcc: http://gcc.gnu.org/wiki/LinkTimeOptimization And here for Visual Studio : http://msdn.microsoft.com/en-us/library/xbf3tbeh.aspx Cheers Pankaj On 14 February 2014 23:24, atarob wrote: > > On Friday, February 14, 2014 04:59:26 PM atarob wrote: > > > Looking through the codebase, I see a lot of very short helper like > > > functions that are defined in .c files with prototypes in .h files. > > This > > > means that the compiler cannot inline them outside of that .c file. > > > Am I > > > wrong? How is that not a performance hit? > > > I suppose they will be inlined at -O2 level: > > http://linux.die.net/man/1/gcc > > > > Think about it. -O2 is a compiler flag. Optimization happens at compile > time. NOT at link time. If the function is defined in another .c file, how > can the compiler, which compiles on .c file at a time, possibly have access > to the definition, let alone inline it. That's why we put inline function > that are supposed to be called by other .c files in a .h file. Am I wrong? > > Ata. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,247575,247582#msg-247582 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Feb 15 11:29:00 2014 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Sat, 15 Feb 2014 12:29:00 +0100 Subject: Minimal configuration Message-ID: One could imagine a minimal server configured as such: in */nginx/nginx.conf: http { #All default http stuff, like MIME type inclusion, etc. include conf.d/*.conf } in */nginx/conf.d/default.conf server { listen 80; index index.html index.htm; try_files $uri $uri/ /; } However, unless I made some mistake, it seems that such a configuration returns an HTTP 500 error (with a 'too much internal redirection' error). Does nginx require any location block at all? Or am I mistaken? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at zeitgeist.se Sat Feb 15 13:21:17 2014 From: alex at zeitgeist.se (Alex) Date: Sat, 15 Feb 2014 14:21:17 +0100 Subject: Way to pass all HTTP requests headers verbatim to PHP-FastCGI server? Message-ID: <61550FE3-82CF-4B63-8F70-3DB5DB6BA46F@postfach.slogh.com> Hi, I need to access a client's unmodified HTTP request headers from PHP. Using standard PHP variables ($_SERVER etc.) isn't an option since these store headers in normalized form; but I would need to have them verbatim, so for instance, "Keep-Alive" versus "keep-alive" matters. Running PHP with Apache (whether as CGI or as a module) would allow you to access the headers via the apache_request_headers() PHP function. Do you know of a way I could pass request headers from Nginx to PHP without having them modified? Thanks in advance. Alex From alex at zeitgeist.se Sat Feb 15 16:39:16 2014 From: alex at zeitgeist.se (Alex) Date: Sat, 15 Feb 2014 17:39:16 +0100 Subject: Way to pass all HTTP requests headers verbatim to PHP-FastCGI server? In-Reply-To: <61550FE3-82CF-4B63-8F70-3DB5DB6BA46F@postfach.slogh.com> References: <61550FE3-82CF-4B63-8F70-3DB5DB6BA46F@postfach.slogh.com> Message-ID: <3A12A1A2-D761-4E1C-9B54-0AEC7C51060A@postfach.slogh.com> My apologies - I should have looked more closely. 
Looks like the header data in the $_SERVER variable is AS IS; no modification taking place. Best, Alex On 2014-02-15 14:21, Alex wrote: > Hi, From ianevans at digitalhit.com Sat Feb 15 20:30:15 2014 From: ianevans at digitalhit.com (Ian Evans) Date: Sat, 15 Feb 2014 15:30:15 -0500 Subject: grunt deploy and nginx root dir Message-ID: <4c43b9577fd5215bfca8b52036934288@digitalhit.com> Just curious if anyone out there is using grunt and how you deal with the possibility of race conditions, i.e. people trying to access old assets just as grunt is creating/renaming new ones. I'd be interested to hear how people are handling this. I heard one example might be to create a new build directory and create a symlink to the name of nginx's root dir. From nginx-forum at nginx.us Sat Feb 15 21:08:04 2014 From: nginx-forum at nginx.us (okayyyyy) Date: Sat, 15 Feb 2014 16:08:04 -0500 Subject: Nginx ask me to watch error log but error log is empty Message-ID: <0eb9eef980968de649575487fd876dae.NginxMailingListEnglish@forum.nginx.org> Hello, Sometimes I get a 403 error also "Un error occured. Faithfully yours, nginx." with nothing in error.log Already checked /var/log/nginx/error.log /usr/local/nginx/logs/error.log Both of them are chmod in 777 I'm tried to configure rutorrent but I can't even access, Got a 403 forbiden ... I use domain2.com conf Any one can help me to understand ?
Here is my conf nginx.conf: #user nobody; worker_processes 2; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; access_log on; error_log on; #charset koi8-r; #access_log logs/host.access.log main; root /usr/local/nginx/html; #index index.php; location / { index index.php index.html; } location ~ .php$ { fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:9000; } #} location /rt { auth_basic "Restricted Area"; auth_basic_user_file htpasswd; } location /RPC2 { scgi_pass localhost:5000; include scgi_params; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } server { listen 80; server_name domain1.com; access_log off; error_log off; #charset koi8-r; #access_log logs/host.access.log main; root /usr/local/nginx/html; #index index.php; location / { index index.html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } server { listen 80; server_name domain2.com; access_log off; error_log off; #charset koi8-r; #access_log logs/host.access.log main; root /var/www/d2; #index index.php; location / { index index.php index.html; auth_basic "Restricted"; auth_basic_user_file htpasswd; } # Allow all files in this location to be downloaded #location ~ ^.*/files/.*$ {} # setup the php-fpm pass-trough #location / { location ~ .php$ { fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 
127.0.0.1:9000; } #} location /RPC2 { scgi_pass localhost:5000; include scgi_params; } # hide protected files location ~*.(.htaccess|engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(.php)?|xtmpl)$ { deny all; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } server { listen 80; server_name domain3.com; access_log off; error_log off; root /var/www/d3; location / { index index.php index.html; } location ~ .php$ { fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:9000; } # hide protected files location ~*.(.htaccess|engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(.php)?|xtmpl)$ { deny all; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247597,247597#msg-247597 From vbart at nginx.com Sat Feb 15 21:12:19 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Sun, 16 Feb 2014 01:12:19 +0400 Subject: Nginx ask me to watch error log but error log is empty In-Reply-To: <0eb9eef980968de649575487fd876dae.NginxMailingListEnglish@forum.nginx.org> References: <0eb9eef980968de649575487fd876dae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2946992.MWCNSVBZyh@vbart-laptop> On Saturday 15 February 2014 16:08:04 okayyyyy wrote: > Hello, > > Sometimes I get a 403 error also "Un error occured. Faithfully yours, > nginx." with nothing in error.log > > Already checked > /var/log/nginx/error.log > /usr/local/nginx/logs/error.log > > Both of them are chmod in 777 > > I'm tried to configure rutorrent but I can't even access, Got a 403 forbiden > ... I use domain2.com conf > > Any one can help me to understand ? > > Here is my conf nginx.conf: > [..] > server { > listen 80; > server_name localhost; > > access_log on; > error_log on; [..] > server { > listen 80; > server_name domain1.com; > > access_log off; > error_log off; [..] > server { > listen 80; > server_name domain2.com; > > access_log off; > error_log off; [..] > server { > listen 80; > server_name domain3.com; > > access_log off; > error_log off; [..] So, you should check file with name "off" in working directory. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sat Feb 15 21:20:17 2014 From: nginx-forum at nginx.us (okayyyyy) Date: Sat, 15 Feb 2014 16:20:17 -0500 Subject: Nginx ask me to watch error log but error log is empty In-Reply-To: <2946992.MWCNSVBZyh@vbart-laptop> References: <2946992.MWCNSVBZyh@vbart-laptop> Message-ID: <4c8441dc11edb85843395c0ca7bac626.NginxMailingListEnglish@forum.nginx.org> Which working directory ? The domain2.com own? If yes no file with off on it. 
My english is not really good sorry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247597,247599#msg-247599 From francis at daoine.org Sat Feb 15 21:55:03 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 15 Feb 2014 21:55:03 +0000 Subject: Minimal configuration In-Reply-To: References: Message-ID: <20140215215503.GI24015@craic.sysops.org> On Sat, Feb 15, 2014 at 12:29:00PM +0100, B.R. wrote: > One could imagine a minimal server configured as such: If you want the minimal config, start with an empty file and see what you need to add to get it to work. I suspect that === events{} http{ server{} } === will probably work usefully. > server { > listen 80; > index index.html index.htm; > > try_files $uri $uri/ /; > } > > However, unless I made some mistake, it seems that such a configuration > returns a HTTP 500 error (with a 'too much internal redirection' error). What request did you make? What response did you expect? What did the error log say? What did the debug log say? > Does nginx require any location block at all? No. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Feb 15 22:42:34 2014 From: nginx-forum at nginx.us (NetCompany) Date: Sat, 15 Feb 2014 17:42:34 -0500 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> Message-ID: any chance this patch / change will be added to the default release? We run nginx-full 1.4.4 from Debian Wheezy Backports and the 'issue' described above still seems to apply (no hostname / SSL support for backend server). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,247602#msg-247602 From vbart at nginx.com Sat Feb 15 22:47:55 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Sun, 16 Feb 2014 02:47:55 +0400 Subject: Nginx ask me to watch error log but error log is empty In-Reply-To: <4c8441dc11edb85843395c0ca7bac626.NginxMailingListEnglish@forum.nginx.org> References: <2946992.MWCNSVBZyh@vbart-laptop> <4c8441dc11edb85843395c0ca7bac626.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3564297.lDpaVIAEfZ@vbart-laptop> On Saturday 15 February 2014 16:20:17 okayyyyy wrote: > Which working directory ? The domain2.com own? > > If yes no file with off on it. > > My english is not really good sorry > Working directory of nginx, see "prefix" option: http://nginx.org/en/docs/configure.html Actually that was a hint to you to remove these strange settings, since it's not very convenient to keep logs in files with names "on" and "off". Check the docs: http://nginx.org/r/error_log (the "error_log" directive has no special "on" or "off" flags). wbr, Valentin V. Bartenev From siefke_listen at web.de Sun Feb 16 02:40:27 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Sun, 16 Feb 2014 03:40:27 +0100 Subject: Webdav Access Message-ID: <20140216034027.2a7ecce86e59157a3a8564b7@web.de> Hello, I have read that I can use nginx as a WebDAV server. I have set up the config from the wiki, but the access control is giving me trouble. Can I set it up so that everything is allowed from my DynDNS address? Or, better, authentication with system users? Thank you for help & Nice Day Silvio From nginx-forum at nginx.us Sun Feb 16 05:09:21 2014 From: nginx-forum at nginx.us (dukzcry) Date: Sun, 16 Feb 2014 00:09:21 -0500 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> Message-ID: <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> Hello. I plan to beat all the ugliness and produce a clean patch very soon. I'll let you know.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,247608#msg-247608 From nginx-forum at nginx.us Sun Feb 16 05:11:09 2014 From: nginx-forum at nginx.us (dukzcry) Date: Sun, 16 Feb 2014 00:11:09 -0500 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> Message-ID: <74e2571fe88da430a609870b92c6197e.NginxMailingListEnglish@forum.nginx.org> P.S.: I think then it may be examined for inclusion into nginx, but not now in its current form :-) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,247609#msg-247609 From philipp.kraus at tu-clausthal.de Sun Feb 16 06:58:33 2014 From: philipp.kraus at tu-clausthal.de (Philipp Kraus) Date: Sun, 16 Feb 2014 07:58:33 +0100 Subject: rewrite except one directory Message-ID: Hello, I'm using the following rewrite options { try_files $uri @pico; } location @pico { rewrite ^(.*)$ /index.php; } So I would like to disable the rewrite for one subfolder. I have some PHP scripts which should not use the rewrite call e.g. http://myserver/scripts/script1.php. All scripts which are in the /scripts location should not use the rewrite to the index. How can I do this? Thanks Phil From philipp.kraus at tu-clausthal.de Sun Feb 16 07:03:57 2014 From: philipp.kraus at tu-clausthal.de (Philipp Kraus) Date: Sun, 16 Feb 2014 08:03:57 +0100 Subject: HTTP-Post data with Tomcat Message-ID: <3C6FB39B-50C1-4932-95A1-61AFF8AC78B2@tu-clausthal.de> Hello, I'm using Nginx as proxy for a Tomcat 7.
My configuration shows: upstream tomcat { server 127.0.0.1:9090 fail_timeout=0; } location /jenkins { proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass https://tomcat; } I'm using Jenkins and some other war files. Some of the containers can store binary files, so the container has got an upload form which uses HTTP POST data. I cannot upload any data with this configuration; a native connection to Tomcat works fine, so imho the proxy configuration of Nginx seems to create some problems. How can I modify my configuration so that HTTP POST data is passed from Nginx to Tomcat for uploading binary files? Thanks Phil From francis at daoine.org Sun Feb 16 09:48:12 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 16 Feb 2014 09:48:12 +0000 Subject: rewrite except one directory In-Reply-To: References: Message-ID: <20140216094812.GL24015@craic.sysops.org> On Sun, Feb 16, 2014 at 07:58:33AM +0100, Philipp Kraus wrote: Hi there, > So I would like to disable the rewrite for one subfolder. I have some PHP scripts which should not use the > rewrite call e.g. http://myserver/scripts/script1.php. All scripts which are in the /scripts location should not > use the rewrite to the index. > > How can I do this? The same way any prefix-based nginx config would be done, unless there's a reason it fails?
== location ^~ /scripts/ { # do your /scripts/ stuff } location / { # do everything else } == f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Feb 16 09:50:51 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 16 Feb 2014 09:50:51 +0000 Subject: HTTP-Post data with Tomcat In-Reply-To: <3C6FB39B-50C1-4932-95A1-61AFF8AC78B2@tu-clausthal.de> References: <3C6FB39B-50C1-4932-95A1-61AFF8AC78B2@tu-clausthal.de> Message-ID: <20140216095051.GM24015@craic.sysops.org> On Sun, Feb 16, 2014 at 08:03:57AM +0100, Philipp Kraus wrote: Hi there, > I cannot upload any data with this configuration; a native connection to Tomcat > works fine, so imho the proxy configuration of Nginx seems to create some problems. What request do you make? What response do you get? What response do you expect? What do the logs say? f -- Francis Daly francis at daoine.org From narobycaronte at gmail.com Sun Feb 16 11:58:00 2014 From: narobycaronte at gmail.com (Paulo Ferreira) Date: Sun, 16 Feb 2014 11:58:00 +0000 Subject: Nginx like Rewrite feature (.htaccess) doesn't seem to be working (Help) Message-ID: Good Morning. I'm making a site and I want to make the links more usable, so I used Apache's RewriteCond, which is the one I'm more comfortable with. I know that there are a site or two that let me convert the rules, and I used them; the thing is, neither of them seems to be working.
this is what I have RewriteEngine On RewriteCond %(REQUEST_FILENAME) !-d RewriteCond %(REQUEST_FILENAME) !-f RewriteCond %(REQUEST_FILENAME) !-i RewriteRule ^(.+)$ index.php?url=$1 [QSA,L] One gave me this (http://winginx.com/htaccess) # nginx configuration location / { rewrite ^(.+)$ /index.php?url=$1 break; } and the other gave me this ( http://www.anilcetin.com/convert-apache-htaccess-to-nginx/) if (!-d %(REQUEST_FILENAME)){ set $rule_0 1$rule_0; } if (!-f %(REQUEST_FILENAME)){ set $rule_0 2$rule_0; } if (%(REQUEST_FILENAME) !~ "-i"){ set $rule_0 3$rule_0; } if ($rule_0 = "321"){ rewrite ^/(.+)$ /index.php?url=$1 last; } My configuration is as follows http://pastebin.com/DrA4aAii My question is... Do I have to add some extra configuration to make it work, or do I have to install something extra? I forgot to mention I'm using Debian 7.3 (Wheezy) and my nginx version is still 1.2.1. Don't know if it's important or not, but my MySQL client is MariaDB version 5.5.35 Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppc.lists at gmail.com Sun Feb 16 12:27:13 2014 From: ppc.lists at gmail.com (Pradip Caulagi) Date: Sun, 16 Feb 2014 17:57:13 +0530 Subject: WSGIScriptAlias Message-ID: <5300AEA1.9040806@gmail.com> I want to put my Django application behind Nginx. I have got it working with Apache as - WSGIScriptAlias /app /home/ubuntu/project/settings/wsgi.py What is the equivalent of this in Nginx? I have tried various options but nothing works completely. I currently have - location /app/ { rewrite ^/app/(.*) /$1 break; proxy_pass http://127.0.0.1:8000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } So my Django project is running on 8000. I want a request to http://localhost/app/admin to be submitted to my project as http://localhost:8000/admin But the Apache rule above and the Nginx rules aren't completely the same.
The POST requests (for admin login) don't work as expected. My impression is that Nginx is getting confused with redirects from Django. So does anybody know what is the equivalent of the Apache rule above in Nginx, or what would be the correct way to do this in Nginx? Thanks, Pradip P Caulagi From nginx-forum at nginx.us Sun Feb 16 15:41:40 2014 From: nginx-forum at nginx.us (NetCompany) Date: Sun, 16 Feb 2014 10:41:40 -0500 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4784f9fe36b38297a526f1ba7537a4a7.NginxMailingListEnglish@forum.nginx.org> dukzcry, that would be very very very nice :-) We are busy with a big change in our email environment (serving +/- 10.000 mailboxes). Currently we use Dovecot with NFS (IPsec) filesystems but we would like to change this to a firewalled dovecot backend + NginX as frontend. Looking forward to your patch and maybe inclusion in an upstream version! Kasper Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,247617#msg-247617 From reallfqq-nginx at yahoo.fr Sun Feb 16 19:43:45 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 16 Feb 2014 20:43:45 +0100 Subject: Minimal configuration In-Reply-To: <20140215215503.GI24015@craic.sysops.org> References: <20140215215503.GI24015@craic.sysops.org> Message-ID: Thanks for your input, Francis. What I suspected seemed old but I really don't understand the problem I am facing.
Consider the following server configuration for some phpBB forum: server { listen 80; server_name b.cd; try_files $uri $uri/; root /var/www/phpBB3; index index.html index.htm index.php; include fastcgi.conf; location /favicon.ico { access_log off; log_not_found off; expires 7d; return 204; } location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; } } Requesting b.cd in the browser ends up with an HTTP 500 error: '[error] 12345#0: *42 rewrite or internal redirection cycle while internally redirecting to "/", client: 12.34.56.78, server: b.cd, request: "GET / HTTP/1.1", host: "b.cd"' Adding / at the end of try_files so it reads: try_files $uri $uri/ /; solves the problem... Really, I don't get it. Any insight on what is going on? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From philipp.kraus at tu-clausthal.de Sun Feb 16 19:57:47 2014 From: philipp.kraus at tu-clausthal.de (Philipp Kraus) Date: Sun, 16 Feb 2014 20:57:47 +0100 Subject: rewrite except one directory In-Reply-To: <20140216094812.GL24015@craic.sysops.org> References: <20140216094812.GL24015@craic.sysops.org> Message-ID: Hi, thanks for your answer, it seems to be working > == > location ^~ /scripts/ { > # do your /scripts/ stuff > } > location / { > # do everything else > } > == I have defined my script location with: location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; include /etc/nginx/fastcgi_params; } location ^~/scripts { alias /home/www/content/scripts; } but in my alias path there are stored PHP scripts and at the moment I get a download of the script but the script is not pushed to the fast-cgi process. How can I enable the PHP CGI for *.php files in the script location?
Thanks Phil From jim at ohlste.in Sun Feb 16 20:38:07 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Sun, 16 Feb 2014 15:38:07 -0500 Subject: rewrite except one directory In-Reply-To: References: <20140216094812.GL24015@craic.sysops.org> Message-ID: <530121AF.8030406@ohlste.in> Hello, On 2/16/14, 2:57 PM, Philipp Kraus wrote: > Hi > > thanks for your answer, seems to be working > >> == >> location ^~ /scripts/ { >> # do your /scripts/ stuff >> } >> location / { >> # do everything else >> } >> == > > > I have defined my script location with: > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9000; > include /etc/nginx/fastcgi_params; > } > > location ^~/scripts > { > alias /home/www/content/scripts; > } > > but in my alias path there are stored PHP scripts and at the moment > I get a download of the script but the script is not pushed to the fast-cgi > process. How can I enable the PHP CGI for *.php files in the script > location? > With a nested location, or, if all the contents of /home/www/content/scripts are PHP scripts, use a fastcgi_pass. Remember, all requests are handled by one location, and one location only. Writing instructions for how to handle PHP scripts in one location does not tell nginx how to handle them in other locations. So: location ^~/scripts { alias /home/www/content/scripts; fastcgi_pass 127.0.0.1:9000; include /etc/nginx/fastcgi_params; } Or, if you have content there other than PHP scripts in the alias location: location ^~/scripts { alias /home/www/content/scripts; location ~\.php$ { fastcgi_pass 127.0.0.1:9000; include /etc/nginx/fastcgi_params; } } -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference."
- Mark Twain From philipp.kraus at tu-clausthal.de Sun Feb 16 20:54:47 2014 From: philipp.kraus at tu-clausthal.de (Philipp Kraus) Date: Sun, 16 Feb 2014 21:54:47 +0100 Subject: rewrite except one directory In-Reply-To: <530121AF.8030406@ohlste.in> References: <20140216094812.GL24015@craic.sysops.org> <530121AF.8030406@ohlste.in> Message-ID: <689CA9F6-C7D5-4F6C-8F60-C571F440ABB9@tu-clausthal.de> Hi, Am 16.02.2014 um 21:38 schrieb Jim Ohlstein : > With a nested location, or, if all the contents of /home/www/content/scripts are PHP scripts, use a fastcgi_pass. > > Remember, all requests are handled by one location, and one location only. Writing instructions for how to handle PHP scripts in one location does not tell nginx how to handle them in other locations. Thanks, imho my first PHP location should be a "global" definition, but I see this is wrong. I'm using the nested version with location ^~/scripts. { alias /home/www/content/scripts; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; include /etc/nginx/fastcgi_params; } } so imho all PHP scripts in my script location should run, but at the moment I still get a download; the script is not passed to the PHP fast-cgi process. I call it with http://myserver/scripts/myscript.php and myscript.php is downloaded Thanks Phil From francis at daoine.org Sun Feb 16 21:39:22 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 16 Feb 2014 21:39:22 +0000 Subject: Minimal configuration In-Reply-To: References: <20140215215503.GI24015@craic.sysops.org> Message-ID: <20140216213922.GO24015@craic.sysops.org> On Sun, Feb 16, 2014 at 08:43:45PM +0100, B.R. wrote: Hi there, > try_files $uri $uri/; The final argument to "try_files" is special. > Requesting b.cd in the browser ends up with an HTTP 500 error: > '[error] 12345#0: *42 rewrite or internal redirection cycle while internally redirecting to "/", client: 12.34.56.78, server: b.cd, request: "GET / HTTP/1.1", host: "b.cd"' Yes. The debug log may show you why.
> Adding / at the end of try_files so it reads: > try_files $uri $uri/ /; > solves the problem... Adding pretty much anything else at the end of try_files should also work, for this request. > Really, I don't get it. > Any insight on what is going on? The final argument to "try_files" is a uri for an internal redirect, not a file to be served. http://nginx.org/r/try_files f -- Francis Daly francis at daoine.org From steve at greengecko.co.nz Sun Feb 16 21:41:25 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 17 Feb 2014 10:41:25 +1300 Subject: 1.5.10 doesn't like curl -I Message-ID: <1392586885.14628.395.camel@steve-new> I've built up a version of 1.5.10 on amazon linux 2013.09 which supports spdy and ngx_pagespeed... configure looks like this: ./configure --prefix=/etc/nginx \ --sbin-path=/usr/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --pid-path=/var/run/nginx.pid \ --lock-path=/var/run/nginx.lock \ --http-client-body-temp-path=/var/cache/nginx/client_temp \ --http-proxy-temp-path=/var/cache/nginx/proxy_temp \ --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \ --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \ --http-scgi-temp-path=/var/cache/nginx/scgi_temp \ --user=nginx \ --group=nginx \ --with-openssl-opt="enable-ec_nistp_64_gcc_128" \ --with-http_ssl_module \ --with-http_spdy_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_sub_module \ --with-http_gunzip_module \ --with-http_gzip_static_module \ --with-http_random_index_module \ --with-http_secure_link_module \ --with-http_stub_status_module \ --with-http_dav_module \ --with-http_xslt_module \ --with-mail \ --with-mail_ssl_module \ --with-file-aio \ --with-debug \ --with-sha1=/usr/include/openssl \ --with-md5=/usr/include/openssl \ --add-module=../ngx_pagespeed-1.7.30.3-beta \ '--with-cc-opt=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic ' ( so starting point was output from nginx -V from the standard build from the nginx repo, with the other stuff added in as necessary ). I noticed that when sending a curl -I hostname I got a curl: (52) Empty reply from server and /var/log/messages gets the line Feb 16 20:43:05 ip-10-9-0-31 kernel: [1621871.540112] nginx[18956] trap int3 ip:4ba41d sp:7fff398311f0 error:0 added. I've reverted to the stock 1.4.5 for the minute, but would like guidance as to where to look for the problem. Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From francis at daoine.org Sun Feb 16 21:45:24 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 16 Feb 2014 21:45:24 +0000 Subject: rewrite except one directory In-Reply-To: <689CA9F6-C7D5-4F6C-8F60-C571F440ABB9@tu-clausthal.de> References: <20140216094812.GL24015@craic.sysops.org> <530121AF.8030406@ohlste.in> <689CA9F6-C7D5-4F6C-8F60-C571F440ABB9@tu-clausthal.de> Message-ID: <20140216214524.GP24015@craic.sysops.org> On Sun, Feb 16, 2014 at 09:54:47PM +0100, Philipp Kraus wrote: > Am 16.02.2014 um 21:38 schrieb Jim Ohlstein : Hi there, This location... > location ^~/scripts. does not match this request... > with http://myserver/scripts/myscript.php and myscript.php is downloaded So depending on the full configuration used, serving the file as a file may be what nginx was told to do. f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Sun Feb 16 22:00:07 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 16 Feb 2014 23:00:07 +0100 Subject: Minimal configuration In-Reply-To: <20140216213922.GO24015@craic.sysops.org> References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> Message-ID: Right, I did not pay attention to that. 
However, when requesting the root (by typing b.cd in the browser), $uri should be empty, thus why can't '$uri/' act as '/' and redirect accordingly? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Feb 16 22:23:56 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 16 Feb 2014 22:23:56 +0000 Subject: Minimal configuration In-Reply-To: References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> Message-ID: <20140216222356.GQ24015@craic.sysops.org> On Sun, Feb 16, 2014 at 11:00:07PM +0100, B.R. wrote: Hi there, > Right, I did not pay attention to that. I think you're still not understanding it. > However, when requesting the root (by typing b.cd in the browser), $uri > should be empty, thus why can't '$uri/' act as '/' and redirect accordingly? It can. It does. When it redirects to /, the location match starts again, and try_files applies again, and you get the "rewrite or internal redirection cycle" that you saw in the logs. (Strictly, when you type b.cd in the browser, it requests /, so $uri is not empty. But that's not related to what try_files does here.) f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Feb 16 22:25:34 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 16 Feb 2014 22:25:34 +0000 Subject: 1.5.10 doesn't like curl -I In-Reply-To: <1392586885.14628.395.camel@steve-new> References: <1392586885.14628.395.camel@steve-new> Message-ID: <20140216222534.GR24015@craic.sysops.org> On Mon, Feb 17, 2014 at 10:41:25AM +1300, Steve Holdoway wrote: Hi there, > added. I've reverted to the stock 1.4.5 for the minute, but would like > guidance as to where to look for the problem. What happens if you omit the third-party modules? f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Sun Feb 16 23:42:28 2014 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Mon, 17 Feb 2014 00:42:28 +0100 Subject: Minimal configuration In-Reply-To: <20140216222356.GQ24015@craic.sysops.org> References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> <20140216222356.GQ24015@craic.sysops.org> Message-ID: Thanks for your time, Francis. I understand the loop cycles (and thanks for the clarification about $uri content). If I may, there is still a little something bothering me: The condition required for a loop to be created is that $uri (= /) doesn't match any file, thus redirecting and trying again. Why on Earth does '/' as error handler match anything then? Stated otherwise, why does '/' as error handler use index files to find something while '/' contained in $uri doesn't find anything? Isn't the 'index' directive used there? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Feb 17 08:42:48 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 17 Feb 2014 08:42:48 +0000 Subject: Minimal configuration In-Reply-To: References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> <20140216222356.GQ24015@craic.sysops.org> Message-ID: <20140217084248.GT24015@craic.sysops.org> On Mon, Feb 17, 2014 at 12:42:28AM +0100, B.R. wrote: Hi there, > If I may, there is still a little something bothering me: > The condition required for a loop to be created is that $uri (= /) doesn't > match any file, thus redirecting and trying again. > Why on Earth does '/' as error handler match anything then? I may be being confused by terminology here. Where does "error handler" come into it? try_files takes multiple arguments -- at least one "file", plus one "uri". If it gets that far, there is an internal redirect to the final argument. The other arguments are tried in turn, by prepending $document_root and seeing if there is a file or directory with that name available.
(If the argument ends in /, it looks for a directory; otherwise it looks for a file.) The first matching file-or-directory is processed in the current context. If that isn't clear, all I can suggest is that you read the source and/or test, and write the documentation that would make it clear to you, and submit that to the official docs. > Stated otherwise, why does '/' as error handler use index files to find > something while '/' contained in $uri doesn't find anything? Isn't the > 'index' directive used there? try_files doesn't use "index". try_files can see that (if the current "file" argument ends in a /) the directory exists, and that therefore this request should be served by this "file" in this context. After that, "index" can apply and you get index.html or 403 or whatever else is appropriate. f -- Francis Daly francis at daoine.org From hillb at yosemite.edu Mon Feb 17 08:55:02 2014 From: hillb at yosemite.edu (Brian Hill) Date: Mon, 17 Feb 2014 08:55:02 +0000 Subject: Proxy pass location inheritance In-Reply-To: <20140214121950.GD81431@mdounin.ru> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu>, <20140214121950.GD81431@mdounin.ru> Message-ID: <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> Close, it's more akin to: location / { location ~ regex1 { # regex inside / } } location ~ regex2 { location ~ regex3 { # regex inside regex2 } } And the question is: where will a request matching both regex1 and regex3 be handled? Regex 1 & 3 look for the same file types and are identical, but contain different configurations based on the parent location. Currently, regex1 is catching all matches, irrespective of the parent location. If I understand correctly, I could solve my problem by moving the regex2 location block before the / location block, and then rewriting regex3 so that it included the elements of both the current regex2 and regex3.
That way, regex3 would ONLY hit for items that matched both the current regex2 and regex3, and it would appear before regex1 in the order of execution. Is this correct, or will NGINX always give priority to the / location? ________________________________________ From: nginx-bounces at nginx.org [nginx-bounces at nginx.org] on behalf of Maxim Dounin [mdounin at mdounin.ru] Sent: Friday, February 14, 2014 4:19 AM To: nginx at nginx.org Subject: Re: Proxy pass location inheritance Hello! On Thu, Feb 13, 2014 at 06:43:08PM +0000, Brian Hill wrote: > Hello, we are using NGINX to serve a combination of local and > proxied content coming from both an Apache server (mostly PHP > content) and IIS 7.5 (a handful of third party .Net > applications). The proxy is working properly for the pages > themselves, but we wanted set up a separate location block for > the "static" files (js, images, etc) to use different caching > rules. In theory, each of the static file location blocks should > be serving from the location specified in its parent location > block, but instead ALL image requests are being routed to the > root block. [...] > Three of the four conditions are working properly. > A request for www.site.edu/index.php gets sent to 10.64.1.10:80/index.php > A request for www.site.edu/image1.gif gets sent to 10.64.1.10:80/default.gif > A request for www.site.edu/app1/default.aspx gets sent to 10.64.1.20:80/app1/default.aspx > > But the last condition is not working properly. > A request for www.site.edu/app1/image2.gif should be sent to 10.64.1.20:80/app1/image2.gif. > Instead, it's being routed to 10.64.1.10:80/app1/image2.gif, which is an invalid location. > > So it appears that the first server location block is catching > ALL of the requests for the static files. Anyone have any idea > what I'm doing wrong? 
Simplified: location / { location ~ regex1 { # regex inside / } } location ~ regex2 { # regex } The question is: where will a request matching regex1 and regex2 be handled? The answer is - in "location ~ regex1". Locations given by regular expressions within a matching prefix location are tested before other locations given by regular expressions. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Feb 17 09:16:55 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 17 Feb 2014 09:16:55 +0000 Subject: Proxy pass location inheritance In-Reply-To: <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> References: <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> Message-ID: <20140217091655.GU24015@craic.sysops.org> On Mon, Feb 17, 2014 at 08:55:02AM +0000, Brian Hill wrote: Hi there, > Regex 1 & 3 look for the same file types and are identical, but contain different configurations based on the parent location. Currently, regex1 is catching all matches, irrespective of the parent location. > > If I understand correctly, I could solve my problem by moving the regex2 location block before the / location block, and then rewriting regex3 so that it included the elements of both the current regex2 and regex3. That way, regex3 would ONLY hit for items that matched both the current regex2 and regex3, and it would appear before regex1 in the order of execution. > > Is this correct, or will NGINX always give priority to the / location? Replace the directives inside the regex1 and regex3 locations with things like return 200 "Inside regex1\n"; and you should be able to test it straightforwardly enough. 
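Francis's suggestion of replacing the real directives with "return" can be sketched as a throwaway test config; the server block and listen port are illustrative, and regex1/regex2/regex3 stand for the thread's actual patterns:

```nginx
# Disposable test server: each candidate location answers with its own
# name, so requesting the problem URL shows which block wins.
server {
    listen 8080;

    location / {
        location ~ regex1 {
            return 200 "Inside regex1\n";
        }
    }

    location ~ regex2 {
        location ~ regex3 {
            return 200 "Inside regex3\n";
        }
    }
}
```

A plain `curl -s http://localhost:8080/whatever/matches` then prints the winning location's name, with no upstream servers involved.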
Alternatively, the mail you are replying to includes the words """ Locations given by regular expressions within a matching prefix location are tested before other locations given by regular expressions. """ If that's not clear, or if you want to test whether it matches what you observe, a similar "return" configuration should work too. (I'd say that your suggestion won't work as you want it to, because "/" is still the best-match prefix location, and therefore regex matches within "/" will be tested before regex matches outside of that location. You'll be happier if you limit yourself to prefix matches at server level.) Good luck with it, f -- Francis Daly francis at daoine.org From quintessence at bulinfo.net Mon Feb 17 10:02:47 2014 From: quintessence at bulinfo.net (Bozhidara Marinchovska) Date: Mon, 17 Feb 2014 12:02:47 +0200 Subject: nginx limit_rate if in location - strange behaviour - possible bug ? In-Reply-To: <52556B1A.4060701@bulinfo.net> References: <5255667F.2060803@bulinfo.net> <52556B1A.4060701@bulinfo.net> Message-ID: <5301DE47.6000509@bulinfo.net> Hi, again FreeBSD 9.1-STABLE #0: Sat May 18 00:32:18 EEST 2013 amd64 nginx version: nginx/1.4.4 TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_image_filter_module --with-http_stub_status_module --add-module=/usr/ports/www/nginx/work/nginx_upload_module-2.2.0 
--add-module=/usr/ports/www/nginx/work/masterzen-nginx-upload-progress-module-a788dea --add-module=/usr/ports/www/nginx/work/naxsi-core-0.50/naxsi_src --with-pcre --with-http_ssl_module I've met strange behaviour when a connection is established via a software download manager. My configuration is as follows: location some_extension { limit_rate_after 15m; limit_rate 250k; } When I open or "save as" a URL with a matched extension via a browser (desktop, mobile, etc), limit_rate works perfectly: after I hit 15MB of the file, my speed goes down to 1Mb/s, per my configuration. When I place the same URL with a matched extension in some download manager like IDM (Internet Download Manager), FDM (Free Download Manager) or others, limit_rate is not applied and the download proceeds at the maximum possible speed. [the actual problem] The only differences I found between browser and download-manager behaviour are: - when the download is performed via a browser and is requested in chunks, I get 206 Partial Content (multiple http sessions sent, multiple entries for this connection in my access log); the download is processed as expected and, after hitting 15MB of the file, download speed goes down to 1 Mb/s - when the download is performed via a browser and is requested in one piece, I get 200 OK (1 http session, 1 entry in my access log regarding this connection); the download is processed as expected and, after hitting 15MB of the file, download speed goes down to 1 Mb/s - when the download is performed via the software download manager FDM (Free Download Manager), I receive 1 http connection with status 200 OK (1 entry in my access log, multiple established connections from remote addr X.X.X.X and remote ports A,B,C,D,...), but after inspecting headers I see multiple requests within this single http session; between requests only the start byte in the Range header changes, with no end byte sent. netstat confirms multiple connections via this single session, and limit_rate is not applied in that case. 
tcpdump - download performed via browser Firefox - 200 OK single connection (1 entry in my access_log), no Range header: GET [requested_file] HTTP/1.1 Host: [host] User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0 AlexaToolbar/alxf-2.19 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Referer: [referer] Cookie: [cookie] Connection: keep-alive Pragma: no-cache Cache-Control: no-cache tcpdump - download performed via browser Firefox 206 partial (multiple connections, multiple entries in my access log) - range header sent with end byte specified: GET [requested_file] HTTP/1.1 Host: [host] User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0 AlexaToolbar/alxf-2.19 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Referer: [referer] Cookie: [cookie] Connection: keep-alive Range: bytes=2621440-2686975 tcpdump - download performed via FDM (single session, 1 entry in my access log, multiple established connections) without end byte specified in Range header: GET [requested_file] HTTP/1.1 Referer: [referer] Accept: */* Range: bytes=2541702- User-Agent: FDM 3.x Host: [host] Connection: Keep-Alive Cache-Control: no-cache Cookie: [cookie] My question is what may be the reason when downloading the example file with download manager not to match limit_rate directive and is there any way to debug somehow limit_rate ? I've used to inspect in the past the behaviour when end byte in Range header is not specified (unfortunately I don't remember if is was with some browser, but I can reproduce it if needed), then Nginx returns 416 (and when I see 416 in my access log, I block certain client for some time, let say 1h hour ). 
Actually is not a problem for me, to match these download managers and block them, or to set some inspection regarding Range header, or to limit the connection with limit_conn instead limit_rate, but I don't want to limit speed by IP address or to block download managers, but I would like to use limit_rate regarding file type extension. Thank you On 10/9/2013 5:41 PM, Bozhidara Marinchovska wrote: > I'm sorry, misread documentation. > > Placed set $limit_rate 0 in my case1 instead limit_rate 0 and now > works as expected. > > > On 09.10.2013 17:21 ?., Bozhidara Marinchovska wrote: >> Hi, >> >> nginx -V >> nginx version: nginx/1.4.2 >> configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I >> /usr/local/include' --with-ld-opt='-L /usr/local/lib' >> --conf-path=/usr/local/etc/nginx/nginx.conf >> --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid >> --error-log-path=/var/log/nginx-error.log --user=www --group=www >> --http-client-body-temp-path=/var/tmp/nginx/client_body_temp >> --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp >> --http-proxy-temp-path=/var/tmp/nginx/proxy_temp >> --http-scgi-temp-path=/var/tmp/nginx/scgi_temp >> --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp >> --http-log-path=/var/log/nginx-access.log >> --with-http_image_filter_module --with-http_stub_status_module >> --add-module=/usr/ports/www/nginx/work/naxsi-core-0.50/naxsi_src >> --with-pcre >> >> FreeBSD 9.1-STABLE #0: Sat May 18 00:32:18 EEST 2013 amd64 >> >> I'm using limit_rate case if in location. Regarding documentation "if >> in location" context is avaiable. >> >> My configuration is as follows: >> >> location some_extension { >> >> # case 1 >> if match_something { >> root ... >> break; >> } >> >> # case 2 >> if match_another { >> root ... >> break; >> } >> >> # else (case3) >> root ... >> something other ... 
>> here it is placed also limit_rate / limit_after directives >> } >> >> There is a root inside location with a (strong) reason :) (nginx >> pitfails case "root inside location block - BAD"). >> >> When I open in my browser http://my.website.com/myfile.ext it matches >> case 3 from the cofiguration. Limit_rate/limit_after works correct as >> expected. >> I want case1 not to have limit_rate / limit_after. >> >> Test one: >> In case1 I place limit_rate 0, case3 is the same limit_rate_after >> XXm; limit_rate some_rate. When I open in my browser URL matching >> case1 - limit_rate 0 is ignored. After hitting XXm from the file I >> get limit_rate from case 3. >> >> Test 2: >> In case 1 I place limit_rate_after 0; limit_rate 0, case3 is the same >> limit_rate_after XXm; limit_rate some rate. When I open in my browser >> URL matching case 1 - limit_rate_after 0 and limit_rate 0 are >> ignored. Worst is that when I try to download the file, I even didn't >> match case3 - my download starts from the first MB with limit_rate >> bandwidth from case3. >> >> Both tests are made in interval from 20 minutes, 1 connection from my >> IP, etc. >> >> I don't post my whole configuration, because may be it is >> unnessesary. If cases are with http_referer. >> >> Case 1 - if I see referer some_referer, I do something. (here I don't >> want to place any limits) >> Case 2 - If I see another referer , I do something else. >> Case 3 - Else ... something other (here I have some limits) >> >> I'm sure I match case1 when I test (nginx-error.log with debug say >> it), I mean my configuration with cases is working as expected, the >> new thing is limit_rate and limit_rate_after which is not working as >> expected. >> >> Any thoughts ? Meanwhile I will test on another version. 
>> >> Thanks >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > From contact at jpluscplusm.com Mon Feb 17 10:10:20 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 17 Feb 2014 10:10:20 +0000 Subject: nginx limit_rate if in location - strange behaviour - possible bug ? In-Reply-To: <5301DE47.6000509@bulinfo.net> References: <5255667F.2060803@bulinfo.net> <52556B1A.4060701@bulinfo.net> <5301DE47.6000509@bulinfo.net> Message-ID: On 17 February 2014 10:02, Bozhidara Marinchovska wrote: > My question is what may be the reason when downloading the example file with > download manager not to match limit_rate directive "Download managers" open multiple connections and grab different byte ranges of the same file across those connections. Nginx's limit_rate function limits the data transfer rate of a single connection. From mdounin at mdounin.ru Mon Feb 17 10:37:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Feb 2014 14:37:33 +0400 Subject: Nginx like Rewrite feature (.htaccess) doesn't seem to be working (Help) In-Reply-To: References: Message-ID: <20140217103732.GJ81431@mdounin.ru> Hello! On Sun, Feb 16, 2014 at 11:58:00AM +0000, Paulo Ferreira wrote: > Good Morning. > > > I'm making a site and I want to make the link more usable so I used the > Apache's RewriteCond which is the one I more comfortable with. I know that > there are a site or two that lets me convert and so I used them, the thing > is that it doesn't seem to be working neither of them. > > this is what I have > RewriteEngine On > RewriteCond %(REQUEST_FILENAME) !-d > RewriteCond %(REQUEST_FILENAME) !-f > RewriteCond %(REQUEST_FILENAME) !-i > RewriteRule ^(.+)$ index.php?url=$1 [QSA,L] First of all, original rewrite rules you are trying to convert seem to be invalid. There is no "-i" test in Apache AFAIK. 
You may want to rethink what you are trying to do, and do this in nginx - instead of trying to convert rules you think you need. [...] > My configuration is as follows > http://pastebin.com/DrA4aAii > > My question is... Do I have to make an additional configuration to make it > work or do I have to install something extra to make it work? Your configuration already redirects everything which isn't on the file system to /index.html, using the following rule: location / { try_files $uri $uri/ /index.html; } What you likely need is to change /index.html to your php script. See also this article for examples: http://nginx.org/en/docs/http/converting_rewrite_rules.html -- Maxim Dounin http://nginx.org/ From zaphod at berentweb.com Mon Feb 17 12:08:47 2014 From: zaphod at berentweb.com (Beeblebrox) Date: Mon, 17 Feb 2014 14:08:47 +0200 Subject: cache-proxy passes garbled fonts + alias problem Message-ID: Is there any possibility to get some input re this problem? Where can I start looking for answers? From mdounin at mdounin.ru Mon Feb 17 12:35:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Feb 2014 16:35:17 +0400 Subject: cache-proxy passes garbled fonts + alias problem In-Reply-To: References: Message-ID: <20140217123517.GN81431@mdounin.ru> Hello! On Mon, Feb 17, 2014 at 02:08:47PM +0200, Beeblebrox wrote: > Is there any possibility to get some input re this problem? > Where can I start looking for answers? 
http://www.catb.org/~esr/faqs/smart-questions.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Feb 17 13:13:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Feb 2014 17:13:28 +0400 Subject: Proxy pass location inheritance In-Reply-To: <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> Message-ID: <20140217131328.GP81431@mdounin.ru> Hello! On Mon, Feb 17, 2014 at 08:55:02AM +0000, Brian Hill wrote: > Close, it's more akin to: > > location / { > location ~ regex1 { > # regex inside / > } > } > > location ~ regex2 { > location ~ regex3 { > # regex inside regex2 > } > } > > And the question is: where will a request matching both regex1 > and regex3 be handled? Much like in the previous case, regex1 is checked first because it's inside the matched prefix location. And matching stops once it matches a request. > Regex 1 & 3 look for the same file types and are identical, but > contain different configurations based on the parent location. > Currently, regex1 is catching all matches, irrespective of the > parent location. That's expected behaviour. > If I understand correctly, I could solve my problem by moving > the regex2 location block before the / location block, and then > rewriting regex3 so that it included the elements of both the > current regex2 and regex3. That way, regex3 would ONLY hit for > items that matched both the current regex2 and regex3, and it > would appear before regex1 in the order of execution. > > Is this correct, or will NGINX always give priority to the / > location? No. There is no difference between location / { location ~ regex1 { ... } } location ~ regex2 { ... } and location ~ regex2 { ... } location / { location ~ regex1 { ... 
} } Locations given by regular expressions within a matching prefix location (not necessarily "/") are always checked first. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Mon Feb 17 13:13:14 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 17 Feb 2014 14:13:14 +0100 Subject: Minimal configuration In-Reply-To: <20140217084248.GT24015@craic.sysops.org> References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> <20140216222356.GQ24015@craic.sysops.org> <20140217084248.GT24015@craic.sysops.org> Message-ID: Sorry for my fluffy terminology. What I called 'error handler' was the final argument of the try_files directive, the one used if every other one fails to detect a valid file/directory. We ended up concluding that: try_files $uri $uri/; was invalid, looping internally for an infinite amount of time try_files $uri $uri/ /; was valid I still don't get why the first case is invalid, with all the input you provided me with: The request URI was '/', so $uri = /, thus the first argument of try_files should match the root directory and process it further (finding the index file, etc.). Why hasn't that been the case? Otherwise stated: defaulting to '/' in the valid syntax means that both '$uri' and '$uri/' values (both equalling '/' after cleanup of redundant slashes) don't point towards a valid directory. This is obviously wrong, since the root directory exists (and is processed when the fallback to the '/' argument happens). Considering all that, one could wonder why the 1st syntax is invalid. I hope I clarified my question... It seems simple from my point of view :o\ --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Feb 17 13:44:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Feb 2014 17:44:16 +0400 Subject: Proxy cache passes GET instead of HEAD to upstream In-Reply-To: <4788345.6Zf7HEts4H@vbart-laptop> References: <871cfa40a9251fbb6b2103ee6b7404d8.NginxMailingListEnglish@forum.nginx.org> <4788345.6Zf7HEts4H@vbart-laptop> Message-ID: <20140217134416.GR81431@mdounin.ru> Hello! On Sat, Feb 15, 2014 at 02:23:26AM +0400, Valentin V. Bartenev wrote: > On Friday 14 February 2014 17:11:37 jove4015 wrote: > [..] > > Is there any way to set up the proxy to the upstream so that it proxies HEAD > > requests as HEAD requests, and GET requests as normal, and still caches > > responses as well? Ideally it would cache all GET responses, check against > > the cache for HEAD requests, and proxy HEAD requests for objects not in the > > cache as HEAD requests (thereby not caching the response, or only caching > > the HEAD response as a separate object - ideally the latter but I could live > > with the former). > > > > You can set: > > proxy_cache_methods GET; > > in this case HEAD requests will bypass cache. No, this won't work. HEAD and GET requests are always added to the proxy_cache_methods mask. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Feb 17 14:01:51 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Feb 2014 18:01:51 +0400 Subject: inlining In-Reply-To: References: Message-ID: <20140217140151.GS81431@mdounin.ru> Hello! On Fri, Feb 14, 2014 at 04:59:26PM -0500, atarob wrote: > Looking through the codebase, I see a lot of very short helper like > functions that are defined in .c files with prototypes in .h files. This > means that the compiler cannot inline them outside of that .c file. Am I > wrong? How is that not a performance hit? In no particular order: - As already pointed out, smart enough compilers can inline whatever they want. 
- Adding all functions to .h files results in unmanageable code, so there should be a bar somewhere. - In many cases inlining may actually be a bad idea even from a performance point of view, see, e.g., [1]. - If inlining is considered to be beneficial, in most cases nginx uses macros rather than inline functions, see, e.g., src/core/ngx_queue.h. If you think there are functions which need to be made inlineable - feel free to suggest them. [1] http://stackoverflow.com/questions/1932311/when-to-use-inline-function-and-when-not-to-use-it -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Feb 17 14:29:54 2014 From: nginx-forum at nginx.us (Larry) Date: Mon, 17 Feb 2014 09:29:54 -0500 Subject: Does $remote_port has to change when streaming ? Message-ID: Hello ! I tried this but cannot trust myself (or what I tried): when streaming / playing video in the client browser, can the client's port ($remote_port) change in the logs when $remote_port is logged? I believe that if the connection is dropped, another port will be assigned, but if everything is okay, the port should remain the same. Am I right? My tests and nginx log analysis didn't show any port change while streaming a video (luck? normal?) thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247660,247660#msg-247660 From mdounin at mdounin.ru Mon Feb 17 15:00:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Feb 2014 19:00:59 +0400 Subject: headers_in_hash In-Reply-To: <5ae184c4347b8f3cc761020253cb4fea.NginxMailingListEnglish@forum.nginx.org> References: <5ae184c4347b8f3cc761020253cb4fea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140217150059.GV81431@mdounin.ru> Hello! On Fri, Feb 14, 2014 at 04:39:23PM -0500, atarob wrote: > Creating a module, I want to read in from config desired http header fields. > Then, during config still, I want to get the struct offset for the fields > that have dedicated pointers in the header_in struct. 
It seems that when I > access headers_in_hash from the main config, it is uninitialized. I can see > in the code that there is > > ngx_http_init_headers_in_hash(ngx_conf_t *cf, ngx_http_core_main_conf_t > *cmcf) > > in ngx_http.c. It seems to be called when the main conf is being generated > though I am not certain yet. > > Where and when exactly is headers_in_hash initialized? If I wanted to read > from it during ngx_http_X_merge_loc_conf(), what would I need to do? Or am > I supposed to do it at some point later? The cmcf->headers_in_hash is expected to be initialized during runtime. As of now, it will be initialized before postconfiguration hooks, but I wouldn't recommend relying on this. I also won't recommend using cmcf->headers_in_hash in your own module at all, unless you have good reasons to. It's not really a part of the API, it's an internal entity which http core uses to do its work. -- Maxim Dounin http://nginx.org/ From hillb at yosemite.edu Mon Feb 17 17:15:45 2014 From: hillb at yosemite.edu (Brian Hill) Date: Mon, 17 Feb 2014 17:15:45 +0000 Subject: Proxy pass location inheritance In-Reply-To: <20140217131328.GP81431@mdounin.ru> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu>, <20140217131328.GP81431@mdounin.ru> Message-ID: <205444BFA924A34AB206D0E2538D4B4940EC71@x10m01.yosemite.edu> So it sounds like my only solution is to restructure the locations to avoid the original match in /. I don't have access to the servers again until tomorrow, but I'm wondering if something like this would work: location / { #base content } location ~ regex2 { #alternate folders to proxy_pass from .Net servers } location ~ regex3 { #catch all css, js, images, and other static files 
Alternate static location for .Net apps } location / { #match all "static files" not caught by regex4 } } If I'm understanding location precedence correctly, the regex3 location should always hit first, because its regex will contain an exact match for the file types. The nested regex4 (identical to regex2) will then match the folder name in that request, so the custom configuration can be applied only to the regex3 file types contained within the regex4 folders. Requests for the regex3 file types at locations not matching regex4 will be handled by the nested /. Will this work, or will the second nested / location break things? ________________________________________ From: nginx-bounces at nginx.org [nginx-bounces at nginx.org] on behalf of Maxim Dounin [mdounin at mdounin.ru] Sent: Monday, February 17, 2014 5:13 AM To: nginx at nginx.org Subject: Re: Proxy pass location inheritance Hello! On Mon, Feb 17, 2014 at 08:55:02AM +0000, Brian Hill wrote: > Close, it's more akin to: > > location / { > location ~ regex1 { > # regex inside / > } > } > > location ~ regex2 { > location ~ regex3 { > # regex inside regex2 > } > } > > And the question is: where will a request matching both regex1 > and regex3 be handled? Much like in the previous case, regex1 is checked first because it's inside a prefix location matched. And matching stops once it matches a request. > Regex 1 & 3 look for the same file types and are identical, but > contain different configurations based on the parent location. > Currently, regex1 is catching all matches, irrespective of the > parent location. That's expected behaviour. > If I understand correctly, I could solve my problem by moving > the regex2 location block before the / location block, and then > rewriting regex3 so that it included the elements of both the > current regex2 and regex3. That way, regex3 would ONLY hit for > items that matched both the current regex2 and regex3, and it > would appear before regex1 in the order of execution. 
> > Is this correct, or will NGINX always give priority to the / > location? No. There is no difference between location / { location ~ regex1 { ... } } location ~ regex2 { ... } and location ~ regex2 { ... } location / { location ~ regex1 { ... } } Locations given by regular expressions within a matching prefix location (not necessarily "/") are always checked first. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Feb 17 17:30:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Feb 2014 21:30:46 +0400 Subject: Proxy pass location inheritance In-Reply-To: <205444BFA924A34AB206D0E2538D4B4940EC71@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> <20140217131328.GP81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940EC71@x10m01.yosemite.edu> Message-ID: <20140217173046.GA33573@mdounin.ru> Hello! On Mon, Feb 17, 2014 at 05:15:45PM +0000, Brian Hill wrote: > So it sounds like my only solution is to restructure the locations to avoid the original match in /. I don't have access to the servers again until tomorrow, but I'm wondering if something like this would work: > > location / { > #base content > } > > location ~ regex2 { > #alternate folders to proxy_pass from .Net servers > } > > location ~ regex3 { > #catch all css, js, images, and other static files > > location ~ regex4 { > #same as regex2. Alternate static location for .Net apps > } > location / { > #match all "static files" not caught by regex4 > } > } This is certainly not how configs should be written, and this won't work as regex4 will never match (and nested / will complain during configuration parsing, but it doesn't make sense at all). 
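A minimal sketch of the rule stated above, using Francis's earlier advice to anchor on prefix locations at server level; the extensions and upstream addresses below are illustrative, not taken from the thread:

```nginx
# For "GET /app1/image2.gif" the best-match prefix location is "/app1/"
# (longer than "/"), so the regex nested inside "/app1/" is tested first
# and wins; the regex nested inside "/" is never consulted.
location / {
    location ~ \.(gif|js|css)$ {
        proxy_pass http://10.0.0.10;   # hypothetical "base" upstream
    }
}

location /app1/ {
    location ~ \.(gif|js|css)$ {
        proxy_pass http://10.0.0.20;   # hypothetical .Net upstream
    }
}
```

This keeps one regex per parent prefix, so each static-file rule inherits the right upstream without relying on server-level regex ordering.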
> If I'm understanding location precedence correctly, the regex3 > location should always hit first, because its regex will contain > an exact match for the file types. The nested regex4 (identical > to regex2) will then match the folder name in that request, so > the custom configuration can be applied only to the regex3 file > types contained within the regex4 folders. Requests for the > regex3 file types at locations not matching regex4 will be > handled by the nested /. > > Will this work, or will the second nested / location break things? Try reading http://nginx.org/r/location again, and experimenting with trivial configs to see how it works. Try to avoid using regular expressions by all means, at least until you understand how it works. It's very easy to do things wrong using regular expressions. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Mon Feb 17 20:37:38 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 17 Feb 2014 20:37:38 +0000 Subject: Minimal configuration In-Reply-To: References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> <20140216222356.GQ24015@craic.sysops.org> <20140217084248.GT24015@craic.sysops.org> Message-ID: <20140217203738.GW24015@craic.sysops.org> On Mon, Feb 17, 2014 at 02:13:14PM +0100, B.R. wrote: Hi there, > What I called 'error handler' was the final argument of the try_files > directive, the one used if any other one fails to detect a valid > file/directory. So: the "uri" argument. If try_files gets as far as the uri argument, there is an internal redirection -- no files or directories are consulted at all. (It can also be the "=code" argument, but that does not apply in this example.) 
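In configuration form, the two try_files variants under discussion look like this (they are alternatives, not to be combined; docroot as in the thread):

```nginx
root  /usr/local/nginx/html;
index index.html;

# Variant 1 -- loops for "GET /":
#   "$uri"  -> file check for /usr/local/nginx/html/ -> no such file
#   "$uri/" -> final argument, so it is the "uri": internal redirect
#              to "/" and the whole cycle starts again
location / {
    try_files $uri $uri/;
}

# Variant 2 -- works:
#   "$uri"  -> file check, fails as above
#   "$uri/" -> not final and ends in "/", so it is a directory check;
#              the docroot exists, the request is served here and
#              "index" then picks index.html
#   "/"     -> never reached for this request
location / {
    try_files $uri $uri/ /;
}
```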
> We ended concluding that: > try_files $uri $uri/; was invalid, looping internally for an infinite > amount of time > try_files $uri $uri/ /; was valid > > I still don't get why the first case is invalid, with all the input you > provided me with: > The request URI was '/', so $uri = /, thus the first agument of try_files > should match the root directory and process it further (finding the index > file, etc.). No. The first argument is a "file" argument, and is the string "$uri". That string does not end in "/" (it ends in "i"), and so try_files will look for a file. Expand variables in the argument, prepend $document_root, and try_files checks to see is there a file called /usr/local/nginx/html/. There is not, so it moves to the next argument. In the first case above, the next argument is the final argument, and so is the "uri" argument, and so try_files expands the variables in it and does an internal redirection to this new request and everything starts again. In this case, that's a loop. In the second case above, the next argument is not the final argument, and so is another "file" argument. It is the string "$uri/", which ends in "/", so try_files will look for a directory. Expand, prepend, and see is there a directory called /usr/local/nginx/html/. There is, and so this request is processed in the current context using that directory. The rest of the configuration says "serve from the filesystem", and "serve index.html in the directory", so that's what happens. > Considering all that, one could wonder why the 1st syntax is invalid. > > I hope I clarified my question... It seems simple from my point of view :o\ The best I can suggest is that when you read the documentation, only read the words that are there. Don't read the words that you want to be there, don't read the words that would be there if they were put in a different order, and don't assume that there are typographical errors and that clearly they meant to write something else. 
(There may well be typos there, in which case corrections are presumably welcome. But on first reading, if there is a way of parsing the words that does not require there to be an error, that's probably the one to expect.) f -- Francis Daly francis at daoine.org From hillb at yosemite.edu Mon Feb 17 21:26:56 2014 From: hillb at yosemite.edu (Brian Hill) Date: Mon, 17 Feb 2014 21:26:56 +0000 Subject: Proxy pass location inheritance In-Reply-To: <20140217173046.GA33573@mdounin.ru> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> <20140217131328.GP81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940EC71@x10m01.yosemite.edu>, <20140217173046.GA33573@mdounin.ru> Message-ID: <205444BFA924A34AB206D0E2538D4B4940ECCA@x10m01.yosemite.edu> So there is no precedence given to nested regex locations at all? What value does nesting provide then? This seems like it should be a fairly simple thing to do. Image/CSS requests to some folders get handled one way, and image/CSS requests to all other folders get handled another way. This is an experimental pilot project for a datacenter conversion, and the use of regex to specify both the file types and folder names is mandatory. The project this pilot is for will eventually require more than 50 server blocks with hundreds of locations in each block if regex cannot be used. It would be an unmaintainable mess without regex. Am I missing something here? Is NGINX the wrong solution for what I'm trying to accomplish? Is there another way to pull this off entirely within NGINX, or should I be using NGINX in conjunction with something like HAProxy to route those particular folders where they need to go (i.e., catch and proxy the .Net folder requests in HAProxy, and pass everything else along to NGINX?) 
I was hoping to avoid the use of HAProxy and handle everything directly within NGINX for the sake of simplicity, but it's sounding like that may not be an option.

________________________________________
From: nginx-bounces at nginx.org [nginx-bounces at nginx.org] on behalf of Maxim Dounin [mdounin at mdounin.ru]
Sent: Monday, February 17, 2014 9:30 AM
To: nginx at nginx.org
Subject: Re: Proxy pass location inheritance

Hello!

On Mon, Feb 17, 2014 at 05:15:45PM +0000, Brian Hill wrote:

> So it sounds like my only solution is to restructure the locations to avoid the original match in /. I don't have access to the servers again until tomorrow, but I'm wondering if something like this would work:
>
> location / {
>     #base content
> }
>
> location ~ regex2 {
>     #alternate folders to proxy_pass from .Net servers
> }
>
> location ~ regex3 {
>     #catch all css, js, images, and other static files
>
>     location ~ regex4 {
>         #same as regex2. Alternate static location for .Net apps
>     }
>     location / {
>         #match all "static files" not caught by regex4
>     }
> }

This is certainly not how configs should be written, and it won't work: regex4 will never match (and the nested / will produce a complaint during configuration parsing; it doesn't make sense at all).

> If I'm understanding location precedence correctly, the regex3
> location should always hit first, because its regex will contain
> an exact match for the file types. The nested regex4 (identical
> to regex2) will then match the folder name in that request, so
> the custom configuration can be applied only to the regex3 file
> types contained within the regex4 folders. Requests for the
> regex3 file types at locations not matching regex4 will be
> handled by the nested /.
>
> Will this work, or will the second nested / location break things?

Try reading http://nginx.org/r/location again, and experimenting with trivial configs to see how it works.
Try to avoid using regular expressions, at least until you understand how location matching works. It's very easy to get things wrong with regular expressions.

--
Maxim Dounin
http://nginx.org/

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From reallfqq-nginx at yahoo.fr Mon Feb 17 22:16:23 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 17 Feb 2014 23:16:23 +0100
Subject: Minimal configuration
In-Reply-To: <20140217203738.GW24015@craic.sysops.org>
References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> <20140216222356.GQ24015@craic.sysops.org> <20140217084248.GT24015@craic.sysops.org> <20140217203738.GW24015@craic.sysops.org>
Message-ID:

Thanks for your help, Francis! That's an amazingly detailed explanation. The differences in behavior between 'normal' arguments and the last one are the key, but the doc does not (cannot?) go into details about them.
---
*B. R.*

From mdounin at mdounin.ru Tue Feb 18 12:12:55 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 18 Feb 2014 16:12:55 +0400
Subject: Proxy pass location inheritance
In-Reply-To: <205444BFA924A34AB206D0E2538D4B4940ECCA@x10m01.yosemite.edu>
References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> <20140217131328.GP81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940EC71@x10m01.yosemite.edu> <20140217173046.GA33573@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940ECCA@x10m01.yosemite.edu>
Message-ID: <20140218121255.GF33573@mdounin.ru>

Hello!

On Mon, Feb 17, 2014 at 09:26:56PM +0000, Brian Hill wrote:

> So there is no precedence given to nested regex locations at
> all? What value does nesting provide then?
Nesting is for doing things like this:

location / {
    # some generic stuff here

    location ~ \.jpg$ {
        expires 1w;
    }
}

location /app1/ {
    # something special for app1 here, e.g.
    # access control

    auth_basic ...
    access ...

    location = /app1/login {
        # something special for /app1/login,
        # everything from /app1/ is inherited

        proxy_pass ...
    }

    location ~ \.jpg$ {
        expires 1m;
    }
}

location /app2/ {
    # separate configuration for app2 here,
    # changes in /app1/ don't affect it

    ...

    location ~ \.jpg$ {
        expires 1y;
    }
}

That is, it lets you write scalable configurations using prefix locations. With such an approach, you can edit anything under /app1/ without being concerned about how it will affect things for /app2/. It also lets you use inheritance to write shorter configurations, and lets you isolate regexp locations within prefix ones.

> This seems like it should be a fairly simple thing to do.
> Image/CSS requests to some folders get handled one way, and
> image/CSS requests to all other folders get handled another way.

See above for an example.

(I personally recommend using a separate folder for images/css to be able to use prefix locations instead of regexp ones. But it should be relatively safe this way as well - as long as they are isolated in other locations. One of the big problems with regexp locations is that often they are matched when people don't expect them to be matched, and isolating regexp locations within prefix ones minimizes this negative impact.)

> This is an experimental pilot project for a datacenter
> conversion, and the use of regex to specify both the file types
> and folder names is mandatory. The project this pilot is for
> will eventually require more than 50 server blocks with hundreds
> of locations in each block if regex cannot be used. It would be
> an unmaintainable mess without regex.
Your problem is that you are trying to mix regex locations and prefix locations without understanding how they work, and to make things even harder you add nested locations to the mix. Instead, just stop making things harder. Simplify things. The most recommended simplification is to avoid regexp locations.

Note that many location blocks isn't necessarily a bad thing. Sometimes it's much easier to handle hundreds of prefix location blocks than dozens of regexp locations. Configurations with prefix locations are much easier to maintain.

If you can't avoid regexp locations for some external reason, it would be trivial to write a configuration which does what you want with regexp locations as well:

location / {
    ...
}

location ~ ^/app1/ {
    ...

    location ~ \.jpg$ {
        expires 1m;
    }
}

location ~ ^/app2/ {
    ...

    location ~ \.jpg$ {
        expires 1y;
    }
}

location ~ \.jpg$ {
    expires 1w;
}

Though such configurations are usually much harder to maintain than ones based on prefix locations.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Tue Feb 18 14:15:25 2014
From: nginx-forum at nginx.us (p.heppler)
Date: Tue, 18 Feb 2014 09:15:25 -0500
Subject: Issue with spdy and proxy_pass
Message-ID: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org>

Hello,
I compiled nginx 1.5.10 on CentOS 6.5 and am trying to use it as a frontend for Tomcat. If I use pure SSL everything works fine. But as soon as I enable SPDY, I only get a blank page. The Content-Length header is 0, but the HTTP status code is 200. The blank page only affects requests which are handled by Tomcat. Static files, served by nginx, work with SPDY.
Here is my vhost conf:

# Redirect everything to ssl
server {
    listen 192.168.89.175:80;
    server_name example.org www.example.org;
    return 301 https://www.example.org$request_uri;
}

# Redirect to www
server {
    listen 192.168.89.175:443 ssl spdy;
    server_name example.org;
    return 301 https://www.example.org$request_uri;
}

server {
    listen 192.168.89.175:443 ssl spdy;
    server_name www.example.org;

    ssl on;
    ssl_certificate /usr/share/nginx/ssl/example.org.crt;
    ssl_certificate_key /usr/share/nginx/ssl/example.org.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM EDH+AESGCM EECDH -RC4 EDH -CAMELLIA -SEED !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !kEDH";
    ssl_session_cache builtin:1000 shared:SSL:10m;

    root /usr/share/nginx/html/example.org;

    location / {
        try_files $uri $uri/ /index.cfm?q=$uri&$args;
    }

    # Proxy CFML to Tomcat/Railo
    location ~ \.(cfm|cfml|cfc|jsp|cfr)(.*)$ {
        proxy_pass http://127.0.0.1:8888;
        proxy_pass_request_headers on;
        proxy_redirect default;
        proxy_set_header Host $host;
        proxy_read_timeout 900;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247692#msg-247692

From vbart at nginx.com Tue Feb 18 14:25:04 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 18 Feb 2014 18:25:04 +0400
Subject: Issue with spdy and proxy_pass
In-Reply-To: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org>
References: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1428947.mXBZCnPWkm@vbart-laptop>

On Tuesday 18 February 2014 09:15:25 p.heppler wrote:
> Hello,
> I compiled nginx 1.5.10 on CentOS 6.5 and am trying to use it as a frontend for
> Tomcat.
> If I use pure SSL everything works fine. But as soon as I enable SPDY, I
> only get a blank page.
> And Content-Length Header is 0, but HTTP status code is 200. > But, the blank page only affects request which are handled by Tomcat. Static > files, served by nginx, work with SPDY. > [..] Could you provide a debug log? http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Feb 18 14:42:55 2014 From: nginx-forum at nginx.us (p.heppler) Date: Tue, 18 Feb 2014 09:42:55 -0500 Subject: Issue with spdy and proxy_pass In-Reply-To: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org> References: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9ed528bff97e623230a7d8604e508e8b.NginxMailingListEnglish@forum.nginx.org> I just created a new log. This is from nginx restart to first request: 2014/02/18 15:45:56 [debug] 12839#0: epoll add event: fd:9 op:1 ev:00002001 2014/02/18 15:46:12 [debug] 12839#0: post event 00007F29CC9D1078 2014/02/18 15:46:12 [debug] 12839#0: delete posted event 00007F29CC9D1078 2014/02/18 15:46:12 [debug] 12839#0: accept on 192.168.89.175:443, ready: 1 2014/02/18 15:46:12 [debug] 12839#0: posix_memalign: 0000000002547890:256 @16 2014/02/18 15:46:12 [debug] 12839#0: *1 accept: 192.168.76.46 fd:15 2014/02/18 15:46:12 [debug] 12839#0: posix_memalign: 0000000002547330:256 @16 2014/02/18 15:46:12 [debug] 12839#0: *1 event timer add: 15: 60000:1392734832664 2014/02/18 15:46:12 [debug] 12839#0: *1 reusable connection: 1 2014/02/18 15:46:12 [debug] 12839#0: *1 epoll add event: fd:15 op:1 ev:80002001 2014/02/18 15:46:12 [debug] 12839#0: accept() not ready (11: Resource temporarily unavailable) 2014/02/18 15:46:12 [debug] 12839#0: *1 post event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *1 delete posted event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *1 http check ssl handshake 2014/02/18 15:46:12 [debug] 12839#0: *1 http recv(): 1 2014/02/18 15:46:12 [debug] 12839#0: *1 https ssl handshake: 0x16 2014/02/18 15:46:12 [debug] 
12839#0: *1 SSL server name: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *1 SSL_do_handshake: 0 2014/02/18 15:46:12 [debug] 12839#0: *1 SSL_get_error: 5 2014/02/18 15:46:12 [info] 12839#0: *1 peer closed connection in SSL handshake while SSL handshaking, client: 192.168.76.46, server: 192.168.89.175:443 2014/02/18 15:46:12 [debug] 12839#0: *1 close http connection: 15 2014/02/18 15:46:12 [debug] 12839#0: *1 SSL_shutdown: 1 2014/02/18 15:46:12 [debug] 12839#0: *1 event timer del: 15: 1392734832664 2014/02/18 15:46:12 [debug] 12839#0: *1 reusable connection: 0 2014/02/18 15:46:12 [debug] 12839#0: *1 free: 0000000002547890, unused: 16 2014/02/18 15:46:12 [debug] 12839#0: *1 free: 0000000002547330, unused: 114 2014/02/18 15:46:12 [debug] 12839#0: post event 00007F29CC9D1078 2014/02/18 15:46:12 [debug] 12839#0: delete posted event 00007F29CC9D1078 2014/02/18 15:46:12 [debug] 12839#0: accept on 192.168.89.175:443, ready: 1 2014/02/18 15:46:12 [debug] 12839#0: posix_memalign: 000000000252BF80:256 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 accept: 192.168.76.46 fd:15 2014/02/18 15:46:12 [debug] 12839#0: posix_memalign: 0000000002547890:256 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 event timer add: 15: 60000:1392734832672 2014/02/18 15:46:12 [debug] 12839#0: *2 reusable connection: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 epoll add event: fd:15 op:1 ev:80002001 2014/02/18 15:46:12 [debug] 12839#0: accept() not ready (11: Resource temporarily unavailable) 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *2 http check ssl handshake 2014/02/18 15:46:12 [debug] 12839#0: *2 http recv(): 1 2014/02/18 15:46:12 [debug] 12839#0: *2 https ssl handshake: 0x16 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL server name: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL NPN advertised 2014/02/18 15:46:12 [debug] 12839#0: *2 
SSL_do_handshake: -1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2 2014/02/18 15:46:12 [debug] 12839#0: *2 reusable connection: 0 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL handshake handler: 0 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_do_handshake: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD" 2014/02/18 15:46:12 [debug] 12839#0: *2 init spdy request 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000024B1AA0:424 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025981A0:9552 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000024B5940:5928 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 000000000252C290:4096 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 000000000259A700:4096 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025A1AD0:4096 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025A2AE0:4096 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 00000000025A3AF0:4096 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 000000000252D2A0:256 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 add cleanup: 000000000252D2C0 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy create SETTINGS frame 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy write WINDOW_UPDATE sid:0 delta:2147418111 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy read handler 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_read: -1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy frame out: 00000000025A3D00 sid:0 prio:0 bl:0 len:8 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy frame out: 00000000025A3C40 sid:0 prio:0 bl:0 len:20 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025AF180:16384 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL buf copy: 28 2014/02/18 15:46:12 
[debug] 12839#0: *2 SSL buf copy: 16 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL to write: 44 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_write: 44 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy frame sent: 00000000025A3C40 sid:0 bl:0 len:20 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy frame sent: 00000000025A3D00 sid:0 bl:0 len:8 2014/02/18 15:46:12 [debug] 12839#0: *2 free: 00000000025A3AF0, unused: 3392 2014/02/18 15:46:12 [debug] 12839#0: *2 free: 00000000025AF180 2014/02/18 15:46:12 [debug] 12839#0: *2 reusable connection: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 event timer del: 15: 1392734832672 2014/02/18 15:46:12 [debug] 12839#0: *2 event timer add: 15: 180000:1392734952688 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy keepalive handler 2014/02/18 15:46:12 [debug] 12839#0: *2 reusable connection: 0 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 00000000025A3AF0:4096 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy read handler 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_read: 36 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_read: -1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy process frame head:80030004 f:0 l:12 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy SETTINGS frame consists of 1 entries 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy SETTINGS entry fl:0 id:7 val:65536 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy process frame head:80030009 f:0 l:8 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy WINDOW_UPDATE sid:0 delta:268369920 2014/02/18 15:46:12 [debug] 12839#0: *2 free: 00000000025A3AF0, unused: 3760 2014/02/18 15:46:12 [debug] 12839#0: *2 reusable connection: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 event timer: 15, old: 1392734952688, new: 1392734952690 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC9D1148 2014/02/18 15:46:12 
[debug] 12839#0: *2 delete posted event 00007F29CC9D1148 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy keepalive handler 2014/02/18 15:46:12 [debug] 12839#0: *2 reusable connection: 0 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 00000000025A3AF0:4096 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy read handler 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_read: 568 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_read: -1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy process frame head:80030001 f:1 l:544 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy SYN_STREAM frame sid:1 prio:2 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 00000000025A4B00:4096 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 00000000025A5B10:4096 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy process HEADERS 534 of 534 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025AF180:32768 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy inflateSetDictionary(): 0 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy inflate out: ni:00007F29CC64F238 no:00000000025A5D36 ai:0 ao:505 rc:0 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy HEADERS block consists of 11 entries 2014/02/18 15:46:12 [debug] 12839#0: *2 http uri: "/" 2014/02/18 15:46:12 [debug] 12839#0: *2 http args: "" 2014/02/18 15:46:12 [debug] 12839#0: *2 http exten: "" 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy http request line: "GET / HTTP/1.1" 2014/02/18 15:46:12 [debug] 12839#0: *2 http header: "host: www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http header: "user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" 2014/02/18 15:46:12 [debug] 12839#0: *2 http header: "cache-control: max-age=0" 2014/02/18 15:46:12 [debug] 12839#0: *2 http header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2014/02/18 15:46:12 [debug] 12839#0: *2 http header: "accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" 
2014/02/18 15:46:12 [debug] 12839#0: *2 http header: "cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=5ln6ihdqossy18j3j9ouz680c" 2014/02/18 15:46:12 [debug] 12839#0: *2 http header: "dnt: 1" 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 0 2014/02/18 15:46:12 [debug] 12839#0: *2 rewrite phase: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 http script regex: "^/\+" 2014/02/18 15:46:12 [notice] 12839#0: *2 "^/\+" does not match "/", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "GET" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script regex: "^(GET|HEAD|POST)$" 2014/02/18 15:46:12 [notice] 12839#0: *2 "^(GET|HEAD|POST)$" matches "GET", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script if 2014/02/18 15:46:12 [debug] 12839#0: *2 http script if: false 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: "/" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.(?:png|jpe?g)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.php$" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.(cfm|cfml|cfc|jsp|cfr)(.*)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "/\." 
2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.(?:eot|ttf|woff|otf|svg)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.(?:css|js|png|jpe?g|ico|gif)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.(aspx|asp|jsp|cgi)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 using configuration "/" 2014/02/18 15:46:12 [debug] 12839#0: *2 http cl:-1 max:1048576 2014/02/18 15:46:12 [debug] 12839#0: *2 rewrite phase: 3 2014/02/18 15:46:12 [debug] 12839#0: *2 post rewrite phase: 4 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 5 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 6 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 7 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 8 2014/02/18 15:46:12 [debug] 12839#0: *2 access phase: 9 2014/02/18 15:46:12 [debug] 12839#0: *2 access phase: 10 2014/02/18 15:46:12 [debug] 12839#0: *2 post access phase: 11 2014/02/18 15:46:12 [debug] 12839#0: *2 try files phase: 12 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "/" 2014/02/18 15:46:12 [debug] 12839#0: *2 trying to use file: "/" "/usr/share/nginx/html/example.org/" 2014/02/18 15:46:12 [debug] 12839#0: *2 add cleanup: 00000000025A5A58 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 0000000002547440:144 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 000000000252BDD0:42 2014/02/18 15:46:12 [debug] 12839#0: *2 cached open file: /usr/share/nginx/html/example.org/, fd:-1, c:0, e:0, u:1 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "/" 2014/02/18 15:46:12 [debug] 12839#0: *2 trying to use dir: "/" "/usr/share/nginx/html/example.org/" 2014/02/18 15:46:12 [debug] 12839#0: *2 add cleanup: 00000000025A5A90 2014/02/18 15:46:12 [debug] 12839#0: *2 cached open file: /usr/share/nginx/html/example.org/, fd:-1, c:0, e:0, u:2 2014/02/18 15:46:12 [debug] 12839#0: *2 try file uri: "/" 2014/02/18 15:46:12 [debug] 12839#0: *2 content phase: 13 2014/02/18 15:46:12 [debug] 12839#0: *2 content phase: 14 2014/02/18 15:46:12 [debug] 
12839#0: *2 open index "/usr/share/nginx/html/example.org/index.cfm" 2014/02/18 15:46:12 [debug] 12839#0: *2 add cleanup: 00000000025A61D0 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 0000000002547C30:144 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025476D0:51 2014/02/18 15:46:12 [debug] 12839#0: *2 cached open file: /usr/share/nginx/html/example.org/index.cfm, fd:16, c:1, e:0, u:1 2014/02/18 15:46:12 [debug] 12839#0: *2 internal redirect: "/index.cfm?" 2014/02/18 15:46:12 [debug] 12839#0: *2 rewrite phase: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 http script regex: "^/\+" 2014/02/18 15:46:12 [notice] 12839#0: *2 "^/\+" does not match "/index.cfm", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "GET" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script regex: "^(GET|HEAD|POST)$" 2014/02/18 15:46:12 [notice] 12839#0: *2 "^(GET|HEAD|POST)$" matches "GET", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script if 2014/02/18 15:46:12 [debug] 12839#0: *2 http script if: false 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: "/" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: "core/" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: "includes/" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: "templates/" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: "robots.txt" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.(?:png|jpe?g)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.php$" 2014/02/18 15:46:12 [debug] 12839#0: *2 test location: ~ "\.(cfm|cfml|cfc|jsp|cfr)(.*)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 using configuration "\.(cfm|cfml|cfc|jsp|cfr)(.*)$" 2014/02/18 15:46:12 [debug] 12839#0: *2 http cl:-1 max:1048576 2014/02/18 15:46:12 
[debug] 12839#0: *2 rewrite phase: 3 2014/02/18 15:46:12 [debug] 12839#0: *2 post rewrite phase: 4 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 5 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 6 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 7 2014/02/18 15:46:12 [debug] 12839#0: *2 generic phase: 8 2014/02/18 15:46:12 [debug] 12839#0: *2 access phase: 9 2014/02/18 15:46:12 [debug] 12839#0: *2 access phase: 10 2014/02/18 15:46:12 [debug] 12839#0: *2 post access phase: 11 2014/02/18 15:46:12 [debug] 12839#0: *2 try files phase: 12 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy read request body 2014/02/18 15:46:12 [debug] 12839#0: *2 http init upstream, client timer: 0 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 00000000025B7190:4096 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: "Host: " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: " " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: "X-Forwarded-Host: " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: " " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: "X-Forwarded-Server: " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "www.example.org" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: " " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: "X-Forwarded-For: " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "192.168.76.46" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: " " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: "X-Real-IP: " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script var: "192.168.76.46" 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: " " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: "Connection: close " 2014/02/18 15:46:12 [debug] 12839#0: *2 http script copy: "" 2014/02/18 
15:46:12 [debug] 12839#0: *2 http script copy: "" 2014/02/18 15:46:12 [debug] 12839#0: *2 http proxy header: "user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" 2014/02/18 15:46:12 [debug] 12839#0: *2 http proxy header: "cache-control: max-age=0" 2014/02/18 15:46:12 [debug] 12839#0: *2 http proxy header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2014/02/18 15:46:12 [debug] 12839#0: *2 http proxy header: "accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" 2014/02/18 15:46:12 [debug] 12839#0: *2 http proxy header: "cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=5ln6ihdqossy18j3j9ouz680c" 2014/02/18 15:46:12 [debug] 12839#0: *2 http proxy header: "dnt: 1" 2014/02/18 15:46:12 [debug] 12839#0: *2 http proxy header: "GET /index.cfm HTTP/1.0 Host: www.example.org X-Forwarded-Host: www.example.org X-Forwarded-Server: www.example.org X-Forwarded-For: 192.168.76.46 X-Real-IP: 192.168.76.46 Connection: close user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0 cache-control: max-age=0 accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=5ln6ihdqossy18j3j9ouz680c dnt: 1 " 2014/02/18 15:46:12 [debug] 12839#0: *2 http cleanup add: 00000000025A6950 2014/02/18 15:46:12 [debug] 12839#0: *2 get rr peer, try: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 socket 17 2014/02/18 15:46:12 [debug] 12839#0: *2 epoll add connection: fd:17 ev:80002005 2014/02/18 15:46:12 [debug] 12839#0: *2 connect to 127.0.0.1:8443, fd:17 #3 2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream connect: -2 2014/02/18 15:46:12 [debug] 12839#0: *2 posix_memalign: 000000000252BB20:128 @16 2014/02/18 15:46:12 [debug] 12839#0: *2 event timer add: 17: 60000:1392734832690 2014/02/18 
15:46:12 [debug] 12839#0: *2 http finalize request: -4, "/index.cfm?" a:1, c:3 2014/02/18 15:46:12 [debug] 12839#0: *2 http request count:3 blk:0 2014/02/18 15:46:12 [debug] 12839#0: *2 http finalize request: -4, "/index.cfm?" a:1, c:2 2014/02/18 15:46:12 [debug] 12839#0: *2 http request count:2 blk:0 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy process frame head:80030009 f:0 l:8 2014/02/18 15:46:12 [debug] 12839#0: *2 spdy WINDOW_UPDATE sid:1 delta:268369920 2014/02/18 15:46:12 [debug] 12839#0: *2 event timer del: 15: 1392734952688 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC6901B0 2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC6901B0 2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream request: "/index.cfm?" 2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream send request handler 2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025328D0:64 2014/02/18 15:46:12 [debug] 12839#0: *2 set session: 0000000000000000:0 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_do_handshake: -1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC9D11B0 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC6901B0 2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC6901B0 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL handshake handler: 1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_do_handshake: -1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2 2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC9D11B0 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL handshake handler: 0 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_do_handshake: -1 2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC9D11B0 2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC6901B0 2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC6901B0 2014/02/18 15:46:12 [debug] 
12839#0: *2 SSL handshake handler: 1
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_do_handshake: 1
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL: TLSv1.2, cipher: "ECDHE-RSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=RSA Enc=3DES(168) Mac=SHA1"
2014/02/18 15:46:12 [debug] 12839#0: *2 save session: 000000000252D4A0:2
2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream send request
2014/02/18 15:46:12 [debug] 12839#0: *2 chain writer buf fl:1 s:597
2014/02/18 15:46:12 [debug] 12839#0: *2 chain writer in: 00000000025A6988
2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025CBB30:80
2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025B81A0:16384
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL buf copy: 597
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL to write: 597
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_write: 597
2014/02/18 15:46:12 [debug] 12839#0: *2 chain writer out: 0000000000000000
2014/02/18 15:46:12 [debug] 12839#0: *2 event timer del: 17: 1392734832690
2014/02/18 15:46:12 [debug] 12839#0: *2 event timer add: 17: 900000:1392735672713
2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream process header
2014/02/18 15:46:12 [debug] 12839#0: *2 malloc: 00000000025BC1B0:4096
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_read: -1
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2
2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC9D11B0
2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream request: "/index.cfm?"
2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream process header
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_read: -1
2014/02/18 15:46:12 [debug] 12839#0: *2 SSL_get_error: 2
2014/02/18 15:46:12 [debug] 12839#0: *2 post event 00007F29CC6901B0
2014/02/18 15:46:12 [debug] 12839#0: *2 delete posted event 00007F29CC6901B0
2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream request: "/index.cfm?"
2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream send request handler
2014/02/18 15:46:12 [debug] 12839#0: *2 http upstream send request
2014/02/18 15:46:12 [debug] 12839#0: *2 chain writer in: 0000000000000000
2014/02/18 15:46:12 [debug] 12839#0: *2 event timer: 17, old: 1392735672713, new: 1392735672713
2014/02/18 15:46:13 [debug] 12839#0: *2 post event 00007F29CC9D11B0
2014/02/18 15:46:13 [debug] 12839#0: *2 post event 00007F29CC6901B0
2014/02/18 15:46:13 [debug] 12839#0: *2 delete posted event 00007F29CC6901B0
2014/02/18 15:46:13 [debug] 12839#0: *2 http upstream request: "/index.cfm?"
2014/02/18 15:46:13 [debug] 12839#0: *2 http upstream dummy handler
2014/02/18 15:46:13 [debug] 12839#0: *2 delete posted event 00007F29CC9D11B0
2014/02/18 15:46:13 [debug] 12839#0: *2 http upstream request: "/index.cfm?"
2014/02/18 15:46:13 [debug] 12839#0: *2 http upstream process header
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_read: 110
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_read: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_get_error: 6
2014/02/18 15:46:13 [debug] 12839#0: *2 peer shutdown SSL cleanly
2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy status 200 "200 OK"
2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header: "Content-Type: text/html; charset=UTF-8"
2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header: "Content-Length: 0"
2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header: "Server: Jetty(9.0.6.v20130930)"
2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header done
2014/02/18 15:46:13 [debug] 12839#0: *2 xslt filter header
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy header filter
2014/02/18 15:46:13 [debug] 12839#0: *2 malloc: 00000000025CC7E0:228
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy deflate out: ni:00000000025CC8B1 no:00000000025B7685 ai:0 ao:26 rc:0
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:1 create SYN_REPLY frame 00000000025A6A88: len:229
2014/02/18 15:46:13 [debug] 12839#0: *2 http cleanup add: 00000000025A6AC8
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame out: 00000000025A6A88 sid:1 prio:2 bl:1 len:229
2014/02/18 15:46:13 [debug] 12839#0: *2 malloc: 00000000025CE6C0:16384
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL buf copy: 237
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:1 SYN_REPLY frame 00000000025A6A88 was sent
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame sent: 00000000025A6A88 sid:1 bl:1 len:229
2014/02/18 15:46:13 [debug] 12839#0: *2 http cacheable: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy filter init s:200 h:0 c:0 l:0
2014/02/18 15:46:13 [debug] 12839#0: *2 http upstream process upstream
2014/02/18 15:46:13 [debug] 12839#0: *2 pipe read upstream: 1
2014/02/18 15:46:13 [debug] 12839#0: *2 pipe preread: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 pipe recv chain: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 pipe buf free s:0 t:1 f:0 00000000025BC1B0, pos 00000000025BC21E, size: 0 file: 0, size: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 pipe length: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 pipe write downstream: 1
2014/02/18 15:46:13 [debug] 12839#0: *2 pipe write downstream done
2014/02/18 15:46:13 [debug] 12839#0: *2 event timer del: 17: 1392735672713
2014/02/18 15:46:13 [debug] 12839#0: *2 event timer add: 17: 900000:1392735673435
2014/02/18 15:46:13 [debug] 12839#0: *2 http upstream exit: 0000000000000000
2014/02/18 15:46:13 [debug] 12839#0: *2 finalize http upstream request: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 finalize http proxy request
2014/02/18 15:46:13 [debug] 12839#0: *2 free rr peer 1 0
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_shutdown: 1
2014/02/18 15:46:13 [debug] 12839#0: *2 close http upstream connection: 17
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025B81A0
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025CBB30
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025328D0
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 000000000252BB20, unused: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 event timer del:
17: 1392735673435
2014/02/18 15:46:13 [debug] 12839#0: *2 reusable connection: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 http upstream temp fd: -1
2014/02/18 15:46:13 [debug] 12839#0: *2 http output filter "/index.cfm?"
2014/02/18 15:46:13 [debug] 12839#0: *2 http copy filter: "/index.cfm?"
2014/02/18 15:46:13 [debug] 12839#0: *2 image filter
2014/02/18 15:46:13 [debug] 12839#0: *2 xslt filter body
2014/02/18 15:46:13 [debug] 12839#0: *2 http postpone filter "/index.cfm?" 00007FFF05E55010
2014/02/18 15:46:13 [debug] 12839#0: *2 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 http write filter: l:1 f:0 s:0
2014/02/18 15:46:13 [debug] 12839#0: *2 http write filter limit 0
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:1 create DATA frame 00000000025A6A88: len:0 flags:1
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame out: 00000000025A6A88 sid:1 prio:2 bl:0 len:0
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL buf copy: 8
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL to write: 245
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_write: 245
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:1 DATA frame 00000000025A6A88 was sent
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame sent: 00000000025A6A88 sid:1 bl:0 len:0
2014/02/18 15:46:13 [debug] 12839#0: *2 http write filter 0000000000000000
2014/02/18 15:46:13 [debug] 12839#0: *2 http copy filter: 0 "/index.cfm?"
2014/02/18 15:46:13 [debug] 12839#0: *2 http finalize request: 0, "/index.cfm?"
a:1, c:1
2014/02/18 15:46:13 [debug] 12839#0: *2 http request count:1 blk:0
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy close stream 1, queued 0, processing 1
2014/02/18 15:46:13 [debug] 12839#0: *2 http close request
2014/02/18 15:46:13 [debug] 12839#0: *2 http log handler
2014/02/18 15:46:13 [debug] 12839#0: *2 run cleanup: 00000000025A61D0
2014/02/18 15:46:13 [debug] 12839#0: *2 close cached open file: /usr/share/nginx/html/example.org/index.cfm, fd:16, c:0, u:1, 0
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025BC1B0
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025A4B00, unused: 4
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025A5B10, unused: 8
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025B7190, unused: 2272
2014/02/18 15:46:13 [debug] 12839#0: *2 post event 00007F29CC9D1148
2014/02/18 15:46:13 [debug] 12839#0: *2 delete posted event 00007F29CC9D1148
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025A3AF0, unused: 3272
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025CE6C0
2014/02/18 15:46:13 [debug] 12839#0: *2 reusable connection: 1
2014/02/18 15:46:13 [debug] 12839#0: *2 event timer add: 15: 180000:1392734953435
2014/02/18 15:46:13 [debug] 12839#0: *2 post event 00007F29CC9D1148
2014/02/18 15:46:13 [debug] 12839#0: *2 delete posted event 00007F29CC9D1148
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy keepalive handler
2014/02/18 15:46:13 [debug] 12839#0: *2 reusable connection: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 posix_memalign: 00000000025C98B0:4096 @16
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy read handler
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_read: 543
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_read: -1
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_get_error: 2
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy process frame head:80030001 f:1 l:519
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy SYN_STREAM frame sid:3 prio:3
2014/02/18 15:46:13 [debug] 12839#0: *2 posix_memalign:
00000000025CA8C0:4096 @16
2014/02/18 15:46:13 [debug] 12839#0: *2 posix_memalign: 00000000025A3AF0:4096 @16
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy process HEADERS 509 of 509
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy inflate out: ni:00007F29CC64F21F no:00000000025A3D03 ai:0 ao:524 rc:0
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy HEADERS block consists of 10 entries
2014/02/18 15:46:13 [debug] 12839#0: *2 http uri: "/favicon.ico"
2014/02/18 15:46:13 [debug] 12839#0: *2 http args: ""
2014/02/18 15:46:13 [debug] 12839#0: *2 http exten: "ico"
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy http request line: "GET /favicon.ico HTTP/1.1"
2014/02/18 15:46:13 [debug] 12839#0: *2 http header: "host: www.example.org"
2014/02/18 15:46:13 [debug] 12839#0: *2 http header: "user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0"
2014/02/18 15:46:13 [debug] 12839#0: *2 http header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
2014/02/18 15:46:13 [debug] 12839#0: *2 http header: "accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3"
2014/02/18 15:46:13 [debug] 12839#0: *2 http header: "cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=5ln6ihdqossy18j3j9ouz680c"
2014/02/18 15:46:13 [debug] 12839#0: *2 http header: "dnt: 1"
2014/02/18 15:46:13 [debug] 12839#0: *2 generic phase: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 rewrite phase: 1
2014/02/18 15:46:13 [debug] 12839#0: *2 http script regex: "^/\+"
2014/02/18 15:46:13 [notice] 12839#0: *2 "^/\+" does not match "/favicon.ico", client: 192.168.76.46, server: www.example.org, request: "GET /favicon.ico HTTP/1.1", host: "www.example.org"
2014/02/18 15:46:13 [debug] 12839#0: *2 http script var
2014/02/18 15:46:13 [debug] 12839#0: *2 http script var: "GET"
2014/02/18 15:46:13 [debug] 12839#0: *2 http script regex: "^(GET|HEAD|POST)$"
2014/02/18 15:46:13 [notice] 12839#0: *2 "^(GET|HEAD|POST)$" matches "GET", client:
192.168.76.46, server: www.example.org, request: "GET /favicon.ico HTTP/1.1", host: "www.example.org"
2014/02/18 15:46:13 [debug] 12839#0: *2 http script if
2014/02/18 15:46:13 [debug] 12839#0: *2 http script if: false
2014/02/18 15:46:13 [debug] 12839#0: *2 test location: "/"
2014/02/18 15:46:13 [debug] 12839#0: *2 test location: "core/"
2014/02/18 15:46:13 [debug] 12839#0: *2 test location: "includes/"
2014/02/18 15:46:13 [debug] 12839#0: *2 test location: "favicon.ico"
2014/02/18 15:46:13 [debug] 12839#0: *2 using configuration "=/favicon.ico"
2014/02/18 15:46:13 [debug] 12839#0: *2 http cl:-1 max:1048576
2014/02/18 15:46:13 [debug] 12839#0: *2 rewrite phase: 3
2014/02/18 15:46:13 [debug] 12839#0: *2 post rewrite phase: 4
2014/02/18 15:46:13 [debug] 12839#0: *2 generic phase: 5
2014/02/18 15:46:13 [debug] 12839#0: *2 generic phase: 6
2014/02/18 15:46:13 [debug] 12839#0: *2 generic phase: 7
2014/02/18 15:46:13 [debug] 12839#0: *2 generic phase: 8
2014/02/18 15:46:13 [debug] 12839#0: *2 access phase: 9
2014/02/18 15:46:13 [debug] 12839#0: *2 access phase: 10
2014/02/18 15:46:13 [debug] 12839#0: *2 post access phase: 11
2014/02/18 15:46:13 [debug] 12839#0: *2 try files phase: 12
2014/02/18 15:46:13 [debug] 12839#0: *2 content phase: 13
2014/02/18 15:46:13 [debug] 12839#0: *2 content phase: 14
2014/02/18 15:46:13 [debug] 12839#0: *2 content phase: 15
2014/02/18 15:46:13 [debug] 12839#0: *2 content phase: 16
2014/02/18 15:46:13 [debug] 12839#0: *2 content phase: 17
2014/02/18 15:46:13 [debug] 12839#0: *2 content phase: 18
2014/02/18 15:46:13 [debug] 12839#0: *2 http filename: "/usr/share/nginx/html/example.org/favicon.ico"
2014/02/18 15:46:13 [debug] 12839#0: *2 add cleanup: 00000000025CB828
2014/02/18 15:46:13 [debug] 12839#0: *2 malloc: 00000000024A8F00:144
2014/02/18 15:46:13 [debug] 12839#0: *2 malloc: 00000000024B2120:53
2014/02/18 15:46:13 [debug] 12839#0: *2 cached open file: /usr/share/nginx/html/example.org/favicon.ico, fd:16, c:1, e:0, u:1
2014/02/18
15:46:13 [debug] 12839#0: *2 http static fd: 16
2014/02/18 15:46:13 [debug] 12839#0: *2 xslt filter header
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy header filter
2014/02/18 15:46:13 [debug] 12839#0: *2 malloc: 000000000252AE40:315
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy deflate out: ni:000000000252AF6B no:00000000025A4431 ai:0 ao:44 rc:0
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:3 create SYN_REPLY frame 00000000025A4460: len:313
2014/02/18 15:46:13 [debug] 12839#0: *2 http cleanup add: 00000000025CB8A8
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame out: 00000000025A4460 sid:3 prio:3 bl:1 len:313
2014/02/18 15:46:13 [debug] 12839#0: *2 malloc: 00000000025B7190:16384
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL buf copy: 321
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:3 SYN_REPLY frame 00000000025A4460 was sent
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame sent: 00000000025A4460 sid:3 bl:1 len:313
2014/02/18 15:46:13 [debug] 12839#0: *2 http output filter "/favicon.ico?"
2014/02/18 15:46:13 [debug] 12839#0: *2 http copy filter: "/favicon.ico?"
2014/02/18 15:46:13 [debug] 12839#0: *2 posix_memalign: 00000000025A4B00:4096 @16
2014/02/18 15:46:13 [debug] 12839#0: *2 read: 16, 00000000025A4B20, 3262, 0
2014/02/18 15:46:13 [debug] 12839#0: *2 image filter
2014/02/18 15:46:13 [debug] 12839#0: *2 xslt filter body
2014/02/18 15:46:13 [debug] 12839#0: *2 http postpone filter "/favicon.ico?"
00000000025A4560
2014/02/18 15:46:13 [debug] 12839#0: *2 write new buf t:1 f:1 00000000025A4B20, pos 00000000025A4B20, size: 3262 file: 0, size: 3262
2014/02/18 15:46:13 [debug] 12839#0: *2 http write filter: l:1 f:0 s:3262
2014/02/18 15:46:13 [debug] 12839#0: *2 http write filter limit 0
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:3 create DATA frame 00000000025A4460: len:3262 flags:1
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame out: 00000000025A4460 sid:3 prio:3 bl:0 len:3262
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL buf copy: 8
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL buf copy: 3262
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL to write: 3591
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_write: 3591
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy:3 DATA frame 00000000025A4460 was sent
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame sent: 00000000025A4460 sid:3 bl:0 len:3262
2014/02/18 15:46:13 [debug] 12839#0: *2 http write filter 0000000000000000
2014/02/18 15:46:13 [debug] 12839#0: *2 http copy filter: 0 "/favicon.ico?"
2014/02/18 15:46:13 [debug] 12839#0: *2 http finalize request: 0, "/favicon.ico?"
a:1, c:1
2014/02/18 15:46:13 [debug] 12839#0: *2 http request count:1 blk:0
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy close stream 3, queued 0, processing 1
2014/02/18 15:46:13 [debug] 12839#0: *2 http close request
2014/02/18 15:46:13 [debug] 12839#0: *2 http log handler
2014/02/18 15:46:13 [debug] 12839#0: *2 run cleanup: 00000000025CB828
2014/02/18 15:46:13 [debug] 12839#0: *2 close cached open file: /usr/share/nginx/html/example.org/favicon.ico, fd:16, c:0, u:1, 0
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025CA8C0, unused: 0
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025A3AF0, unused: 1272
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025A4B00, unused: 802
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy process frame head:80030009 f:0 l:8
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy WINDOW_UPDATE sid:3 delta:268369920
2014/02/18 15:46:13 [info] 12839#0: *2 client sent WINDOW_UPDATE frame for unknown stream 3 while processing SPDY, client: 192.168.76.46, server: 192.168.89.175:443
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy write RST_STREAM sid:3 st:2
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame out: 00000000025C9BE8 sid:0 prio:7 bl:0 len:8
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL buf copy: 16
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL to write: 16
2014/02/18 15:46:13 [debug] 12839#0: *2 SSL_write: 16
2014/02/18 15:46:13 [debug] 12839#0: *2 spdy frame sent: 00000000025C9BE8 sid:0 bl:0 len:8
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025C98B0, unused: 3096
2014/02/18 15:46:13 [debug] 12839#0: *2 free: 00000000025B7190
2014/02/18 15:46:13 [debug] 12839#0: *2 reusable connection: 1
2014/02/18 15:46:13 [debug] 12839#0: *2 event timer: 15, old: 1392734953435, new: 1392734953471

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247695#msg-247695

From nginx-forum at nginx.us Tue Feb 18 14:50:10 2014
From: nginx-forum at nginx.us (p.heppler)
Date: Tue, 18 Feb 2014 09:50:10 -0500
Subject: Issue with spdy
and proxy_pass
In-Reply-To: <9ed528bff97e623230a7d8604e508e8b.NginxMailingListEnglish@forum.nginx.org>
References: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org> <9ed528bff97e623230a7d8604e508e8b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <26d8231ba7eb223c4b45b37e20cf3c78.NginxMailingListEnglish@forum.nginx.org>

In the meantime I have replaced Tomcat with Jetty, since Jetty also supports SPDY, but the result is the same.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247696#msg-247696

From nginx-forum at nginx.us Tue Feb 18 14:54:40 2014
From: nginx-forum at nginx.us (p.heppler)
Date: Tue, 18 Feb 2014 09:54:40 -0500
Subject: Issue with spdy and proxy_pass
In-Reply-To: <26d8231ba7eb223c4b45b37e20cf3c78.NginxMailingListEnglish@forum.nginx.org>
References: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org> <9ed528bff97e623230a7d8604e508e8b.NginxMailingListEnglish@forum.nginx.org> <26d8231ba7eb223c4b45b37e20cf3c78.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Another run with Tomcat, fresh log:

2014/02/18 15:58:53 [debug] 12993#0: epoll add event: fd:9 op:1 ev:00002001
2014/02/18 15:58:57 [debug] 12993#0: post event 00007F19759D4078
2014/02/18 15:58:57 [debug] 12993#0: delete posted event 00007F19759D4078
2014/02/18 15:58:57 [debug] 12993#0: accept on 192.168.89.175:443, ready: 1
2014/02/18 15:58:57 [debug] 12993#0: posix_memalign: 0000000001CF4860:256 @16
2014/02/18 15:58:57 [debug] 12993#0: *1 accept: 192.168.76.46 fd:15
2014/02/18 15:58:57 [debug] 12993#0: posix_memalign: 0000000001CF4300:256 @16
2014/02/18 15:58:57 [debug] 12993#0: *1 event timer add: 15: 60000:1392735597357
2014/02/18 15:58:57 [debug] 12993#0: *1 reusable connection: 1
2014/02/18 15:58:57 [debug] 12993#0: *1 epoll add event: fd:15 op:1 ev:80002001
2014/02/18 15:58:57 [debug] 12993#0: accept() not ready (11: Resource temporarily unavailable)
2014/02/18 15:58:57 [debug] 12993#0: *1 post event 00007F19759D4148
2014/02/18 15:58:57
[debug] 12993#0: *1 delete posted event 00007F19759D4148
2014/02/18 15:58:57 [debug] 12993#0: *1 http check ssl handshake
2014/02/18 15:58:57 [debug] 12993#0: *1 http recv(): 1
2014/02/18 15:58:57 [debug] 12993#0: *1 https ssl handshake: 0x16
2014/02/18 15:58:57 [debug] 12993#0: *1 ssl get session: C26B2FD1:32
2014/02/18 15:58:57 [debug] 12993#0: *1 SSL server name: "www.example.org"
2014/02/18 15:58:57 [debug] 12993#0: *1 SSL_do_handshake: 0
2014/02/18 15:58:57 [debug] 12993#0: *1 SSL_get_error: 5
2014/02/18 15:58:57 [info] 12993#0: *1 peer closed connection in SSL handshake while SSL handshaking, client: 192.168.76.46, server: 192.168.89.175:443
2014/02/18 15:58:57 [debug] 12993#0: *1 close http connection: 15
2014/02/18 15:58:57 [debug] 12993#0: *1 SSL_shutdown: 1
2014/02/18 15:58:57 [debug] 12993#0: *1 event timer del: 15: 1392735597357
2014/02/18 15:58:57 [debug] 12993#0: *1 reusable connection: 0
2014/02/18 15:58:57 [debug] 12993#0: *1 free: 0000000001CF4860, unused: 16
2014/02/18 15:58:57 [debug] 12993#0: *1 free: 0000000001CF4300, unused: 114
2014/02/18 15:58:57 [debug] 12993#0: post event 00007F19759D4078
2014/02/18 15:58:57 [debug] 12993#0: delete posted event 00007F19759D4078
2014/02/18 15:58:57 [debug] 12993#0: accept on 192.168.89.175:443, ready: 1
2014/02/18 15:58:57 [debug] 12993#0: posix_memalign: 0000000001C56F00:256 @16
2014/02/18 15:58:57 [debug] 12993#0: *2 accept: 192.168.76.46 fd:15
2014/02/18 15:58:57 [debug] 12993#0: posix_memalign: 0000000001CD8F70:256 @16
2014/02/18 15:58:57 [debug] 12993#0: *2 event timer add: 15: 60000:1392735597365
2014/02/18 15:58:57 [debug] 12993#0: *2 reusable connection: 1
2014/02/18 15:58:57 [debug] 12993#0: *2 epoll add event: fd:15 op:1 ev:80002001
2014/02/18 15:58:57 [debug] 12993#0: accept() not ready (11: Resource temporarily unavailable)
2014/02/18 15:58:57 [debug] 12993#0: *2 post event 00007F19759D4148
2014/02/18 15:58:57 [debug] 12993#0: *2 delete posted event 00007F19759D4148
2014/02/18 15:58:57 [debug]
12993#0: *2 http check ssl handshake
2014/02/18 15:58:57 [debug] 12993#0: *2 http recv(): 1
2014/02/18 15:58:57 [debug] 12993#0: *2 https ssl handshake: 0x16
2014/02/18 15:58:57 [debug] 12993#0: *2 ssl get session: C26B2FD1:32
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL server name: "www.example.org"
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL NPN advertised
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_do_handshake: -1
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_get_error: 2
2014/02/18 15:58:57 [debug] 12993#0: *2 reusable connection: 0
2014/02/18 15:58:57 [debug] 12993#0: *2 post event 00007F19759D4148
2014/02/18 15:58:57 [debug] 12993#0: *2 delete posted event 00007F19759D4148
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL handshake handler: 0
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_do_handshake: 1
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD"
2014/02/18 15:58:57 [debug] 12993#0: *2 init spdy request
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001C5FB10:424
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001D451A0:9552
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001C63930:5928
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001CD9260:4096
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001D47700:4096
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001D4EAD0:4096
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001D4FAE0:4096
2014/02/18 15:58:57 [debug] 12993#0: *2 posix_memalign: 0000000001D50AF0:4096 @16
2014/02/18 15:58:57 [debug] 12993#0: *2 posix_memalign: 0000000001CF4300:256 @16
2014/02/18 15:58:57 [debug] 12993#0: *2 add cleanup: 0000000001CF4320
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy create SETTINGS frame
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy write WINDOW_UPDATE sid:0 delta:2147418111
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy read handler
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_read: -1
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_get_error: 2
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy frame out: 0000000001D50D00 sid:0 prio:0 bl:0 len:8
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy frame out: 0000000001D50C40 sid:0 prio:0 bl:0 len:20
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001D5C180:16384
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL buf copy: 28
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL buf copy: 16
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL to write: 44
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_write: 44
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy frame sent: 0000000001D50C40 sid:0 bl:0 len:20
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy frame sent: 0000000001D50D00 sid:0 bl:0 len:8
2014/02/18 15:58:57 [debug] 12993#0: *2 free: 0000000001D50AF0, unused: 3392
2014/02/18 15:58:57 [debug] 12993#0: *2 free: 0000000001D5C180
2014/02/18 15:58:57 [debug] 12993#0: *2 reusable connection: 1
2014/02/18 15:58:57 [debug] 12993#0: *2 event timer del: 15: 1392735597365
2014/02/18 15:58:57 [debug] 12993#0: *2 event timer add: 15: 180000:1392735717380
2014/02/18 15:58:57 [debug] 12993#0: *2 post event 00007F19759D4148
2014/02/18 15:58:57 [debug] 12993#0: *2 delete posted event 00007F19759D4148
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy keepalive handler
2014/02/18 15:58:57 [debug] 12993#0: *2 reusable connection: 0
2014/02/18 15:58:57 [debug] 12993#0: *2 posix_memalign: 0000000001D50AF0:4096 @16
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy read handler
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_read: 36
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_read: 568
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_read: -1
2014/02/18 15:58:57 [debug] 12993#0: *2 SSL_get_error: 2
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy process frame head:80030004 f:0 l:12
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy SETTINGS frame consists of 1 entries
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy SETTINGS entry fl:0 id:7 val:65536
2014/02/18 15:58:57 [debug] 12993#0:
*2 spdy process frame head:80030009 f:0 l:8
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy WINDOW_UPDATE sid:0 delta:268369920
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy process frame head:80030001 f:1 l:544
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy SYN_STREAM frame sid:1 prio:2
2014/02/18 15:58:57 [debug] 12993#0: *2 posix_memalign: 0000000001D51B00:4096 @16
2014/02/18 15:58:57 [debug] 12993#0: *2 posix_memalign: 0000000001D52B10:4096 @16
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy process HEADERS 534 of 534
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001D5C180:32768
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy inflateSetDictionary(): 0
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy inflate out: ni:00007F197565225C no:0000000001D52D36 ai:0 ao:505 rc:0
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy HEADERS block consists of 11 entries
2014/02/18 15:58:57 [debug] 12993#0: *2 http uri: "/"
2014/02/18 15:58:57 [debug] 12993#0: *2 http args: ""
2014/02/18 15:58:57 [debug] 12993#0: *2 http exten: ""
2014/02/18 15:58:57 [debug] 12993#0: *2 spdy http request line: "GET / HTTP/1.1"
2014/02/18 15:58:57 [debug] 12993#0: *2 http header: "host: www.example.org"
2014/02/18 15:58:57 [debug] 12993#0: *2 http header: "user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0"
2014/02/18 15:58:57 [debug] 12993#0: *2 http header: "cache-control: max-age=0"
2014/02/18 15:58:57 [debug] 12993#0: *2 http header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
2014/02/18 15:58:57 [debug] 12993#0: *2 http header: "accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3"
2014/02/18 15:58:57 [debug] 12993#0: *2 http header: "cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=5ln6ihdqossy18j3j9ouz680c"
2014/02/18 15:58:57 [debug] 12993#0: *2 http header: "dnt: 1"
2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 0
2014/02/18 15:58:57 [debug] 12993#0: *2 rewrite phase: 1
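For readers following the trace, the head: values nginx prints (head:80030001, head:80030004, head:80030009) are the first 32 bits of each SPDY/3 control frame: a control bit, a 15-bit protocol version, and a 16-bit frame type. A minimal decoder, sketched from the SPDY/3 draft layout rather than taken from this thread:

```python
# Decode the "head:XXXXXXXX" values from nginx's SPDY debug log.
# Per the SPDY/3 draft, a control frame begins with a 32-bit word:
#   bit 31      control flag (1 = control frame, 0 = data frame)
#   bits 16-30  protocol version
#   bits 0-15   frame type
FRAME_TYPES = {
    1: "SYN_STREAM", 2: "SYN_REPLY", 3: "RST_STREAM", 4: "SETTINGS",
    6: "PING", 7: "GOAWAY", 8: "HEADERS", 9: "WINDOW_UPDATE",
}

def decode_head(head):
    is_control = bool(head >> 31)
    version = (head >> 16) & 0x7FFF
    name = FRAME_TYPES.get(head & 0xFFFF, "UNKNOWN")
    return is_control, version, name

# The three frame heads seen in this log:
for head in (0x80030001, 0x80030004, 0x80030009):
    print("head:%08X -> %s" % (head, decode_head(head)))
```

So head:80030009 decodes to a SPDY/3 WINDOW_UPDATE, head:80030001 to a SYN_STREAM, and head:80030004 to a SETTINGS frame, matching the names nginx prints on the following lines.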
2014/02/18 15:58:57 [debug] 12993#0: *2 http script regex: "^/\+"
2014/02/18 15:58:57 [notice] 12993#0: *2 "^/\+" does not match "/", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org"
2014/02/18 15:58:57 [debug] 12993#0: *2 http script var
2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "GET"
2014/02/18 15:58:57 [debug] 12993#0: *2 http script regex: "^(GET|HEAD|POST)$"
2014/02/18 15:58:57 [notice] 12993#0: *2 "^(GET|HEAD|POST)$" matches "GET", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org"
2014/02/18 15:58:57 [debug] 12993#0: *2 http script if
2014/02/18 15:58:57 [debug] 12993#0: *2 http script if: false
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: "/"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.(?:png|jpe?g)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.php$"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.(cfm|cfml|cfc|jsp|cfr)(.*)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "/\."
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.(?:eot|ttf|woff|otf|svg)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.(?:css|js|png|jpe?g|ico|gif)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.(aspx|asp|jsp|cgi)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 using configuration "/"
2014/02/18 15:58:57 [debug] 12993#0: *2 http cl:-1 max:1048576
2014/02/18 15:58:57 [debug] 12993#0: *2 rewrite phase: 3
2014/02/18 15:58:57 [debug] 12993#0: *2 post rewrite phase: 4
2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 5
2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 6
2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 7
2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 8
2014/02/18 15:58:57 [debug] 12993#0: *2 access phase: 9
2014/02/18 15:58:57 [debug] 12993#0: *2 access phase: 10
2014/02/18 15:58:57 [debug] 12993#0: *2 post access phase: 11
2014/02/18 15:58:57 [debug] 12993#0: *2 try files phase: 12
2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "/"
2014/02/18 15:58:57 [debug] 12993#0: *2 trying to use file: "/" "/usr/share/nginx/html/example.org/"
2014/02/18 15:58:57 [debug] 12993#0: *2 add cleanup: 0000000001D52A58
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001CF4A60:144
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001CDA7D0:42
2014/02/18 15:58:57 [debug] 12993#0: *2 cached open file: /usr/share/nginx/html/example.org/, fd:-1, c:0, e:0, u:1
2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "/"
2014/02/18 15:58:57 [debug] 12993#0: *2 trying to use dir: "/" "/usr/share/nginx/html/example.org/"
2014/02/18 15:58:57 [debug] 12993#0: *2 add cleanup: 0000000001D52A90
2014/02/18 15:58:57 [debug] 12993#0: *2 cached open file: /usr/share/nginx/html/example.org/, fd:-1, c:0, e:0, u:2
2014/02/18 15:58:57 [debug] 12993#0: *2 try file uri: "/"
2014/02/18 15:58:57 [debug] 12993#0: *2 content phase: 13
2014/02/18 15:58:57 [debug] 12993#0: *2 content phase: 14
2014/02/18 15:58:57 [debug]
12993#0: *2 open index "/usr/share/nginx/html/example.org/index.cfm"
2014/02/18 15:58:57 [debug] 12993#0: *2 add cleanup: 0000000001D531D0
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001CDF7F0:144
2014/02/18 15:58:57 [debug] 12993#0: *2 malloc: 0000000001CF4CA0:51
2014/02/18 15:58:57 [debug] 12993#0: *2 cached open file: /usr/share/nginx/html/example.org/index.cfm, fd:16, c:1, e:0, u:1
2014/02/18 15:58:57 [debug] 12993#0: *2 internal redirect: "/index.cfm?"
2014/02/18 15:58:57 [debug] 12993#0: *2 rewrite phase: 1
2014/02/18 15:58:57 [debug] 12993#0: *2 http script regex: "^/\+"
2014/02/18 15:58:57 [notice] 12993#0: *2 "^/\+" does not match "/index.cfm", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org"
2014/02/18 15:58:57 [debug] 12993#0: *2 http script var
2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "GET"
2014/02/18 15:58:57 [debug] 12993#0: *2 http script regex: "^(GET|HEAD|POST)$"
2014/02/18 15:58:57 [notice] 12993#0: *2 "^(GET|HEAD|POST)$" matches "GET", client: 192.168.76.46, server: www.example.org, request: "GET / HTTP/1.1", host: "www.example.org"
2014/02/18 15:58:57 [debug] 12993#0: *2 http script if
2014/02/18 15:58:57 [debug] 12993#0: *2 http script if: false
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: "/"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: "core/"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: "includes/"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: "templates/"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: "robots.txt"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.(?:png|jpe?g)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.php$"
2014/02/18 15:58:57 [debug] 12993#0: *2 test location: ~ "\.(cfm|cfml|cfc|jsp|cfr)(.*)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 using configuration "\.(cfm|cfml|cfc|jsp|cfr)(.*)$"
2014/02/18 15:58:57 [debug] 12993#0: *2 http cl:-1 max:1048576
2014/02/18 15:58:57
[debug] 12993#0: *2 rewrite phase: 3 2014/02/18 15:58:57 [debug] 12993#0: *2 post rewrite phase: 4 2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 5 2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 6 2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 7 2014/02/18 15:58:57 [debug] 12993#0: *2 generic phase: 8 2014/02/18 15:58:57 [debug] 12993#0: *2 access phase: 9 2014/02/18 15:58:57 [debug] 12993#0: *2 access phase: 10 2014/02/18 15:58:57 [debug] 12993#0: *2 post access phase: 11 2014/02/18 15:58:57 [debug] 12993#0: *2 try files phase: 12 2014/02/18 15:58:57 [debug] 12993#0: *2 spdy read request body 2014/02/18 15:58:57 [debug] 12993#0: *2 http init upstream, client timer: 0 2014/02/18 15:58:57 [debug] 12993#0: *2 posix_memalign: 0000000001D64190:4096 @16 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: "Host: " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "www.example.org" 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: " " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: "X-Forwarded-Host: " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "www.example.org" 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: " " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: "X-Forwarded-Server: " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "www.example.org" 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: " " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: "X-Forwarded-For: " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "192.168.76.46" 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: " " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: "X-Real-IP: " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script var: "192.168.76.46" 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: " " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: "Connection: close " 2014/02/18 15:58:57 [debug] 12993#0: *2 http script copy: "" 2014/02/18 
15:58:57 [debug] 12993#0: *2 http script copy: "" 2014/02/18 15:58:57 [debug] 12993#0: *2 http proxy header: "user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" 2014/02/18 15:58:57 [debug] 12993#0: *2 http proxy header: "cache-control: max-age=0" 2014/02/18 15:58:57 [debug] 12993#0: *2 http proxy header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2014/02/18 15:58:57 [debug] 12993#0: *2 http proxy header: "accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" 2014/02/18 15:58:57 [debug] 12993#0: *2 http proxy header: "cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=5ln6ihdqossy18j3j9ouz680c" 2014/02/18 15:58:57 [debug] 12993#0: *2 http proxy header: "dnt: 1" 2014/02/18 15:58:57 [debug] 12993#0: *2 http proxy header: "GET /index.cfm HTTP/1.0 Host: www.example.org X-Forwarded-Host: www.example.org X-Forwarded-Server: www.example.org X-Forwarded-For: 192.168.76.46 X-Real-IP: 192.168.76.46 Connection: close user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0 cache-control: max-age=0 accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=5ln6ihdqossy18j3j9ouz680c dnt: 1 " 2014/02/18 15:58:57 [debug] 12993#0: *2 http cleanup add: 0000000001D53950 2014/02/18 15:58:57 [debug] 12993#0: *2 get rr peer, try: 1 2014/02/18 15:58:57 [debug] 12993#0: *2 socket 17 2014/02/18 15:58:57 [debug] 12993#0: *2 epoll add connection: fd:17 ev:80002005 2014/02/18 15:58:57 [debug] 12993#0: *2 connect to 127.0.0.1:8888, fd:17 #3 2014/02/18 15:58:57 [debug] 12993#0: *2 http upstream connect: -2 2014/02/18 15:58:57 [debug] 12993#0: *2 posix_memalign: 0000000001CD8B00:128 @16 2014/02/18 15:58:57 [debug] 12993#0: *2 event timer add: 17: 60000:1392735597388 2014/02/18 
15:58:57 [debug] 12993#0: *2 http finalize request: -4, "/index.cfm?" a:1, c:3 2014/02/18 15:58:57 [debug] 12993#0: *2 http request count:3 blk:0 2014/02/18 15:58:57 [debug] 12993#0: *2 http finalize request: -4, "/index.cfm?" a:1, c:2 2014/02/18 15:58:57 [debug] 12993#0: *2 http request count:2 blk:0 2014/02/18 15:58:57 [debug] 12993#0: *2 spdy process frame head:80030009 f:0 l:8 2014/02/18 15:58:57 [debug] 12993#0: *2 spdy WINDOW_UPDATE sid:1 delta:268369920 2014/02/18 15:58:57 [debug] 12993#0: *2 event timer del: 15: 1392735717380 2014/02/18 15:58:57 [debug] 12993#0: *2 post event 00007F19756931B0 2014/02/18 15:58:57 [debug] 12993#0: *2 delete posted event 00007F19756931B0 2014/02/18 15:58:57 [debug] 12993#0: *2 http upstream request: "/index.cfm?" 2014/02/18 15:58:57 [debug] 12993#0: *2 http upstream send request handler 2014/02/18 15:58:57 [debug] 12993#0: *2 http upstream send request 2014/02/18 15:58:57 [debug] 12993#0: *2 chain writer buf fl:1 s:597 2014/02/18 15:58:57 [debug] 12993#0: *2 chain writer in: 0000000001D53988 2014/02/18 15:58:57 [debug] 12993#0: *2 writev: 597 2014/02/18 15:58:57 [debug] 12993#0: *2 chain writer out: 0000000000000000 2014/02/18 15:58:57 [debug] 12993#0: *2 event timer del: 17: 1392735597388 2014/02/18 15:58:57 [debug] 12993#0: *2 event timer add: 17: 900000:1392736437394 2014/02/18 15:59:01 [debug] 12993#0: *2 post event 00007F19759D41B0 2014/02/18 15:59:01 [debug] 12993#0: *2 post event 00007F19756931B0 2014/02/18 15:59:01 [debug] 12993#0: *2 delete posted event 00007F19756931B0 2014/02/18 15:59:01 [debug] 12993#0: *2 http upstream request: "/index.cfm?" 2014/02/18 15:59:01 [debug] 12993#0: *2 http upstream dummy handler 2014/02/18 15:59:01 [debug] 12993#0: *2 delete posted event 00007F19759D41B0 2014/02/18 15:59:01 [debug] 12993#0: *2 http upstream request: "/index.cfm?" 
2014/02/18 15:59:01 [debug] 12993#0: *2 http upstream process header 2014/02/18 15:59:01 [debug] 12993#0: *2 malloc: 0000000001D651A0:4096 2014/02/18 15:59:01 [debug] 12993#0: *2 recv: fd:17 235 of 4096 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy status 200 "200 OK" 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy header: "Server: Apache-Coyote/1.1" 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy header: "Set-Cookie: JSESSIONID=396DC49FA6AF6BE7BDB1B46A00612E34; Path=/; HttpOnly" 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy header: "Content-Type: text/html;charset=UTF-8" 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy header: "Content-Length: 0" 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy header: "Date: Tue, 18 Feb 2014 14:59:01 GMT" 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy header: "Connection: close" 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy header done 2014/02/18 15:59:01 [debug] 12993#0: *2 xslt filter header 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy header filter 2014/02/18 15:59:01 [debug] 12993#0: *2 malloc: 0000000001CDA270:306 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy deflate out: ni:0000000001CDA38F no:0000000001D646D3 ai:0 ao:36 rc:0 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:1 create SYN_REPLY frame 0000000001D64708: len:307 2014/02/18 15:59:01 [debug] 12993#0: *2 http cleanup add: 0000000001D64748 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame out: 0000000001D64708 sid:1 prio:2 bl:1 len:307 2014/02/18 15:59:01 [debug] 12993#0: *2 malloc: 0000000001D661B0:16384 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL buf copy: 315 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:1 SYN_REPLY frame 0000000001D64708 was sent 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame sent: 0000000001D64708 sid:1 bl:1 len:307 2014/02/18 15:59:01 [debug] 12993#0: *2 http cacheable: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 http proxy filter init s:200 h:0 c:0 l:0 2014/02/18 15:59:01 [debug] 12993#0: *2 http upstream process 
upstream 2014/02/18 15:59:01 [debug] 12993#0: *2 pipe read upstream: 1 2014/02/18 15:59:01 [debug] 12993#0: *2 pipe preread: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 readv: 1:3861 2014/02/18 15:59:01 [debug] 12993#0: *2 readv() not ready (11: Resource temporarily unavailable) 2014/02/18 15:59:01 [debug] 12993#0: *2 pipe recv chain: -2 2014/02/18 15:59:01 [debug] 12993#0: *2 pipe buf free s:0 t:1 f:0 0000000001D651A0, pos 0000000001D6528B, size: 0 file: 0, size: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 pipe length: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 pipe write downstream: 1 2014/02/18 15:59:01 [debug] 12993#0: *2 pipe write downstream done 2014/02/18 15:59:01 [debug] 12993#0: *2 event timer del: 17: 1392736437394 2014/02/18 15:59:01 [debug] 12993#0: *2 event timer add: 17: 900000:1392736441385 2014/02/18 15:59:01 [debug] 12993#0: *2 http upstream exit: 0000000000000000 2014/02/18 15:59:01 [debug] 12993#0: *2 finalize http upstream request: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 finalize http proxy request 2014/02/18 15:59:01 [debug] 12993#0: *2 free rr peer 1 0 2014/02/18 15:59:01 [debug] 12993#0: *2 close http upstream connection: 17 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001CD8B00, unused: 48 2014/02/18 15:59:01 [debug] 12993#0: *2 event timer del: 17: 1392736441385 2014/02/18 15:59:01 [debug] 12993#0: *2 reusable connection: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 http upstream temp fd: -1 2014/02/18 15:59:01 [debug] 12993#0: *2 http output filter "/index.cfm?" 2014/02/18 15:59:01 [debug] 12993#0: *2 http copy filter: "/index.cfm?" 2014/02/18 15:59:01 [debug] 12993#0: *2 image filter 2014/02/18 15:59:01 [debug] 12993#0: *2 xslt filter body 2014/02/18 15:59:01 [debug] 12993#0: *2 http postpone filter "/index.cfm?" 
00007FFF0EE2CD10 2014/02/18 15:59:01 [debug] 12993#0: *2 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 http write filter: l:1 f:0 s:0 2014/02/18 15:59:01 [debug] 12993#0: *2 http write filter limit 0 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:1 create DATA frame 0000000001D64708: len:0 flags:1 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame out: 0000000001D64708 sid:1 prio:2 bl:0 len:0 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL buf copy: 8 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL to write: 323 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL_write: 323 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:1 DATA frame 0000000001D64708 was sent 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame sent: 0000000001D64708 sid:1 bl:0 len:0 2014/02/18 15:59:01 [debug] 12993#0: *2 http write filter 0000000000000000 2014/02/18 15:59:01 [debug] 12993#0: *2 http copy filter: 0 "/index.cfm?" 2014/02/18 15:59:01 [debug] 12993#0: *2 http finalize request: 0, "/index.cfm?" 
a:1, c:1 2014/02/18 15:59:01 [debug] 12993#0: *2 http request count:1 blk:0 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy close stream 1, queued 0, processing 1 2014/02/18 15:59:01 [debug] 12993#0: *2 http close request 2014/02/18 15:59:01 [debug] 12993#0: *2 http log handler 2014/02/18 15:59:01 [debug] 12993#0: *2 run cleanup: 0000000001D531D0 2014/02/18 15:59:01 [debug] 12993#0: *2 close cached open file: /usr/share/nginx/html/example.org/index.cfm, fd:16, c:0, u:1, 0 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D651A0 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D51B00, unused: 4 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D52B10, unused: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D64190, unused: 2048 2014/02/18 15:59:01 [debug] 12993#0: *2 post event 00007F19759D4148 2014/02/18 15:59:01 [debug] 12993#0: *2 delete posted event 00007F19759D4148 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D50AF0, unused: 3272 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D661B0 2014/02/18 15:59:01 [debug] 12993#0: *2 reusable connection: 1 2014/02/18 15:59:01 [debug] 12993#0: *2 event timer add: 15: 180000:1392735721385 2014/02/18 15:59:01 [debug] 12993#0: *2 post event 00007F19759D4148 2014/02/18 15:59:01 [debug] 12993#0: *2 delete posted event 00007F19759D4148 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy keepalive handler 2014/02/18 15:59:01 [debug] 12993#0: *2 reusable connection: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 posix_memalign: 0000000001D50AF0:4096 @16 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy read handler 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL_read: 550 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL_read: -1 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL_get_error: 2 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy process frame head:80030001 f:1 l:526 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy SYN_STREAM frame sid:3 prio:3 2014/02/18 15:59:01 [debug] 12993#0: *2 posix_memalign: 
0000000001D51B00:4096 @16 2014/02/18 15:59:01 [debug] 12993#0: *2 posix_memalign: 0000000001D52B10:4096 @16 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy process HEADERS 516 of 516 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy inflate out: ni:00007F1975652226 no:0000000001D52D2A ai:0 ao:517 rc:0 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy HEADERS block consists of 10 entries 2014/02/18 15:59:01 [debug] 12993#0: *2 http uri: "/favicon.ico" 2014/02/18 15:59:01 [debug] 12993#0: *2 http args: "" 2014/02/18 15:59:01 [debug] 12993#0: *2 http exten: "ico" 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy http request line: "GET /favicon.ico HTTP/1.1" 2014/02/18 15:59:01 [debug] 12993#0: *2 http header: "host: www.example.org" 2014/02/18 15:59:01 [debug] 12993#0: *2 http header: "user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" 2014/02/18 15:59:01 [debug] 12993#0: *2 http header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2014/02/18 15:59:01 [debug] 12993#0: *2 http header: "accept-language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" 2014/02/18 15:59:01 [debug] 12993#0: *2 http header: "cookie: cfid=b0fc4a14-0812-48b6-aea7-077c7b61cca7; cftoken=0; _ga=GA1.2.982493482.1392732645; JSESSIONID=396DC49FA6AF6BE7BDB1B46A00612E34" 2014/02/18 15:59:01 [debug] 12993#0: *2 http header: "dnt: 1" 2014/02/18 15:59:01 [debug] 12993#0: *2 generic phase: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 rewrite phase: 1 2014/02/18 15:59:01 [debug] 12993#0: *2 http script regex: "^/\+" 2014/02/18 15:59:01 [notice] 12993#0: *2 "^/\+" does not match "/favicon.ico", client: 192.168.76.46, server: www.example.org, request: "GET /favicon.ico HTTP/1.1", host: "www.example.org" 2014/02/18 15:59:01 [debug] 12993#0: *2 http script var 2014/02/18 15:59:01 [debug] 12993#0: *2 http script var: "GET" 2014/02/18 15:59:01 [debug] 12993#0: *2 http script regex: "^(GET|HEAD|POST)$" 2014/02/18 15:59:01 [notice] 12993#0: *2 "^(GET|HEAD|POST)$" matches "GET", 
client: 192.168.76.46, server: www.example.org, request: "GET /favicon.ico HTTP/1.1", host: "www.example.org" 2014/02/18 15:59:01 [debug] 12993#0: *2 http script if 2014/02/18 15:59:01 [debug] 12993#0: *2 http script if: false 2014/02/18 15:59:01 [debug] 12993#0: *2 test location: "/" 2014/02/18 15:59:01 [debug] 12993#0: *2 test location: "core/" 2014/02/18 15:59:01 [debug] 12993#0: *2 test location: "includes/" 2014/02/18 15:59:01 [debug] 12993#0: *2 test location: "favicon.ico" 2014/02/18 15:59:01 [debug] 12993#0: *2 using configuration "=/favicon.ico" 2014/02/18 15:59:01 [debug] 12993#0: *2 http cl:-1 max:1048576 2014/02/18 15:59:01 [debug] 12993#0: *2 rewrite phase: 3 2014/02/18 15:59:01 [debug] 12993#0: *2 post rewrite phase: 4 2014/02/18 15:59:01 [debug] 12993#0: *2 generic phase: 5 2014/02/18 15:59:01 [debug] 12993#0: *2 generic phase: 6 2014/02/18 15:59:01 [debug] 12993#0: *2 generic phase: 7 2014/02/18 15:59:01 [debug] 12993#0: *2 generic phase: 8 2014/02/18 15:59:01 [debug] 12993#0: *2 access phase: 9 2014/02/18 15:59:01 [debug] 12993#0: *2 access phase: 10 2014/02/18 15:59:01 [debug] 12993#0: *2 post access phase: 11 2014/02/18 15:59:01 [debug] 12993#0: *2 try files phase: 12 2014/02/18 15:59:01 [debug] 12993#0: *2 content phase: 13 2014/02/18 15:59:01 [debug] 12993#0: *2 content phase: 14 2014/02/18 15:59:01 [debug] 12993#0: *2 content phase: 15 2014/02/18 15:59:01 [debug] 12993#0: *2 content phase: 16 2014/02/18 15:59:01 [debug] 12993#0: *2 content phase: 17 2014/02/18 15:59:01 [debug] 12993#0: *2 content phase: 18 2014/02/18 15:59:01 [debug] 12993#0: *2 http filename: "/usr/share/nginx/html/example.org/favicon.ico" 2014/02/18 15:59:01 [debug] 12993#0: *2 add cleanup: 0000000001D52A68 2014/02/18 15:59:01 [debug] 12993#0: *2 malloc: 0000000001CDC1D0:144 2014/02/18 15:59:01 [debug] 12993#0: *2 malloc: 0000000001CBBE20:53 2014/02/18 15:59:01 [debug] 12993#0: *2 cached open file: /usr/share/nginx/html/example.org/favicon.ico, fd:16, c:1, e:0, u:1 
2014/02/18 15:59:01 [debug] 12993#0: *2 http static fd: 16 2014/02/18 15:59:01 [debug] 12993#0: *2 xslt filter header 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy header filter 2014/02/18 15:59:01 [debug] 12993#0: *2 malloc: 0000000001CDA270:315 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy deflate out: ni:0000000001CDA39B no:0000000001D53451 ai:0 ao:44 rc:0 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:3 create SYN_REPLY frame 0000000001D53480: len:313 2014/02/18 15:59:01 [debug] 12993#0: *2 http cleanup add: 0000000001D52AE8 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame out: 0000000001D53480 sid:3 prio:3 bl:1 len:313 2014/02/18 15:59:01 [debug] 12993#0: *2 malloc: 0000000001D64190:16384 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL buf copy: 321 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:3 SYN_REPLY frame 0000000001D53480 was sent 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame sent: 0000000001D53480 sid:3 bl:1 len:313 2014/02/18 15:59:01 [debug] 12993#0: *2 http output filter "/favicon.ico?" 2014/02/18 15:59:01 [debug] 12993#0: *2 http copy filter: "/favicon.ico?" 2014/02/18 15:59:01 [debug] 12993#0: *2 posix_memalign: 0000000001D681A0:4096 @16 2014/02/18 15:59:01 [debug] 12993#0: *2 read: 16, 0000000001D681C0, 3262, 0 2014/02/18 15:59:01 [debug] 12993#0: *2 image filter 2014/02/18 15:59:01 [debug] 12993#0: *2 xslt filter body 2014/02/18 15:59:01 [debug] 12993#0: *2 http postpone filter "/favicon.ico?" 
0000000001D53580 2014/02/18 15:59:01 [debug] 12993#0: *2 write new buf t:1 f:1 0000000001D681C0, pos 0000000001D681C0, size: 3262 file: 0, size: 3262 2014/02/18 15:59:01 [debug] 12993#0: *2 http write filter: l:1 f:0 s:3262 2014/02/18 15:59:01 [debug] 12993#0: *2 http write filter limit 0 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:3 create DATA frame 0000000001D53480: len:3262 flags:1 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame out: 0000000001D53480 sid:3 prio:3 bl:0 len:3262 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL buf copy: 8 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL buf copy: 3262 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL to write: 3591 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL_write: 3591 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy:3 DATA frame 0000000001D53480 was sent 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame sent: 0000000001D53480 sid:3 bl:0 len:3262 2014/02/18 15:59:01 [debug] 12993#0: *2 http write filter 0000000000000000 2014/02/18 15:59:01 [debug] 12993#0: *2 http copy filter: 0 "/favicon.ico?" 2014/02/18 15:59:01 [debug] 12993#0: *2 http finalize request: 0, "/favicon.ico?" 
a:1, c:1 2014/02/18 15:59:01 [debug] 12993#0: *2 http request count:1 blk:0 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy close stream 3, queued 0, processing 1 2014/02/18 15:59:01 [debug] 12993#0: *2 http close request 2014/02/18 15:59:01 [debug] 12993#0: *2 http log handler 2014/02/18 15:59:01 [debug] 12993#0: *2 run cleanup: 0000000001D52A68 2014/02/18 15:59:01 [debug] 12993#0: *2 close cached open file: /usr/share/nginx/html/example.org/favicon.ico, fd:16, c:0, u:1, 0 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D51B00, unused: 0 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D52B10, unused: 1272 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D681A0, unused: 802 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy process frame head:80030009 f:0 l:8 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy WINDOW_UPDATE sid:3 delta:268369920 2014/02/18 15:59:01 [info] 12993#0: *2 client sent WINDOW_UPDATE frame for unknown stream 3 while processing SPDY, client: 192.168.76.46, server: 192.168.89.175:443 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy write RST_STREAM sid:3 st:2 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame out: 0000000001D50E28 sid:0 prio:7 bl:0 len:8 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL buf copy: 16 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL to write: 16 2014/02/18 15:59:01 [debug] 12993#0: *2 SSL_write: 16 2014/02/18 15:59:01 [debug] 12993#0: *2 spdy frame sent: 0000000001D50E28 sid:0 bl:0 len:8 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D50AF0, unused: 3096 2014/02/18 15:59:01 [debug] 12993#0: *2 free: 0000000001D64190 2014/02/18 15:59:01 [debug] 12993#0: *2 reusable connection: 1 2014/02/18 15:59:01 [debug] 12993#0: *2 event timer: 15, old: 1392735721385, new: 1392735721410 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247697#msg-247697 From vbart at nginx.com Tue Feb 18 17:48:41 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 18 Feb 2014 21:48:41 +0400 Subject: Issue with spdy and proxy_pass In-Reply-To: References: <133f8d5c4c4dba7a7c47314864c2bab1.NginxMailingListEnglish@forum.nginx.org> <26d8231ba7eb223c4b45b37e20cf3c78.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4994671.FQ2TuXncPT@vbart-laptop> On Tuesday 18 February 2014 09:54:40 p.heppler wrote: > Another run with Tomcat, fresh log: [..] In both cases your backend returned a "200 OK" response with an empty body: 2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy status 200 "200 OK" 2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header: "Content-Type: text/html; charset=UTF-8" 2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header: "Content-Length: 0" 2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header: "Server: Jetty(9.0.6.v20130930)" 2014/02/18 15:46:13 [debug] 12839#0: *2 http proxy header done wbr, Valentin V. Bartenev From hillb at yosemite.edu Tue Feb 18 18:12:36 2014 From: hillb at yosemite.edu (Brian Hill) Date: Tue, 18 Feb 2014 18:12:36 +0000 Subject: Proxy pass location inheritance In-Reply-To: <20140218121255.GF33573@mdounin.ru> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> <20140217131328.GP81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940EC71@x10m01.yosemite.edu> <20140217173046.GA33573@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940ECCA@x10m01.yosemite.edu> <20140218121255.GF33573@mdounin.ru> Message-ID: <205444BFA924A34AB206D0E2538D4B49410B7F@x10m01.yosemite.edu> Are there any performance implications associated with having a large number of static prefix locations? We really are looking at having hundreds of location blocks per server if we use static prefixes, and my primary concern up until now has been maintainability. If I eliminate maintainability as a concern, the next question that comes up is performance.
How much of a performance hit (if any) will I take if my config files have 150 or 250 locations per server block, instead of the 5 or 10 that I've limited myself to until now? Will the increased parsing cause any major performance problems? As I was looking over my config files, it occurred to me that it would be fairly straightforward for me to write a frontend to generate the server blocks and locations automatically, which would eliminate my worries over maintainability. If having a large number of location blocks isn't going to harm performance, I may just go that route. If I'd spent the last few days writing a tool to generate the static config locations instead of wrestling with regex, I'd be done right now. -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: Tuesday, February 18, 2014 4:13 AM To: nginx at nginx.org Subject: Re: Proxy pass location inheritance Hello! On Mon, Feb 17, 2014 at 09:26:56PM +0000, Brian Hill wrote: > So there is no precedence given to nested regex locations at all? What > value does nesting provide then? Nesting is to do things like this:

location / {
    # some generic stuff here

    location ~ \.jpg$ {
        expires 1w;
    }
}

location /app1/ {
    # something special for app1 here, e.g.
    # access control

    auth_basic ...
    access ...

    location = /app1/login {
        # something special for /app1/login,
        # everything from /app1/ is inherited
        proxy_pass ...
    }

    location ~ \.jpg$ {
        expires 1m;
    }
}

location /app2/ {
    # separate configuration for app2 here,
    # changes in /app1/ don't affect it
    ...

    location ~ \.jpg$ {
        expires 1y;
    }
}

That is, it allows you to write scalable configurations using prefix locations. With such an approach, you can edit anything under /app1/ without being concerned about how it will affect things for /app2/. It also allows you to use inheritance to write shorter configurations, and allows you to isolate regexp locations within prefix ones.
> This seems like it should be a fairly simple thing to do. > Image/CSS requests to some folders get handled one way, and image/CSS > requests to all other folders get handled another way. See above for an example. (I personally recommend using a separate folder for images/css to be able to use prefix locations instead of regexp ones. But it should be relatively safe this way as well - as long as they are isolated in other locations. One of the big problems with regexp locations is that often they are matched when people don't expect them to be matched, and isolating regexp locations within prefix ones minimizes this negative impact.) > This is an experimental pilot project for a datacenter conversion, and > the use of regex to specify both the file types and folder names is > mandatory. The project this pilot is for will eventually require more > than 50 server blocks with hundreds of locations in each block if > regex cannot be used. It would be an unmaintainable mess without > regex. Your problem is that you are trying to mix regex locations and prefix locations without understanding how they work, and to make things even harder you add nested locations to the mix. Instead, just stop making things harder. Simplify things. The most recommended simplification is to avoid regexp locations. Note that having many location blocks isn't necessarily a bad thing. Sometimes it's much easier to handle hundreds of prefix location blocks than dozens of regexp locations. Configurations with prefix locations are much easier to maintain. If you can't avoid regexp locations for some external reason, it would be trivial to write a configuration which does what you want with regexp locations as well:

location / {
    ...
}

location ~ ^/app1/ {
    ...

    location ~ \.jpg$ {
        expires 1m;
    }
}

location ~ ^/app2/ {
    ...

    location ~ \.jpg$ {
        expires 1y;
    }
}

location ~ \.jpg$ {
    expires 1w;
}

Though such configurations are usually much harder to maintain in contrast to ones based on prefix locations.
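Maxim's recommendation of keeping images/css under a dedicated folder, so that plain prefix locations suffice, might look roughly like the following sketch (the /static/ path, root, and expiry values are illustrative, not taken from the thread):

```nginx
# Hypothetical layout: all images and CSS live under /static/,
# so a single prefix location serves them and no regexp is needed.
location /static/ {
    root /usr/share/nginx/html/example.org;
    expires 1w;
}

# Application prefixes keep their own configuration; static-asset
# handling cannot be captured by an unexpected regexp match.
location /app1/ {
    proxy_pass http://127.0.0.1:8888;
}
```

With such a layout a request for /static/logo.jpg is resolved by prefix matching alone, so there is no regexp evaluation order to reason about.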
-- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Feb 18 19:36:24 2014 From: nginx-forum at nginx.us (atarob) Date: Tue, 18 Feb 2014 14:36:24 -0500 Subject: headers_in_hash In-Reply-To: <20140217150059.GV81431@mdounin.ru> References: <20140217150059.GV81431@mdounin.ru> Message-ID: <6b9ffef0949209681435db6f4a0d7e0b.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Fri, Feb 14, 2014 at 04:39:23PM -0500, atarob wrote: > > > Creating a module, I want to read in from config desired http header > fields. > > Then, during config still, I want to get the struct offset for the > fields > > that have dedicated pointers in the header_in struct. It seems that > when I > > access headers_in_hash from the main config, it is uninitialized. I > can see > > in the code that there is > > > > ngx_http_init_headers_in_hash(ngx_conf_t *cf, > ngx_http_core_main_conf_t > > *cmcf) > > > > in ngx_http.c. It seems to be called when the main conf is being > generated > > though I am not certain yet. > > > > Where and when exactly is headers_in_hash initialized? If I wanted > to read > > from it during ngx_http_X_merge_loc_conf(), what would I need to do? > Or am > > I supposed to do it at some point later? > > The cmcf->headers_in_hash is expected to be initialized during > runtime. As of now, it will be initialized before > postconfiguration hooks, but I wouldn't recommend relying on > this. > > I also won't recommend using cmcf->headers_in_hash in your own > module at all, unless you have good reasons to. It's not really a > part of the API, it's an internal entity which http core uses to > do its work. There is an API? I thought the only way to figure out nginx was to read source? But seriously, I didn't land on any API doing a Google search.
API aside, is the point of this hash not to do faster lookups for fields that become needed at runtime (say from config) as opposed to compile time? Otherwise, to look for N fields, I have to do N*M comparisons as I iterate through the fields, right? I was trying to avoid that. Is there a better way? Thanks for your help. Ata Roboubi. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247572,247703#msg-247703 From nginx-forum at nginx.us Tue Feb 18 19:51:10 2014 From: nginx-forum at nginx.us (atarob) Date: Tue, 18 Feb 2014 14:51:10 -0500 Subject: inlining In-Reply-To: References: Message-ID: <4f0e5a3d840c3c60d252a923787808be.NginxMailingListEnglish@forum.nginx.org> Pankaj Mehta Wrote: ------------------------------------------------------- > These should be covered during the link time optimisations. > > Look here for gcc: http://gcc.gnu.org/wiki/LinkTimeOptimization I was very much unaware of this. The linker actually compiles..... Wow. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247575,247704#msg-247704 From nginx-forum at nginx.us Tue Feb 18 19:58:05 2014 From: nginx-forum at nginx.us (atarob) Date: Tue, 18 Feb 2014 14:58:05 -0500 Subject: rcvbuf option Message-ID: <98b7caa772972c35fbd4d30e4ffe2d78.NginxMailingListEnglish@forum.nginx.org> The config listen option rcvbuf which maps to the TCP SO_RCVBUF, is applied to the listening socket, and not inherited by the accept()ed connections. So if you have a high load application where the legitimate request is bound to be no more than 4K, for instance, you could save a lot of RAM by dropping the default here without changing it at the system level. I straced nginx and it does not seem to actually apply the settings through a subsequent setsockopt to the accepted socket. Is this intentional or an oversight? Thanks in advance. 
Ata Roboubi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247705,247705#msg-247705 From nginx-forum at nginx.us Tue Feb 18 20:08:18 2014 From: nginx-forum at nginx.us (shreyas.purohit@hotmail.com) Date: Tue, 18 Feb 2014 15:08:18 -0500 Subject: [1.4.x][Ubuntu] Limit access to upstream server based on request header Message-ID: <0e51daad817ca735d5b37e72c754557d.NginxMailingListEnglish@forum.nginx.org> Hello, I want something like limit_except functionality but based on custom request headers. Let's say I have a request header X-Secured-Client set. Only in that case do I want to allow the request to my upstream server. How can I achieve that without using "if" blocks? (I did search the mailing list with no success.) If the above requirement is possible, can I check whether the header value is set to, say, 1 to allow the request to the upstream server, and send 444 back otherwise? Thanks for any help, Shreyas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247706,247706#msg-247706 From mdounin at mdounin.ru Tue Feb 18 21:13:51 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Feb 2014 01:13:51 +0400 Subject: rcvbuf option In-Reply-To: <98b7caa772972c35fbd4d30e4ffe2d78.NginxMailingListEnglish@forum.nginx.org> References: <98b7caa772972c35fbd4d30e4ffe2d78.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140218211351.GJ33573@mdounin.ru> Hello! On Tue, Feb 18, 2014 at 02:58:05PM -0500, atarob wrote: > The config listen option rcvbuf which maps to the TCP SO_RCVBUF, is applied > to the listening socket, and not inherited by the accept()ed connections. So > if you have a high load application where the legitimate request is bound to > be no more than 4K, for instance, you could save a lot of RAM by dropping > the default here without changing it at the system level. > > I straced nginx and it does not seem to actually apply the settings through > a subsequent setsockopt to the accepted socket. Is this intentional or an > oversight?
What makes you think that SO_RCVBUF is not inherited by accepted connections? -- Maxim Dounin http://nginx.org/ From noloader at gmail.com Tue Feb 18 21:34:03 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 18 Feb 2014 16:34:03 -0500 Subject: inlining In-Reply-To: <4f0e5a3d840c3c60d252a923787808be.NginxMailingListEnglish@forum.nginx.org> References: <4f0e5a3d840c3c60d252a923787808be.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Feb 18, 2014 at 2:51 PM, atarob wrote: > Pankaj Mehta Wrote: > ------------------------------------------------------- >> These should be covered during the link time optimisations. >> >> Look here for gcc: http://gcc.gnu.org/wiki/LinkTimeOptimization > > I was very much unaware of this. The linker actually compiles..... Wow. > LTO also allows the linker to silently drop code with undefined behavior. The compiler and linker are free to do what they want with UB, including making demons fly out your nose [0]. So if the code includes, for example, signed integer overflow, then it may produce incorrect results. The baffling thing will be when the test suite that passed earlier fails, because the UB was not detected under the test suite. Jeff [0] https://groups.google.com/forum/?hl=en#!msg/comp.std.c/ycpVKxTZkgw/S2hHdTbv4d8J From nginx-forum at nginx.us Tue Feb 18 22:16:18 2014 From: nginx-forum at nginx.us (atarob) Date: Tue, 18 Feb 2014 17:16:18 -0500 Subject: rcvbuf option In-Reply-To: <20140218211351.GJ33573@mdounin.ru> References: <20140218211351.GJ33573@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Feb 18, 2014 at 02:58:05PM -0500, atarob wrote: > > > The config listen option rcvbuf which maps to the TCP SO_RCVBUF, is > applied > > to the listening socket, and not inherited by the accept()ed > connections.
So > > if you have a high load application where the legitimate request is > bound to > > be no more than 4K, for instance, you could save a lot of RAM by > dropping > > the default here without changing it at the system level. > > > > I straced nginx and it does not seem to actually apply the settings > through > > a subsequent setsockopt to the accepted socket. Is this intentional > or an > > oversight? > > What makes you think that SO_RCVBUF is not inherited by accepted > connections? http://stackoverflow.com/a/12864681 I didn't run it myself, because testing it on one platform isn't enough to assume either way. But why do you think that it is inherited in general? Also, these are inherently different settings. Even if it were inherited, to reduce the 10,000 connections on which I expect 2KB of upload, I have to drop the listening socket's buffer size? Isn't that used by the OS for clients that are trying to connect to the listening socket? If so, I wouldn't want to drop that. In fact, since there is only one (or a few) listening sockets, I might want to increase that while dropping the accepted sockets' buffer size. Thoughts? 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247705,247710#msg-247710 From igor at sysoev.ru Wed Feb 19 07:41:54 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 19 Feb 2014 11:41:54 +0400 Subject: Proxy pass location inheritance In-Reply-To: <205444BFA924A34AB206D0E2538D4B49410B7F@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B4940D5CF@x10m01.yosemite.edu> <20140214121950.GD81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940DB78@x10m01.yosemite.edu> <20140217131328.GP81431@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940EC71@x10m01.yosemite.edu> <20140217173046.GA33573@mdounin.ru> <205444BFA924A34AB206D0E2538D4B4940ECCA@x10m01.yosemite.edu> <20140218121255.GF33573@mdounin.ru> <205444BFA924A34AB206D0E2538D4B49410B7F@x10m01.yosemite.edu> Message-ID: <5052DA50-B6F9-48EA-8C1C-CE543B15905B@sysoev.ru> nginx stores static prefix locations in some kind of binary tree. This means that lookup is fast enough AND the order of the locations does not matter at all. The latter allows you to create a lot of easily maintainable locations. Regex locations are processed in the order of appearance. This is slow and will become a maintenance nightmare as the configuration eventually grows. If the configuration has to have regex locations, it is better to isolate them inside static prefix locations. -- Igor Sysoev http://nginx.com On Feb 18, 2014, at 22:12 , Brian Hill wrote: > Are there any performance implications associated with having a large number of static prefix locations? We really are looking at having hundreds of location blocks per server if we use static prefixes, and my primary concern up until now has been maintainability. If I eliminate maintainability as a concern, the next question that comes up is performance.
How much of a performance hit (if any) will I take if my config files have 150 or 250 locations per server block, instead of the 5 or 10 that I've limited myself to until now? Will the increased parsing cause any major performance problems? > > As I was looking over my config files, it occurred to me that it would be fairly straightforward for me to write a frontend to generate the server blocks and locations automatically, which would eliminate my worries over maintainability. If having a large number of location blocks isn't going to harm performance, I may just go that route. If I'd spent the last few days writing a tool to generate the static config locations instead of wrestling with regex, I'd be done right now. > > > > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin > Sent: Tuesday, February 18, 2014 4:13 AM > To: nginx at nginx.org > Subject: Re: Proxy pass location inheritance > > Hello! > > On Mon, Feb 17, 2014 at 09:26:56PM +0000, Brian Hill wrote: > >> So there is no precedence given to nested regex locations at all? What >> value does nesting provide then? > > Nesting is to do things like this: > > location / { > # some generic stuff here > > location ~ \.jpg$ { > expires 1w; > } > } > > location /app1/ { > # something special for app1 here, e.g. > # access control > auth_basic ... > access ... > > location = /app1/login { > # something special for /app1/login, > # everything from /app1/ is inherited > > proxy_pass ... > } > > location ~ \.jpg$ { > expires 1m; > } > } > > location /app2/ { > # separate configuration for app2 here, > # changes in /app1/ don't affect it > > ... > > location ~ \.jpg$ { > expires 1y; > } > } > > That is, it allows you to write scalable configurations using prefix locations. With such an approach, you can edit anything under /app1/ without being concerned about how it will affect things for /app2/.
> > It also allows you to use inheritance to write shorter configurations, and to isolate regexp locations within prefix ones. > >> This seems like it should be a fairly simple thing to do. >> Image/CSS requests to some folders get handled one way, and image/CSS >> requests to all other folders get handled another way. > > See above for an example. > > (I personally recommend using a separate folder for images/css to be able to use prefix locations instead of regexp ones. But it should be relatively safe this way as well - as long as they are isolated in other locations. One of the big problems with regexp locations is that often they are matched when people don't expect them to be matched, and isolating regexp locations within prefix ones minimizes this negative impact.) > >> This is an experimental pilot project for a datacenter conversion, and >> the use of regex to specify both the file types and folder names is >> mandatory. The project this pilot is for will eventually require more >> than 50 server blocks with hundreds of locations in each block if >> regex cannot be used. It would be an unmaintainable mess without >> regex. > > Your problem is that you are trying to mix regex locations and prefix locations without understanding how they work, and to make things even harder you add nested locations to the mix. > > Instead, just stop making things harder. Simplify things. > > The most recommended simplification is to avoid regexp locations. Note that many location blocks isn't necessarily a bad thing. Sometimes it's much easier to handle hundreds of prefix location blocks than dozens of regexp locations. Configurations with prefix locations are much easier to maintain. > > If you can't avoid regexp locations for some external reason, it would be trivial to write a configuration which does what you want with regexp locations as well: > > location / { > ... > } > > location ~ ^/app1/ { > ... > location ~ \.jpg$ { > expires 1m; > } > } > > location ~ ^/app2/ { > ...
> location ~ \.jpg$ { > expires 1y; > } > } > > location ~ \.jpg$ { > expires 1w; > } > > Though such configurations are usually much harder to maintain in contrast to ones based on prefix locations. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Feb 19 08:04:35 2014 From: nginx-forum at nginx.us (p.heppler) Date: Wed, 19 Feb 2014 03:04:35 -0500 Subject: Issue with spdy and proxy_pass In-Reply-To: <4994671.FQ2TuXncPT@vbart-laptop> References: <4994671.FQ2TuXncPT@vbart-laptop> Message-ID: <729c5a1d617f324573a89e086f0d0e6b.NginxMailingListEnglish@forum.nginx.org> Strange. As I said, with pure ssl it works; it's just the spdy part. I'll check the Tomcat logs. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247720#msg-247720 From nginx-forum at nginx.us Wed Feb 19 10:05:40 2014 From: nginx-forum at nginx.us (p.heppler) Date: Wed, 19 Feb 2014 05:05:40 -0500 Subject: Issue with spdy and proxy_pass In-Reply-To: <729c5a1d617f324573a89e086f0d0e6b.NginxMailingListEnglish@forum.nginx.org> References: <4994671.FQ2TuXncPT@vbart-laptop> <729c5a1d617f324573a89e086f0d0e6b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <267abce958fb7eee91a4e95b0cdeb407.NginxMailingListEnglish@forum.nginx.org> I checked the Tomcat logs. No errors, just status 200 and zero length. I appended %v (local server name) to the log to see if $host is passed through correctly. It is. And now it's getting weird. I replaced my complex website (which runs fine with pure ssl!) with a simple "Hello World" and it works! Seems the issue was the ETag / If-none-match headers. In my website I set an ETag for dynamic pages which don't change frequently, so there was no need to hit the database on every request.
As soon as I turn this off, my site works. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247721#msg-247721 From nginx-forum at nginx.us Wed Feb 19 11:24:41 2014 From: nginx-forum at nginx.us (p.heppler) Date: Wed, 19 Feb 2014 06:24:41 -0500 Subject: Issue with spdy and proxy_pass In-Reply-To: <267abce958fb7eee91a4e95b0cdeb407.NginxMailingListEnglish@forum.nginx.org> References: <4994671.FQ2TuXncPT@vbart-laptop> <729c5a1d617f324573a89e086f0d0e6b.NginxMailingListEnglish@forum.nginx.org> <267abce958fb7eee91a4e95b0cdeb407.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3ef56593dc3c9a60892d751e6f83ee6d.NginxMailingListEnglish@forum.nginx.org> Hmm, just tried adding proxy_set_header If-none-Match $http_if_none_match; to my config and now it works with spdy too. Why do I need to set this manually when using spdy, but not on "normal" http/https? Weird... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247723#msg-247723 From mdounin at mdounin.ru Wed Feb 19 12:01:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Feb 2014 16:01:43 +0400 Subject: rcvbuf option In-Reply-To: References: <20140218211351.GJ33573@mdounin.ru> Message-ID: <20140219120143.GK33573@mdounin.ru> Hello! On Tue, Feb 18, 2014 at 05:16:18PM -0500, atarob wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Tue, Feb 18, 2014 at 02:58:05PM -0500, atarob wrote: > > > > > The config listen option rcvbuf which maps to the TCP SO_RCVBUF, is > > applied > > > to the listening socket, and not inherited by the accept()ed > > connections. So > > > if you have a high load application where the legitimate request is > > bound to > > > be no more than 4K, for instance, you could save a lot of RAM by > > dropping > > > the default here without changing it at the system level.
> > > > > > I straced nginx and it does not seem to actually apply the settings > > through > > > a subsequent setsockopt to the accepted socket. Is this intentional > > or an > > > oversight? > > > > What makes you think that SO_RCVBUF is not inherited by accepted > > connections? > > http://stackoverflow.com/a/12864681 > > I didn't run it myself, because testing it on one platform isn't enough to > assume either way. But why do you think that it is inherited in general? This is how it has worked since the introduction of BSD sockets. While I don't think it's guaranteed by any standard, there should be really good reasons to behave differently. If you think there are OSes which behave differently and they are popular enough to care - feel free to name them. > Also, these are inherently different settings. Even if it were inherited, to > reduce the 10,000 connections on which I expect 2KB of upload, I have to > drop the listening socket's buffer size? Isn't that used by the OS for > clients that are trying to connect to the listening socket? If so, I > wouldn't want to drop that. In fact, since there is only one (or a few) > listening sockets, I might want to increase that while dropping the accepted > sockets' buffer size. A listening socket's buffer size isn't used for anything. The listen queue size, aka backlog, is what matters for listening sockets (see listen(2)), and there is a separate parameter of the "listen" directive to control it. -- Maxim Dounin http://nginx.org/ From emailgrant at gmail.com Wed Feb 19 13:35:46 2014 From: emailgrant at gmail.com (Grant) Date: Wed, 19 Feb 2014 05:35:46 -0800 Subject: Translating apache config to nginx Message-ID: Roundcube uses some apache config to deny access to certain locations and I'm trying to translate them to nginx.
The following seems to work fine: location ~ ^/?(\.git|\.tx|SQL|bin|config|logs|temp|tests|program\/(include|lib|localization|steps)) { deny all; } location ~ /?(README\.md|composer\.json-dist|composer\.json|package\.xml)$ { deny all; } But this causes a 403 during normal operation: location ~ ^(?!installer)(\.?[^\.]+)$ { deny all; } Why is that happening? - Grant From mdounin at mdounin.ru Wed Feb 19 13:36:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Feb 2014 17:36:33 +0400 Subject: headers_in_hash In-Reply-To: <6b9ffef0949209681435db6f4a0d7e0b.NginxMailingListEnglish@forum.nginx.org> References: <20140217150059.GV81431@mdounin.ru> <6b9ffef0949209681435db6f4a0d7e0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140219133633.GN33573@mdounin.ru> Hello! On Tue, Feb 18, 2014 at 02:36:24PM -0500, atarob wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Fri, Feb 14, 2014 at 04:39:23PM -0500, atarob wrote: > > > > > Creating a module, I want to read in from config desired http header > > fields. > > > Then, during config still, I want to get the struct offset for the > > fields > > > that have dedicated pointers in the header_in struct. It seems that > > when I > > > access headers_in_hash from the main config, it is uninitialized. I > > can see > > > in the code that there is > > > > > > ngx_http_init_headers_in_hash(ngx_conf_t *cf, > > ngx_http_core_main_conf_t > > > *cmcf) > > > > > > in ngx_http.c. It seems to be called when the main conf is being > > generated > > > though I am not certain yet. > > > > > > Where and when exactly is headers_in_hash initialized? If I wanted > > to read > > > from it during ngx_http_X_merge_loc_conf(), what would I need to do? > > Or am > > > I supposed to do it at some point later? > > > > The cmcf->headers_in_hash is expected to be initialized during > > runtime. 
As of now, it will be initialized before > > postconfiguration hooks, but I wouldn't recommend relaying on > > this. > > > > I also won't recommend using cmcf->headers_in_hash in your own > > module at all, unless you have good reasons to. It's not really a > > part of the API, it's an internal entity which http core uses to > > do it's work. > > There is an API? I thought the only way to figure out nginx was to read > source? But seriously, I didn't land on any API doing a google search. API != documentation > API aside, is the point of this hash not to do faster lookups for fields > that become needed at runtime (say from config) as opposed to compile time? > Otherwise, to look for N fields, I have to do N*M comparisons as I iterate > through the fields, right? I was trying to avoid that. Is there a better > way? The point of this hash is to do special processing of certain headers in http core. If you want to do something similar in your own module, you may want to create your own hash. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Feb 19 14:25:22 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 19 Feb 2014 18:25:22 +0400 Subject: Issue with spdy and proxy_pass In-Reply-To: <3ef56593dc3c9a60892d751e6f83ee6d.NginxMailingListEnglish@forum.nginx.org> References: <4994671.FQ2TuXncPT@vbart-laptop> <267abce958fb7eee91a4e95b0cdeb407.NginxMailingListEnglish@forum.nginx.org> <3ef56593dc3c9a60892d751e6f83ee6d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6421273.AxaUT8VlqD@vbart-laptop> On Wednesday 19 February 2014 06:24:41 p.heppler wrote: > Hmm, just tried adding > proxy_set_header If-none-Match $http_if_none_match; > to my config and now it works with spdy to. Probably it works in this case because browser just shows to you cached page, that was retrieved earlier over http or https. > > Why do I need to set this manually when using spdy, but not on "normal" > http/https? > Weird... 
> There's no difference between SPDY and HTTP/HTTPS. In the debug logs that you have provided, the client didn't send "If-none-Match", and that's normal. Anyway, a missing or present "If-none-Match" header shouldn't result in empty responses from your backend. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Feb 19 14:36:15 2014 From: nginx-forum at nginx.us (sivakr) Date: Wed, 19 Feb 2014 09:36:15 -0500 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 Message-ID: <08354d265d132d68fdeb3a92bd9a5744.NginxMailingListEnglish@forum.nginx.org> Hi, We have a strange issue on our Swiss-based server. Issue: Incorrect IP Address value in REMOTE_ADDR Header Nginx Version : 1.2.1 Server OS : Debian 7.1 Modules : nginx -V nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-auth-pam --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-echo --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-upstream-fair --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-dav-ext-module We have an IP-based security token for our application; the token created by
nginx+php is checked in Wowza before the stream plays. Recently a lot of NZ visitors complained about video not playing; it is due to the security token check failing between nginx+php and Wowza. Here is more information: 1. We have installed Apache on the same server and it detects the IP address as expected. 2. Even the Nginx access log prints the incorrect IP address, so we feel nothing is missing in the FastCGI settings. 3. Based on IP location info, the incorrect IP is the gateway IP address. NGINX $_SERVER Array ( [FCGI_ROLE] => RESPONDER [SCRIPT_FILENAME] => xxxxxx [QUERY_STRING] => msg=Error%20loading%20stream:%20Could%20not%20connect%20to%20server [REQUEST_METHOD] => GET [CONTENT_TYPE] => [CONTENT_LENGTH] => [SCRIPT_NAME] => /trackfail.php [REQUEST_URI] => /trackfail.php?msg=Error%20loading%20stream:%20Could%20not%20connect%20to%20server [DOCUMENT_URI] => /trackfail.php [DOCUMENT_ROOT] => xxxxx [SERVER_PROTOCOL] => HTTP/1.1 [GATEWAY_INTERFACE] => CGI/1.1 [SERVER_SOFTWARE] => nginx/1.2.1 [REMOTE_ADDR] => 210.x5.2x2.93 [REMOTE_PORT] => 60187 [SERVER_ADDR] => xxxxxxx [SERVER_PORT] => 80 [SERVER_NAME] => xxxxx [HTTPS] => [REDIRECT_STATUS] => 200 [GEOIP_COUNTRY_CODE] => NZ [GEOIP_COUNTRY_NAME] => New Zealand [HTTP_HOST] => xxxxxxx [HTTP_ACCEPT] => */* [HTTP_X_REQUESTED_WITH] => XMLHttpRequest [HTTP_USER_AGENT] => Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.107 Safari/537.36 [HTTP_REFERER] => xxxxxxxxxxxxxx [HTTP_ACCEPT_ENCODING] => gzip,deflate,sdch [HTTP_ACCEPT_LANGUAGE] => en-US,en;q=0.8 [HTTP_COOKIE] => zmad=1; 2bfd_unique_user=1; defaults=1 [HTTP_CACHE_CONTROL] => max-stale=0 [HTTP_CONNECTION] => Keep-Alive [PHP_SELF] => /trackfail.php ) Apache $_SERVER Array ( [GEOIP_ADDR] => 115.1x8.3x.37 [GEOIP_CONTINENT_CODE] => OC [GEOIP_COUNTRY_CODE] => NZ [GEOIP_COUNTRY_NAME] => New Zealand [HTTP_HOST] => xxxx:8080 [HTTP_CONNECTION] => keep-alive [HTTP_ACCEPT] => */* [HTTP_USER_AGENT] => Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/32.0.1700.107 Safari/537.36 [HTTP_REFERER] => xxxx [HTTP_ACCEPT_ENCODING] => gzip,deflate,sdch [HTTP_ACCEPT_LANGUAGE] => en-US,en;q=0.8 [HTTP_COOKIE] => zmad=1; 2bfd_unique_user=1; defaults=1 [PATH] => /usr/local/bin:/usr/bin:/bin [SERVER_SIGNATURE] =>
Apache/2.2.22 (Debian) Server at xxxxx Port 8080
[SERVER_SOFTWARE] => Apache/2.2.22 (Debian) [SERVER_NAME] => xxxxxx [SERVER_ADDR] => xxxxx [SERVER_PORT] => 8080 [REMOTE_ADDR] => 115.1x8.3x.37 [DOCUMENT_ROOT] => xxxxxx [SERVER_ADMIN] => webmaster at localhost [SCRIPT_FILENAME] => xxxxxxx [REMOTE_PORT] => 51777 [GATEWAY_INTERFACE] => CGI/1.1 [SERVER_PROTOCOL] => HTTP/1.1 [REQUEST_METHOD] => GET [QUERY_STRING] => reqid=662&callback=jsonCallback&_=1392745097470 [REQUEST_URI] => /trackfail.php?reqid=662&callback=jsonCallback&_=1392745097470 [SCRIPT_NAME] => /trackfail.php [PHP_SELF] => /trackfail.php ) Thanks Siva Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247736,247736#msg-247736 From nginx-forum at nginx.us Wed Feb 19 15:44:59 2014 From: nginx-forum at nginx.us (p.heppler) Date: Wed, 19 Feb 2014 10:44:59 -0500 Subject: Issue with spdy and proxy_pass In-Reply-To: <6421273.AxaUT8VlqD@vbart-laptop> References: <6421273.AxaUT8VlqD@vbart-laptop> Message-ID: <98c0c4360be8fcce02f08cc321e86f17.NginxMailingListEnglish@forum.nginx.org> Damn, seems Firefox didn't clear the cache even though I told it to. Worked for a while, but now a blank page again. But it has to be the Tomcat backend, 'cause PHP passed through to fastcgi works. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247739#msg-247739 From reallfqq-nginx at yahoo.fr Wed Feb 19 16:13:08 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 19 Feb 2014 17:13:08 +0100 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: <08354d265d132d68fdeb3a92bd9a5744.NginxMailingListEnglish@forum.nginx.org> References: <08354d265d132d68fdeb3a92bd9a5744.NginxMailingListEnglish@forum.nginx.org> Message-ID: Despite what you are stating, I see a valid NZ IP address in the '$_SERVER' environment variables of the PHP instance running behind Nginx (most probably 210.55.x.x prefix). The Apache remote address is not the right one. Since you failed to explain your setup, I suppose Nginx proxies traffic to Apache.
That would explain why the REMOTE_ADDR is the one from the gateway. Your proxy configuration fails to pass the original visitor IP address through the HTTP_X_FORWARDED_FOR field. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintessence at bulinfo.net Wed Feb 19 16:40:55 2014 From: quintessence at bulinfo.net (Bozhidara Marinchovska) Date: Wed, 19 Feb 2014 18:40:55 +0200 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: <08354d265d132d68fdeb3a92bd9a5744.NginxMailingListEnglish@forum.nginx.org> References: <08354d265d132d68fdeb3a92bd9a5744.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5304DE97.80107@bulinfo.net> Hello, If your trackfail.php (behind FastCGI) detects the IP address based on $_SERVER['REMOTE_ADDR'], you may want to place additional headers in your nginx configuration: proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; Then pass X-Real-IP as a param to FastCGI: fastcgi_param REMOTE_ADDR $http_x_real_ip; or directly change your trackfail.php to detect the IP address based on $_SERVER['HTTP_X_REAL_IP'] (after adding the additional X-Real-IP header).
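Spelled out, the two-tier arrangement this suggestion assumes might look like the following sketch (the ports, addresses and the split into two server blocks are illustrative assumptions, not taken from the poster's configuration):

```nginx
# Front-end server: proxies to a backend and forwards the client IP.
server {
    listen 80;
    location / {
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080;
    }
}

# Back-end server: hands the forwarded address to PHP over FastCGI.
server {
    listen 8080;
    location ~ \.php$ {
        include fastcgi_params;
        # Placed after the include so PHP sees this value for
        # REMOTE_ADDR instead of the front-end proxy's address.
        fastcgi_param REMOTE_ADDR $http_x_real_ip;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

If nginx itself faces the client directly (no front proxy), $remote_addr is already the visitor's IP and no override is needed.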
On 2/19/2014 4:36 PM, sivakr wrote: > > [REMOTE_ADDR] => 210.x5.2x2.93 > [REMOTE_PORT] => 60187 > > [GEOIP_COUNTRY_CODE] => NZ > [GEOIP_COUNTRY_NAME] => New Zealand > > > > [GEOIP_ADDR] => 115.1x8.3x.37 > > [GEOIP_COUNTRY_CODE] => NZ > [GEOIP_COUNTRY_NAME] => New Zealand > > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247736,247736#msg-247736 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Feb 19 17:02:04 2014 From: nginx-forum at nginx.us (sivakr) Date: Wed, 19 Feb 2014 12:02:04 -0500 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: References: Message-ID: Hi BR, www.whatismyip.com shows 115.1x8.3x.37 , Apache shows 115.1x8.3x.37 But nginx shows 210.55.x.x Server Setup like follows Nginx on Port 80 backend FastCGI Php Apache on Port 8080 nginx conf user www-data; worker_processes 4; worker_rlimit_nofile 204800; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 65535; multi_accept on; } http { #include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; geoip_country /usr/share/GeoIP/GeoIP.dat; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; server_tokens off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_static on; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js; gzip_buffers 16 8k; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include 
/etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247736,247743#msg-247743 From nginx-forum at nginx.us Wed Feb 19 17:02:45 2014 From: nginx-forum at nginx.us (sivakr) Date: Wed, 19 Feb 2014 12:02:45 -0500 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: <5304DE97.80107@bulinfo.net> References: <5304DE97.80107@bulinfo.net> Message-ID: Hi Bozhidara, Thanks for the suggesting . I will try your setting and let you know soon. Thanks Siva Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247736,247744#msg-247744 From nginx-forum at nginx.us Wed Feb 19 17:18:37 2014 From: nginx-forum at nginx.us (sivakr) Date: Wed, 19 Feb 2014 12:18:37 -0500 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: <5304DE97.80107@bulinfo.net> References: <5304DE97.80107@bulinfo.net> Message-ID: <0a96241968c50741b7e55f2dc36145af.NginxMailingListEnglish@forum.nginx.org> REAL IP value not passing location ~ \.php$ { root /var/www; fastcgi_pass 127.0.0.1:9000; #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; include fastcgi_params; } fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param HTTP_X_REAL_IP $http_x_real_ip; fastcgi_param REMOTE_PORT $remote_port; response Array ( [USER] => www-data [HOME] => /var/www [FCGI_ROLE] => RESPONDER [SCRIPT_FILENAME] => /var/www/test.php [QUERY_STRING] => [REQUEST_METHOD] => GET [CONTENT_TYPE] => [CONTENT_LENGTH] => [SCRIPT_NAME] => /test.php [REQUEST_URI] => /test.php [DOCUMENT_URI] => /test.php [DOCUMENT_ROOT] => /var/www [SERVER_PROTOCOL] 
=> HTTP/1.1 [GATEWAY_INTERFACE] => CGI/1.1 [SERVER_SOFTWARE] => nginx/1.2.1 [REMOTE_ADDR] => 122.1xx.xx.227 [HTTP_X_REAL_IP] => [REMOTE_PORT] => 17467 [SERVER_ADDR] => xxxx [SERVER_PORT] => 80 [SERVER_NAME] =>xxxx [HTTPS] => [REDIRECT_STATUS] => 200 [GEOIP_COUNTRY_CODE] => IN [GEOIP_COUNTRY_NAME] => India [HTTP_HOST] => xxxx [HTTP_CONNECTION] => keep-alive [HTTP_CACHE_CONTROL] => max-age=0 [HTTP_ACCEPT] => text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 [HTTP_USER_AGENT] => Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.107 Safari/537.36 [HTTP_ACCEPT_ENCODING] => gzip,deflate,sdch [HTTP_ACCEPT_LANGUAGE] => en-GB,en-US;q=0.8,en;q=0.6 [PHP_SELF] => /test.php [REQUEST_TIME_FLOAT] => 1392830171.8188 [REQUEST_TIME] => 1392830171 ) Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247736,247745#msg-247745 From reallfqq-nginx at yahoo.fr Wed Feb 19 18:13:45 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 19 Feb 2014 19:13:45 +0100 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: References: Message-ID: I am tempted to copy an URL recently provided by Maxim in another thread: http://www.catb.org/~esr/faqs/smart-questions.html --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 19 18:22:18 2014 From: nginx-forum at nginx.us (sivakr) Date: Wed, 19 Feb 2014 13:22:18 -0500 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: References: Message-ID: <86bd338932842d7566e76ae7d087441a.NginxMailingListEnglish@forum.nginx.org> Hi BR, I am really sorry , if you feel not good the way of I am asking. 
Since I am not a English person nature , perhaps I am lagging in this part Sorry Thanks Siva Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247736,247750#msg-247750 From nginx-forum at nginx.us Wed Feb 19 20:59:42 2014 From: nginx-forum at nginx.us (atarob) Date: Wed, 19 Feb 2014 15:59:42 -0500 Subject: headers_in_hash In-Reply-To: <20140219133633.GN33573@mdounin.ru> References: <20140219133633.GN33573@mdounin.ru> Message-ID: <2121d613306fc6b031a5b59ce7da22cb.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Feb 18, 2014 at 02:36:24PM -0500, atarob wrote: > > > Maxim Dounin Wrote: > > ------------------------------------------------------- > > > Hello! > > > > > > On Fri, Feb 14, 2014 at 04:39:23PM -0500, atarob wrote: > > > > > > > Creating a module, I want to read in from config desired http > header > > > fields. > > > > Then, during config still, I want to get the struct offset for > the > > > fields > > > > that have dedicated pointers in the header_in struct. It seems > that > > > when I > > > > access headers_in_hash from the main config, it is > uninitialized. I > > > can see > > > > in the code that there is > > > > > > > > ngx_http_init_headers_in_hash(ngx_conf_t *cf, > > > ngx_http_core_main_conf_t > > > > *cmcf) > > > > > > > > in ngx_http.c. It seems to be called when the main conf is being > > > generated > > > > though I am not certain yet. > > > > > > > > Where and when exactly is headers_in_hash initialized? If I > wanted > > > to read > > > > from it during ngx_http_X_merge_loc_conf(), what would I need to > do? > > > Or am > > > > I supposed to do it at some point later? > > > > > > The cmcf->headers_in_hash is expected to be initialized during > > > runtime. As of now, it will be initialized before > > > postconfiguration hooks, but I wouldn't recommend relaying on > > > this. 
> > > > > > I also won't recommend using cmcf->headers_in_hash in your own > > > module at all, unless you have good reasons to. It's not really a > > > > part of the API, it's an internal entity which http core uses to > > > do its work. > > > > There is an API? I thought the only way to figure out nginx was to > read > > source? But seriously, I didn't land on any API doing a google > search. > > API != documentation How true. It's more fun reading source anyway. What I meant was that it wasn't entirely clear to me what I should rely on as "API" and what I shouldn't, because it might easily change down the road. > > > API aside, is the point of this hash not to do faster lookups for > fields > > that become needed at runtime (say from config) as opposed to > compile time? > > Otherwise, to look for N fields, I have to do N*M comparisons as I > iterate > > through the fields, right? I was trying to avoid that. Is there a > better > > way? > > The point of this hash is to do special processing of certain > headers in http core. If you want to do something similar in your > own module, you may want to create your own hash. Fair enough. And thanks for all the hard work. Ata Roboubi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247572,247756#msg-247756 From nginx-forum at nginx.us Wed Feb 19 21:34:47 2014 From: nginx-forum at nginx.us (atarob) Date: Wed, 19 Feb 2014 16:34:47 -0500 Subject: rcvbuf option In-Reply-To: <20140219120143.GK33573@mdounin.ru> References: <20140219120143.GK33573@mdounin.ru> Message-ID: <4e22b53e742f9e8b84457d44e4996b08.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Feb 18, 2014 at 05:16:18PM -0500, atarob wrote: > > > Maxim Dounin Wrote: > > ------------------------------------------------------- > > > Hello!
> > > > > > On Tue, Feb 18, 2014 at 02:58:05PM -0500, atarob wrote: > > > > > > > The config listen option rcvbuf which maps to the TCP SO_RCVBUF, is > > > applied > > > > to the listening socket, and not inherited by the accept()ed > > > connections. So > > > > if you have a high load application where the legitimate request is > > > bound to > > > > be no more than 4K, for instance, you could save a lot of RAM by > > > dropping > > > > the default here without changing it at the system level. > > > > > > > > I straced nginx and it does not seem to actually apply the > settings > > > through > > > > a subsequent setsockopt to the accepted socket. Is this > intentional > > > or an > > > > oversight? > > > > > > What makes you think that SO_RCVBUF is not inherited by accepted > > > connections? > > > > http://stackoverflow.com/a/12864681 > > > > I didn't run it myself, because testing it on one platform isn't > enough to > > assume either way. But why do you think that it is inherited in > general? > > This is how it works in BSD since BSD sockets introduction. > While I don't think it's guaranteed by any standard, there should be > really good reasons to behave differently. > > If you think there are OSes which behave differently and they are > popular enough to care - feel free to name them. You are right. I tested it on my linux as well and it does inherit. I guess the confusion was that getsockopt() is not returning what we set with setsockopt() due to the OS inflating it. But the accepted socket does return the same as the listening socket. > > > Also, these are inherently different settings. Even if it were > inherited, to > > reduce the 10,000 connections on which I expect 2KB of upload, I > have to > > drop the listening socket's buffer size? Isn't that used by the OS > for > > clients that are trying to connect to the listening socket? If so, I > > wouldn't want to drop that.
In fact, since there is only one (or a > few) > > listening sockets, I might want to increase that while dropping the > accepted > > sockets' buffer size. > > Listening socket's buffer size isn't something used for > anything. Listening queue size aka backlog is what matters for > listening sockets (see listen(2)), and there is a separate > parameter of the "listen" directive to control it. I was aware of backlog for listen(), but I was not comfortable assuming that 'Listening socket's buffer size isn't something used for anything'. Doing a bit more reading, it would appear you are right about this also. Thanks again. Ata Roboubi. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247705,247761#msg-247761 From quintessence at bulinfo.net Wed Feb 19 22:29:22 2014 From: quintessence at bulinfo.net (Bozhidara Marinchovska) Date: Thu, 20 Feb 2014 00:29:22 +0200 Subject: Incorrect IP Address Deducted by Nginx version: nginx/1.2.1 In-Reply-To: <0a96241968c50741b7e55f2dc36145af.NginxMailingListEnglish@forum.nginx.org> References: <5304DE97.80107@bulinfo.net> <0a96241968c50741b7e55f2dc36145af.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53053042.6020101@bulinfo.net> Hello, Your FastCGI params are wrong. It should be as I wrote previously: fastcgi_param REMOTE_ADDR $http_x_real_ip; In your fastcgi_params file remove: fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param HTTP_X_REAL_IP $http_x_real_ip; and add in their place only: fastcgi_param REMOTE_ADDR $http_x_real_ip; You may also place the proxy_set_header directives outside the location, for example in the server section: server { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; some other ... location ...{ ...
include fastcgi_params; } } On 2/19/2014 7:18 PM, sivakr wrote: > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param HTTP_X_REAL_IP $http_x_real_ip; > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Feb 19 23:15:55 2014 From: nginx-forum at nginx.us (wardrop) Date: Wed, 19 Feb 2014 18:15:55 -0500 Subject: Loadable runtime modules? Message-ID: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> Hi, I much prefer Nginx over Apache, and would use it as our primary HTTP server at my place of work (as I already do for my personal sites), but the thing that limits my willingness to do this is the fact that one must recompile Nginx to turn on/off modules. This can get tedious and problematic when you have multiple 3rd party modules like Phusion Passenger that you need to compile into Nginx, especially when those modules need to be updated with a new version. Are there any plans to support loadable modules at runtime like Apache? I figure this has probably already been discussed, in which case I'd appreciate links to those discussions. Cheers, Tom Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247763,247763#msg-247763 From nginx-forum at nginx.us Thu Feb 20 00:00:01 2014 From: nginx-forum at nginx.us (Sylvia) Date: Wed, 19 Feb 2014 19:00:01 -0500 Subject: Loadable runtime modules? In-Reply-To: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> References: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi. Maybe you need Tengine, its a fork of NGINX http://tengine.taobao.org/ ~GL Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247763,247765#msg-247765 From nginx-forum at nginx.us Thu Feb 20 00:01:20 2014 From: nginx-forum at nginx.us (wardrop) Date: Wed, 19 Feb 2014 19:01:20 -0500 Subject: Loadable runtime modules? 
In-Reply-To: References: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8ef25825a92ec95d295c21ebd7829c61.NginxMailingListEnglish@forum.nginx.org> Thanks, that's handy to be aware of, but I would still like to enquire as to why Nginx doesn't support such loadable modules. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247763,247766#msg-247766 From vl at nginx.com Thu Feb 20 07:27:05 2014 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 20 Feb 2014 11:27:05 +0400 Subject: Loadable runtime modules? In-Reply-To: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> References: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140220072704.GA16237@vlpc> On Wed, Feb 19, 2014 at 06:15:55PM -0500, wardrop wrote: > Hi, > > I much prefer Nginx over Apache, and would use it as our primary HTTP server > at my place of work (as I already do for my personal sites), but the thing > that limits my willingness to do this is the fact that one must recompile > Nginx to turn on/off modules. This can get tedious and problematic when you > have multiple 3rd party modules like Phusion Passenger that you need to > compile into Nginx, especially when those modules need to be updated with a > new version. > > Are there any plans to support loadable modules at runtime like Apache? I > figure this has probably already been discussed, in which case I'd > appreciate links to those discussions.
> http://mailman.nginx.org/pipermail/nginx/2012-September/035405.html From francis at daoine.org Thu Feb 20 08:58:58 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Feb 2014 08:58:58 +0000 Subject: Minimal configuration In-Reply-To: References: <20140215215503.GI24015@craic.sysops.org> <20140216213922.GO24015@craic.sysops.org> <20140216222356.GQ24015@craic.sysops.org> <20140217084248.GT24015@craic.sysops.org> <20140217203738.GW24015@craic.sysops.org> Message-ID: <20140220085858.GB29880@craic.sysops.org> On Mon, Feb 17, 2014 at 11:16:23PM +0100, B.R. wrote: Hi there, > Thanks for your help, Francis! You're welcome. > That's an amazingly detailed explanation. It is. But be aware: it is not actually correct. It covers some of the common cases, which may be good enough for you for now. > The differences in behavior > between 'normal' arguments and the last one are the key but the doc does > not (cannot?) go into details about them. To my mind, the first paragraph of http://nginx.org/r/try_files does say that -- but only if you read it as if it is a technical document, where every word means something. Maybe there's room for alternate documentation, which spells out things in much more details and gives even more examples of how things could be set up -- but that is likely to be book-length and more quickly stale. I suspect that if someone wants to write and maintain it, it would be welcomed. Anyway -- it's good that you have the answer you were looking for. 
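The try_files behaviour being discussed (existence checks on every argument except the last, with the last acting only as an internal redirect target) can be condensed into a small sketch; the paths and the PHP fallback here are invented for illustration and are not from the thread:

```nginx
location / {
    root /var/www;
    # Every argument except the last is checked for existence on disk
    # (here: the file $uri, then the directory $uri/).
    # The LAST argument is never checked for existence; it is only an
    # internal redirect target (a uri, =code, or @named location), so
    # nginx re-processes the request against whichever location matches it.
    try_files $uri $uri/ /index.php$is_args$args;
}
```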
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Feb 20 09:04:05 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Feb 2014 09:04:05 +0000 Subject: Translating apache config to nginx In-Reply-To: References: Message-ID: <20140220090405.GC29880@craic.sysops.org> On Wed, Feb 19, 2014 at 05:35:46AM -0800, Grant wrote: Hi there, > The following seems to work fine: > > location ~ ^/?(\.git|\.tx|SQL|bin|config|logs|temp|tests|program\/(include|lib|localization|steps)) > { > deny all; > } That will probably work until you add "location ^~ /oops{}" and request /oops/a.git Not a problem; just a thing to be aware of when you use top-level regex locations. > But this causes a 403 during normal operation: > > location ~ ^(?!installer)(\.?[^\.]+)$ { > deny all; > } > > Why is that happening? What requests do you want to match that location? What requests actually match that location? Alternatively: what request do you make? What response do you expect? And what is the regex above intended to do? Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Feb 20 09:47:41 2014 From: nginx-forum at nginx.us (nenad) Date: Thu, 20 Feb 2014 04:47:41 -0500 Subject: nginx location fine routing multiple backend Message-ID: I need to route requests to multiple backends, and I can't match them all. What I have so far: ... root /srv/www; == below does what it should == location ~ /assets/(.+\.(?:gif|jpe?g|png|txt|html|css|eot|svg|ttf|woff|swf|js))$ { alias /srv/www2/$1; } ==> below should match only: /, /en, /en/, /de, /de/ but NOT: /search/, /search; these should be caught by the last location location ~ /$ { alias /srv/www3; ... try_files $uri $uri/ /index.php$is_args$args; location ~* { ... } } ==> should match: /about, /en/about, /about/, /en/about/ ...
same alias as above; probably this can be merged location ~ (about|terms|privacy) { alias /srv/www3; } ==> below should match all others and send them to php-fpm; I had to remove location /, as nginx didn't accept it because of the override index index.php; try_files $uri $uri/ @handler; location @handler { rewrite ^/(.*)$ /index.php/$1; } location ~ \.php { ... } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247783,247783#msg-247783 From emailgrant at gmail.com Thu Feb 20 13:58:05 2014 From: emailgrant at gmail.com (Grant) Date: Thu, 20 Feb 2014 05:58:05 -0800 Subject: Translating apache config to nginx In-Reply-To: <20140220090405.GC29880@craic.sysops.org> References: <20140220090405.GC29880@craic.sysops.org> Message-ID: >> But this causes a 403 during normal operation: >> >> location ~ ^(?!installer)(\.?[^\.]+)$ { >> deny all; >> } >> >> Why is that happening? > > What requests do you want to match that location? > > What requests actually match that location? > > Alternatively: what request do you make? What response do you expect? And > what is the regex above intended to do? I actually got these apache deny directives from the roundcube list. I don't have a more specific idea of what this one is supposed to do beyond "securing things". I'm not very good with regex and I was hoping someone here would see the problem. Does it make sense that this would work in apache but not in nginx? - Grant From zxcvbn4038 at gmail.com Thu Feb 20 14:42:25 2014 From: zxcvbn4038 at gmail.com (nginx user) Date: Thu, 20 Feb 2014 09:42:25 -0500 Subject: worker_connections are not enough while requesting certificate status Message-ID: I have seen errors in my logs: worker_connections are not enough while requesting certificate status I believe the main problem was that worker_connections was set too low, and I've fixed that. However, after looking at the source around the OCSP stapling, I have a question - It appears that ocsp responses are cached for five minutes, correct?
When the cached responses expire, does each worker make a new request? Or does every new connection cause a request to be sent until one of the requests (from each worker) receives a reply and populates the cache? -------------- next part -------------- An HTML attachment was scrubbed... URL: From superfelo at yahoo.com Thu Feb 20 15:19:26 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Thu, 20 Feb 2014 07:19:26 -0800 (PST) Subject: nginx.exe has encountered a problem and needs to close. (SSL) Message-ID: <1392909566.36636.YahooMailNeo@web163402.mail.gq1.yahoo.com> I have configured the server to accept SSL connections, and after two or three requests it always gives the error: nginx.exe has encountered a problem and needs to close. Restarting the server does not work; the first request fails. Restarting the computer lets me make two or three requests without problems, and then it fails. Port 80 works correctly. I have tried versions 1.4.5 and 1.5.10. My operating system is Windows XP SP3. This is my setup: worker_processes 1; events { worker_connections 1024; } http { include mime.types; charset utf-8; default_type application/octet-stream; gzip on; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_certificate server.crt; ssl_certificate_key server.key; server { listen 80; listen 443 ssl; server_name localhost ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; root D:\symfony\web; location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; if (!-f $document_root$fastcgi_script_name) { return 404; } #WARNING ABOUT PORT: Many guides suggest setting PHP to port 9000. The XDebug extension uses port 9000 by default. fastcgi_pass 127.0.0.1:9123; fastcgi_index index.php; #fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; #fastcgi_param QUERY_STRING $query_string; #fastcgi_read_timeout 180; } } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 20 18:10:36 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 20 Feb 2014 13:10:36 -0500 Subject: nginx.exe has encountered a problem and needs to close. (SSL) In-Reply-To: <1392909566.36636.YahooMailNeo@web163402.mail.gq1.yahoo.com> References: <1392909566.36636.YahooMailNeo@web163402.mail.gq1.yahoo.com> Message-ID: <81db9fad7f56e7f07de6004720d40c51.NginxMailingListEnglish@forum.nginx.org> Felix Quintana Wrote: ------------------------------------------------------- > I have configured the server to accept SSL connections and after two > or three requests always gives the error: > nginx.exe has encountered a problem and needs to close. [...] > ssl_session_cache shared:SSL:10m; This is a known issue: ssl_session_cache has a bug when it attempts to use shared memory. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247795,247810#msg-247810 From superfelo at yahoo.com Thu Feb 20 18:53:55 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Thu, 20 Feb 2014 10:53:55 -0800 (PST) Subject: nginx.exe has encountered a problem and needs to close.
(SSL) In-Reply-To: <81db9fad7f56e7f07de6004720d40c51.NginxMailingListEnglish@forum.nginx.org> References: <1392909566.36636.YahooMailNeo@web163402.mail.gq1.yahoo.com> <81db9fad7f56e7f07de6004720d40c51.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1392922435.81894.YahooMailNeo@web163403.mail.gq1.yahoo.com> Thank you very much. I commented out the lines ssl_session_timeout and ssl_session_cache and now everything works fine. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.algermissen at nordsc.com Thu Feb 20 18:59:59 2014 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Thu, 20 Feb 2014 19:59:59 +0100 Subject: Combining nginx with a library that manages it's own threads? Message-ID: Hi, I would like to connect nginx to the Cassandra NoSQL database. There is a C++ library[1] that I could wrap to C to use with nginx. However, the library does its own connection pooling and thread management, and I do not really have an idea how that will interfere with nginx's (single)threading model. What do you think? Or are you maybe aware of any other C driver for Cassandra? Jan [1] https://github.com/datastax/cpp-driver From nginx-forum at nginx.us Thu Feb 20 20:15:11 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 20 Feb 2014 15:15:11 -0500 Subject: nginx.exe has encountered a problem and needs to close. (SSL) In-Reply-To: <1392922435.81894.YahooMailNeo@web163403.mail.gq1.yahoo.com> References: <1392922435.81894.YahooMailNeo@web163403.mail.gq1.yahoo.com> Message-ID: <3abcc080ce60bc4955e55dca5886c624.NginxMailingListEnglish@forum.nginx.org> You might try: ssl_session_cache builtin:4000; which seems to work OK, as it's processed inside OpenSSL only.
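For readers skimming the archive, a sketch of the suggested workaround in context; the surrounding directives are illustrative only and not from Felix's config. Per the nginx documentation, the builtin cache is kept inside OpenSSL in each worker process, and its size is given as a number of sessions, not bytes:

```nginx
http {
    # builtin: session cache inside OpenSSL, one per worker process;
    # 4000 means 4000 sessions (the shared: variant, by contrast,
    # takes a memory size shared between all workers)
    ssl_session_cache   builtin:4000;
    ssl_session_timeout 10m;

    server {
        listen              443 ssl;
        ssl_certificate     server.crt;
        ssl_certificate_key server.key;
    }
}
```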
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247795,247813#msg-247813 From francis at daoine.org Thu Feb 20 20:24:04 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Feb 2014 20:24:04 +0000 Subject: Translating apache config to nginx In-Reply-To: References: <20140220090405.GC29880@craic.sysops.org> Message-ID: <20140220202404.GD29880@craic.sysops.org> On Thu, Feb 20, 2014 at 05:58:05AM -0800, Grant wrote: Hi there, > >> location ~ ^(?!installer)(\.?[^\.]+)$ { > >> deny all; > >> } > > Alternatively: what request do you make? What response do you expect? And > > what is the regex above intended to do? > > I actually got these apache deny directives from the roundcube list. Possibly the roundcube list will be able to explain, in words, what the intention is. Then someone may be able to translate those words into an nginx config fragment. > I don't have a more specific idea of what this one is supposed to do > beyond "securing things". I'm not very good with regex and I was > hoping someone here would see the problem. The problem is that the regex matches requests that you don't want it to match -- that much is straightforward. Since this is intended to be a security thing, it's probably better not to try to guess what the author might have meant. (It currently *is* secure, sort of -- you can't get at the urls they intended to block.) > Does it make sense that > this would work in apache but not in nginx? Yes; the two programs have different expectations of their configuration. In nginx, for example, all user requests will start with "/", so any regex that requires anything other than "/" as the first character will fail. In apache, some regexes don't have that requirement. (Or maybe that's incorrect -- check with an apache person if it matters.) 
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Feb 20 20:32:28 2014 From: nginx-forum at nginx.us (ahurkala) Date: Thu, 20 Feb 2014 15:32:28 -0500 Subject: post_action for specific requests only Message-ID: <14442fd8037054818fc79b8b1f143239.NginxMailingListEnglish@forum.nginx.org> Hi, I'm using post_action for remote logging. Logging all requests works great and efficiently (4k req/s), but when I try to log only certain requests nginx becomes unstable (some connections hang and some weird errors occur). I've tried several ways to log specific requests only, but none of them works stably: 1) first way (if directive) post_action @afterdownload; location @afterdownload { if ($foo) { proxy_pass blah } } 2) second way (lua) post_action @afterdownload; location @afterdownload { access_by_lua ' if ngx.var.blah then ngx.location.capture(....) end ngx.exit(ngx.HTTP_OK) '; } 3) third way (lua) post_action /afterdownload; location /afterdownload { access_by_lua ' if ngx.var.blah then ngx.location.capture(....) end ngx.exit(ngx.HTTP_OK) '; } The first way involves the evil 'if', so I assume it might not work smoothly, but what's wrong with the second or 3rd approach? It seems that if you remove ngx.exit() from the 3rd method it kinda works, but this way you get lots of 404 errors saying that /afterdownload is not found. I've read here (http://mailman.nginx.org/pipermail/nginx/2012-November/036199.html) that post_action is executed in the context of the main request, which explains a bit, but maybe there is a way to log only specific requests (like setting post_action from lua maybe)? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247815,247815#msg-247815 From superfelo at yahoo.com Thu Feb 20 21:21:34 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Thu, 20 Feb 2014 13:21:34 -0800 (PST) Subject: nginx.exe has encountered a problem and needs to close.
(SSL) In-Reply-To: <3abcc080ce60bc4955e55dca5886c624.NginxMailingListEnglish@forum.nginx.org> References: <1392922435.81894.YahooMailNeo@web163403.mail.gq1.yahoo.com> <3abcc080ce60bc4955e55dca5886c624.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1392931294.80466.YahooMailNeo@web163403.mail.gq1.yahoo.com> It works well. Thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... URL: From superfelo at yahoo.com Thu Feb 20 21:34:10 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Thu, 20 Feb 2014 13:34:10 -0800 (PST) Subject: Check if the file exists. Message-ID: <1392932050.61449.YahooMailNeo@web163404.mail.gq1.yahoo.com> I'm trying to check if the file exists before passing it to fastcgi, but I have not found how. This is what I'm doing: #if (!-f $document_root$fastcgi_script_name) if (!-f $request_filename) { return 404; } I've tried try_files but couldn't find a way. The file is of the form: /web/app.php/asd/fgh?ert=xx If I let it pass through to fastcgi, it returns the error: No input file specified. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Feb 20 21:53:45 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Feb 2014 21:53:45 +0000 Subject: Check if the file exists. In-Reply-To: <1392932050.61449.YahooMailNeo@web163404.mail.gq1.yahoo.com> References: <1392932050.61449.YahooMailNeo@web163404.mail.gq1.yahoo.com> Message-ID: <20140220215345.GE29880@craic.sysops.org> On Thu, Feb 20, 2014 at 01:34:10PM -0800, Felix Quintana wrote: Hi there, > I'm trying to check if the file exists before passing it to fastcgi, but I have not found how. > I've tried try_files but couldn't find a way. > The file is of the form: > /web/app.php/asd/fgh?ert=xx What http request do you make? What file on your filesystem do you want to check the existence of?
What file on your filesystem do you want to tell the fastcgi server to process? (The last two questions will probably have the same answer unless chroots are involved.) What nginx variables hold the names of the files you care about? Possibly fastcgi_split_path_info (http://nginx.org/r/fastcgi_split_path_info) will be useful to you. f -- Francis Daly francis at daoine.org From superfelo at yahoo.com Thu Feb 20 22:01:22 2014 From: superfelo at yahoo.com (Felix Quintana) Date: Thu, 20 Feb 2014 14:01:22 -0800 (PST) Subject: Check if the file exists. In-Reply-To: <20140220215345.GE29880@craic.sysops.org> References: <1392932050.61449.YahooMailNeo@web163404.mail.gq1.yahoo.com> <20140220215345.GE29880@craic.sysops.org> Message-ID: <1392933682.10631.YahooMailNeo@web163405.mail.gq1.yahoo.com> >What http request do you make? https://localhost/web/app.php/asd/fgh >What file on your filesystem do you want to check the existence of? >What file on your filesystem do you want to tell the fastcgi server to process? d:\symfony\web\app.php >What nginx variables hold the names of the files you care about? I have no idea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Feb 20 22:10:45 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Feb 2014 22:10:45 +0000 Subject: Check if the file exists. In-Reply-To: <1392933682.10631.YahooMailNeo@web163405.mail.gq1.yahoo.com> References: <1392932050.61449.YahooMailNeo@web163404.mail.gq1.yahoo.com> <20140220215345.GE29880@craic.sysops.org> <1392933682.10631.YahooMailNeo@web163405.mail.gq1.yahoo.com> Message-ID: <20140220221045.GF29880@craic.sysops.org> On Thu, Feb 20, 2014 at 02:01:22PM -0800, Felix Quintana wrote: Hi there, > >What http request do you make? > > https://localhost/web/app.php/asd/fgh > > >What file on your filesystem do you want to check the existence of? > >What file on your filesystem do you want to tell the fastcgi server to process? 
> d:\symfony\web\app.php > > >What nginx variables hold the names of the files you care about? > I have no idea. http://nginx.org/en/docs/http/ngx_http_core_module.html#variables See also http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files and the usual way it is used for php/fastcgi checking. With the other directive mentioned, you'll possibly (untested!) be able to use try_files $fastcgi_script_name =404 in your fastcgi-processing location. Good luck with it, f -- Francis Daly francis at daoine.org From reeteshr at outlook.com Fri Feb 21 06:08:50 2014 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Fri, 21 Feb 2014 11:38:50 +0530 Subject: Combining nginx with a library that manages it's own threads? In-Reply-To: References: Message-ID: Hi Jan, I guess you are looking for an upstream nginx module to talk to Cassandra in place of the C++ client you mentioned. I did something similar for talking to Sphinx search platform (https://github.com/reeteshranjan/sphinx2-nginx-module). There was a C++ client; but if you want nginx to control all connections, the whole non-blocking I/O etc. you need to really write an upstream module, where all socket read/writes are done by core nginx code and you need to provide only hooks.
Your hooks would perform the request-response protocol with the service e.g. Cassandra in your case. Regards,Reetesh > From: jan.algermissen at nordsc.com > Subject: Combining nginx with a library that manages it's own threads? > Date: Thu, 20 Feb 2014 19:59:59 +0100 > To: nginx at nginx.org > > Hi, > > I would like to connect nginx to the Cassandra NoSQL database. > > There is a C++ library[1] that I could wrap to C to use with nginx. > > However, the library does it's own connection pooling and thread management and I do not really have an idea how that will interfere with nginx's (single)threading model. > > What do you think? Or are you maybe aware of any other C driver for Cassanrda? > > Jan > > [1] https://github.com/datastax/cpp-driver > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From reeteshr at outlook.com Fri Feb 21 06:10:36 2014 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Fri, 21 Feb 2014 11:40:36 +0530 Subject: Combining nginx with a library that manages it's own threads? In-Reply-To: References: , Message-ID: I forgot to mention: you loose all thread management as well. Regards,Reetesh From: reeteshr at outlook.com To: nginx at nginx.org Subject: RE: Combining nginx with a library that manages it's own threads? Date: Fri, 21 Feb 2014 11:38:50 +0530 Hi Jan, I guess you are looking for an upstream nginx module to talk to Cassandra in place of the C++ client you mentioned. I did something similar for talking to Sphinx search platform (https://github.com/reeteshranjan/sphinx2-nginx-module). There was a C++ client; but if you want nginx to control all connections, the whole non-blocking I/O etc. you need to really write an upstream module, where all socket read/writes are done by core nginx code and you need to provide only hooks. 
Your hooks would perform the request-response protocol with the service e.g. Cassandra in your case. Regards,Reetesh > From: jan.algermissen at nordsc.com > Subject: Combining nginx with a library that manages it's own threads? > Date: Thu, 20 Feb 2014 19:59:59 +0100 > To: nginx at nginx.org > > Hi, > > I would like to connect nginx to the Cassandra NoSQL database. > > There is a C++ library[1] that I could wrap to C to use with nginx. > > However, the library does it's own connection pooling and thread management and I do not really have an idea how that will interfere with nginx's (single)threading model. > > What do you think? Or are you maybe aware of any other C driver for Cassanrda? > > Jan > > [1] https://github.com/datastax/cpp-driver > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From momeunier at gmail.com Fri Feb 21 07:28:53 2014 From: momeunier at gmail.com (Marc-Olivier Meunier) Date: Fri, 21 Feb 2014 09:28:53 +0200 Subject: virtual or physical? Message-ID: Hi! If I have one host, am I going to get better performances if I run only one nginx on the physical host, or would it be more beneficial to run 4 virtual machines each running a different nginx? Thanks! -- Marc-Olivier Meunier +358 50 4840036 -------------- next part -------------- An HTML attachment was scrubbed... URL: From citrin at citrin.ru Fri Feb 21 09:52:45 2014 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Fri, 21 Feb 2014 13:52:45 +0400 Subject: virtual or physical? In-Reply-To: References: Message-ID: <530721ED.40702@citrin.ru> On 02/21/14 11:28, Marc-Olivier Meunier wrote: > If I have one host, am I going to get better performances if I run only one > nginx on the physical host, or would it be more beneficial to run 4 virtual > machines each running a different nginx? 
With nginx on a physical host you will get better performance, but VMs are easier to manage and maintain (backup/restore, upgrade by replacing one VM with another, fast migration to another host in case of hardware failure, etc.). The decision should depend on many factors, including server load (traffic, requests per second, etc.). From nginx-forum at nginx.us Fri Feb 21 14:40:43 2014 From: nginx-forum at nginx.us (p.heppler) Date: Fri, 21 Feb 2014 09:40:43 -0500 Subject: Issue with spdy and proxy_pass In-Reply-To: <98c0c4360be8fcce02f08cc321e86f17.NginxMailingListEnglish@forum.nginx.org> References: <6421273.AxaUT8VlqD@vbart-laptop> <98c0c4360be8fcce02f08cc321e86f17.NginxMailingListEnglish@forum.nginx.org> Message-ID: <423772fec5488f1c2e502e59ec6192e4.NginxMailingListEnglish@forum.nginx.org> In Railo you can access some environment and client vars within a struct called cgi. If I hit my site with plain SSL it shows cgi.server_port = 80. Should be 443, shouldn't it? Hitting Tomcat directly shows cgi.server_port=8888, which is fine. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247836#msg-247836 From nginx-forum at nginx.us Fri Feb 21 15:25:58 2014 From: nginx-forum at nginx.us (rge3) Date: Fri, 21 Feb 2014 10:25:58 -0500 Subject: Serve *only* from cache for particular user-agents Message-ID: <82e47a934f716ea8bdef21329c71db56.NginxMailingListEnglish@forum.nginx.org> I haven't found any ideas for this and thought I might ask here. We have a fairly straightforward proxy_cache setup with a proxy_pass backend. We cache documents for different lengths of time or go to the backend for what's missing. My problem is we're getting overrun with bot and spider requests. MSN in particular started hitting us exceptionally hard yesterday and started bringing our backend servers down. Because they're crawling the site from end to end our cache is missing a lot of those pages and nginx has to pass the request on through.
I'm looking for a way to match on User-Agent and say that if it matches certain bots to *only* serve out of proxy_cache. Ideally I'd like the logic to be: if it's in the cache, serve it. If it's not, then return some 4xx error. But in the case of those user-agents, *don't* go to the backend. Only give them cache. My first thought was something like... if ($http_user_agent ~* msn-bot) { proxy_pass http://devnull; } by making a bogus backend. But in nginx 1.4.3 (that's what we're running) I get nginx: [emerg] "proxy_pass" directive is not allowed here Does anyone have another idea? Thanks, -Rick Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247837,247837#msg-247837 From emailgrant at gmail.com Fri Feb 21 15:36:20 2014 From: emailgrant at gmail.com (Grant) Date: Fri, 21 Feb 2014 07:36:20 -0800 Subject: Translating apache config to nginx In-Reply-To: <20140220202404.GD29880@craic.sysops.org> References: <20140220090405.GC29880@craic.sysops.org> <20140220202404.GD29880@craic.sysops.org> Message-ID: >> >> location ~ ^(?!installer)(\.?[^\.]+)$ { >> >> deny all; >> >> } > >> > Alternatively: what request do you make? What response do you expect? And >> > what is the regex above intended to do? >> >> I actually got these apache deny directives from the roundcube list. > > Possibly the roundcube list will be able to explain, in words, what the > intention is. > > Then someone may be able to translate those words into an nginx config > fragment. Here is the description: "deny access to files not containing a dot or starting with a dot in all locations except installer directory" Should the following accomplish this in nginx? It gives me 403 during normal operation. 
location ~ ^(?!installer)(\.?[^\.]+)$ { deny all; } - Grant From mdounin at mdounin.ru Fri Feb 21 15:47:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Feb 2014 19:47:47 +0400 Subject: Serve *only* from cache for particular user-agents In-Reply-To: <82e47a934f716ea8bdef21329c71db56.NginxMailingListEnglish@forum.nginx.org> References: <82e47a934f716ea8bdef21329c71db56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140221154747.GG33573@mdounin.ru> Hello! On Fri, Feb 21, 2014 at 10:25:58AM -0500, rge3 wrote: > I havne't found any ideas for this and thought I might ask here. We have a > fairly straightforward proxy_cache setup with a proxy_pass backend. We > cache documents for different lengths of time or go the backend for what's > missing. My problem is we're getting overrun with bot and spider requests. > MSN in particular started hitting us exceptionally hard yesterday and > started bringing our backend servers down. Because they're crawling the > site from end to end our cache is missing a lot of those pages and nginx has > to pass the request on through. > > I'm looking for a way to match on User-Agent and say that if it matches > certain bots to *only* serve out of proxy_cache. Ideally I'd like the logic > to be: if it's in the cache, serve it. If it's not, then return some 4xx > error. But in the case of those user-agents, *don't* go to the backend. > Only give them cache. My first thought was something like... > > if ($http_user_agent ~* msn-bot) { > proxy_pass http://devnull; > } > > by making a bogus backend. But in nginx 1.4.3 (that's what we're running) I > get > nginx: [emerg] "proxy_pass" directive is not allowed here > > Does anyone have another idea? The message suggests you are trying to write the snippet above at server{} level. Moving things into a location should do the trick. Please make sure to read http://wiki.nginx.org/IfIsEvil though. 
-- Maxim Dounin http://nginx.org/ From r at roze.lv Fri Feb 21 16:34:43 2014 From: r at roze.lv (Reinis Rozitis) Date: Fri, 21 Feb 2014 18:34:43 +0200 Subject: Loadable runtime modules? In-Reply-To: <8ef25825a92ec95d295c21ebd7829c61.NginxMailingListEnglish@forum.nginx.org> References: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> <8ef25825a92ec95d295c21ebd7829c61.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3A21F55703144AEE81E1AD909F2042F8@MasterPC> > Thanks, that's handy to be aware of, but would still like to enquire as to > why Nginx doesn't support such loadable modules? I don't see how that makes anything harder (or more time-consuming) when the modules are linked into the binary. There is no need to always recompile the whole source: you can just delete the particular module *.o files from objs/ (objs/addon/ for third-party modules) and then just do 'make', which will recompile those and relink the binary. This way you can always do a live upgrade (kill -usr2/kill -quit) for the running process (or roll back to the old/previous binary) without the mess that dynamically (un)loading a module on the fly involves. rr From nginx-forum at nginx.us Fri Feb 21 16:46:02 2014 From: nginx-forum at nginx.us (rge3) Date: Fri, 21 Feb 2014 11:46:02 -0500 Subject: Serve *only* from cache for particular user-agents In-Reply-To: <20140221154747.GG33573@mdounin.ru> References: <20140221154747.GG33573@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Fri, Feb 21, 2014 at 10:25:58AM -0500, rge3 wrote: > > > I havne't found any ideas for this and thought I might ask here. We > have a > > fairly straightforward proxy_cache setup with a proxy_pass backend. > We > > cache documents for different lengths of time or go the backend for > what's > > missing. My problem is we're getting overrun with bot and spider > requests.
> > MSN in particular started hitting us exceptionally hard yesterday > and > > started bringing our backend servers down. Because they're crawling > the > > site from end to end our cache is missing a lot of those pages and > nginx has > > to pass the request on through. > > > > I'm looking for a way to match on User-Agent and say that if it > matches > > certain bots to *only* serve out of proxy_cache. Ideally I'd like > the logic > > to be: if it's in the cache, serve it. If it's not, then return > some 4xx > > error. But in the case of those user-agents, *don't* go to the > backend. > > Only give them cache. My first thought was something like... > > > > if ($http_user_agent ~* msn-bot) { > > proxy_pass http://devnull; > > } > > > > by making a bogus backend. But in nginx 1.4.3 (that's what we're > running) I > > get > > nginx: [emerg] "proxy_pass" directive is not allowed here > > > > Does anyone have another idea? > > The message suggests you are trying to write the snippet above at > server{} level. Moving things into a location should do the > trick. > > Please make sure to read http://wiki.nginx.org/IfIsEvil though. That seems to have done it! With a location block I now have... location / { proxy_cache_valid 200 301 302 30m; if ($http_user_agent ~* msn-bot) { proxy_pass http://devnull; } if ($http_user_agent !~* msn-bot) { proxy_pass http://productionrupal; } } That seems to work perfectly. But is it a safe use of "if"? Is there a safer way to do it without an if? Thanks for the help! 
-R Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247837,247845#msg-247845 From ajaykemparaj at gmail.com Fri Feb 21 16:59:04 2014 From: ajaykemparaj at gmail.com (Ajay k) Date: Fri, 21 Feb 2014 22:29:04 +0530 Subject: Serve *only* from cache for particular user-agents In-Reply-To: References: <20140221154747.GG33573@mdounin.ru> Message-ID: you can use http://nginx.org/en/docs/http/ngx_http_map_module.html Ex: map $http_user_agent $mobile { ~* msn-bot 'http://devnull'; default 'http://productionrupal'; } Thanks, Ajay K On Fri, Feb 21, 2014 at 10:16 PM, rge3 wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Fri, Feb 21, 2014 at 10:25:58AM -0500, rge3 wrote: > > > > > I havne't found any ideas for this and thought I might ask here. We > > have a > > > fairly straightforward proxy_cache setup with a proxy_pass backend. > > We > > > cache documents for different lengths of time or go the backend for > > what's > > > missing. My problem is we're getting overrun with bot and spider > > requests. > > > MSN in particular started hitting us exceptionally hard yesterday > > and > > > started bringing our backend servers down. Because they're crawling > > the > > > site from end to end our cache is missing a lot of those pages and > > nginx has > > > to pass the request on through. > > > > > > I'm looking for a way to match on User-Agent and say that if it > > matches > > > certain bots to *only* serve out of proxy_cache. Ideally I'd like > > the logic > > > to be: if it's in the cache, serve it. If it's not, then return > > some 4xx > > > error. But in the case of those user-agents, *don't* go to the > > backend. > > > Only give them cache. My first thought was something like... > > > > > > if ($http_user_agent ~* msn-bot) { > > > proxy_pass http://devnull; > > > } > > > > > > by making a bogus backend. 
But in nginx 1.4.3 (that's what we're > > running) I > > > get > > > nginx: [emerg] "proxy_pass" directive is not allowed here > > > > > > Does anyone have another idea? > > > > The message suggests you are trying to write the snippet above at > > server{} level. Moving things into a location should do the > > trick. > > > > Please make sure to read http://wiki.nginx.org/IfIsEvil though. > > That seems to have done it! With a location block I now have... > > location / { > proxy_cache_valid 200 301 302 30m; > > if ($http_user_agent ~* msn-bot) { > proxy_pass http://devnull; > } > > if ($http_user_agent !~* msn-bot) { > proxy_pass http://productionrupal; > } > } > > That seems to work perfectly. But is it a safe use of "if"? Is there a > safer way to do it without an if? > > Thanks for the help! > -R > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,247837,247845#msg-247845 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks, Ajay K -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Feb 21 17:18:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Feb 2014 21:18:00 +0400 Subject: Serve *only* from cache for particular user-agents In-Reply-To: References: <20140221154747.GG33573@mdounin.ru> Message-ID: <20140221171800.GJ33573@mdounin.ru> Hello! On Fri, Feb 21, 2014 at 11:46:02AM -0500, rge3 wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Fri, Feb 21, 2014 at 10:25:58AM -0500, rge3 wrote: > > > > > I havne't found any ideas for this and thought I might ask here. We > > have a > > > fairly straightforward proxy_cache setup with a proxy_pass backend. > > We > > > cache documents for different lengths of time or go the backend for > > what's > > > missing. My problem is we're getting overrun with bot and spider > > requests. 
> > > MSN in particular started hitting us exceptionally hard yesterday > > and > > > started bringing our backend servers down. Because they're crawling > > the > > > site from end to end our cache is missing a lot of those pages and > > nginx has > > > to pass the request on through. > > > > > > I'm looking for a way to match on User-Agent and say that if it > > matches > > > certain bots to *only* serve out of proxy_cache. Ideally I'd like > > the logic > > > to be: if it's in the cache, serve it. If it's not, then return > > some 4xx > > > error. But in the case of those user-agents, *don't* go to the > > backend. > > > Only give them cache. My first thought was something like... > > > > > > if ($http_user_agent ~* msn-bot) { > > > proxy_pass http://devnull; > > > } > > > > > > by making a bogus backend. But in nginx 1.4.3 (that's what we're > > running) I > > > get > > > nginx: [emerg] "proxy_pass" directive is not allowed here > > > > > > Does anyone have another idea? > > > > The message suggests you are trying to write the snippet above at > > server{} level. Moving things into a location should do the > > trick. > > > > Please make sure to read http://wiki.nginx.org/IfIsEvil though. > > That seems to have done it! With a location block I now have... > > location / { > proxy_cache_valid 200 301 302 30m; > > if ($http_user_agent ~* msn-bot) { > proxy_pass http://devnull; > } > > if ($http_user_agent !~* msn-bot) { > proxy_pass http://productionrupal; > } > } Second condition can be removed, it's surplus. Just a location / { if (...) { proxy_pass ... } proxy_pass ... } should be enough. > That seems to work perfectly. But is it a safe use of "if"? Is there a > safer way to do it without an if? As long as it's full configuration, there should be no problems. 
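For reference, the map-based alternative suggested in this thread can be sketched end to end as follows. This is only a sketch: the upstream names come from the snippets above, while the cache path, zone name, and server addresses are made-up examples. A map block must sit at http level, and when proxy_pass takes a variable the resulting name has to match a defined upstream (or be resolvable via a resolver):

```nginx
# http-level context; cache path and zone name here are examples.
proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;

# Pick a backend name by User-Agent, so no "if" is needed.
map $http_user_agent $backend {
    ~*msn-bot  devnull;           # bots: bogus backend, cached answers only
    default    productionrupal;   # everyone else: the real backend
}

upstream devnull         { server 127.0.0.1:1; }   # nothing listens here
upstream productionrupal { server 10.0.0.10:80; }  # placeholder address

server {
    listen 80;

    location / {
        proxy_cache       my_cache;
        proxy_cache_valid 200 301 302 30m;
        proxy_pass        http://$backend;
    }
}
```

With this layout a bot request that misses the cache fails with a 502 instead of reaching the real backend; if serving stale copies is acceptable, proxy_cache_use_stale can soften that.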
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Feb 21 18:13:08 2014 From: nginx-forum at nginx.us (rge3) Date: Fri, 21 Feb 2014 13:13:08 -0500 Subject: Serve *only* from cache for particular user-agents In-Reply-To: References: Message-ID: ajay Wrote: ------------------------------------------------------- > you can use http://nginx.org/en/docs/http/ngx_http_map_module.html > > Ex: > > map $http_user_agent $mobile { > ~* msn-bot 'http://devnull'; > > default 'http://productionrupal'; > > } Actually that worked perfectly! Then I can do it entirely without the 'if'. Thanks Ajay and Maxim. I appreciate all the help! -R Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247837,247849#msg-247849 From paulnpace at gmail.com Fri Feb 21 18:36:01 2014 From: paulnpace at gmail.com (Paul N. Pace) Date: Fri, 21 Feb 2014 10:36:01 -0800 Subject: Service restart testing nginx.conf Message-ID: It seems like way back in the olden days, when I restarted nginx ('sudo service nginx restart'), if there was a configuration issue in nginx.conf, I would get a warning telling me such and, IIRC, nginx would boot using the last known valid configuration. It doesn't seem to happen that way any more. Did I unknowingly change a configuration setting, or was there a change to nginx? Thanks! Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From semenukha at gmail.com Fri Feb 21 18:44:50 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Fri, 21 Feb 2014 13:44:50 -0500 Subject: Service restart testing nginx.conf In-Reply-To: References: Message-ID: <3093147.1OdRutFMt7@tornado> You must have used `service nginx reload` or `service nginx configtest`. On Friday, February 21, 2014 10:36:01 AM Paul N. 
Pace wrote: > It seems like way back in the olden days, when I restarted nginx ('sudo > service nginx restart'), if there was a configuration issue in nginx.conf, > I would get a warning telling me such and, IIRC, nginx would boot using the > last known valid configuration. > > It doesn't seem to happen that way any more. Did I unknowingly change a > configuration setting, or was there a change to nginx? > > Thanks! > > > Paul -- Best regards, Styopa Semenukha. From emailgrant at gmail.com Fri Feb 21 20:00:47 2014 From: emailgrant at gmail.com (Grant) Date: Fri, 21 Feb 2014 12:00:47 -0800 Subject: fastcgi caching Message-ID: I'm using the following config to cache only /piwik/piwik.php: fastcgi_cache_path /var/cache/php-fpm levels=1:2 keys_zone=piwik:10m; fastcgi_cache_key "$scheme$request_method$host$request_uri"; location /piwik/piwik.php { fastcgi_cache piwik; add_header X-Cache $upstream_cache_status; fastcgi_pass unix:/run/php-fpm.socket; include fastcgi.conf; } I'm getting "X-Cache: HIT". I tried to set up a minimal config, but am I missing anything essential? Is setting up a manual purge required or will this manage itself? - Grant From francis at daoine.org Fri Feb 21 20:25:15 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 21 Feb 2014 20:25:15 +0000 Subject: Translating apache config to nginx In-Reply-To: References: <20140220090405.GC29880@craic.sysops.org> <20140220202404.GD29880@craic.sysops.org> Message-ID: <20140221202515.GG29880@craic.sysops.org> On Fri, Feb 21, 2014 at 07:36:20AM -0800, Grant wrote: Hi there, > Here is the description: > > "deny access to files not containing a dot or starting with a dot in > all locations except installer directory" So: you want it to block /one and /two/, to allow /thr.ee, and to block /.four, yes? > Should the following accomplish this in nginx? It gives me 403 during > normal operation. That configuration seems to get the first three correct and the last one wrong. 
If you add a "/" immediately after the first ^, it seems to get all four correct. What is "normal operation"? If the request you make is like /thr.ee, it should be allowed; if it is not like /thr.ee it should be blocked. (Personally, I'm not sure why you would want that set of restrictions. But if you want it, this is one way to get it.) > location ~ ^(?!installer)(\.?[^\.]+)$ { > deny all; > } A more nginx-ish way would probably be to only have prefix locations at the top level; but if what you have works for you, it's good. f -- Francis Daly francis at daoine.org From sarah at nginx.com Fri Feb 21 21:08:05 2014 From: sarah at nginx.com (Sarah Novotny) Date: Fri, 21 Feb 2014 13:08:05 -0800 Subject: Join Us! Nginx User Summit 2/25 in San Francisco Message-ID: <2C3D0A35-BEB6-444E-81A2-3D121B6A7CFF@nginx.com> The Users Have Spoken. So, don't forget the inaugural NGINX User Summit! Please join us February 25 in San Francisco to learn what's coming up with NGINX, and to collaborate with the larger NGINX community. We're eager to hear what the users say, and we also look forward to sharing some insights with you. Some of what you'll enjoy at the NGINX User Summit: - Igor Sysoev, founder of NGINX FOSS and Nginx, Inc., who will talk about NGINX: Past, Present and Future - Yichun Zhang (@agentzh), renowned module developer within the NGINX ecosystem, who will share his experiences with NGINX, Lua, and Beyond. - Cocktails and socializing with other users and the NGINX team, with drinks sponsored by MaxCDN We hope to see you next week. Register today!
https://www.eventbrite.com/e/nginx-user-summit-and-training-tickets-10393173261 Sarah From emailgrant at gmail.com Fri Feb 21 21:51:41 2014 From: emailgrant at gmail.com (Grant) Date: Fri, 21 Feb 2014 13:51:41 -0800 Subject: Translating apache config to nginx In-Reply-To: <20140221202515.GG29880@craic.sysops.org> References: <20140220090405.GC29880@craic.sysops.org> <20140220202404.GD29880@craic.sysops.org> <20140221202515.GG29880@craic.sysops.org> Message-ID: >> Here is the description: >> >> "deny access to files not containing a dot or starting with a dot in >> all locations except installer directory" > > So: you want it to block /one and /two/, to allow /thr.ee, and to block /.four, yes? That's how I read it too. >> Should the following accomplish this in nginx? It gives me 403 during >> normal operation. > > That configuration seems to get the first three correct and the last > one wrong. > > If you add a "/" immediately after the first ^, it seems to get all > four correct. > > What is "normal operation"? If the request you make is like /thr.ee, > it should be allowed; if it is not like /thr.ee is should be blocked. I just meant normal browsing around the inbox in Roundcube. > (Personally, I'm not sure why you would want that set of restrictions. But > if you want it, this is one way to get it.) > >> location ~ ^(?!installer)(\.?[^\.]+)$ { >> deny all; >> } I think the corrected directive is as follows? 
location ~ ^/(?!installer)(\.?[^\.]+)$ { deny all; } - Grant From list_nginx at bluerosetech.com Fri Feb 21 22:15:06 2014 From: list_nginx at bluerosetech.com (Darren Pilgrim) Date: Fri, 21 Feb 2014 14:15:06 -0800 Subject: Serve *only* from cache for particular user-agents In-Reply-To: <82e47a934f716ea8bdef21329c71db56.NginxMailingListEnglish@forum.nginx.org> References: <82e47a934f716ea8bdef21329c71db56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5307CFEA.2010403@bluerosetech.com> On 2/21/2014 7:25 AM, rge3 wrote: > I havne't found any ideas for this and thought I might ask here. We have a > fairly straightforward proxy_cache setup with a proxy_pass backend. We > cache documents for different lengths of time or go the backend for what's > missing. My problem is we're getting overrun with bot and spider requests. > MSN in particular started hitting us exceptionally hard yesterday and > started bringing our backend servers down. Because they're crawling the > site from end to end our cache is missing a lot of those pages and nginx has > to pass the request on through. Are they ignoring your robots.txt? From nginx-forum at nginx.us Fri Feb 21 22:23:06 2014 From: nginx-forum at nginx.us (toddlahman) Date: Fri, 21 Feb 2014 17:23:06 -0500 Subject: WordPress + Code Igniter Config not working Message-ID: Could someone look over my configuration? It appears I am missing something that is causing it not to work. I have a Code Igniter application that runs from the root WordPress folder fine with Apache, but it is not working with my Nginx configuration. 
The WordPress root folder is: /var/www/vhosts/wordpress The Code Igniter root folder is: /var/www/vhosts/wordpress/code_igniter My config is: root /var/www/vhosts/wordpress; location / { try_files $uri $uri/ @wordpress; } location @wordpress { try_files $uri =404; fastcgi_pass php; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } location ~ /code_igniter/ { try_files $uri =404; fastcgi_pass php; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root/code_igniter/$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } location ~ \.php$ { try_files $uri =404; fastcgi_pass php; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247861,247861#msg-247861 From contact at jpluscplusm.com Sat Feb 22 01:01:43 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 22 Feb 2014 01:01:43 +0000 Subject: Translating apache config to nginx In-Reply-To: References: <20140220090405.GC29880@craic.sysops.org> <20140220202404.GD29880@craic.sysops.org> <20140221202515.GG29880@craic.sysops.org> Message-ID: On 21 February 2014 21:51, Grant wrote: >> What is "normal operation"? If the request you make is like /thr.ee, >> it should be allowed; if it is not like /thr.ee is should be blocked. > > I just meant normal browsing around the inbox in Roundcube. If you assume that people on this list magically know what Roundcube URIs look like, you're going to be massively reducing the audience that might otherwise be able to help you! 
;-) J From emailgrant at gmail.com Sat Feb 22 02:16:09 2014 From: emailgrant at gmail.com (Grant) Date: Fri, 21 Feb 2014 18:16:09 -0800 Subject: Translating apache config to nginx In-Reply-To: References: <20140220090405.GC29880@craic.sysops.org> <20140220202404.GD29880@craic.sysops.org> <20140221202515.GG29880@craic.sysops.org> Message-ID: >>> What is "normal operation"? If the request you make is like /thr.ee, >>> it should be allowed; if it is not like /thr.ee is should be blocked. >> >> I just meant normal browsing around the inbox in Roundcube. > > If you assume that people on this list magically know what Roundcube > URIs look like, you're going to be massively reducing the audience > that might otherwise be able to help you! ;-) You're right, but the regex was originally written for Roundcube, so my point was that it was supposed to work but didn't and something was probably lost in translation between apache and nginx. It just needed an extra slash. - Grant From nginx-forum at nginx.us Sat Feb 22 03:49:29 2014 From: nginx-forum at nginx.us (wardrop) Date: Fri, 21 Feb 2014 22:49:29 -0500 Subject: Loadable runtime modules? In-Reply-To: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> References: <7623ef7d5759ff36a16b3dec35012be6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <922c75465e002bd327b7175bb8089ab6.NginxMailingListEnglish@forum.nginx.org> Thanks for the replies, guys. Reinis, I didn't know about the method of upgrading modules that you described. I also appreciate the reasoning in the link that you shared, Vladimir. Much appreciated. One thing about Nginx I do appreciate is how much less painful it is to compile compared to Apache, which you're often best installing via your operating system package manager, in which case your choice of versioning is not great. I just can't wait for more distributions to abandon those painful legacy init.d scripts in favor of the much less painful alternatives that are much easier to set up.
Most have moved away, but I still deal with a lot of older distros where the biggest pain in setting up Nginx is getting it working as a managed daemon. Anyway, cheers, guys. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247763,247864#msg-247864 From francis at daoine.org Sat Feb 22 08:41:28 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 22 Feb 2014 08:41:28 +0000 Subject: WordPress + Code Igniter Config not working In-Reply-To: References: Message-ID: <20140222084128.GI29880@craic.sysops.org> On Fri, Feb 21, 2014 at 05:23:06PM -0500, toddlahman wrote: Hi there, > Could someone look over my configuration? It appears I am missing something > that is causing it not to work. I have a Code Igniter application that runs > from the root WordPress folder fine with Apache, but it is not working with > my Nginx configuration. What's the problem? What request do you make, what response do you get, what response do you want? > location / { > location @wordpress { > location ~ /code_igniter/ { > location ~ \.php$ { f -- Francis Daly francis at daoine.org From dfisek at ozguryazilim.com.tr Sat Feb 22 13:55:17 2014 From: dfisek at ozguryazilim.com.tr (Doruk Fisek) Date: Sat, 22 Feb 2014 15:55:17 +0200 Subject: rpm packaging a web application software to run on nginx Message-ID: <20140222155517.0f3cf22c0cc16084b0c90e49@ozguryazilim.com.tr> Hi, I'm trying to add Nginx support to the LDAP Account Manager RPM package. In Apache, the package adds a /etc/httpd/conf.d/lam.apache.conf that has a simple Alias directive: Alias /lam /usr/share/ldap-account-manager However, when I try to do that with a Location directive in /etc/nginx/conf.d/lam.nginx.conf, I get: "location directive is not allowed here" I have to put the Location directive inside a server block, which makes the config file not usable out of the box (the user has to manually place it in all server blocks, including the default one).
Since my Apache way of thinking isn't working in Nginx, my question is: How do I prepare an RPM package of a web application, which, when installed, will be up and running in Nginx without any manual editing on the user's part? Doruk -- Özgür Yazılım A.Ş. ~ # http://www.ozguryazilim.com.tr From semenukha at gmail.com Sat Feb 22 15:25:28 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Sat, 22 Feb 2014 10:25:28 -0500 Subject: rpm packaging a web application software to run on nginx In-Reply-To: <20140222155517.0f3cf22c0cc16084b0c90e49@ozguryazilim.com.tr> References: <20140222155517.0f3cf22c0cc16084b0c90e49@ozguryazilim.com.tr> Message-ID: <1522810.PQWRsXytWB@hydra> Hello Doruk, The package manager should not undertake creating a _usable_ configuration; that is the configuration manager's prerogative. The PM must create a sample configuration ("Welcome to Nginx"). I would declare your conf file as %doc; then rpmbuild will put it in /usr/share/doc/%{name}-%{version}/ Experienced users will see it and fit it to their needs. Beginners will ignore it and copy-paste configs from forums anyway :) On Saturday, February 22, 2014 03:55:17 PM Doruk Fisek wrote: > Hi, > > I'm trying to add Nginx support to the LDAP Account Manager RPM > package. > > In Apache, the package adds a /etc/httpd/conf.d/lam.apache.conf that > has a simple Alias directive: > > Alias /lam /usr/share/ldap-account-manager > > However when I try to do that with a Location directive > in /etc/nginx/conf.d/lam.nginx.conf, I get: > > "location directive is not allowed here" > > I have to put the Location directive inside a server block, which > makes the config file not usable out of the box (the user has to > manually place it in all server blocks, including the default one). > > Since my Apache way of thinking isn't working in Nginx, my question > is: How do I prepare an RPM package of a web application, which when > installed, will be up and running in Nginx without any manual editing > on the user's part?
> > Doruk > > -- > Özgür Yazılım A.Ş. ~ # > http://www.ozguryazilim.com.tr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Sincerely yours, Styopa Semenukha. From anaxagramma at gmail.com Sun Feb 23 12:59:39 2014 From: anaxagramma at gmail.com (naxa) Date: Sun, 23 Feb 2014 13:59:39 +0100 Subject: How to combine text and variables in fastcgi_param? Message-ID: <5309F0BB.20005@gmail.com> Hello there dear nginx people, I am a beginner in nginx and also new to mailing lists. :) Having read the docs, I can't say I am sure I remember all the important parts, so please excuse me if I am asking something silly. I'm having trouble with the `value` part of `fastcgi_param` in ngx_http_fastcgi. I am using nginx 1.2.5. I am trying to include four spaces between two variables. Here is an example in markdown. This is cross-posted at http://stackoverflow.com/questions/21968255 The [fastcgi_param docs](http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_param) read: > A value can contain text, variables, and their combination. They do not link to or specify explicitly what the text format is or what the combination format is. I am trying to add four spaces between two variables in order to understand the 'expression format' used in `fastcgi_param`. I get errors.
Here are relevant parts from `nginx.conf` with line numbers: # try 1 with apostrophe `'` 78: fastcgi_param SCRIPT_FILENAME $document_root' '$fastcgi_script_name; 82: # deny access to .htaccess files, if Apache's document root produces nginx: [emerg] unexpected "s" in :82 # try 2 (bare) Another try 78: fastcgi_param SCRIPT_FILENAME $document_root $fastcgi_script_name; produces nginx: [emerg] invalid parameter "$fastcgi_script_name" in :78 # try 3 with double quotse `"` If I use `"` like this: 78: fastcgi_param SCRIPT_FILENAME $document_root" "$fastcgi_script_name; the error I get is: nginx: [emerg] unexpected end of file, expecting ";" or "}" in :128 If I simply do 78: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; as usual, it works, *so I believe there is no other error in my* `nginx.conf`. Tried to look up the [source](http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c) but due to its generalized nature, I couldn't yet find where the parse and variable substitution is actually done. How do I mix text and variables in a free-form way with nginx, in particular in the value part of `fastcgi_param`? I am using **nginx 1.2.5**. Thank you, Kristof From mdounin at mdounin.ru Sun Feb 23 13:51:10 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 23 Feb 2014 17:51:10 +0400 Subject: How to combine text and variables in fastcgi_param? In-Reply-To: <5309F0BB.20005@gmail.com> References: <5309F0BB.20005@gmail.com> Message-ID: <20140223135110.GP33573@mdounin.ru> Hello! On Sun, Feb 23, 2014 at 01:59:39PM +0100, naxa wrote: > Hello there dear nginx people, > > I am a beginner in nginx and also to mailing lists. :) Once read the > docs I can't say I am sure I remember all the important parts so please > excuse me if I am asking something silly. I'm having trouble with using > the `value` part `fastcgi_param` of ngx_http_fastcgi. I am using nginx > 1.2.5. > I am trying to include four spaces between two variables. 
Here is an > example in markdown. This is cross-posted at > http://stackoverflow.com/questions/21968255 > > The [fastcgi_param > docs](http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_param) > reads > > > A value can contain text, variables, and their combination. > > Does not link or specify explicitly what the text format is or what the > combination format is. > > I am trying to add four spaces between two variables in order to > understand the 'expression format' used in `fastcgi_param`. I get > errors. Here are relevant parts from `nginx.conf` with line numbers: Try this: fastcgi_param SCRIPT_FILENAME "$document_root $fastcgi_script_name"; or this: fastcgi_param SCRIPT_FILENAME '$document_root $fastcgi_script_name'; If a parameter includes special characters, the whole parameter should be enclosed in single or double quotes. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sun Feb 23 14:06:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 23 Feb 2014 18:06:43 +0400 Subject: fastcgi caching In-Reply-To: References: Message-ID: <20140223140643.GQ33573@mdounin.ru> Hello! On Fri, Feb 21, 2014 at 12:00:47PM -0800, Grant wrote: > I'm using the following config to cache only /piwik/piwik.php: > > fastcgi_cache_path /var/cache/php-fpm levels=1:2 keys_zone=piwik:10m; > > fastcgi_cache_key "$scheme$request_method$host$request_uri"; > > location /piwik/piwik.php { > fastcgi_cache piwik; > add_header X-Cache $upstream_cache_status; > fastcgi_pass unix:/run/php-fpm.socket; > include fastcgi.conf; > } > > I'm getting "X-Cache: HIT". I tried to set up a minimal config, but > am I missing anything essential? Is setting up a manual purge > required or will this manage itself? Unless explicitly told to do otherwise, nginx will cache responses based on validity specified using Expires and/or Cache-Control headers. That is, if your backend uses headers correctly, no further action is required.
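If the backend does not send usable Expires/Cache-Control headers, cache lifetimes can also be forced from the nginx side. A minimal sketch extending the poster's location (the socket path comes from the original post; the durations are illustrative):

```nginx
location /piwik/piwik.php {
    fastcgi_cache piwik;
    # Ignore backend caching headers and cache purely by status code:
    fastcgi_ignore_headers Cache-Control Expires;
    fastcgi_cache_valid 200 10m;   # keep successful responses 10 minutes
    fastcgi_cache_valid any 1m;    # keep everything else briefly
    add_header X-Cache $upstream_cache_status;
    fastcgi_pass unix:/run/php-fpm.socket;
    include fastcgi.conf;
}
```

With fastcgi_cache_valid in place, expired entries are simply refetched from the backend on the next request, so no manual purging is needed for correctness.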
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sun Feb 23 17:40:08 2014 From: nginx-forum at nginx.us (vikingboy) Date: Sun, 23 Feb 2014 12:40:08 -0500 Subject: newbie help - unknown directive memc_pass Message-ID: I'm a newbie to unix and nginx so please be gentle :) I'm starting to pull my hair out a bit now so figured I'd ask for some pointers. Im trying to build a nginx version which includes memc functionality but I can't for the life of me figure out what I'm missing as memc_pass is always reported as an unknown directive in my error log. I've tried compiling on v1.4.4 with the following configure string ./configure \ --prefix=/usr/local/nginx \ --sbin-path=/usr/local/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --pid-path=/run/nginx.pid \ --lock-path=/run/lock/subsys/nginx \ --user=nginx --group=nginx \ --with-file-aio \ --with-ipv6 \ --with-pcre \ --with-debug \ --with-http_ssl_module \ --with-http_spdy_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_xslt_module \ --with-http_image_filter_module \ --with-http_geoip_module \ --with-http_sub_module \ --with-http_dav_module \ --with-http_flv_module \ --with-http_mp4_module \ --with-http_gunzip_module \ --with-http_gzip_static_module \ --with-http_random_index_module \ --with-http_secure_link_module \ --with-http_degradation_module \ --with-http_stub_status_module \ --with-http_perl_module \ --with-mail \ --with-mail_ssl_module \ --with-google_perftools_module \ --add-module=../ngx_openresty-1.5.8.1/bundle/srcache-nginx-module-0.25 \ --add-module=../ngx_openresty-1.5.8.1/bundle/memc-nginx-module-0.14 --add-module=../ngx_openresty-1.5.8.1/bundle/ngx_devel_kit-0.2.19 \ --add-module=../ngx_openresty-1.5.8.1/bundle/echo-nginx-module-0.51 \ --add-module=../ngx_openresty-1.5.8.1/bundle/set-misc-nginx-module-0.24 I see the following reported in the compile stage adding module in 
../ngx_openresty-1.5.8.1/bundle/srcache-nginx-module-0.25 + ngx_http_srcache_filter_module was configured adding module in ../ngx_openresty-1.5.8.1/bundle/memc-nginx-module-0.14 + ngx_http_memc_module was configured adding module in ../ngx_openresty-1.5.8.1/bundle/ngx_devel_kit-0.2.19 + ngx_devel_kit was configured adding module in ../ngx_openresty-1.5.8.1/bundle/echo-nginx-module-0.51 + ngx_http_echo_module was configured adding module in ../ngx_openresty-1.5.8.1/bundle/set-misc-nginx-module-0.24 found ngx_devel_kit for ngx_set_misc; looks good. + ngx_http_set_misc_module was configured but still get the error when trying to restart nginx. 2014/02/23 17:20:59 [emerg] 31056#0: unknown directive "memc_pass" in /etc/nginx/sites-enabled/irj972.co.uk:55 here's the nginx -V which illustrates the memo lib is compiled in ok. nginx version: nginx/1.4.4 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --sbin-path=/usr/local/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-pcre --with-debug --with-http_ssl_module --with-http_spdy_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-mail_ssl_module --with-google_perftools_module --add-module=../ngx_openresty-1.5.8.1/bundle/srcache-nginx-module-0.25 --add-module=../ngx_openresty-1.5.8.1/bundle/memc-nginx-module-0.14 
--add-module=../ngx_openresty-1.5.8.1/bundle/ngx_devel_kit-0.2.19 --add-module=../ngx_openresty-1.5.8.1/bundle/echo-nginx-module-0.51 --add-module=../ngx_openresty-1.5.8.1/bundle/set-misc-nginx-module-0.24 and also tried compiling the openresty build but identical results - still no known directive for memc_pass. think Ive gone blind trying to read all the manuals, guides & blogs on line but still no joy of understanding what Im doing wrong. thx in adv for any help, Ian Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247877,247877#msg-247877 From agentzh at gmail.com Sun Feb 23 19:52:36 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 23 Feb 2014 11:52:36 -0800 Subject: newbie help - unknown directive memc_pass In-Reply-To: References: Message-ID: Hello! On Sun, Feb 23, 2014 at 9:40 AM, vikingboy wrote: > but still get the error when trying to restart nginx. > > 2014/02/23 17:20:59 [emerg] 31056#0: unknown directive "memc_pass" in > /etc/nginx/sites-enabled/irj972.co.uk:55 > One common cause is that your nginx startup script invokes a different nginx in another path which does not include the ngx_memc module. Try starting your new nginx with its absolute path directly, that is, /path/to/your/new/nginx/sbin/nginx -c /path/to/nginx.conf -p /path/to/your/prefix/ > and also tried compiling the openresty build but identical results - still > no known directive for memc_pass. 
> The ngx_openresty bundle installs nginx into /usr/local/openresty/nginx/ by default, so ensure your startup script start the nginx in the right path, that is, /usr/local/openresty/nginx/sbin/nginx See the following sample for more details: http://openresty.org/#GettingStarted Regards, -agentzh From nginx-forum at nginx.us Sun Feb 23 23:26:50 2014 From: nginx-forum at nginx.us (vikingboy) Date: Sun, 23 Feb 2014 18:26:50 -0500 Subject: newbie help - unknown directive memc_pass In-Reply-To: References: Message-ID: <2400ce7021a2c021e2edcfed26374710.NginxMailingListEnglish@forum.nginx.org> it does look like there are some fundamental config issues causing problems. After I posted I spun up a clean VPS and installed openresty with memc support in about 15 minutes so can confirm my problem its somehow related to updating my primary machine setup. Thanks for your pointers and info, your documentation and contribution to nginx is awesome Agentzh. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247877,247880#msg-247880 From nginx-forum at nginx.us Mon Feb 24 06:48:07 2014 From: nginx-forum at nginx.us (mex) Date: Mon, 24 Feb 2014 01:48:07 -0500 Subject: [ANN] sticky-nginx-module forked and extended Message-ID: Hi List, i'm proud to announce the comeback of the nginx-sticky-module. i included a patch by markus linnala to mark route-cookies httponly/secure and put the modified version online: https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng i'll keep care of this module and test for compatibility with future-releases of nginx. feel free to contact me if you have requirements for that module, contact-data might be found in the readme. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247882,247882#msg-247882 From wim.crols at esaturnus.com Mon Feb 24 10:52:48 2014 From: wim.crols at esaturnus.com (Wim R. 
Crols) Date: Mon, 24 Feb 2014 11:52:48 +0100 Subject: upstream sent invalid chunked response Message-ID: Hi, We are using nginx as a reverse proxy before JBoss and Apache. Certain POST requests are failing when passing through nginx. /var/log/nginx/error.log showed errors like this: 2014/02/24 10:05:05 [error] 3409#0: *1744 upstream sent invalid chunked response while reading response header from upstream, client: 10.100.7.100, server: appliance, request: "POST /archive/python/pulsar/browser/ HTTP/1.1", upstream: "https://10.100.7.3:443/archive/python/pulsar/browser/", host: "10.100.7.1" Doing a little research I understood version 1.2.1 which we used had a problem with chunked encoding so I upgraded to 1.4.5 (since this is the official Debian wheezy-backports version). However, the same error appears. Do I have to turn something on to make this feature work? I did not find anything in the documentation (only a config option 'chunked_transfer_encoding' to turn if off, if I understood correctly). If it is any help, my test with wget (performed from "in front" of nginx of course) below produces the above "invalid chunked response" log error each time. I confirmed that the request performed from "behind" nginx works. wcrols at wim:/home/wcrols/downloads/pacstmp$ wget -SO - --post-file POST.xml --no-check-certificate https://10.100.7.1/archive/python/pulsar/browser/ --2014-02-24 11:48:00-- https://10.100.7.1/archive/python/pulsar/browser/ Connecting to 10.100.7.1:443... connected. The certificate's owner does not match hostname `10.100.7.1' HTTP request sent, awaiting response... 
HTTP/1.1 200 OK Server: nginx/1.4.5 Date: Mon, 24 Feb 2014 10:47:59 GMT Content-Type: text/xml Transfer-Encoding: chunked Connection: keep-alive Pragma: no-cache Cache-control: no-cache, no-store, must-revalidate, max-age=1, max-stale=0, post-check=0, pre-check=0, private Expires: Mon, 26 Jul 1997 05:00:00 GMT Vary: *,Accept-Encoding via: HTTP/1.1 apps Length: unspecified [text/xml] Saving to: `STDOUT' [ <=> ] 0 --.-K/s in 0s 2014-02-24 11:48:01 (0.00 B/s) - written to stdout [0] Thank you for your help, Wim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Feb 24 11:04:54 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Feb 2014 15:04:54 +0400 Subject: upstream sent invalid chunked response In-Reply-To: References: Message-ID: <20140224110454.GR33573@mdounin.ru> Hello! On Mon, Feb 24, 2014 at 11:52:48AM +0100, Wim R. Crols wrote: > Hi, > > We are using nginx as a reverse proxy before JBoss and Apache. Certain POST > requests are failing when passing through nginx. /var/log/nginx/error.log > showed errors like this: > > 2014/02/24 10:05:05 [error] 3409#0: *1744 upstream sent invalid chunked > response while reading response header from upstream, client: 10.100.7.100, > server: appliance, request: "POST /archive/python/pulsar/browser/ > HTTP/1.1", upstream: "https://10.100.7.3:443/archive/python/pulsar/browser/", > host: "10.100.7.1" > > > Doing a little research I understood version 1.2.1 which we used had a > problem with chunked encoding so I upgraded to 1.4.5 (since this is the > official Debian wheezy-backports version). However, the same error appears. > Do I have to turn something on to make this feature work? I did not find The message indicates that the upstream server returned an invalid response, or at least nginx thinks it is invalid. If you think the response returned is valid, it's a good idea to actually provide what was returned, as well as nginx's debug log.
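For the debug log mentioned here, the nginx binary must have been built with --with-debug, and the error-log level must be raised. A minimal sketch (the log path is illustrative):

```nginx
# "nginx -V" should list --with-debug for this to produce debug output
error_log /var/log/nginx/debug.log debug;
```

Debug output can also be restricted to specific client addresses with the debug_connection directive inside the events block, which keeps the log far smaller on a busy server.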
See here for some more hints: http://wiki.nginx.org/Debugging > anything in the documentation (only a config option > 'chunked_transfer_encoding' to turn if off, if I understood correctly). No, "chunked_transfer_encoding" is there to disable nginx's use of chunked transfer encoding towards clients. The message complains about what's returned by your backend server. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Feb 24 11:34:41 2014 From: nginx-forum at nginx.us (greekduke) Date: Mon, 24 Feb 2014 06:34:41 -0500 Subject: Worker dies with a segfault error Message-ID: Hello, I have an issue with segfaults, especially when I send lots of traffic to the server. The hardware is an HP ProLiant G8 with 24 cores, but I am using only one worker because the problem becomes worse with more than one. I have also modified the ulimit for the nginx user up to 65535, but I am still getting the segfaults. I have also tried various nginx versions from 1.3.9 up to the latest 1.5.9. In one of the tests I also removed the two modules I am using. Finally, I am using nginx as a proxy only.
Feb 24 13:08:19 -00 kernel: nginx[23543]: segfault at 20 ip 0000000000405d3f sp 00007fffc66ab600 error 4 in nginx[400000+92000] Feb 24 13:09:12 a1-00 kernel: nginx[23547]: segfault at 20 ip 0000000000405d3f sp 00007fffc66ab600 error 4 in nginx[400000+92000] Feb 24 13:09:43 a1-00 kernel: nginx[23584]: segfault at 20 ip 0000000000405d3f sp 00007fff0f2f3b80 error 4 in nginx[400000+92000] Feb 24 13:10:20 a1-00 kernel: nginx[23592]: segfault at 20 ip 0000000000405d3f sp 00007fff0f2f3be0 error 4 in nginx[400000+92000] Feb 24 13:12:06 a1-00 kernel: nginx[23671]: segfault at 20 ip 0000000000405d3f sp 00007fff0f2f3be0 error 4 in nginx[400000+92000] nginx version: nginx/1.5.9 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) configure arguments: --add-module=/software/ngx_postgres-master-1.5 --add-module=/software/nginx-eval-module-master --user=nginx --with-debug Red Hat Enterprise Linux Server release 6.5 (Santiago) Linux ape-00 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 x86_64 x86_64 x86_64 GNU/Linux Thanks BR/Kostas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247886,247886#msg-247886 From mdounin at mdounin.ru Mon Feb 24 12:45:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Feb 2014 16:45:07 +0400 Subject: Worker dies with a segfault error In-Reply-To: References: Message-ID: <20140224124507.GS33573@mdounin.ru> Hello! On Mon, Feb 24, 2014 at 06:34:41AM -0500, greekduke wrote: > Hello, > > Hello, > > I have an issue with segfaults especially when I send lot's of traffic to > the server. The hardware is an HP proliant G8 with 24 cores but I am using > only one worker because the problem becames worse with more than one. I have > also modified the ulimit for the nginx user up to 65535 but I am still > getting the segfaults. I have also tried various nginx versions from 1.3.9 > up to the latest 1.5.9. In one of the tests I have also removed the two > modules I am using. Finally I am using nginx as a proxy only. 
> > Feb 24 13:08:19 -00 kernel: nginx[23543]: segfault at 20 ip 0000000000405d3f > sp 00007fffc66ab600 error 4 in nginx[400000+92000] > Feb 24 13:09:12 a1-00 kernel: nginx[23547]: segfault at 20 ip > 0000000000405d3f sp 00007fffc66ab600 error 4 in nginx[400000+92000] > Feb 24 13:09:43 a1-00 kernel: nginx[23584]: segfault at 20 ip > 0000000000405d3f sp 00007fff0f2f3b80 error 4 in nginx[400000+92000] > Feb 24 13:10:20 a1-00 kernel: nginx[23592]: segfault at 20 ip > 0000000000405d3f sp 00007fff0f2f3be0 error 4 in nginx[400000+92000] > Feb 24 13:12:06 a1-00 kernel: nginx[23671]: segfault at 20 ip > 0000000000405d3f sp 00007fff0f2f3be0 error 4 in nginx[400000+92000] > > nginx version: nginx/1.5.9 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > configure arguments: --add-module=/software/ngx_postgres-master-1.5 > --add-module=/software/nginx-eval-module-master --user=nginx --with-debug > > Red Hat Enterprise Linux Server release 6.5 (Santiago) > Linux ape-00 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 > x86_64 x86_64 x86_64 GNU/Linux http://wiki.nginx.org/Debugging#Asking_for_help -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Feb 24 12:59:52 2014 From: nginx-forum at nginx.us (greekduke) Date: Mon, 24 Feb 2014 07:59:52 -0500 Subject: Worker dies with a segfault error In-Reply-To: <20140224124507.GS33573@mdounin.ru> References: <20140224124507.GS33573@mdounin.ru> Message-ID: Hello, The debug log is huge. Any ideas where to uploaded? 
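When the debug log is too large to share, a backtrace from a core dump is usually a more compact way to report a worker segfault. A sketch of the relevant directives in the main (top-level) context (sizes and paths are illustrative):

```nginx
# Allow workers to dump core, and pick a directory they can write to:
worker_rlimit_core  500m;
working_directory   /tmp/cores/;
```

The resulting core file can then be opened with gdb against the exact nginx binary that crashed to produce the backtrace typically requested on this list.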
BR/Kostas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247886,247891#msg-247891 From zxcvbn4038 at gmail.com Mon Feb 24 14:04:07 2014 From: zxcvbn4038 at gmail.com (nginx user) Date: Mon, 24 Feb 2014 09:04:07 -0500 Subject: Resend - worker_connections are not enough while requesting certificate status Message-ID: I have seen errors in my logs: worker_connections are not enough while requesting certificate status I believe the main problem was that the worker_connections was set too low, and I've fixed that. However after looking at the source around the OCSP stapling, I have a couple questions: - It appears that ocsp responses are cached for five minutes, correct? - When the cached responses expire, does each worker make a new request? Or does every new connection cause a request to be sent until one of the requests (from each worker) receives a reply and populates the cache? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Feb 24 14:04:42 2014 From: nginx-forum at nginx.us (p.heppler) Date: Mon, 24 Feb 2014 09:04:42 -0500 Subject: Issue with spdy and proxy_pass In-Reply-To: <98c0c4360be8fcce02f08cc321e86f17.NginxMailingListEnglish@forum.nginx.org> References: <6421273.AxaUT8VlqD@vbart-laptop> <98c0c4360be8fcce02f08cc321e86f17.NginxMailingListEnglish@forum.nginx.org> Message-ID: <014a21ddb966853e940a044b23e562b9.NginxMailingListEnglish@forum.nginx.org> I got it! SPDY breaks as soon as my Upstream uses GZip! I turned GZip off in Railo and voila it works. Turn it on and get blank page again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247692,247893#msg-247893 From mdounin at mdounin.ru Mon Feb 24 14:13:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Feb 2014 18:13:56 +0400 Subject: Resend - worker_connections are not enough while requesting certificate status In-Reply-To: References: Message-ID: <20140224141356.GA89422@mdounin.ru> Hello! 
On Mon, Feb 24, 2014 at 09:04:07AM -0500, nginx user wrote: > I have seen errors in my logs: worker_connections are not enough while > requesting certificate status > > I believe the main problem was that the worker_connections was set too low, > and I've fixed that. > > However after looking at the source around the OCSP stapling, I have a > couple questions: > > - It appears that ocsp responses are cached for five minutes, correct? No. Successful responses are cached for an hour; errors are retried in 5 minutes. > - When the cached responses expire, does each worker make a new request? Or > does every new connection cause a request to be sent until one of the > requests (from each worker) receives a reply and populates the cache? Each worker does only one request. -- Maxim Dounin http://nginx.org/ From black.fledermaus at arcor.de Mon Feb 24 15:01:03 2014 From: black.fledermaus at arcor.de (basti) Date: Mon, 24 Feb 2014 16:01:03 +0100 Subject: Rewrite all https to http except one location Message-ID: <530B5EAF.8040308@arcor.de> Hello, I have an SSL config like server { server_name ...; # do not rewrite this location /mailadmin/(.*.\.php)${ ... # some stuff } location / { ... rewrite ^ http://$server_name$request_uri? permanent; } location ~ \.php$ { ... # php stuff } } URLs like https://example.com/mailadmin/set.php?p_s=301AB1837E730B55&framework= are partly rewritten. How can I solve this? Regards, Basti From black.fledermaus at arcor.de Mon Feb 24 15:10:24 2014 From: black.fledermaus at arcor.de (basti) Date: Mon, 24 Feb 2014 16:10:24 +0100 Subject: Rewrite https to http expect one location Message-ID: <530B60E0.10807@arcor.de> Hello, I have a config like: server { ... # do not rewrite this location /mailadmin/(.*.\.php)$ { # some stuff } location / { # some other stuff rewrite ^ http://$server_name$request_uri?
permanent; } location ~ \.php$ { # php stuff; } } URLs like https://example.com/mailadmin/test.php?ps=301A1123344556E925803435&framework= are partly rewrite. Regards, Basti From lcordier at gmail.com Mon Feb 24 15:21:36 2014 From: lcordier at gmail.com (Louis Cordier) Date: Mon, 24 Feb 2014 17:21:36 +0200 Subject: Weird bad gateway issues Message-ID: I am using nginx/1.1.19 (Ubuntu 12.04.3). I have 3 reverse proxies setup, like so: server { listen 80; server_name ebatch.example.com; client_max_body_size 20m; location / { proxy_pass http://127.0.0.1:8000/; include /etc/nginx/proxy.conf; } location /static/ { alias /srv/ebatch.example.com/public_html/; autoindex on; # expires 31d; } access_log /srv/ebatch.example.com/logs/static_access.log; error_log /srv/ebatch.example.com/logs/static_error.log; } server { listen 80; server_name capture.example.com; client_max_body_size 20m; location / { proxy_pass http://127.0.0.1:8001/; include /etc/nginx/proxy.conf; } location /static/ { alias /srv/capture.example.com/public_html/; autoindex on; expires 31d; } access_log /srv/capture.example.com/logs/static_access.log; error_log /srv/capture.example.com/logs/static_error.log; } server { listen 80; server_name production.example.com; client_max_body_size 20m; location / { proxy_pass http://127.0.0.1:8003/; include /etc/nginx/proxy.conf; } location /static/ { alias /srv/production.example.com/public_html/; autoindex on; # expires 31d; } access_log /srv/production.example.com/logs/static_access.log; error_log /srv/production.example.com/logs/static_error.log; } Only the paths differ in each setup. The first two works 100%. While the last one gives me a bad gateway error. When I connect to production.example.com/static/ I get served the files from capture.example.com/static/. And no log entries get written to /srv/production/logs/* I can confirm that the upstream server is working correctly. 
Both lynx http://127.0.0.1:8003 return the correct results and when I change the server to python -m SimpleHTTPServer 8003 I still get a bad gateway. Any help would be much appreciated. Regards, Louis. -- Louis Cordier cell: +27721472305 -------------- next part -------------- An HTML attachment was scrubbed... URL: From semenukha at gmail.com Mon Feb 24 15:34:53 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Mon, 24 Feb 2014 10:34:53 -0500 Subject: Rewrite all https to http except one location In-Reply-To: <530B5EAF.8040308@arcor.de> References: <530B5EAF.8040308@arcor.de> Message-ID: <12670423.lROieExjFd@tornado> Use: return 301 http://$server_name$request_uri; to redirect. On Monday, February 24, 2014 04:01:03 PM basti wrote: > Hello, > I have a SSL config like > > server { > > server_name ...; > > # do not rewrite this > location /mailadmin/(.*.\.php)${ > ... > # some stuff > } > > > location / { > ... > rewrite ^ http://$server_name$request_uri? permanent; > } > > location ~ \.php$ { > ... > # php stuff > } > } > > URLS like > https://example.com/mailadmin/set.php?p_s=301AB1837E730B55&framework= > are partly rewrite > > How can I solve this? > Regards, > Basti > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Best regards, Styopa Semenukha. From lcordier at gmail.com Mon Feb 24 15:37:59 2014 From: lcordier at gmail.com (Louis Cordier) Date: Mon, 24 Feb 2014 17:37:59 +0200 Subject: Weird bad gateway issues In-Reply-To: References: Message-ID: On Mon, Feb 24, 2014 at 5:21 PM, Louis Cordier wrote: > When I connect to production.example.com/static/ I get served the files > from capture.example.com/static/. > You can ignore this part, looks like browser caching. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wim.crols at esaturnus.com Mon Feb 24 15:40:35 2014 From: wim.crols at esaturnus.com (Wim R. 
Crols) Date: Mon, 24 Feb 2014 16:40:35 +0100 Subject: upstream sent invalid chunked response In-Reply-To: <20140224110454.GR33573@mdounin.ru> References: <20140224110454.GR33573@mdounin.ru> Message-ID: > > The message indicate that upstream server returned invalid > response, or at least nginx thinks it is invalid. If you think > that response returned is valid, it's good idea to actually > provide what was returned, as well as nginx's debug log. > > > -- > Maxim Dounin > http://nginx.org/ Thank you. You were right. It turned out JBoss (or one of its components) gave a strange response in HTTP 1.1 chunked transfer... but without chunk sizes. I solved it by adding these two directives: + proxy_http_version 1.1; + proxy_set_header Connection ""; -------------- next part -------------- An HTML attachment was scrubbed... URL: From black.fledermaus at arcor.de Mon Feb 24 16:43:36 2014 From: black.fledermaus at arcor.de (basti) Date: Mon, 24 Feb 2014 17:43:36 +0100 Subject: Rewrite all https to http except one location In-Reply-To: <12670423.lROieExjFd@tornado> References: <530B5EAF.8040308@arcor.de> <12670423.lROieExjFd@tornado> Message-ID: <530B76B8.8050707@arcor.de> Sorry same result. On 24.02.2014 16:34, Styopa Semenukha wrote: > Use: > return 301 http://$server_name$request_uri; > to redirect. > > On Monday, February 24, 2014 04:01:03 PM basti wrote: >> Hello, >> I have a SSL config like >> >> server { >> >> server_name ...; >> >> # do not rewrite this >> location /mailadmin/(.*.\.php)${ >> ... >> # some stuff >> } >> >> >> location / { >> ... >> rewrite ^ http://$server_name$request_uri? permanent; >> } >> >> location ~ \.php$ { >> ... >> # php stuff >> } >> } >> >> URLS like >> https://example.com/mailadmin/set.php?p_s=301AB1837E730B55&framework= >> are partly rewrite >> >> How can I solve this? 
>> Regards, >> Basti >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Feb 24 18:07:38 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Feb 2014 18:07:38 +0000 Subject: Rewrite https to http expect one location In-Reply-To: <530B60E0.10807@arcor.de> References: <530B60E0.10807@arcor.de> Message-ID: <20140224180738.GJ29880@craic.sysops.org> On Mon, Feb 24, 2014 at 04:10:24PM +0100, basti wrote: Hi there, > # do not rewrite this > location /mailadmin/(.*.\.php)$ { You probably will have no requests that will match this prefix location. > location / { > rewrite ^ http://$server_name$request_uri? permanent; Many requests will match this location, and be redirected to a http url. > location ~ \.php$ { Some requests will match this location. > URLs like > https://example.com/mailadmin/test.php?ps=301A1123344556E925803435&framework= > are partly rewrite. I would expect that request to match the third location. What response do you get for it? What response do you want for it? f -- Francis Daly francis at daoine.org From semenukha at gmail.com Mon Feb 24 18:46:22 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Mon, 24 Feb 2014 13:46:22 -0500 Subject: Rewrite all https to http except one location In-Reply-To: <530B5EAF.8040308@arcor.de> References: <530B5EAF.8040308@arcor.de> Message-ID: <21439344.1X9ynEeb7n@tornado> > location /mailadmin/(.*.\.php)${ This should probably be: location ~ /mailadmin/(.*\.php)$ { Otherwise it's not treated as regex. On Monday, February 24, 2014 04:01:03 PM basti wrote: > Hello, > I have a SSL config like > > server { > > server_name ...; > > # do not rewrite this > location /mailadmin/(.*.\.php)${ > ... > # some stuff > } > > > location / { > ... > rewrite ^ http://$server_name$request_uri? permanent; > } > > location ~ \.php$ { > ... 
> # php stuff > } > } > > URLS like > https://example.com/mailadmin/set.php?p_s=301AB1837E730B55&framework= > are partly rewrite > > How can I solve this? > Regards, > Basti > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Best regards, Styopa Semenukha. From nginx-forum at nginx.us Mon Feb 24 19:06:15 2014 From: nginx-forum at nginx.us (dukzcry) Date: Mon, 24 Feb 2014 14:06:15 -0500 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: <4784f9fe36b38297a526f1ba7537a4a7.NginxMailingListEnglish@forum.nginx.org> References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> <4784f9fe36b38297a526f1ba7537a4a7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Now, here is the minimal patch: https://raw.github.com/druga/aeriebsd-tree/master/usr.sbin/nginx/patch-nginx_mail_proxy_ssl_backends.diff I do provide neither hostnames nor starttls support in it, because the code for their support is too much invasive :-( P.S.: The patch isn't good enough to be included into nginx. So if anybody is going to fix it, you're welcome. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,247910#msg-247910 From nginx-forum at nginx.us Mon Feb 24 19:11:21 2014 From: nginx-forum at nginx.us (dukzcry) Date: Mon, 24 Feb 2014 14:11:21 -0500 Subject: Configuring nginx as mail proxy In-Reply-To: References: Message-ID: useopenid Wrote: ------------------------------------------------------- > I am looking at proxying to google as well, and thus need SSL on the > backside (and would like it on general principles for other cases as > well) Here you go: http://forum.nginx.org/read.php?2,219069,247910#msg-247910 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232147,247911#msg-247911 From reallfqq-nginx at yahoo.fr Mon Feb 24 22:39:39 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 24 Feb 2014 23:39:39 +0100 Subject: Include order and variables definition in configuration Message-ID: Hello, I am considering the following configuration: server { include fastcgi.conf # Default configuration coming with a Debian package which contains a definition of the SCRIPT_FILENAME FastCGI variable with $document_root$fastcgi_script_name as its value ... location ~^/index\.php { fastcgi_split_path_info ^(/index\.php)(/.*)$; } } ?Will the FastCGI SCRIPT_FILENAME variable value take into account the value of $fastcgi_script_name after fastcgi_split_path_info has been called or will it be resolved when the fastcgi.conf file is included? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Feb 24 22:46:57 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Feb 2014 22:46:57 +0000 Subject: Include order and variables definition in configuration In-Reply-To: References: Message-ID: <20140224224657.GM29880@craic.sysops.org> On Mon, Feb 24, 2014 at 11:39:39PM +0100, B.R. wrote: Hi there, > server { > include fastcgi.conf # Default configuration coming with a Debian > ... 
> location ~^/index\.php { > fastcgi_split_path_info ^(/index\.php)(/.*)$; > } > Will the FastCGI SCRIPT_FILENAME variable value take into account the > value of $fastcgi_script_name after fastcgi_split_path_info has been called > or will it be resolved when the fastcgi.conf file is included? That seems fairly straightforward to check. What does the debug log say? Or the tcpdump of the traffic between nginx and the fastcgi server? Or the fastcgi server logs, if you can see them? (It's the one that you would expect it to be, based on fastcgi_split_path_info being useful.) What it actually comes down to is the time at which variables are evaluated -- and in general, it's "the first time they are needed". Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Feb 24 23:42:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Feb 2014 03:42:36 +0400 Subject: Include order and variables definition in configuration In-Reply-To: References: Message-ID: <20140224234236.GG91191@mdounin.ru> Hello! On Mon, Feb 24, 2014 at 11:39:39PM +0100, B.R. wrote: > Hello, > > I am considering the following configuration: > server { > include fastcgi.conf # Default configuration coming with a Debian > package which contains a definition of the SCRIPT_FILENAME FastCGI variable > with $document_root$fastcgi_script_name as its value > ... > location ~^/index\.php { > fastcgi_split_path_info ^(/index\.php)(/.*)$; > } > } > > ?Will the FastCGI SCRIPT_FILENAME variable value take into account the > value of $fastcgi_script_name after fastcgi_split_path_info has been called > or will it be resolved when the fastcgi.conf file is included? Variables are evaluated at runtime only, during processing of a request. And the $fastcgi_script_name variable will use fastcgi_split_path_info as found in a location where the variable is used - that is, in a location where fastcgi_pass happens. 
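[Editorial illustration] A minimal sketch of the evaluation order Maxim describes; the root, socket path, and regex below are illustrative, not taken from the thread:

```nginx
server {
    root /var/www;

    # $fastcgi_script_name is not expanded when the configuration is read;
    # it is evaluated per request, in the location where fastcgi_pass runs.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    location ~ ^/index\.php {
        # For a request to /index.php/foo this sets $fastcgi_script_name
        # to /index.php and $fastcgi_path_info to /foo, so the inherited
        # SCRIPT_FILENAME above picks up the split value.
        fastcgi_split_path_info ^(/index\.php)(/.*)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;  # illustrative socket path
    }
}
```

Note that no fastcgi_param is declared inside the location, so the server-level parameter is inherited; the rest of this thread covers what happens when one is.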
-- Maxim Dounin http://nginx.org/ From anaxagramma at gmail.com Tue Feb 25 00:00:56 2014 From: anaxagramma at gmail.com (naxa) Date: Tue, 25 Feb 2014 01:00:56 +0100 Subject: How to combine text and variables in fastcgi_param? In-Reply-To: <20140223135110.GP33573@mdounin.ru> References: <5309F0BB.20005@gmail.com> <20140223135110.GP33573@mdounin.ru> Message-ID: <530BDD38.6020804@gmail.com> On 2014.02.23. 14:51, Maxim Dounin wrote: > Try this: > > fastcgi_param SCRIPT_FILENAME "$document_root $fastcgi_script_name"; > > or this: > > fastcgi_param SCRIPT_FILENAME '$document_root $fastcgi_script_name'; > > If a parameter includes special characters, the whole parameter > should be enclosed in single or double quotes. > Just a quick note: thank you very much, it worked! From reallfqq-nginx at yahoo.fr Tue Feb 25 00:05:44 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 25 Feb 2014 01:05:44 +0100 Subject: Include order and variables definition in configuration In-Reply-To: <20140224234236.GG91191@mdounin.ru> References: <20140224234236.GG91191@mdounin.ru> Message-ID: Hello Francis and Maxim, I understand very well that $fastcgi_script_name value is defined after fastcgi_split_path_info is called. However I was wondering about other variables which value depend on $fastcgi_script_name, for example when PHP's SCRIPT_NAME has been defined in the already included fastcgi.conf. Here are 2 examples: server { listen 80; server_name b.cd; try_files $uri $uri/ /index.php$uri; root /var/www; index index.html index.htm index.php; include fastcgi.conf; fastcgi_buffers 8 8k; location ~ ^/index\.php { fastcgi_split_path_info ^(/index\.php)(/.*)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_pass unix:/var/run/php5-fpm.sock; } } The previous configuration seems to fail (returns 200 with blank page, no error). 
server { listen 80; server_name b.cd; try_files $uri $uri/ /index.php$uri; root /var/www; index index.html index.htm index.php; fastcgi_buffers 8 8k; location /favicon.ico { access_log off; log_not_found off; expires 7d; return 204; } location ~ ^/index\.php { fastcgi_split_path_info ^(/index\.php)(/.*)$; include fastcgi.conf; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_pass unix:/var/run/php5-fpm.sock; } } This configuration, on the other hand, runs smoothly. If I get the 'variables are evaluated the first time they are needed' statement, in the first configuration, $fastcgi_script_name was accessed when defining SCRIPT_NAME, thus its value was wrong. Whether or not $fastcgi_script_name value is overwritten in the location block is another story, though. In the second configuration, $fastcgi_script_name is ensured to be accessed if and only if it been correctly defined by the previous fastcgi_split_path_info directive, ensuring the success of the configuration. Am I right? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 25 01:49:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Feb 2014 05:49:22 +0400 Subject: Include order and variables definition in configuration In-Reply-To: References: <20140224234236.GG91191@mdounin.ru> Message-ID: <20140225014922.GI91191@mdounin.ru> Hello! On Tue, Feb 25, 2014 at 01:05:44AM +0100, B.R. wrote: > Hello Francis and Maxim, > > I understand very well that $fastcgi_script_name value is defined after > fastcgi_split_path_info is called. > However I was wondering about other variables which value depend on > $fastcgi_script_name, for example when PHP's SCRIPT_NAME has been defined > in the already included fastcgi.conf. There is no "variables which value depend on $fastcgi_script_name". The fastcgi_param directive defines parameters to be sent to FastCGI application. 
> Here are 2 examples: > server { > listen 80; > server_name b.cd; > try_files $uri $uri/ /index.php$uri; > > root /var/www; > index index.html index.htm index.php; > include fastcgi.conf; > fastcgi_buffers 8 8k; > > location ~ ^/index\.php { > fastcgi_split_path_info ^(/index\.php)(/.*)$; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_pass unix:/var/run/php5-fpm.sock; > } > } > > The previous configuration seems to fail (returns 200 with blank page, no > error). The above configuration has only one fastcgi_param in "location ~ ^/index\.php". And there are no parameters which include $fastcgi_script_name, so your previous question is completely irrelevant here. Note (quote from http://nginx.org/r/fastcgi_param): : These directives are inherited from the previous level if and only : if there are no fastcgi_param directives defined on the current : level. There are fastcgi_param directives in "location ~ ^/index\.php", so fastcgi_param directives used on previous levels are not inherited. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Tue Feb 25 08:45:20 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 25 Feb 2014 09:45:20 +0100 Subject: Include order and variables definition in configuration In-Reply-To: <20140225014922.GI91191@mdounin.ru> References: <20140224234236.GG91191@mdounin.ru> <20140225014922.GI91191@mdounin.ru> Message-ID: On Tue, Feb 25, 2014 at 2:49 AM, Maxim Dounin wrote: > Note (quote from http://nginx.org/r/fastcgi_param): > > : These directives are inherited from the previous level if and only > : if there are no fastcgi_param directives defined on the current > : level. > > There are fastcgi_param directives in "location ~ ^/index\.php", > so fastcgi_param directives used on previous levels are not > inherited. > Thanks Maxim. Everything said there! --- *B. R.*
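[Editorial illustration] The inheritance rule quoted above can be condensed into a short sketch (socket path and file names are illustrative): declaring any fastcgi_param inside a location discards all fastcgi_param directives from outer levels, so the include has to sit at the same level as the extra parameter:

```nginx
# Broken: the server-level include is ignored inside the location,
# because declaring any fastcgi_param there stops inheritance.
server {
    include fastcgi.conf;    # defines SCRIPT_FILENAME and friends
    location ~ ^/index\.php {
        fastcgi_param PATH_INFO $fastcgi_path_info;  # discards the include above
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

# Working: the include and the extra parameter live at the same level.
server {
    location ~ ^/index\.php {
        fastcgi_split_path_info ^(/index\.php)(/.*)$;
        include fastcgi.conf;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}
```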
From black.fledermaus at arcor.de Tue Feb 25 09:21:13 2014 From: black.fledermaus at arcor.de (basti) Date: Tue, 25 Feb 2014 10:21:13 +0100 Subject: Rewrite https to http expect one location In-Reply-To: <20140224180738.GJ29880@craic.sysops.org> References: <530B60E0.10807@arcor.de> <20140224180738.GJ29880@craic.sysops.org> Message-ID: <530C6089.6090807@arcor.de> There is still a rewrite problem with my config: server { ... root $root_directory; index index.php index.html index.htm; fastcgi_index index.php; location ~ /mailadmin/(.*.\.php)$ { root /path/to/root/; ... index index.php; } location / { return 301 http://$server_name$request_uri; } } I want everything except /mailadmin to be rewritten to http. URLs like https://example.com/mailadmin/test.php?ps=301A1123344556E925803435&framework= https://example.com/mailadmin/share/themes/default/tabmain/first_on.gif I don't want to rewrite. But https://www.tank-app.de/mailadmin/ is also rewritten, and I don't know why. On 24.02.2014 19:07, Francis Daly wrote: > On Mon, Feb 24, 2014 at 04:10:24PM +0100, basti wrote: > > Hi there, > >> # do not rewrite this >> location /mailadmin/(.*.\.php)$ { > You probably will have no requests that will match this prefix location. > >> location / { >> rewrite ^ http://$server_name$request_uri? permanent; > Many requests will match this location, and be redirected to a http url. > >> location ~ \.php$ { > Some requests will match this location. > >> URLs like >> https://example.com/mailadmin/test.php?ps=301A1123344556E925803435&framework= >> are partly rewrite. > I would expect that request to match the third location. > > What response do you get for it? What response do you want for it? > > f From lists at ruby-forum.com Tue Feb 25 11:17:10 2014 From: lists at ruby-forum.com (Mahmud Mr.) Date: Tue, 25 Feb 2014 12:17:10 +0100 Subject: =?UTF-8?Q?Premium_Quality_=E2=80=93_Retro_Badges_Vectors_=E2=80=93_Vectors?= =?UTF-8?Q?_Design?= Message-ID: Premium Quality – Retro Badges Vectors – Vectors Design.
From nginx-forum at nginx.us Tue Feb 25 14:32:54 2014 From: nginx-forum at nginx.us (Kurk_SS) Date: Tue, 25 Feb 2014 09:32:54 -0500 Subject: http header doesn't pass throw nginx to php-fpm Message-ID: <0bd2af772d708e73358d326f51037e15.NginxMailingListEnglish@forum.nginx.org> I use http{ underscores_in_headers on; ignore_invalid_headers off; and all works fine. php code can see HTTP_RANGE in $_SERVER array. but when I use fastcgi cache. location / { if ($request_method != GET) { break;} #bad request try_files @echofile @echofile; #request_uri is a target for download } location @echofile { include fastcgi_params; #fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param QUERY_STRING controller=files&target=$request_uri; fastcgi_param SCRIPT_FILENAME $root_path/index.php; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_ignore_headers "Expires" "Cache-Control" "Set-Cookie"; fastcgi_temp_path /tmp/dfg12d1 2; fastcgi_cache dfg12d1; fastcgi_cache_key "$request_uri"; fastcgi_hide_header "Set-Cookie"; fastcgi_cache_min_uses 1; #fastcgi_cache_valid 1d; # using X-Accel-Expires header in php } It not works now. why?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247932,247932#msg-247932 From jmrepetti at gmail.com Tue Feb 25 15:34:34 2014 From: jmrepetti at gmail.com (=?ISO-8859-1?Q?Juan_Mat=EDas?=) Date: Tue, 25 Feb 2014 16:34:34 +0100 Subject: NGINX proxy, 502 error while SSL handshaking to upstream Message-ID: Hello everyone, I'm new here and this is my first post on this mailing list. Maybe this is a frequently answered question, but I couldn't find a solution. Maybe it is a "layer 8" issue. Right now, I have an Nginx (1.0.8) proxy running on Ubuntu 10.04 32bits, OpenSSL 0.9.8, doing a https upstream on port 33195. Here is a piece of the nginx.conf file: ...... location /external_services { proxy_pass https://x.x.x.x:33195/external_service; allow x.x.x.x; deny all; } ...... It is working, but I need to migrate this proxy to a new server. This new server runs Ubuntu 12.04, OpenSSL 1.0.1 and Nginx 1.5.10. This server receives an http://myproxy/external_services request and proxies it to https://x.x.x.x:33195/external_service; (http to https) When I try to access http://myproxy/external_services on the new server, I got a 502 error and I see this message in error.log: "peer closed connection in SSL handshake while SSL handshaking to upstream" I found that I can connect (from the proxy server) to https://x.x.x.x:33195/external_service using openssl, doing this: $ openssl s_client -connect x.x.x.x:33195 -no_tls1_1 I tried to disable TLSv1.1 in Nginx using the directive: ssl_protocols SSLv3 TLSv1; but nothing changed. I don't want to downgrade to Nginx (1.0.8) and OpenSSL 0.9.8 (though I think that is a possible solution). Any help? I'm doing something wrong and I can't find a solution. Thanks, Matias. From nginx-forum at nginx.us Tue Feb 25 16:22:55 2014 From: nginx-forum at nginx.us (ologmcnoleg) Date: Tue, 25 Feb 2014 11:22:55 -0500 Subject: Reverse and Forward proxy?
Message-ID: <34d1ac036e0f3fe380ad400c9a27e4dd.NginxMailingListEnglish@forum.nginx.org> Hi, Can nginx run as a forward and reverse proxy on the same host? I am looking for a solution to the below 1) Forward Proxy App Server > Proxy Server > Internet > Source IP My App server will need to use a Proxy to fetch files over the internet (I would usually use Squid for this) 2) Reverse Proxy Source IP > Internet > Proxy Server > App Server Source IP needs to reach my Proxy server which forwards requests on port 443 to my App Server. The App server is listening and then responds. Can both 1) and 2) above be managed by one nginx instance? I could use Squid for 1) and nginx for 2). But I would like to use nginx for both on the same server. Let me know if you need further info. Thanks guys, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247934,247934#msg-247934 From mejedi at gmail.com Tue Feb 25 16:23:55 2014 From: mejedi at gmail.com (ZNV) Date: Tue, 25 Feb 2014 20:23:55 +0400 Subject: NGINX SSL Session Ticket Key Message-ID: Hi! Recently nginx implemented support for ssl_session_ticket_key, allowing to set up key(s) for SSL ticket encryption explicitly. This is useful when multiple nginx servers must share the same set of keys in order for any server to accept tickets issued by any other server. The key file is an opaque 48 byte long blob. Internally this data is partitioned as follows (ngx_ssl_ticket_session_keys, ngx_event_openssl.c): a key name (16 bytes) encryption key (16 bytes) hmac key (16 bytes) Without nginx customization OpenSSL partitions the key data another way (ssl3_ctx_ctrl in openssl): a key name (16 bytes) hmac key (16 bytes) encryption key (16 bytes) This creates a certain compatibility issue. Though I didn't verify it, presumably Apache's mod_ssl isn't going to understand nginx SSL session tickets even though both servers are using OpenSSL.
I think it would be better if nginx didn't invent its own ticket key format but use the format defined by OpenSSL instead. Best Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 25 16:25:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Feb 2014 20:25:37 +0400 Subject: http header doesn't pass throw nginx to php-fpm In-Reply-To: <0bd2af772d708e73358d326f51037e15.NginxMailingListEnglish@forum.nginx.org> References: <0bd2af772d708e73358d326f51037e15.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140225162537.GP91191@mdounin.ru> Hello! On Tue, Feb 25, 2014 at 09:32:54AM -0500, Kurk_SS wrote: > I use > > http{ > underscores_in_headers on; > ignore_invalid_headers off; > > and all works fine. php code can see HTTP_RANGE in $_SERVER array. > > but when I use fastcgi cache. [...] > It not works now. > > why? Cache needs full responses, hence it strips Range header from requests to backends. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Feb 25 16:52:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Feb 2014 20:52:00 +0400 Subject: NGINX SSL Session Ticket Key In-Reply-To: References: Message-ID: <20140225165200.GQ91191@mdounin.ru> Hello! On Tue, Feb 25, 2014 at 08:23:55PM +0400, ZNV wrote: > Hi! > > Recently nginx implemented support for ssl_session_ticket_key allowing > to setup key(s) for SSL tickets encryption explicitly. This is usefull when > multiple nginx servers must share the same set of keys in order for any > server to accept tickets issued by any other server. > > The key file is an opaque 48 byte long blob. 
Internally this data is > partitioned > as follows (ngx_ssl_ticket_session_keys, ngx_event_openssl.c): > > a key name (16 bytes) > encryption key (16 bytes) > hmac key (16 bytes) > > Without nginx customization OpenSSL partitions the key data another > way (ssl3_ctx_ctrl in openssl): > > a key name (16 bytes) > hmac key (16 bytes) > encryption key (16 bytes) > > This creates a certain compatibility issue. Though I didn't verify it > presumably Apache's mod_ssl isn't going to understand nginx > SSL session tickets even though both servers are using OpenSSL. > > I think it would be better if nginx didn't invent its own ticket key > format but use the format defined by OpenSSL instead. The format is "48 bytes of random data", and I don't think that compatibility with other software is something to be considered here. Ticket keys are to be used between multiple nginx instances, nothing more. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Feb 25 17:03:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Feb 2014 21:03:22 +0400 Subject: NGINX proxy, 502 error while SSL handshaking to upstream In-Reply-To: References: Message-ID: <20140225170322.GR91191@mdounin.ru> Hello! On Tue, Feb 25, 2014 at 04:34:34PM +0100, Juan Mat?as wrote: > Hello everyone, I'm new here and this my first post in this mailing list, > > Maybe this is a frequently answered question but I could't find a solution. > Maybe is a "layer 8" issue. > > Right now, I have a Nginx(1.0.8) proxy running on Ubuntu 10.04 32bits, > OpenSSL 0.9.8 doing a https upstream on port 33195. Here is a piece of the > nginx.conf file: > > ...... > location /external_services { > proxy_pass https://x.x.x.x:33195/external_service; > allow x.x.x.x; > deny all; > } > ...... > > > It is working, but I need to migrate this proxy to a new server. This new > server runs Ubuntu 12.04, OpenSSL 1.0.1 and Nginx 1.5.10. 
> > This server receive an http://myproxy/external_services request and proxy > it to https://x.x.x.x:33195/external_service; (http to https) > > When I try to access http://myproxy/external_services on the new server, I > got a 502 error and I see this message in error.log : > > "peer closed connection in SSL handshake while SSL handshaking to > upstream" > > I found that I can connect(from the proxy server) to > https://x.x.x.x:33195/external_service using openssl, doing this: > > $ openssl s_client -connect https://x.x.x.x:33195/external_service-no_tls1_1 > > I tried to disable TLSv1.1 in Nginx using the directive: ssl_protocols > SSLv3 TLSv1; but nothing change. You have to use proxy_ssl_protocols, not ssl_protocols. See http://nginx.org/r/proxy_ssl_protocols. The proxy_ssl_ciphers directive may help, too, depending on what exactly triggers the problem on your backend. -- Maxim Dounin http://nginx.org/ From hillb at yosemite.edu Tue Feb 25 17:51:54 2014 From: hillb at yosemite.edu (Brian Hill) Date: Tue, 25 Feb 2014 17:51:54 +0000 Subject: Odd issue with proxy_pass serving wrong cached data Message-ID: <205444BFA924A34AB206D0E2538D4B4941358C@x10m01.yosemite.edu> I'm having an odd issue, and I'm not sure where to start. We've been implementing a number of NGINX servers recently, and one of them is doing a proxy_pass to an older IIS7 server that we'll be phasing out soon, which is hosting a couple of minor sites in our datacenter. When initially browsing the site, everything appears to be working properly. But in most browsers, if you hit reload a couple of times the NGINX server either serves the wrong files (i.e. the page calls for Image1.PNG, but the server is returning the contents of Image2.PNG), or simply fails to load the image or file at all. It will always load the base HTML page, but the linked files (css files, images, etc.) seem to randomly glitch. 
Because it only happens when we click reload, I'm presuming that we're looking at some sort of configuration problem reading the proxy_pass cache. The problem doesn't occur and the sites work normally if we bypass the NGINX server and connect directly to the content origin server. Anyone have any idea where I should start with this one? It's a bit of a bizarre problem, and it's really got me scratching my head. I've done a number of searches and can't find anything online discussing a similar problem. BH user www-data; worker_processes 4 pid /var/run/nginx.pid; error_log /var/log/nginx/error/error.log crit; events { worker_connections 20000; use epoll; multi_accept on; accept_mutex off; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; client_header_timeout 20; client_body_timeout 20; reset_timedout_connection on; types_hash_max_size 2048; server_tokens off; include /etc/nginx/mime.types; default_type application/octet-stream; #gzip gzip on; gzip_disable "MSIE [1-6]\."; gzip_http_version 1.0; gzip_vary on; gzip_comp_level 6; gzip_min_length 512; gzip_proxied any; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #caching open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; proxy_cache_path /tmp/nginxcache levels=1:2:2 keys_zone=mycache:3096m max_size=3584m inactive=240m; proxy_temp_path /tmp/nginxtmp; proxy_redirect off; upstream wyebase { server 10.64.1.69:80; } server { listen 80 default_server; server_name www.domain.edu; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 75s; proxy_send_timeout 75s; proxy_read_timeout 75s; proxy_pass_header Expires; proxy_pass_header Cache-Control; proxy_pass_header Last-Modified; proxy_pass_header ETag; 
proxy_pass_header Content-Length; proxy_cache mycache; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_bypass $http_secret_header; proxy_ignore_headers "Cache-Control"; proxy_ignore_headers "Set-Cookie"; add_header X-Cache-Status $upstream_cache_status; location / { proxy_cache_valid 200 301 302 304 10m; #good requests proxy_cache_valid 404 403 10m; #access errors proxy_cache_valid 500 501 502 503 504 1m; # execution or load errors proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; expires 60m; proxy_pass http://wyebase; } location ~* \.(css|js|png|jpe?g|gif|ico)$ { proxy_cache_valid 200 301 302 304 1h; #good requests proxy_cache_valid 404 403 10m; #access errors proxy_cache_valid 500 501 502 503 504 1m; # execution or load errors proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; expires 14d; proxy_pass http://wyebase; } } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 25 18:17:48 2014 From: nginx-forum at nginx.us (ThattyaneAlves) Date: Tue, 25 Feb 2014 13:17:48 -0500 Subject: Odd issue with proxy_pass serving wrong cached data In-Reply-To: <205444BFA924A34AB206D0E2538D4B4941358C@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B4941358C@x10m01.yosemite.edu> Message-ID: <1b920ae50d104f10a29b0d4b37fa5855.NginxMailingListEnglish@forum.nginx.org> I'm having an odd issue, and I'm not sure where to start. We've been implementing a number of NGINX servers recently, and one of them is ... 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247939,247940#msg-247940 From mejedi at gmail.com Tue Feb 25 18:23:55 2014 From: mejedi at gmail.com (Nick Zavaritsky) Date: Tue, 25 Feb 2014 22:23:55 +0400 Subject: NGINX SSL Session Ticket Key In-Reply-To: <20140225165200.GQ91191@mdounin.ru> References: <20140225165200.GQ91191@mdounin.ru> Message-ID: Hello, Maxim! > The format is "48 bytes of random data", and I don't think that > compatibility with other software is something to be considered > here. Ticket keys are to be used between multiple nginx > instances, nothing more. > You are certainly right; however, this looks like an accidental incompatibility rather than an intentional one. Other cryptographic parameters like encryption and message authentication algorithms are the same as used by OpenSSL. I must admit that personally I don't consider this an issue either, just sharing my findings. Have a nice day :) From hillb at yosemite.edu Wed Feb 26 02:32:25 2014 From: hillb at yosemite.edu (Brian Hill) Date: Wed, 26 Feb 2014 02:32:25 +0000 Subject: Odd issue with proxy_pass serving wrong cached data In-Reply-To: <205444BFA924A34AB206D0E2538D4B4941358C@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B4941358C@x10m01.yosemite.edu> Message-ID: <205444BFA924A34AB206D0E2538D4B494138D7@x10m01.yosemite.edu> So now it doesn't look like it's a caching issue at all. I've completely gutted my config files and stripped them down, and I'm still seeing the same issue. I even shot a video of what I'm seeing and stuck it on YouTube as an example (http://youtu.be/lPR1453YBUw). As the video shows, the connections for the linked images don't load properly on reloads if I'm connecting through the NGINX server, but work fine if I connect directly to the backend origin server. Has anyone ever seen anything like this? Any ideas?
My new minimalist config files (I've included the SSL version this time as well): user www-data; worker_processes 4 pid /var/run/nginx.pid; error_log /var/log/nginx/error/error.log crit; events { worker_connections 20000; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; proxy_cache_path /tmp/nginxcache levels=1:2:2 keys_zone=mycache:3096m max_size=3584m inactive=240m; proxy_temp_path /tmp/nginxtmp; upstream wyebase { server 10.64.1.69:80; } upstream wyessl { server 10.64.1.69:443; } server { listen 80 default_server; server_name www.domain.edu; location / { proxy_pass http://wyebase; } } server { listen 443 ssl spdy; server_name www.domain.edu; include /etc/nginx/ssl.conf; ssl_certificate /etc/nginx/ssl/xbs2014.crt; ssl_certificate_key /etc/nginx/ssl/xbs2014.key; location / { proxy_pass https://wyessl; } } } BH Sent: Tuesday, February 25, 2014 9:52 AM To: nginx at nginx.org Subject: Odd issue with proxy_pass serving wrong cached data I'm having an odd issue, and I'm not sure where to start. We've been implementing a number of NGINX servers recently, and one of them is doing a proxy_pass to an older IIS7 server that we'll be phasing out soon, which is hosting a couple of minor sites in our datacenter. When initially browsing the site, everything appears to be working properly. But in most browsers, if you hit reload a couple of times the NGINX server either serves the wrong files (i.e. the page calls for Image1.PNG, but the server is returning the contents of Image2.PNG), or simply fails to load the image or file at all. It will always load the base HTML page, but the linked files (css files, images, etc.) seem to randomly glitch. Because it only happens when we click reload, I'm presuming that we're looking at some sort of configuration problem reading the proxy_pass cache. The problem doesn't occur and the sites work normally if we bypass the NGINX server and connect directly to the content origin server. 
Anyone have any idea where I should start with this one? It's a bit of a bizarre problem, and it's really got me scratching my head. I've done a number of searches and can't find anything online discussing a similar problem. BH user www-data; worker_processes 4 pid /var/run/nginx.pid; error_log /var/log/nginx/error/error.log crit; events { worker_connections 20000; use epoll; multi_accept on; accept_mutex off; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; client_header_timeout 20; client_body_timeout 20; reset_timedout_connection on; types_hash_max_size 2048; server_tokens off; include /etc/nginx/mime.types; default_type application/octet-stream; #gzip gzip on; gzip_disable "MSIE [1-6]\."; gzip_http_version 1.0; gzip_vary on; gzip_comp_level 6; gzip_min_length 512; gzip_proxied any; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #caching open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; proxy_cache_path /tmp/nginxcache levels=1:2:2 keys_zone=mycache:3096m max_size=3584m inactive=240m; proxy_temp_path /tmp/nginxtmp; proxy_redirect off; upstream wyebase { server 10.64.1.69:80; } server { listen 80 default_server; server_name www.domain.edu; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 75s; proxy_send_timeout 75s; proxy_read_timeout 75s; proxy_pass_header Expires; proxy_pass_header Cache-Control; proxy_pass_header Last-Modified; proxy_pass_header ETag; proxy_pass_header Content-Length; proxy_cache mycache; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_bypass $http_secret_header; proxy_ignore_headers "Cache-Control"; proxy_ignore_headers "Set-Cookie"; add_header X-Cache-Status $upstream_cache_status; location / { 
            proxy_cache_valid 200 301 302 304 10m; #good requests
            proxy_cache_valid 404 403 10m; #access errors
            proxy_cache_valid 500 501 502 503 504 1m; # execution or load errors
            proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
            expires 60m;
            proxy_pass http://wyebase;
        }

        location ~* \.(css|js|png|jpe?g|gif|ico)$ {
            proxy_cache_valid 200 301 302 304 1h; #good requests
            proxy_cache_valid 404 403 10m; #access errors
            proxy_cache_valid 500 501 502 503 504 1m; # execution or load errors
            proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
            expires 14d;
            proxy_pass http://wyebase;
        }
    }
}

-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 26 07:18:08 2014 From: nginx-forum at nginx.us (leontinus) Date: Wed, 26 Feb 2014 02:18:08 -0500 Subject: gzip cause white screen of death (wsod) Message-ID: <86301b14065fd1f17c2b91ab6a9ef3cc.NginxMailingListEnglish@forum.nginx.org> Hi Guys, I have a problem here. My website, www.tokopedia.com, uses nginx 1.5.10 with gzip enabled. Some of our users get a white screen of death (WSOD), and it turns out to be because of gzip. This is what I found: users reported getting the WSOD; I chatted with them and asked them to send their response headers, but the headers are different from what we set.
This is the result.

This is the correct response header:
========================
Cache-control: max-age=14400
Connection: close
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
Date: Mon, 24 Feb 2014 15:06:20 GMT
Expires: Mon, 24 Feb 2014 19:06:52 GMT
Pragma: no-cache
Server: nginx
Set-Cookie: _SID_Tokopedia_=c52cd95c9d4b11e3b16d11f55476c0ce; domain=.tokopedia.com; path=/; expires=Tue, 25-Feb-2014 15:06:52 GMT
Vary: Accept-Encoding

This is the user's response header:
========================
Cache-control: private, no-cache, no-store, must-revalidate, post-check=0, pre-check=0
Connection: close
Content-Type: text/html; charset=utf-8
Date: Mon, 24 Feb 2014 05:40:31 GMT
Expires: Mon, 17 Aug 2009 00:00:00 GMT
Pragma: no-cache
Server: nginx
Set-Cookie: _SID_Tokopedia_=70531bd49d1511e3adeb5c5c41ecde91; domain=.tokopedia.com; path=/; expires=Mon, 24-Feb-2014 06:41:02 GMT
Transfer-Encoding: chunked
Vary: Accept-Encoding

The difference is that the correct one has Content-Encoding: gzip and no Transfer-Encoding: chunked, while on the user's side they get Transfer-Encoding: chunked but no Content-Encoding: gzip. I then tried disabling gzip, and users could access our website without problems. So can you guys help me with solutions? I thought about putting this in the nginx conf: if ($remote_addr = 'xxx') { gzip off; } but the problem is specific to the user's device; if they use another device on the same internet connection, they can access the site without problems. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247944,247944#msg-247944 From nginx-forum at nginx.us Wed Feb 26 08:24:40 2014 From: nginx-forum at nginx.us (leontinus) Date: Wed, 26 Feb 2014 03:24:40 -0500 Subject: gzip cause white screen of death (wsod) In-Reply-To: <86301b14065fd1f17c2b91ab6a9ef3cc.NginxMailingListEnglish@forum.nginx.org> References: <86301b14065fd1f17c2b91ab6a9ef3cc.NginxMailingListEnglish@forum.nginx.org> Message-ID: SOLVED, I added Transfer-Encoding: chunked; CASE CLOSED.
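Since the problem follows a specific device rather than an IP address, a per-User-Agent exclusion may fit better than the per-address `if` block sketched above. A minimal sketch, assuming the problem browser can be identified by a User-Agent substring (the "ProblemBrowser" token below is a hypothetical placeholder, not a real product name):

```nginx
http {
    gzip on;
    # gzip_disable skips compression for matching User-Agents only,
    # the same mechanism many configs already use for old MSIE versions.
    # "ProblemBrowser" is a hypothetical placeholder pattern.
    gzip_disable "MSIE [1-6]\.|ProblemBrowser";
}
```

This avoids `if` entirely and leaves gzip enabled for everyone else.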
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247944,247946#msg-247946 From mdounin at mdounin.ru Wed Feb 26 09:11:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Feb 2014 13:11:43 +0400 Subject: Odd issue with proxy_pass serving wrong cached data In-Reply-To: <205444BFA924A34AB206D0E2538D4B494138D7@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B4941358C@x10m01.yosemite.edu> <205444BFA924A34AB206D0E2538D4B494138D7@x10m01.yosemite.edu> Message-ID: <20140226091143.GU91191@mdounin.ru> Hello! On Wed, Feb 26, 2014 at 02:32:25AM +0000, Brian Hill wrote: > So now it doesn't look like it's a caching issue at all. I've > completely gutted my config files and stripped it down, and I'm > still seeing the same issue. I even shot a video of what I'm > seeing and stuck it on YouTube as an example > (http://youtu.be/lPR1453YBUw). As the video shows, the > connections for the linked images don't load properly on reloads > if I'm connecting through the NGINX server, but work fine if I > connect directly to the backend origin server. Has anyone ever > seen anything like this? Any ideas? Looking into logs usually helps a lot. Symptoms suggest most likely you've run out of local ports on nginx host due to sockets in TIME-WAIT state (and you have no tw_reuse/tw_recycle switched on), but it's hard to tell anything exact from the information you've provided. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Feb 26 09:58:16 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 26 Feb 2014 04:58:16 -0500 Subject: Odd issue with proxy_pass serving wrong cached data In-Reply-To: <205444BFA924A34AB206D0E2538D4B494138D7@x10m01.yosemite.edu> References: <205444BFA924A34AB206D0E2538D4B494138D7@x10m01.yosemite.edu> Message-ID: disable gzip, sendfile off, use something else than epoll, disable proxy_cache_path and take it from there to see if it still happens.
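If Maxim's TIME-WAIT diagnosis above is right, one common mitigation (an assumption on my part, not something prescribed in this thread) is to keep upstream connections alive, so far fewer ephemeral ports are churned through:

```nginx
upstream wyebase {
    server 10.64.1.69:80;
    keepalive 32;                        # pool of idle connections held open to the upstream
}

server {
    listen 80;
    location / {
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # drop the client's "Connection: close"
        proxy_pass http://wyebase;
    }
}
```

With connection reuse, each worker recycles a small pool of sockets instead of opening (and leaving in TIME-WAIT) one socket per proxied request.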
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247939,247950#msg-247950 From nginx-forum at nginx.us Wed Feb 26 10:41:16 2014 From: nginx-forum at nginx.us (snarapureddy) Date: Wed, 26 Feb 2014 05:41:16 -0500 Subject: Best possible configuration for file upload Message-ID: <7d5578f969e82d90766e268aa9f478d7.NginxMailingListEnglish@forum.nginx.org> We are using nginx for file uploads instead of directing them to the backend servers. We used the lua openresty module to get the data in chunks and write it to local disk. File size can vary from a few KB to 10 MB. We have tuned worker processes, connections, accept_mutex off, etc., but when we upload files concurrently, some of the connections are very slow. Chunk size is 4096. CPU utilization is very minimal. We are running 10 worker processes, but in most cases the same process is handling multiple connections and they become slow. Please suggest the best possible configuration for upload scenarios. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247951,247951#msg-247951 From makailol7 at gmail.com Wed Feb 26 10:56:37 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Wed, 26 Feb 2014 16:26:37 +0530 Subject: Increase either types_hash_max_size: 1024 or types_hash_bucket_size: 32 Message-ID: Hello, I got the below error after updating Nginx from nginx-1.4.4-1 to nginx-1.5.10-1. nginx: [emerg] could not build the types_hash, you should increase either types_hash_max_size: 1024 or types_hash_bucket_size: 32 nginx: configuration file /etc/nginx/nginx.conf test failed Could anyone explain why this happened after the nginx version update? I have updated Nginx on two different servers but faced the issue on one server only. Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed...
URL: From contact at jpluscplusm.com Wed Feb 26 12:05:22 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 26 Feb 2014 12:05:22 +0000 Subject: [SHOW'N'TELL] Primitive RBAC/AAA implementation in nginx config Message-ID: Hi all - I spent some time poking at an interesting problem that came up last night, and ended up with this primitive RBAC system, implemented in declarative nginx config. You might find it useful, or might be able to tell me why it sucks and hence how it could be improved ;-) Readme and config: https://gist.github.com/jpluscplusm/9227777 Cheers, Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Wed Feb 26 16:39:31 2014 From: nginx-forum at nginx.us (mastercan) Date: Wed, 26 Feb 2014 11:39:31 -0500 Subject: SSL_STAPLING when network is unreachable Message-ID: Hello, I've encountered a problem with nginx 1.5.10. I'm running nginx on a highly available system (2 cluster nodes). When node1 fails, node2 automatically comes into play. A few days ago the internet connection was bad - on both nodes. They could ping the gateway only sporadically. Node2 became the active one and tried to start nginx. Nginx did not even come up. I replayed the whole scenario (switchover) with a working internet connection. Everything runs perfectly then. But with a broken internet connection nginx does not start up. It hangs. The reason, I found out, is ssl_stapling. Even when I set resolver_timeout to 5 seconds, nginx won't come up within 5 seconds on an internet connection with high packet loss. Unfortunately I cannot use "ssl_stapling_file". I tried fetching the OCSP response from globalsign but always get "error querying OCSP response" from globalsign's ocsp server (but with godaddy it worked).
My cmd was: openssl ocsp -host ocsp2.globalsign.com -noverify -no_nonce -issuer issuer.crt -cert domain.crt -url http://ocsp2.globalsign.com/gsalphag2 So... it would be nice if nginx did not block on startup, or if there was a setting that told nginx "you must start up within x seconds". For now I will remove ssl_stapling support altogether. best regards, Can Özdemir Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247966,247966#msg-247966 From mdounin at mdounin.ru Wed Feb 26 17:26:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Feb 2014 21:26:34 +0400 Subject: SSL_STAPLING when network is unreachable In-Reply-To: References: Message-ID: <20140226172634.GC91191@mdounin.ru> Hello! On Wed, Feb 26, 2014 at 11:39:31AM -0500, mastercan wrote: > Hello, > > I've encountered a problem with nginx 1.5.10. > I'm running nginx on a highly available system (2 cluster node). > > When node1 fails, node2 is automatically coming into play. A few days ago > the internet connection was bad - on both nodes. They could ping the gateway > only sporadically. > Node2 became the active one and tried to start nginx. Nginx did not even > come up. > > I replayed the whole scenario (switchover) with a working internet > connection. Everything is running perfect then. > But with a broken internet connection nginx does not start up. It's > hanging. > > The reason is ssl_stapling I found out. Even when I set resolver_timeout to > 5 seconds, nginx won't come up within 5 seconds with an internet connection > with high packet loss. On startup, nginx does name resolution of various names in its configuration files, using the system resolver. This includes initial resolution of OCSP responders if stapling is used. If your system resolver doesn't have internet access and blocks trying to resolve names, so will nginx.
The traditional approach to the problem is to use a local caching DNS server (which is less likely to fail than external services), and to use IP addresses or /etc/hosts for critical things. It's also a good idea to have nginx _running_ instead of trying to start it in emergency conditions. While nginx usually starts just fine, it is designed to keep things running by all means, not to start by all means. Startup may fail, e.g., due to failed DNS resolution or a listen socket grabbed by some other process. In contrast, if nginx was already started - it will keep running by all means. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Feb 26 17:37:10 2014 From: nginx-forum at nginx.us (paddy3883) Date: Wed, 26 Feb 2014 12:37:10 -0500 Subject: Whitelisting Client Side Certificates Message-ID: <1a1844bcce866e71c8aaf3089c52b31b.NginxMailingListEnglish@forum.nginx.org> I'm currently working on a POC for my company, which is looking to use NGINX to validate API requests using client-side certificates. Presently we have it set up so we are self-signing/generating these certificates on the local machine, and we are able to use these successfully in our tests. We are also able to use the revocation list to disable generated certificates. Moving forward it is possible we will be using an external CA to generate these certificates, and we are trying to determine if there is a way to 'whitelist' certificates so that only those generated ones which we have visibility of will be verified, rather than a 'blacklisting' approach to block those which are revoked. i.e. Given a client certificate generated by an external CA, how can we establish this in a trusted list of certs to verify? Apologies if this question is lacking technical details/knowledge; this is my first hands-on experience with SSL.
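For reference, the "whitelist by issuer" approach usually reduces to the sketch below: `ssl_client_certificate` names the CA bundle that client certificates must chain to, `ssl_verify_client on` rejects everything else, and `ssl_crl` keeps the blacklist-style revocation already in use. The paths are placeholders:

```nginx
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/ssl/server.crt;     # placeholder paths
    ssl_certificate_key     /etc/nginx/ssl/server.key;
    ssl_client_certificate  /etc/nginx/ssl/client-ca.crt;  # CA(s) client certs must chain to
    ssl_verify_client       on;                            # reject requests without a valid client cert
    ssl_crl                 /etc/nginx/ssl/revoked.crl;    # revocation list for issued certs
}
```

Note that with a public external CA this alone is too broad - any certificate that CA ever issued would verify - which is exactly the gap the follow-up replies in this thread discuss.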
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247969,247969#msg-247969 From david.birdsong at gmail.com Wed Feb 26 17:58:00 2014 From: david.birdsong at gmail.com (David Birdsong) Date: Wed, 26 Feb 2014 09:58:00 -0800 Subject: Whitelisting Client Side Certificates In-Reply-To: <1a1844bcce866e71c8aaf3089c52b31b.NginxMailingListEnglish@forum.nginx.org> References: <1a1844bcce866e71c8aaf3089c52b31b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Having just gone through learning about this over the last few days, here's what I learned. Take it w/ a grain of salt. There are 2 ways I'm aware of. 1. turn on strict client verify and limit the ca list that the server knows about. this will cause the server to have a limited view of what certs are valid in the world and cause it to reject any client whose cert doesn't chain back to your ca list. I think you set that here: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate 2. match subject name and subjectAltName to a whitelist. I don't know if nginx can do this part natively. Haproxy can: http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-verifyhost ...from skimming, the way you'd do #2 is to use http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate to set a proxy header from: $ssl_client_cert and have your backend parse and accept/deny names found in that pem structure On Wed, Feb 26, 2014 at 9:37 AM, paddy3883 wrote: > I'm currently working on POC for my company which is looking to use NGINX > to > validate API Requests using Client Side Certificates. Presently we have it > setup so we are self signing/generating these certificates on the local > machine and are able to use these successfully in our tests. We are also > able to use the revocation list to disable generated certificates.
> > Moving forward it is possible we will be using an external CA to generate > these certificates and we are trying to determine if this is a way to > 'whitelist' certificates so only those generated ones which we have > visibility of will be verified, rather than a 'blacklisting' approach to > block those which are revoked? i.e. Given a client certificate generated by > a external CA how can we established this in a trusted list of certs to > verify? > > Apologies if this question is lacking technical details/knowledge, this is > my first hands on experience with SSL. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,247969,247969#msg-247969 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.birdsong at gmail.com Wed Feb 26 18:02:44 2014 From: david.birdsong at gmail.com (David Birdsong) Date: Wed, 26 Feb 2014 10:02:44 -0800 Subject: Whitelisting Client Side Certificates In-Reply-To: References: <1a1844bcce866e71c8aaf3089c52b31b.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Feb 26, 2014 at 9:58 AM, David Birdsong wrote: > Having just gone through learning about this over the last few days, > here's what I learned. Take it w/ a grain of salt. > > There are 2 ways I'm aware of. > > 1. turn on strict client verify and limit the ca list that the server > knows about. this will cause the server to have a limited view of what > certs are valid in the world and cause it to reject any client who's cert > doesn't chain back to your ca list. I think you set that here: > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate > > 2. match subject name and subjectAlternatename to a whitelist. I don't > know if nginx can do this part natively. 
Haproxy can: > http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-verifyhost > > ...from skimming, the way you'd do #2 is to use > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate to set a proxy header from: $ssl_client_cert and have your backend parse > and accept/deny names found in that pem structure > #2 sounds like a great job for: http://wiki.nginx.org/HttpLuaModule#access_by_lua > > > > > On Wed, Feb 26, 2014 at 9:37 AM, paddy3883 wrote: > >> I'm currently working on POC for my company which is looking to use NGINX >> to >> validate API Requests using Client Side Certificates. Presently we have it >> setup so we are self signing/generating these certificates on the local >> machine and are able to use these successfully in our tests. We are also >> able to use the revocation list to disable generated certificates. >> >> Moving forward it is possible we will be using an external CA to generate >> these certificates and we are trying to determine if this is a way to >> 'whitelist' certificates so only those generated ones which we have >> visibility of will be verified, rather than a 'blacklisting' approach to >> block those which are revoked? i.e. Given a client certificate generated >> by >> a external CA how can we established this in a trusted list of certs to >> verify? >> >> Apologies if this question is lacking technical details/knowledge, this is >> my first hands on experience with SSL. >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,247969,247969#msg-247969 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Wed Feb 26 19:32:48 2014 From: nginx-forum at nginx.us (mastercan) Date: Wed, 26 Feb 2014 14:32:48 -0500 Subject: SSL_STAPLING when network is unreachable In-Reply-To: <20140226172634.GC91191@mdounin.ru> References: <20140226172634.GC91191@mdounin.ru> Message-ID: Hello Maxim, > On startup, nginx does name resolution of various names in a > configuration files, using system resolver. This includes initial > resolution of OCSP responders if stapling is used. If your system > resolver doesn't have internet access and blocks trying to resolve > names - so nginx will do. I see. But what is the parameter "resolver_timeout" for? I had 2 ssl_stapling directives in my config, and I set a resolver_timeout of 5 secs. I thought the blocking should not exceed 10 seconds then, assuming the resolving is done sequentially? It took more than 40 seconds to start though. > Traditional approach to the problem is to use local caching DNS > server (which is less likely to fail than external services), and > to use IP addresses or /etc/hosts for critical things. > That sounds good, but I've seen that the ocsp server has a TTL of 5 minutes for its A records. So they seem to change frequently and caching them would - in this case - not help a lot. > It's also a good idea to have nginx _running_ instead of trying to > start it in an emergency conditions. While nginx usually starts > just fine, it is designed to keep things running by all means, not > to start by all means. Startup may fail, e.g., due to failed DNS > resolution or a listen socket grabbed by some other process. In > contrast, if nginx was already started - it will keep running by > all means. > Ok, that's something I should consider. Keep nginx running on both nodes. I hope it doesn't cause trouble if a web directory is empty and gets filled later on by mounting a DRBD device.
br, Can Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247966,247973#msg-247973 From contact at jpluscplusm.com Wed Feb 26 20:23:14 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 26 Feb 2014 20:23:14 +0000 Subject: [SHOW'N'TELL] Primitive RBAC/AAA implementation in nginx config In-Reply-To: References: Message-ID: On 26 Feb 2014 12:05, "Jonathan Matthews" wrote: > > Hi all - > > I spent some time poking at a interesting problem that came up last > night, and ended up with this primitive RBAC system, implemented in > declarative nginx config. Thanks to the couple of people who reminded me this may not be a frequently-used term on this list :-) Role Based Access Control systems are a technique for limiting access to resources based on people belonging to groups (roles) and not being granted access individually: https://en.wikipedia.org/wiki/Role-based_access_control In this case, the resources are URIs, potentially proxy_pass'd, and the users are HTTP basic auth users. My implementation is nothing special, but I'd not seen a reasonably scalable one implemented purely in declarative nginx configuration syntax before :-) Anyway, tell me why it sucks ... https://gist.github.com/jpluscplusm/9227777 J -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.birdsong at gmail.com Wed Feb 26 20:29:40 2014 From: david.birdsong at gmail.com (David Birdsong) Date: Wed, 26 Feb 2014 12:29:40 -0800 Subject: Whitelisting Client Side Certificates In-Reply-To: References: <1a1844bcce866e71c8aaf3089c52b31b.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Feb 26, 2014 at 9:58 AM, David Birdsong wrote: > Having just gone through learning about this over the last few days, > here's what I learned. Take it w/ a grain of salt. > > There are 2 ways I'm aware of. > > 1. turn on strict client verify and limit the ca list that the server > knows about. 
this will cause the server to have a limited view of what > certs are valid in the world and cause it to reject any client who's cert > doesn't chain back to your ca list. I think you set that here: > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate > > 2. match subject name and subjectAlternatename to a whitelist. I don't > know if nginx can do this part natively. Haproxy can: > http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-verifyhost > I misunderstood verifyhost in haproxy; that's something else completely. It doesn't verify subject name or subjectAltName on client-supplied certs. > > ...from skimming, the way you'd do #2 is to use > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate to set a proxy header from: $ssl_client_cert and have your backend parse > and accept/deny names found in that pem structure > > > > > On Wed, Feb 26, 2014 at 9:37 AM, paddy3883 wrote: > >> I'm currently working on POC for my company which is looking to use NGINX >> to >> validate API Requests using Client Side Certificates. Presently we have it >> setup so we are self signing/generating these certificates on the local >> machine and are able to use these successfully in our tests. We are also >> able to use the revocation list to disable generated certificates.
>> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,247969,247969#msg-247969 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 27 11:16:14 2014 From: nginx-forum at nginx.us (paddy3883) Date: Thu, 27 Feb 2014 06:16:14 -0500 Subject: Whitelisting Client Side Certificates In-Reply-To: References: Message-ID: <502472789d646cd859e4480bef59f8dd.NginxMailingListEnglish@forum.nginx.org> I was wondering if caching whitelisted certificates' thumbprints somewhere and then verifying against this per request would work? One approach could be storing these thumbprints in Memcached and querying using Lua? Or is there a more straightforward/efficient approach? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247969,247987#msg-247987 From mdounin at mdounin.ru Thu Feb 27 11:57:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Feb 2014 15:57:31 +0400 Subject: SSL_STAPLING when network is unreachable In-Reply-To: References: <20140226172634.GC91191@mdounin.ru> Message-ID: <20140227115731.GD91191@mdounin.ru> Hello! On Wed, Feb 26, 2014 at 02:32:48PM -0500, mastercan wrote: > Hello Maxim, > > > On startup, nginx does name resolution of various names in a > > configuration files, using system resolver. This includes initial > > resolution of OCSP responders if stapling is used. If your system > > resolver doesn't have internet access and blocks trying to resolve > > names - so nginx will do. > > I see. But what is the parameter "resolver_timeout" for? I had 2 ssl_staple > directives in my config, and I set a resolver_timeout of 5 secs. I thought > the blocking should not exceed 10 seconds then, assuming the resolving is > done sequentially? It took more than 40 seconds to start though.
It's to configure the timeout used by nginx's own nonblocking resolver (http://nginx.org/r/resolver) - that is, for name resolution done by a running nginx. To configure the system resolver you should use your system's settings, usually /etc/resolv.conf. (Actually, the sole purpose of nginx's own resolver is to be able to resolve names while nginx is running, without blocking. That isn't possible with the system resolver, as it has only a blocking interface.) -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Feb 27 12:00:00 2014 From: nginx-forum at nginx.us (mastercan) Date: Thu, 27 Feb 2014 07:00:00 -0500 Subject: SSL_STAPLING when network is unreachable In-Reply-To: <20140227115731.GD91191@mdounin.ru> References: <20140227115731.GD91191@mdounin.ru> Message-ID: Maxim Dounin Wrote: > It's to configure timeout used by nginx's own nonblocking resolver > (http://nginx.org/r/resolver) - that is, for name resolution done > by running nginx. To configure system resolver you should > use your system's settings, usually /etc/resolv.conf. > > (Actually, sole purpose of nginx's own resolver is to be able to > resolve names when nginx is running, without blocking. It's not > something possible when using system resolver, as it has only > blocking interface.) > Thanks a lot, Maxim! That clarifies things for me. br Can Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247966,247990#msg-247990 From tetuya0703 at me.com Thu Feb 27 15:47:43 2014 From: tetuya0703 at me.com (SaitoTetsuya) Date: Fri, 28 Feb 2014 00:47:43 +0900 Subject: How to Nginx cache server setting Message-ID: <8BA51C2B-88B5-4728-80EC-3E2B3271F3FE@me.com> Hi everyone, I would like to build a reverse proxy and cache server with Nginx on CentOS 6.5. The Nginx setup is as follows, but cache data is not being saved in the proxy_cache_path directory /var/cache/nginx/hoge.com. Is something wrong with my setup? Please advise me. Regards, TETSUYA Saito [/etc/nginx/nginx.conf]
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" "$gzip_ratio"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush off;
    keepalive_timeout 65;
    gzip on;
    gzip_http_version 1.0;
    gzip_types text/plain text/xml text/css application/xml application/xhtml+xml application/rss+xml application/atom_xml application/javascript application/x-javascript application/x-httpd-php;
    gzip_disable "MSIE [1-6]\.";
    gzip_disable "Mozilla/4";
    gzip_comp_level 2;
    gzip_vary on;
    gzip_proxied any;
    gzip_buffers 4 8k;
    server_names_hash_bucket_size 128; # 32/64/128
    include /etc/nginx/conf.d/*.conf;
}

[/etc/nginx/conf.d/proxy.conf]

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-Cache $upstream_cache_status;
proxy_connect_timeout 60;
proxy_read_timeout 90;
proxy_send_timeout 60;
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 100 8k;

[/etc/nginx/conf.d/virtual.conf]
upstream backend {
    ip_hash;
    server hoge:80;
}

proxy_cache_path /var/cache/nginx/hoge.com levels=1:2 keys_zone=cache_hoge.com:15m inactive=7d max_size=512m;
proxy_temp_path /var/cache/nginx/temp 1 2;

server {
    listen 80;
    server_name www.hoge.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    access_log /var/log/nginx/wp01.night-walker.asia/access_log main;
    error_log /var/log/nginx/wp01.night-walker.asia/error_log;

    if ($http_host = night-walker.asia) {
        rewrite (.*) http://wp01.night-walker.asia;
    }

    try_files $uri $uri/ /index.php?q=$uri&$args;

    location / {
        set $do_not_cache 0;

        # -- POST or HEAD ?
        if ($request_method != "GET") { set $do_not_cache 1; }

        # -- Login or Comment or Post Editting ?
        if ($http_cookie ~ ^.*(comment_author_|wordpress_logged_in|wp-postpass_).*$) { set $do_not_cache 1; }

        # -- mobile ?
        if ($http_x_wap_profile ~ ^[a-z0-9\"]+) { set $do_not_cache 1; }

        # -- Mobile ?
        if ($http_profile ~ ^[a-z0-9\"]+) { set $do_not_cache 1; }

        # -- Kei-tai ?
        if ($http_user_agent ~ ^.*(2.0\ MMP|240x320|400X240|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|Googlebot-Mobile|hiptop|IEMobile|KYOCERA/WX310K|LG/U990|MIDP-2.|MMEF20|MOT-V|NetFront|Newt|Nintendo\ Wii|Nitro|Nokia|Opera\ Mini|Palm|PlayStation\ Portable|portalmmm|Proxinet|ProxiNet|SHARP-TQ-GX10|SHG-i900|Small|SonyEricsson|Symbian\ OS|SymbianOS|TS21i-10|UP.Browser|UP.Link|webOS|Windows\ CE|WinWAP|YahooSeeker/M1A1-R2D2|iPhone|iPod|Android|BlackBerry9530|LG-TU915\ Obigo|LGE\ VX|webOS|Nokia5800).*) { set $do_not_cache 1; }

        # -- Mobile ?
        if ($http_user_agent ~ ^(w3c\ |w3c-|acs-|alav|alca|amoi|audi|avan|benq|bird|blac|blaz|brew|cell|cldc|cmd-|dang|doco|eric|hipt|htc_|inno|ipaq|ipod|jigs|kddi|keji|leno|lg-c|lg-d|lg-g|lge-|lg/u|maui|maxo|midp|mits|mmef|mobi|mot-|moto|mwbp|nec-|newt|noki|palm|pana|pant|phil|play|port|prox|qwap|sage|sams|sany|sch-|sec-|send|seri|sgh-|shar|sie-|siem|smal|smar|sony|sph-|symb|t-mo|teli|tim-|tosh|tsm-|upg1|upsi|vk-v|voda|wap-|wapa|wapi|wapp|wapr|webc|winw|winw|xda\ |xda-).*) { set $do_not_cache 1; }

        # -- Kei-tai ?
        if ($http_user_agent ~ ^(DoCoMo/|J-PHONE/|J-EMULATOR/|Vodafone/|MOT(EMULATOR)?-|SoftBank/|[VS]emulator/|KDDI-|UP\.Browser/|emobile/|Huawei/|IAC/|Nokia|mixi-mobile-converter/)) { set $do_not_cache 1; }

        # -- Kei-tai ?
        if ($http_user_agent ~ (DDIPOCKET\;|WILLCOM\;|Opera\ Mini|Opera\ Mobi|PalmOS|Windows\ CE\;|PDA\;\ SL-|PlayStation\ Portable\;|SONY/COM|Nitro|Nintendo)) { set $do_not_cache 1; }

        proxy_cache cache_hoge.com;
        proxy_cache_valid 200 2h;
        proxy_cache_valid 302 2h;
        proxy_cache_valid 301 4h;
        proxy_cache_valid any 1m;
        proxy_no_cache $do_not_cache;
        proxy_cache_bypass $do_not_cache;
        resolver 127.0.0.1;
        proxy_pass http://$host;
        proxy_cache_key $scheme://$host$request_uri;
    }
}

From hillb at yosemite.edu Fri Feb 28 00:55:43 2014 From: hillb at yosemite.edu (Brian Hill) Date: Fri, 28 Feb 2014 00:55:43 +0000 Subject: Odd issue with proxy_pass serving wrong cached data In-Reply-To: <20140226091143.GU91191@mdounin.ru> References: <205444BFA924A34AB206D0E2538D4B4941358C@x10m01.yosemite.edu> <205444BFA924A34AB206D0E2538D4B494138D7@x10m01.yosemite.edu> <20140226091143.GU91191@mdounin.ru> Message-ID: <205444BFA924A34AB206D0E2538D4B49415F1E@x10m01.yosemite.edu> Interestingly, this will teach me not to leave out "irrelevant" bits of my config files when posting them. My configs failed to include a few items like ModSecurity, because it didn't occur to me that they could have any impact on the problem I was describing. As it turns out, disabling ModSecurity solved the problem.
I'm still not sure why ModSecurity is causing NGINX to serve bad data on reloads, but it's clearly a ModSecurity issue or a problem with the communication between ModSecurity and NGINX. BH -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: Wednesday, February 26, 2014 1:12 AM To: nginx at nginx.org Subject: Re: Odd issue with proxy_pass serving wrong cached data Hello! On Wed, Feb 26, 2014 at 02:32:25AM +0000, Brian Hill wrote: > So now it doesn't look like it's a caching issue at all. I've > completely gutted my config files and stripped it down, and I'm still > seeing the same issue. I even shot a video of what I'm seeing and > stuck it on YouTube as an example (http://youtu.be/lPR1453YBUw). As > the video shows, the connections for the linked images don't load > properly on reloads if I'm connecting through the NGINX server, but > work fine if I connect directly to the backend origin server. Has > anyone ever seen anything like this? Any ideas? Looking into logs usually helps a lot. Symptoms suggest most likely you've run out of local ports on nginx host due to sockets in TIME-WAIT state (and you have no tw_reuse/tw_recycle switched on), but it's hard to tell anything exact from the information you've provided. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Feb 28 02:56:43 2014 From: nginx-forum at nginx.us (ljw79618@gmail.com) Date: Thu, 27 Feb 2014 21:56:43 -0500 Subject: can proxy_cache(nginx) in front of internal location? 
Message-ID: <738d5eb8a6eec2208f48bbfdb2fe0a54.NginxMailingListEnglish@forum.nginx.org> I want to use proxy_cache in front of an internal location, but proxy_pass does not work. My code looks like the following:

location /proxy {
    proxy_cache cache;
    proxy_cache_key $uri;
    proxy_cache_valid 200 304 10d;
    expires 10d;
    proxy_pass @bar;
}

location @bar {
    internal;
    content_by_lua '
        ngx.log(ngx.ERR,"sssssssssssssssssss")
        ngx.say("yaha")
    ';
}

Must proxy_cache be used with proxy_pass? Is there any other method to do this? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247998,247998#msg-247998

From zaphod at berentweb.com Fri Feb 28 10:00:38 2014 From: zaphod at berentweb.com (Beeblebrox) Date: Fri, 28 Feb 2014 12:00:38 +0200 Subject: WordPress multisite Message-ID: I have nginx running on a local intranet server. I'm trying to set up WordPress multisite with subdirectories. WP is located in $document_root/wordpress because there are many other folders directly under the document_root path.

* I want to access the WP site as http://serv_IP/wp/, but alias does not work - I must specify http://serv_IP/wordpress/ each time.

location /wp/ {
    alias /document_root/wordpress/;
    try_files $uri $uri/ /index.php?$args;
    include /usr/local/etc/nginx/wordpress.conf;
    fastcgi_pass unix:/var/run/www/php-fpm.sock;
}

nginx-error.log shows: FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.100, server: my.org, request: "GET /wp/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/www/php-fpm.sock:", host: "192.168.1.100", referrer: "http://192.168.1.100/"

If I replace "alias" with "root" in the code above, I just get the php-information page.

* I'd like to keep nginx.conf clean and pass wordpress.conf using the include method. How must I modify the code provided in http://wiki.nginx.org/WordPress, so that the multisite traffic is directed to the $document_root/wordpress/ folder, and not "/"?
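[A note on the "Primary script unknown" error above: with "alias", a FastCGI include that builds SCRIPT_FILENAME from $document_root$fastcgi_script_name will point php-fpm at the wrong path, because $document_root does not follow the alias. The sketch below is an assumption about what wordpress.conf sets, not taken from the post; the usual workaround is $request_filename, and the try_files fallback is changed to stay inside /wp/:]

```
location /wp/ {
    alias /document_root/wordpress/;
    # fall back inside /wp/ so the aliased location still applies
    # (the original fell back to /index.php, which leaves this location)
    try_files $uri $uri/ /wp/index.php?$args;
    include /usr/local/etc/nginx/wordpress.conf;
    # with "alias", $request_filename already resolves to the aliased file;
    # $document_root$fastcgi_script_name would not
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_pass unix:/var/run/www/php-fpm.sock;
}
```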
If I move the try_files code to nginx.conf, startup reports: nginx: [emerg] "map" directive is not allowed here in /usr/local/etc/nginx/wordpress.conf:1 Where should the "map $uri $blogname / map $blogname $blogid" code be placed?

From nginx-forum at nginx.us Fri Feb 28 11:24:38 2014 From: nginx-forum at nginx.us (kisa1234) Date: Fri, 28 Feb 2014 06:24:38 -0500 Subject: closed keepalive connection In-Reply-To: References: Message-ID: <8f6f1c7699ecefede901c03d9a97ba97.NginxMailingListEnglish@forum.nginx.org> Did you solve it? Can you help me? My e-mail is tianyichen1989 at 163.com. Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,214974,248004#msg-248004

From mdounin at mdounin.ru Fri Feb 28 12:28:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Feb 2014 16:28:00 +0400 Subject: closed keepalive connection In-Reply-To: <8f6f1c7699ecefede901c03d9a97ba97.NginxMailingListEnglish@forum.nginx.org> References: <8f6f1c7699ecefede901c03d9a97ba97.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140228122800.GN34696@mdounin.ru> Hello!

On Fri, Feb 28, 2014 at 06:24:38AM -0500, kisa1234 wrote:
> Did you solve it?
> Can you help me?
> My e-mail is tianyichen1989 at 163.com.
> Thank you!

This is not a problem to solve; these are just informational messages logged by nginx. If they are too verbose for you, consider tuning the error_log logging level, see http://nginx.org/r/error_log. -- Maxim Dounin http://nginx.org/

From yasser.zamani at live.com Fri Feb 28 19:14:38 2014 From: yasser.zamani at live.com (Yasser Zamani) Date: Fri, 28 Feb 2014 22:44:38 +0330 Subject: custom handler module - dynamic response with unknown content length Message-ID: Hi there, I learned a bit about how to write a handler module from [1] and [2].
[1] http://blog.zhuzhaoyuan.com/2009/08/creating-a-hello-world-nginx-module/
[2] http://www.evanmiller.org/nginx-modules-guide.html#compiling

But I need to rewrite [1] to send a dynamically generated octet stream to the client, with unknown content length; it will usually be large. First I tried:

/* allocate a buffer for your response body */
b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
if (b == NULL) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

/* attach this buffer to the buffer chain */
out.buf = b;
out.next = NULL;

/* adjust the pointers of the buffer */
b->pos = ngx_hello_string;
b->last = ngx_hello_string + sizeof(ngx_hello_string) - 1;
b->memory = 1;    /* this buffer is in memory */
b->last_buf = 0;  /* this is NOT the last buffer in the buffer chain */

/* set the status line */
r->headers_out.status = NGX_HTTP_OK;
//r->headers_out.content_length_n = sizeof(ngx_hello_string) - 1;

/* send the headers of your response */
rc = ngx_http_send_header(r);
if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
    return rc;
}

/* send the buffer chain of your response */
int i;
for (i = 1; i < 10000000; i++) {
    b->flush = (0 == (i % 1000));
    rc = ngx_http_output_filter(r, &out);
    if (rc != NGX_OK) {
        ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, "bad rc, rc:%d", rc);
        return rc;
    }
}
b->last_buf = 1;
return ngx_http_output_filter(r, &out);

But it simply fails with the following errors:

2014/02/28 22:17:25 [alert] 25115#0: *1 zero size buf in writer t:0 r:0 f:0 00000000 080C7431-080C7431 00000000 0-0, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080"
2014/02/28 22:17:25 [alert] 25115#0: *1 bad rc, rc:-1, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080"

WHAT IS THE CORRECT WAY TO ACCOMPLISH MY NEED? (I searched a lot, but I only found [3], which has rc=-2 rather than -1.)

[3] http://web.archiveorange.com/archive/v/yKUXMLzGBexXA3PccJa6

Thanks in advance!