From n.sherlock at gmail.com  Mon Aug  1 02:42:50 2011
From: n.sherlock at gmail.com (Nicholas Sherlock)
Date: Mon, 01 Aug 2011 14:42:50 +1200
Subject: Gzipped content being sent to HTTP/1.0 clients?
Message-ID: <4E3612AA.9070409@gmail.com>

Hi everyone,

I'm using gzip and proxy_cache together, proxying to an Apache backend.
Some of my clients are complaining that they are getting gzipped content
which their browser is displaying without un-gzipping it, presumably
because they are being served gzipped content when their browser doesn't
support it.

I noticed that HTTP/1.0 clients are getting served gzipped content even
though gzip_http_version is set to 1.1. That should never happen, right?
I guess it is because a 1.1 client requested it first, and it got cached?
Here's the log of an HTTP/1.0 client (wget) grabbing the resource:

wget -d "http://www.chickensmoothie.com/Forum/style.php?id=9&lang=en&v=1312084157"

---request begin---
GET /Forum/style.php?id=9&lang=en&v=1312084157 HTTP/1.0
User-Agent: Wget/1.12 (cygwin)
Accept: */*
Host: www.chickensmoothie.com
Connection: Keep-Alive

---response begin---
HTTP/1.1 200 OK
Server: nginx/1.0.4
Date: Mon, 01 Aug 2011 00:23:27 GMT
Content-Type: text/css; charset=UTF-8
Connection: keep-alive
X-Powered-By: PHP/5.3.5-1ubuntu7.2
Expires: Wed, 09 Nov 2011 00:22:33 GMT
Last-Modified: Sun, 31 Jul 2011 03:49:17 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 15750

---response end---
200 OK
Registered socket 3 for persistent reuse.
URI content encoding = `UTF-8'
Length: 15750 (15K) [text/css]
Saving to: `style.php at id=9&lang=en&v=1312084157.4'

2011-08-01 12:22:35 (21.4 KB/s) - `style.php at id=9&lang=en&v=1312084157.4' saved [15750/15750]

Note that the backend isn't sending a gzipped response to Nginx:

wget --header="Host:www.chickensmoothie.com" -d "http://localhost:8080/Forum/style.php?id=9&lang=en&v=1312084157"

---request begin---
GET /Forum/style.php?id=9&lang=en&v=1312084157 HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: www.chickensmoothie.com
Connection: Keep-Alive

---response begin---
HTTP/1.1 200 OK
Date: Mon, 01 Aug 2011 00:32:28 GMT
Server: Apache/2.2.17 (Ubuntu)
X-Powered-By: PHP/5.3.5-1ubuntu7.2
X-Accel-Expires: 600
Expires: Wed, 09 Nov 2011 00:32:28 GMT
Last-Modified: Sun, 31 Jul 2011 03:49:17 GMT
Vary: Accept-Encoding
Connection: close
Content-Type: text/css; charset=UTF-8

---response end---
200 OK
Length: unspecified [text/css]
Saving to: `style.php?id=9&lang=en&v=1312084157.5'

2011-08-01 00:32:28 (377 MB/s) - `style.php?id=9&lang=en&v=1312084157.5' saved [78125]

Here are my config details:

nginx: nginx version: nginx/1.0.4

gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_buffers 16 4k;
gzip_types text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_vary on;
gzip_http_version 1.1;

proxy_temp_path /caches/proxy_temp;
proxy_cache_path /caches/nginx levels=1:2 keys_zone=one:50m inactive=3d max_size=10g;

server {
    listen 80 default;
    server_name _;

    index index.php index.html index.htm;

    location /Forum/style.php {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 600;
        proxy_intercept_errors on;

        proxy_cache one;
        proxy_cache_key $host$request_uri;
        proxy_cache_valid 200 301 302 2000m;
        proxy_cache_use_stale error timeout invalid_header updating;
    }
}

I've disabled the proxy_cache for the moment, which seems to fix this
behaviour (HTTP/1.0 clients get a plain response, HTTP/1.1 clients who
send an Accept-Encoding: gzip get a gzipped response).

Cheers,
Nicholas Sherlock
From nginx-forum at nginx.us  Mon Aug  1 04:31:20 2011
From: nginx-forum at nginx.us (sahyagiri)
Date: Mon, 01 Aug 2011 00:31:20 -0400
Subject: Starting a Thread when Nginx starts
Message-ID: <9e7d0ecb988f0a1d24d2d6a1a23665ae.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am new to Nginx. I have gone through the evanmiller blog and nginxguts to
write my own module, and wrote a hello module which is working fine. I then
extended the hello module so that its HTTP handler could start a thread,
using the pthread library for this.

Now I want to start a thread of my own along with nginx, not from an HTTP
handler.

1. What modification is required to make it?
2. How do I stop the thread when nginx is exiting?
3. Is there a different way of doing this, other than pthread, for a
   background activity?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,213156,213156#msg-213156
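For reference on the question above, one common approach (a sketch only, not an answer from the thread) is to register init_process and exit_process hooks in ngx_module_t: they run in every worker process right after it is forked and as it shuts down, which is where a background pthread can be started and stopped. The names my_thread, ngx_http_hello_module_ctx and ngx_http_hello_commands below are placeholders standing in for the hello module skeleton the poster already has, and -lpthread would need to be added to the module's config file.

/*
 * Sketch: per-worker background pthread via module hooks.
 * ngx_http_hello_module_ctx / ngx_http_hello_commands are assumed to
 * come from the existing hello module.
 */

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <pthread.h>
#include <unistd.h>

static pthread_t     my_thread;
static volatile int  my_thread_quit;

static void *
my_thread_run(void *arg)
{
    while (!my_thread_quit) {
        /* background activity goes here */
        sleep(1);
    }

    return NULL;
}

/* called in every worker process right after it is forked */
static ngx_int_t
my_init_process(ngx_cycle_t *cycle)
{
    my_thread_quit = 0;

    if (pthread_create(&my_thread, NULL, my_thread_run, NULL) != 0) {
        ngx_log_error(NGX_LOG_ERR, cycle->log, 0, "pthread_create() failed");
        return NGX_ERROR;
    }

    return NGX_OK;
}

/* called when the worker process is shutting down */
static void
my_exit_process(ngx_cycle_t *cycle)
{
    my_thread_quit = 1;
    pthread_join(my_thread, NULL);
}

ngx_module_t ngx_http_hello_module = {
    NGX_MODULE_V1,
    &ngx_http_hello_module_ctx,   /* module context (from the hello module) */
    ngx_http_hello_commands,      /* module directives (from the hello module) */
    NGX_HTTP_MODULE,              /* module type */
    NULL,                         /* init master */
    NULL,                         /* init module */
    my_init_process,              /* init process */
    NULL,                         /* init thread */
    NULL,                         /* exit thread */
    my_exit_process,              /* exit process */
    NULL,                         /* exit master */
    NGX_MODULE_V1_PADDING
};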
From mat999 at gmail.com  Mon Aug  1 05:19:37 2011
From: mat999 at gmail.com (SplitIce)
Date: Mon, 1 Aug 2011 15:19:37 +1000
Subject: Gzipped content being sent to HTTP/1.0 clients?
In-Reply-To: <4E3612AA.9070409@gmail.com>
References: <4E3612AA.9070409@gmail.com>
Message-ID:

The proxy cache key needs to include the Accept-Encoding header, as PHP
does the gzipping for phpBB3.

On Mon, Aug 1, 2011 at 12:42 PM, Nicholas Sherlock wrote:
> I'm using gzip and proxy_cache together, proxying to an Apache backend.
> Some of my clients are complaining that they are getting gzipped content
> which their browser is displaying without un-gzipping it [...]

--
Warez Scene Free Rapidshare Downloads
From igor at sysoev.ru  Mon Aug  1 05:47:20 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Mon, 1 Aug 2011 09:47:20 +0400
Subject: Gzipped content being sent to HTTP/1.0 clients?
In-Reply-To: <4E3612AA.9070409@gmail.com>
References: <4E3612AA.9070409@gmail.com>
Message-ID: <20110801054720.GD56821@sysoev.ru>

On Mon, Aug 01, 2011 at 02:42:50PM +1200, Nicholas Sherlock wrote:
> I noticed that HTTP/1.0 clients are getting served gzipped content even
> though gzip_http_version is set to 1.1. That should never happen, right?
> I guess it is because a 1.1 client requested it first, and it got cached?
> [...]
> I've disabled the proxy_cache for the moment, which seems to fix this
> behaviour (HTTP/1.0 clients get a plain response, HTTP/1.1 clients who
> send an Accept-Encoding: gzip get a gzipped response).

Add

proxy_set_header Accept-Encoding "";

to disable gzipping on backend.

or add a gzip flag in cache key:

map $http_accept_encoding $gzip {
    default "";
    ~gzip " gzip";
}

proxy_cache_key "$host$request_uri$gzip";

--
Igor Sysoev
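As a sketch of how the second suggestion slots into the configuration from the original post (untested; the map block has to sit at the http level, outside any server block):

map $http_accept_encoding $gzip {
    default "";
    ~gzip " gzip";
}

server {
    listen 80 default;

    location /Forum/style.php {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;

        proxy_cache one;
        # gzip-capable and plain clients now get separate cache entries,
        # so an HTTP/1.0 client can no longer be handed a gzipped copy
        proxy_cache_key "$host$request_uri$gzip";
        proxy_cache_valid 200 301 302 2000m;
    }
}

The first suggestion (blanking Accept-Encoding towards the backend) avoids the duplicate cache entries instead, at the cost of nginx doing all the compression itself.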
From n.sherlock at gmail.com  Mon Aug  1 09:44:48 2011
From: n.sherlock at gmail.com (Nicholas Sherlock)
Date: Mon, 01 Aug 2011 21:44:48 +1200
Subject: Gzipped content being sent to HTTP/1.0 clients?
Message-ID: <4E367590.7080102@gmail.com>

On 1/08/2011 5:47 p.m., Igor Sysoev wrote:
> Add
>     proxy_set_header Accept-Encoding "";
> to disable gzipping on backend.
>
> or add a gzip flag in cache key:
>
>     map $http_accept_encoding $gzip {
>         default "";
>         ~gzip " gzip";
>     }
>
>     proxy_cache_key "$host$request_uri$gzip";

Thanks for the pointers! It turns out that my backend was doing gzipping
without me realizing it (despite it being turned off in the phpBB settings,
Apache's mod_deflate was configured to compress pages anyway). I've turned
off all gzip in my backend and now it works correctly.

Cheers,
Nicholas Sherlock
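For anyone hitting the same thing: on a stock Ubuntu/Debian Apache the backend compression described above usually comes from mod_deflate, and (as a sketch, not from the thread) it can be taken out of the picture either globally or per location:

# disable the module entirely (Debian/Ubuntu):
#   a2dismod deflate && /etc/init.d/apache2 restart

# or keep it loaded but exempt the proxied/cached resources, e.g. in the vhost:
<Location /Forum/style.php>
    SetEnv no-gzip 1
</Location>

mod_deflate skips responses where the no-gzip environment variable is set, so only the cached resource stays uncompressed on the backend.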
From nginx-forum at nginx.us  Mon Aug  1 11:15:25 2011
From: nginx-forum at nginx.us (gidrobaton)
Date: Mon, 01 Aug 2011 07:15:25 -0400
Subject: healthcheck do not working
In-Reply-To:
References:
Message-ID:

> Hi,
> You should compile the upstream hash module which cep12 patched. See:
> https://github.com/cep21/nginx_upstream_hash/tree/support_http_healthchecks

nginx-1.0.5 was compiled with both patches:

root at true:/tmp/nginx-1.0.5# patch -p0 < ../nginx_upstream_hash-0.3.1/nginx.patch
patching file src/http/ngx_http_upstream.h
Hunk #1 succeeded at 106 (offset 1 line).
root at true:/tmp/nginx-1.0.5# patch -p1 < ../healthcheck_nginx_upstreams/nginx.patch
patching file src/http/ngx_http_upstream.c
patching file src/http/ngx_http_upstream.h
Hunk #1 succeeded at 110 with fuzz 2 (offset 4 lines).
patching file src/http/ngx_http_upstream_round_robin.c
patching file src/http/ngx_http_upstream_round_robin.h
root at true:/tmp/nginx-1.0.5# ./configure --add-module=/tmp/nginx_upstream_hash-0.3.1/ --add-module=/tmp/healthcheck_nginx_upstreams/ --with-debug
root at true:/tmp/nginx-1.0.5# make -j5 && make install

But if I use the "hash" option, error.log stays empty after an nginx reload,
even when a "server" entry points to a closed port.

error_log /usr/local/nginx/logs/error.log;

upstream backend {
    #ip_hash;
    server 172.16.0.130:181;
    server 172.16.0.130:182;
    server 172.16.0.130:122;
    hash $remote_addr;
    hash_again 0;
    healthcheck_enabled;
    healthcheck_delay 5000;
    healthcheck_timeout 1500;
    healthcheck_failcount 1;
    healthcheck_send "GET /PingAction.do HTTP/1.0" 'Host: ivis0';
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,191445,213176#msg-213176

From fla_torres at yahoo.com.br  Mon Aug  1 12:21:16 2011
From: fla_torres at yahoo.com.br (Flavio Torres)
Date: Mon, 01 Aug 2011 09:21:16 -0300
Subject: Sync Multiple nginx Configs
In-Reply-To:
References: <060EDFC2221B014B9504AC06B747F58A49A6ECA2BB@ex01.alentus.lan> <4E33D108.6040906@gmail.com>
Message-ID: <4E369A3C.9030204@yahoo.com.br>

On 07/30/2011 03:58 PM, Martin Loy wrote:
> config over a git repo and a cron every N minutes :)

Or try http://www.puppetlabs.com/ and sync all your datacenter confs :)

From nginx-forum at nginx.us  Mon Aug  1 13:46:53 2011
From: nginx-forum at nginx.us (key)
Date: Mon, 01 Aug 2011 09:46:53 -0400
Subject: Make fail, when --with-http_ssl_module
Message-ID: <787bab71d84b94b751b35f1ff09f2ea6.NginxMailingListEnglish@forum.nginx.org>

System:
Fedora 15
2.6.38.6-27.fc15.i686 #1 SMP Sun May 15 17:57:13 UTC 2011 i686 i686 i386 GNU/Linux

OpenSSL:
OpenSSL 1.0.0d-fips 8 Feb 2011
(yum install openssl openssl-devel)

Nginx ver 1.0.5

configure:

./configure \
  --user=daemon \
  --group=daemon \
  --with-http_ssl_module \

check ok.....

make error:

src/event/ngx_event_openssl.c: In function 'ngx_ssl_get_cached_session':
src/event/ngx_event_openssl.c:1690:31: error: variable 'c' set but not used [-Werror=unused-but-set-variable]
cc1: all warnings being treated as errors
make[1]: *** [objs/src/event/ngx_event_openssl.o] Error 1
make[1]: Leaving directory `/root/downloads/nginx-1.0.5'
make: *** [build] Error 2

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,213180,213180#msg-213180

From jcdmacleod at gmail.com  Mon Aug  1 13:48:13 2011
From: jcdmacleod at gmail.com (John Macleod)
Date: Mon, 1 Aug 2011 15:48:13 +0200
Subject: Custom HTTP Header
Message-ID:

Is it possible to add a custom http header that identifies the nginx
server that handled the connection?

I have noticed a couple of CDNs do this; one adds a CF1: xxXX:xxXX
location/node descriptor that shows up with curl -I.

Or is there another way to do this?

Thanks!

John
From mdounin at mdounin.ru  Mon Aug  1 13:50:15 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Aug 2011 17:50:15 +0400
Subject: Make fail, when --with-http_ssl_module
In-Reply-To: <787bab71d84b94b751b35f1ff09f2ea6.NginxMailingListEnglish@forum.nginx.org>
References: <787bab71d84b94b751b35f1ff09f2ea6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20110801135015.GC1137@mdounin.ru>

Hello!

On Mon, Aug 01, 2011 at 09:46:53AM -0400, key wrote:
> src/event/ngx_event_openssl.c:1690:31: error: variable 'c' set but
> not used [-Werror=unused-but-set-variable]
> cc1: all warnings being treated as errors
> [...]

This should be fixed in upcoming 1.0.6. For now, try

./configure --with-cc-opt="-Wno-error=unused-but-set-variable" ...

Maxim Dounin

From mdounin at mdounin.ru  Mon Aug  1 13:51:23 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Aug 2011 17:51:23 +0400
Subject: Custom HTTP Header
In-Reply-To:
References:
Message-ID: <20110801135123.GD1137@mdounin.ru>

Hello!

On Mon, Aug 01, 2011 at 03:48:13PM +0200, John Macleod wrote:
> Is it possible to add a custom http header that identifies the
> nginx server that handled the connection?
>
> I have noticed a couple of CDNs do this; one adds a CF1: xxXX:xxXX
> location/node descriptor that shows up with curl -I.
>
> Or is there another way to do this?

http://wiki.nginx.org/HttpHeadersModule#add_header

Maxim Dounin

From jcdmacleod at gmail.com  Mon Aug  1 13:53:44 2011
From: jcdmacleod at gmail.com (John Macleod)
Date: Mon, 1 Aug 2011 15:53:44 +0200
Subject: Log Parsing - Near Real Time
Message-ID: <3DF80667-88D8-44A9-A030-2179822E52A5@gmail.com>

I'm looking for a near real-time script to parse log files and insert
interesting data into a db.

Does anyone know of an existing script to do this?

John

From nginx-forum at nginx.us  Mon Aug  1 13:59:10 2011
From: nginx-forum at nginx.us (key)
Date: Mon, 01 Aug 2011 09:59:10 -0400
Subject: Make fail, when --with-http_ssl_module
In-Reply-To: <787bab71d84b94b751b35f1ff09f2ea6.NginxMailingListEnglish@forum.nginx.org>
References: <787bab71d84b94b751b35f1ff09f2ea6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <603f9d49b00451382792ea4fee1f291e.NginxMailingListEnglish@forum.nginx.org>

Thank you for your help, I will try it.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,213180,213186#msg-213186

From benlancaster at holler.co.uk  Mon Aug  1 14:04:00 2011
From: benlancaster at holler.co.uk (Ben Lancaster)
Date: Mon, 1 Aug 2011 15:04:00 +0100
Subject: Custom HTTP Header
In-Reply-To: <20110801135123.GD1137@mdounin.ru>
References: <20110801135123.GD1137@mdounin.ru>
Message-ID:

> On Mon, Aug 01, 2011 at 03:48:13PM +0200, John Macleod wrote:
>> Is it possible to add a custom http header that identifies the
>> nginx server that handled the connection?
>
> http://wiki.nginx.org/HttpHeadersModule#add_header

...and then you can use the variables from
http://wiki.nginx.org/HttpUpstreamModule, e.g.

add_header X-Backend $upstream_addr;
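Putting the two replies above together, a sketch of a location that labels both the nginx host that answered and the upstream that produced the response (the header names are arbitrary, and $hostname is only available on reasonably recent nginx versions):

location / {
    proxy_pass http://127.0.0.1:8080;

    # which nginx server handled the connection
    add_header X-Served-By $hostname;

    # which backend the request was proxied to
    add_header X-Backend $upstream_addr;
}

Both headers then show up with curl -I, the same way the CDN headers in the original question do.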
From mat999 at gmail.com  Mon Aug  1 14:23:49 2011
From: mat999 at gmail.com (SplitIce)
Date: Tue, 2 Aug 2011 00:23:49 +1000
Subject: Log Parsing - Near Real Time
In-Reply-To: <3DF80667-88D8-44A9-A030-2179822E52A5@gmail.com>
References: <3DF80667-88D8-44A9-A030-2179822E52A5@gmail.com>
Message-ID:

I believe the only way to do this is to write a custom script that reads
the log files and then issues a reload signal to nginx (something like
logrotate).

If anyone does know of a script I'd be interested to learn too, as I do
this on my servers using a hacked-together PHP script and logrotate (on
crontab every minute).

On Mon, Aug 1, 2011 at 11:53 PM, John Macleod wrote:
> I'm looking for a near real-time script to parse log files and insert
> interesting data into a db.
>
> Does anyone know of an existing script to do this?

From r at roze.lv  Mon Aug  1 14:39:00 2011
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 1 Aug 2011 17:39:00 +0300
Subject: Log Parsing - Near Real Time
In-Reply-To: <3DF80667-88D8-44A9-A030-2179822E52A5@gmail.com>
References: <3DF80667-88D8-44A9-A030-2179822E52A5@gmail.com>
Message-ID: <12FC4EF867F4459D8B0BAB596DA076F2@DD21>

> I'm looking for a near real-time script to parse log files and insert
> interesting data into a db.
> Does anyone know of an existing script to do this?

You can check/try http://www.splunk.com

rr

From randy.j.parker at gmail.com  Mon Aug  1 14:57:15 2011
From: randy.j.parker at gmail.com (Randy Parker)
Date: Mon, 1 Aug 2011 10:57:15 -0400
Subject: Log Parsing - Near Real Time
In-Reply-To: <12FC4EF867F4459D8B0BAB596DA076F2@DD21>
References: <3DF80667-88D8-44A9-A030-2179822E52A5@gmail.com> <12FC4EF867F4459D8B0BAB596DA076F2@DD21>
Message-ID:

My app has a request that opens the log file, fseeks to the end, backs up
as many bytes as it takes to get to the size the log file was on the last
similar request by that user, and runs a regex over the novel part to get
interesting metrics before closing the file.

Since this happens less than once per minute, I have not done anything
fancy to optimize.

- Randy

--
http://mobiledyne.com
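A sketch of the approach Randy describes (and roughly what the cron-driven PHP script mentioned earlier in the thread does): remember the offset reached on the previous run, read only the new portion of the log, and regex it into a database. The file paths, table layout and the assumption of the default combined log format are all made up for illustration.

<?php
// Sketch: incremental access-log import, intended to run from cron every minute.
$log   = '/var/log/nginx/access.log';
$state = '/var/tmp/access.log.offset';

$db = new PDO('sqlite:/var/tmp/hits.sqlite');
$db->exec('CREATE TABLE IF NOT EXISTS hits (ts TEXT, status INT, uri TEXT)');

$offset = is_file($state) ? (int)file_get_contents($state) : 0;
$size   = filesize($log);
if ($size < $offset) {          // log was rotated or truncated, start over
    $offset = 0;
}

$fh = fopen($log, 'r');
fseek($fh, $offset);            // skip everything already processed

$stmt = $db->prepare('INSERT INTO hits VALUES (?, ?, ?)');
while (($line = fgets($fh)) !== false) {
    // assumes the default "combined" log format
    if (preg_match('/\[([^\]]+)\] "(?:\S+) (\S+) [^"]*" (\d{3})/', $line, $m)) {
        $stmt->execute(array($m[1], (int)$m[3], $m[2]));
    }
}

file_put_contents($state, ftell($fh));
fclose($fh);

Run this way it stays close to real time without needing nginx reloads or log rotation at all.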
From igor at sysoev.ru  Mon Aug  1 15:13:33 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Mon, 1 Aug 2011 19:13:33 +0400
Subject: nginx-1.1.0
Message-ID: <20110801151333.GG68887@sysoev.ru>

Changes with nginx 1.1.0                                         01 Aug 2011

    *) Feature: cache loader run time decrease.

    *) Feature: "loader_files", "loader_sleep", and "loader_threshold"
       options of the "proxy/fastcgi/scgi/uwsgi_cache_path" directives.

    *) Feature: loading time decrease of configuration with large number of
       HTTPS sites.

    *) Feature: now nginx supports ECDHE key exchange ciphers.
       Thanks to Adrian Kotelba.

    *) Feature: the "lingering_close" directive.
       Thanks to Maxim Dounin.

    *) Bugfix: in closing connection for pipelined requests.
       Thanks to Maxim Dounin.

    *) Bugfix: nginx did not disable gzipping if client sent "gzip;q=0" in
       "Accept-Encoding" request header line.

    *) Bugfix: in timeout in unbuffered proxied mode.
       Thanks to Maxim Dounin.

    *) Bugfix: memory leaks when a "proxy_pass" directive contains
       variables and proxies to an HTTPS backend.
       Thanks to Maxim Dounin.

    *) Bugfix: in parameter validation of a "proxy_pass" directive with
       variables.
       Thanks to Lanshun Zhou.

    *) Bugfix: SSL did not work on QNX.
       Thanks to Maxim Dounin.

    *) Bugfix: SSL modules could not be built by gcc 4.6 without
       --with-debug option.

--
Igor Sysoev

From mat999 at gmail.com  Mon Aug  1 15:19:43 2011
From: mat999 at gmail.com (SplitIce)
Date: Tue, 2 Aug 2011 01:19:43 +1000
Subject: nginx-1.1.0
In-Reply-To: <20110801151333.GG68887@sysoev.ru>
References: <20110801151333.GG68887@sysoev.ru>
Message-ID:

Great work Igor and all contributors. Looking forward to using all the new
features in the 1.1.x branch. :)

On Tue, Aug 2, 2011 at 1:13 AM, Igor Sysoev wrote:
> Changes with nginx 1.1.0                                     01 Aug 2011
> [...]

From nginx-forum at nginx.us  Mon Aug  1 15:35:44 2011
From: nginx-forum at nginx.us (Ensiferous)
Date: Mon, 01 Aug 2011 11:35:44 -0400
Subject: nginx-1.1.0
In-Reply-To: <20110801151333.GG68887@sysoev.ru>
References: <20110801151333.GG68887@sysoev.ru>
Message-ID:

Congratulations on the release Igor!

Could you please provide a brief description of the "loader_files",
"loader_sleep", and "loader_threshold" directives so that they can be
documented in the wiki?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,213199,213201#msg-213201

From Brian.Akins at turner.com  Mon Aug  1 15:45:47 2011
From: Brian.Akins at turner.com (Akins, Brian)
Date: Mon, 01 Aug 2011 11:45:47 -0400
Subject: Gzipped content being sent to HTTP/1.0 clients?
In-Reply-To:
Message-ID:

On 8/1/11 1:19 AM, "SplitIce" wrote:
> proxy cache key needs to include accept encoding as php does the gzip for
> phpbb3

We always strip off the Accept-Encoding header whenever we proxy or use
fastcgi. This keeps all the config/logic in nginx.

--
Brian Akins
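For reference, that stripping is usually spelled like this (a sketch; the FastCGI form relies on an explicitly set parameter overriding the one nginx builds from the request header):

# proxied backends: never advertise gzip support upstream
proxy_set_header Accept-Encoding "";

# FastCGI backends: blank the auto-generated param instead
fastcgi_param HTTP_ACCEPT_ENCODING "";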
From Brian.Akins at turner.com  Mon Aug  1 15:46:53 2011
From: Brian.Akins at turner.com (Akins, Brian)
Date: Mon, 01 Aug 2011 11:46:53 -0400
Subject: Sync Multiple nginx Configs
In-Reply-To: <4E369A3C.9030204@yahoo.com.br>
Message-ID:

On 8/1/11 8:21 AM, "Flavio Torres" wrote:
> Or try http://www.puppetlabs.com/ and sync all your datacenter confs :)

Or chef ;) http://wiki.opscode.com/display/chef/Home

--
Brian Akins

From Brian.Akins at turner.com  Mon Aug  1 15:48:19 2011
From: Brian.Akins at turner.com (Akins, Brian)
Date: Mon, 01 Aug 2011 11:48:19 -0400
Subject: Log Parsing - Near Real Time
In-Reply-To:
Message-ID:

You could use something like syslog-ng that can "tail" the log file and
run the lines through a script.

--
Brian Akins

From mdounin at mdounin.ru  Mon Aug  1 15:55:03 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Aug 2011 19:55:03 +0400
Subject: nginx-1.1.0
In-Reply-To:
References: <20110801151333.GG68887@sysoev.ru>
Message-ID: <20110801155503.GF1137@mdounin.ru>

Hello!

On Mon, Aug 01, 2011 at 11:35:44AM -0400, Ensiferous wrote:
> Congratulations on the release Igor!
>
> Could you please provide a brief description of the "loader_files",
> "loader_sleep", and "loader_threshold" directives so that they can be
> documented in the wiki?

These parameters are used to control cache loader IO (notably, to keep it
low enough to allow other work to be done).

loader_files=  specifies the number of files scanned by the cache loader per iteration
loader_sleep=