From nginx-forum at nginx.us Thu May 1 06:07:49 2014 From: nginx-forum at nginx.us (nrahl) Date: Thu, 01 May 2014 02:07:49 -0400 Subject: Wordpress Multi-Site Converting Apache to Nginx In-Reply-To: References: Message-ID: <7d0aab03c35ba7948d2331717ac4a56a.NginxMailingListEnglish@forum.nginx.org> Ok, at this point I have removed everything from the config just to try and get the most basic thing working. This is the entire config now: location ^~ /wordpress/ { fastcgi_pass unix:/var/run/php5-fpm.sock; } location / { return 403; } That's all the location blocks. What happens: 1. Going to any page that does not start with /wordpress/ produces a 403. This is correct according to my understating of the config. 2. Going to any url starting with /wordpress/ like /wordpress/wp-admin/ or even just /wordpress/ itself, produces a blank page. > Is your blank page a http 200 with no content, or a http 200 with some > content that the browser shows as blank, or some other http response? The blank page is a response code 200, with proper headers, but no body HTML at all. View page source is empty. > curl -v http://whatever/wp-admin/ Here is the output: MyDomain is used in place of real domain. * Adding handle: conn: 0x18f5b00 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x18f5b00) send_pipe: 1, recv_pipe: 0 * About to connect() to www.MyDomain.com port 443 (#0) * Trying xx.xxx.xxx.xxx... * Connected to www.MyDomain.com (xx.xxx.xxx.xxx) port 443 (#0) > GET /wordpress/ HTTP/1.1 > User-Agent: curl/7.32.0 > Host: www.MyDomain.com > Accept: */* > < HTTP/1.1 200 OK * Server nginx is not blacklisted < Server: nginx < Date: Thu, 01 May 2014 05:55:32 GMT < Content-Type: text/html; charset=UTF-8 < Transfer-Encoding: chunked < Connection: keep-alive < X-Powered-By: PHP/5.5.9-1ubuntu4 < * Connection #0 to host www.domain.com left intact > The logs will show which location is used. Can you see which > file-on-the-filesystem is returned? For the request /wordpress/ with above simple config, it matches the /wordpress/ location and passes it to fastcgi: the log says: http upstream request: "/wordpress/?" then: http fastcgi record length: 61 which seems a bit short. So PHP is returning nothing? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249743,249779#msg-249779 From steve at greengecko.co.nz Thu May 1 06:26:22 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 01 May 2014 18:26:22 +1200 Subject: Wordpress Multi-Site Converting Apache to Nginx In-Reply-To: <7d0aab03c35ba7948d2331717ac4a56a.NginxMailingListEnglish@forum.nginx.org> References: <7d0aab03c35ba7948d2331717ac4a56a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1398925582.24481.366.camel@steve-new> Hi, On Thu, 2014-05-01 at 02:07 -0400, nrahl wrote: > Ok, at this point I have removed everything from the config just to try and > get the most basic thing working. > > This is the entire config now: > > location ^~ /wordpress/ { > fastcgi_pass unix:/var/run/php5-fpm.sock; > } > > location / { > return 403; > } > > That's all the location blocks. > > What happens: > 1. Going to any page that does not start with /wordpress/ produces a 403. > This is correct according to my understating of the config. > > 2. Going to any url starting with /wordpress/ like /wordpress/wp-admin/ or > even just /wordpress/ itself, produces a blank page. 
> > > Is your blank page a http 200 with no content, or a http 200 with some > > content that the browser shows as blank, or some other http response? > > The blank page is a response code 200, with proper headers, but no body HTML > at all. View page source is empty. > > > curl -v http://whatever/wp-admin/ > > Here is the output: > > MyDomain is used in place of real domain. > > * Adding handle: conn: 0x18f5b00 > * Adding handle: send: 0 > * Adding handle: recv: 0 > * Curl_addHandleToPipeline: length: 1 > * - Conn 0 (0x18f5b00) send_pipe: 1, recv_pipe: 0 > * About to connect() to www.MyDomain.com port 443 (#0) > * Trying xx.xxx.xxx.xxx... > * Connected to www.MyDomain.com (xx.xxx.xxx.xxx) port 443 (#0) > > GET /wordpress/ HTTP/1.1 > > User-Agent: curl/7.32.0 > > Host: www.MyDomain.com > > Accept: */* > > > < HTTP/1.1 200 OK > * Server nginx is not blacklisted > < Server: nginx > < Date: Thu, 01 May 2014 05:55:32 GMT > < Content-Type: text/html; charset=UTF-8 > < Transfer-Encoding: chunked > < Connection: keep-alive > < X-Powered-By: PHP/5.5.9-1ubuntu4 > < > * Connection #0 to host www.domain.com left intact > > > > > The logs will show which location is used. Can you see which > > file-on-the-filesystem is returned? > > For the request /wordpress/ with above simple config, it matches the > /wordpress/ location and passes it to fastcgi: > the log says: http upstream request: "/wordpress/?" > then: http fastcgi record length: 61 > which seems a bit short. So PHP is returning nothing? > With wordpress MU setups, you need to manually set up the blog id ( well, at least I have so far )... here's some extracts from one of my site configs: map $http_host $blogid { default 0; example.com.au 2; example.com.tw 3; } location ~ ^/files/(.*)$ { try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1 ; } and location ^~ /blogs.dir { internal; alias /www/example.com/public_html/wp-content/blogs.dir ; expires max; } If that still doesn't work, can you check that you've got everything connected correctly by delivering a quick file to your browser? There are example configs on the net - wordpress offers one for certain, and googling for wordpress, mu and nginx deliver a plethora of howtos. Why not begin with one of them and try to understand it, rather than fighting to reinvent the wheel? Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Thu May 1 06:48:22 2014 From: nginx-forum at nginx.us (premedge) Date: Thu, 01 May 2014 02:48:22 -0400 Subject: nginx reverse proxy hangs In-Reply-To: <20140429161850.GW34696@mdounin.ru> References: <20140429161850.GW34696@mdounin.ru> Message-ID: <009a5bc0260a2395556f2e66b2190e3b.NginxMailingListEnglish@forum.nginx.org> Hi, We are also facing the same issue ,have any one faced the same problem of getting blank pages with 200 status code when using nginx as reverse proxy to jboss 7 ? any help/suggestion on this are very much appreciated. 
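For comparison, a bare-bones proxy block of the kind usually put in front of JBoss looks something like the sketch below; the address, port and header choices are illustrative assumptions, not taken from this thread:

    location / {
        proxy_pass         http://127.0.0.1:8080;   # JBoss HTTP connector (example address)
        proxy_http_version 1.1;
        proxy_set_header   Connection "";           # clear the hop-by-hop Connection header toward the backend
        proxy_set_header   Host       $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    }

If even a minimal form like this still returns the empty 200, the upstream's own access log and the nginx error log are the next places to look.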
Regards, Chris Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249736,249781#msg-249781 From francis at daoine.org Thu May 1 06:55:48 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 1 May 2014 07:55:48 +0100 Subject: Wordpress Multi-Site Converting Apache to Nginx In-Reply-To: <7d0aab03c35ba7948d2331717ac4a56a.NginxMailingListEnglish@forum.nginx.org> References: <7d0aab03c35ba7948d2331717ac4a56a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140501065548.GT16942@daoine.org> On Thu, May 01, 2014 at 02:07:49AM -0400, nrahl wrote: Hi there, > This is the entire config now: > > location ^~ /wordpress/ { > fastcgi_pass unix:/var/run/php5-fpm.sock; > } That says "talk fastcgi to that socket", but nothing useful is sent on that connection because you have no fastcgi_param directives there. (And no indication that any are inherited from the surrounding context.) So php will not be asked to process any particular file. You'll probably want SCRIPT_FILENAME, and possibly some more params, to get any kind of useful output. The details depend on your fastcgi server, but the nginx fastcgi.conf is usually a good starting point. > location / { > return 403; > } > > That's all the location blocks. > > What happens: > 1. Going to any page that does not start with /wordpress/ produces a 403. > This is correct according to my understating of the config. Yes, that's what the config asks for. > 2. Going to any url starting with /wordpress/ like /wordpress/wp-admin/ or > even just /wordpress/ itself, produces a blank page. Yes, that's what the config asks for. Strictly, it provides "whatever the fastcgi server returns"; but since that is "nothing", that's what you get. The fastcgi server logs may have more details. > < X-Powered-By: PHP/5.5.9-1ubuntu4 So, nginx sent the request to the fastcgi server. That's good. > > The logs will show which location is used. Can you see which > > file-on-the-filesystem is returned? > > For the request /wordpress/ with above simple config, it matches the > /wordpress/ location and passes it to fastcgi: That's true of this config and test. The question was about the raw wp-admin.php content that you reported with a previous config. But it sounds like you're progressing with getting the config to do what you want, so that's good. > the log says: http upstream request: "/wordpress/?" > then: http fastcgi record length: 61 > which seems a bit short. So PHP is returning nothing? Correct. PHP processes the named input file, which you haven't set, so it processes nothing and returns the output. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu May 1 07:02:01 2014 From: nginx-forum at nginx.us (nrahl) Date: Thu, 01 May 2014 03:02:01 -0400 Subject: Wordpress Multi-Site Converting Apache to Nginx In-Reply-To: <1398925582.24481.366.camel@steve-new> References: <1398925582.24481.366.camel@steve-new> Message-ID: > With wordpress MU setups, you need to manually set up the blog id > ( well, at least I have so far )... here's some extracts from one of > my > site configs: > > location ^~ /blogs.dir { Thanks for sharing your config. What version of WordPress are you running? Mine doesn't have a blogs.dir directory. I think they did away with that in 3.5. My WPMU setup was working fine without that dir on Apache, so it must not be needed in my version. > If that still doesn't work, can you check that you've got everything > connected correctly by delivering a quick file to > your browser? 
I can make PHP scripts run when I disable the wordpress location blocks and use my CMS's location blocks. The PHP scripts run fine, so PHP and PHP-FPM are running OK. > There are example configs on the net - wordpress offers one for > certain, and googling for wordpress, mu and nginx deliver a plethora of howtos. > Why not begin with one of them and try to understand it, rather than > fighting to reinvent the wheel? > I actually did quite a lot of reading before posting here. There are several reasons the configs out there don't work. 1. WordPress Multi-Site got a major overhaul in 3.5 and the tutorials out there, including Wordpress's own site are for the old version. I'm running 3.9, which uses the new 3.5+ way of detecting blogs. 2. My CMS uses /*/ and /*/*/ (where star is any URL char) type rules to grab everything that isn't otherwise defined explicitly, and this is creating problems with any smart wordpress configs. In order for this to work, I need to hard-code the wordpress blog URLs at a higher priority than the patterns. ie. if /myblog/ or /otherblog/ pass to wordpress, if /*AnythingElse*/ pass to CMS. 3. The CMS, not the wordpress master blog, is at the site root. 4. The actual wordpress files live in a folder /wordpress/ which is not meant to be accessed directly. wordpress expects the blogs to be in subdirectories, such as /wordpress/someblog/, but really, we want them to appear off the root like /someblog/ so we have to trick wordpress using rewrite rules. This entire configuration was 100% functional using Apache2. I'm assuming that Nginx can emulate any behaviour Apache can, but maybe there's some configs that it can't support? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249743,249783#msg-249783 From ml-nginx at zu-con.org Thu May 1 09:02:53 2014 From: ml-nginx at zu-con.org (Matthias Rieber) Date: Thu, 1 May 2014 11:02:53 +0200 (CEST) Subject: ssl_prefer_server_ciphers vs. Android Message-ID: Hi, I've configured ssl with the following options: ssl_dhparam /etc/nginx/pem/dhparam2048.pem; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK'; ssl_prefer_server_ciphers on; more_set_headers "Strict-Transport-Security: max-age=31536000"; spdy_headers_comp 3; While ssl_prefer_server_ciphers usually works I've noticed some strange behaviour with Android. Firefox Sync uses with this settings "TLSv1 RC4-SHA". When I remove all RC4 ciphers from that list, it chooses "TLSv1 DHE-RSA-AES128-SHA". I'm wondering why it chooses RC4-SHA instead of DHE-RSA-AES128-SHA since it should have a higher priority. Matthias From nginx-forum at nginx.us Thu May 1 22:27:19 2014 From: nginx-forum at nginx.us (lovekmla) Date: Thu, 01 May 2014 18:27:19 -0400 Subject: Cache Hit Latency for large responses. 
Message-ID: Context: I am currently using nginx to serve as a Response Cache Proxy in order to shim-out (isolate) network latency while running some performance related tests. The Perf Tests are comprised of 50 repetitions of the same set of requests. => I have been able to successfully set-up the proxy_cache so that it would Cache the requests. (I'm using the request_uri and request_body as the cache_key, since I need to Cache POST Requests as well) (Our POST Requests are not necessarily Write Requests since our API is using the POST Params for cases where we need to specify longer Params that exceed the limit on GET) => I am logging the Cache HIT Status and Request_time in the access_log and basically, I am seeing a variance between 500 ms ~ 2 seconds for the Same Request being Served from the Cache even when it's both a HIT. The Size of the Response is around 170K, and the request_body length is 65k. Most of the other HITS are like 0~100ms for request_time and only the ones that are made around that huge request seems to have some latency. Question: #1 I'm using proxy_cache_path. Is it possible that disk I/O is causing this latency? I thought the Filesystem Cache was able to handle these Cache HITs in memory. #2 When I set the cache_key to be a combination of the request_body and request_uri itself, when it does the lookup, would it automatically pull those and to the lookup as expected? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249791,249791#msg-249791 From luky-37 at hotmail.com Thu May 1 23:40:02 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 2 May 2014 01:40:02 +0200 Subject: ssl_prefer_server_ciphers vs. Android In-Reply-To: References: Message-ID: Hi Matthias, > While ssl_prefer_server_ciphers usually works I've noticed some strange > behaviour with Android. Firefox Sync uses with this settings "TLSv1 > RC4-SHA". When I remove all RC4 ciphers from that list, it chooses "TLSv1 > DHE-RSA-AES128-SHA". I'm wondering why it chooses RC4-SHA instead of > DHE-RSA-AES128-SHA since it should have a higher priority. Can you provide the capture file with the TLS handshake? Regards, Lukas From makailol7 at gmail.com Fri May 2 05:12:27 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Fri, 2 May 2014 10:42:27 +0530 Subject: Facing content-type issue with try_files. Message-ID: Hello, To serve static contents i.e. images I use try_files directive of Nginx. My configuration location block is as below. location ~* \.(jpg|jpeg|png|gif)$ { try_files $request_uri @missingImg; } @missingImg is named location block with proxy_* directive. The above configuration works fine if the image file name ends with jpg, gif, jpeg, png extension in disk. When image file name(stored in disk) includes query string like "example.jpg?a=123" then request to such image is being served with application/octet-stream content-type . Because of the wrong content type, image is not being displayed and browser prompt to download image. Could someone suggest me what am I doing wrong here? Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri May 2 20:07:25 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 02 May 2014 16:07:25 -0400 Subject: [ANN] Windows nginx 1.7.1.2 Snowman Message-ID: <6acc6fb397d16bfe909b8ae877a000f0.NginxMailingListEnglish@forum.nginx.org> 13:56 2-5-2014 nginx 1.7.1.2 Snowman Based on nginx 1.7.1 (30-4-2014) with; + lua-nginx-module v0.9.7 (upgraded 1-5-2014) + Openssl fix for CVE-2010-5298 + AJP tomcat backend support (https://github.com/yaoweibin/nginx_ajp_module) Note: a folder '.\nginx\ajp_temp' will be created, when running nginx jailed create it yourself and set additional rights for the service user who runs nginx to allow access + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249798,249798#msg-249798 From al-nginx at none.at Sun May 4 08:38:46 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 04 May 2014 10:38:46 +0200 Subject: NGINX 2014 survey: I know you have opinions. In-Reply-To: References: Message-ID: <74ccbf3f7e314c9cc6eb5d6f522bdffc@none.at> Dear Sarah. Due to the fact that the survey is over now, please can you tell us what the outcome was? Best regards Aleks Am 23-04-2014 18:32, schrieb Sarah Novotny: > Hello! > > As Valentin mentioned in another thread, it?s that time of year again > when we want to tune up our strategy; see how NGINX is used; what you > the community thinks of us; where we can improve our products, > communications or community; and so on. Please take a moment and fill > out this survey[1] and let us know how we can be more valuable in your > organization?s future. > > Happy spring. > > Sarah > (on behalf of the Nginx team globally.) > > > [1] https://www.surveymonkey.com/s/L5B6MVH > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From markus at gekmihesg.de Sun May 4 14:52:39 2014 From: markus at gekmihesg.de (Markus Weippert) Date: Sun, 04 May 2014 16:52:39 +0200 Subject: Problem with ECC certificates Message-ID: <53665437.3060004@gekmihesg.de> Hi, I'm having some strange issues using nginx 1.6 with ECC certs. Handshakes fail for clients using TLSv1.2 and SNI but only if the requested server block is not the default_server. The config looks like this: http { ssl_certificate ecc.crt; ssl_certificate_key ecc.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH+kEECDH+AESGCM:HIGH+kEECDH:HIGH+kEDH:HIGH:!aNULL; ssl_prefer_server_ciphers on; ssl_dhparam dhparam4096.pem; server { listen [::]:443 ssl spdy default_server ipv6only=off; server_name a.example.com; root /var/www/a; } server { listen [::]:443 ssl spdy; server_name b.example.com; root /var/www/b; } } This configuration works for: - TLSv1.0/TLSv1.1 with SNI - TLSv1.2 without SNI - TLSv1.2 with SNI, but only for a.example.com It does not for TLSv1.2 with SNI for b.example.com: # openssl s_client -connect b.example.com:443 -servername b.example.com ... 139718860113552:error:14094438:SSL routines:SSL3_READ_BYTES:tlsv1 alert internal error:s3_pkt.c:1256:SSL alert number 80 139718860113552:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: ... 
The error.log says: [crit] 21172#0: *486 SSL_do_handshake() failed (SSL: error:1409B044:SSL routines:SSL3_SEND_SERVER_KEY_EXCHANGE:internal error) while SSL handshaking, client: ::ffff:XXX.XXX.XXX.XXX, server: [::]:443 Same result when using Firefox/NSS. Everything works fine, if only the default_server uses the ECC cert: http ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH+kEECDH+AESGCM:HIGH+kEECDH:HIGH+kEDH:HIGH:!aNULL; ssl_prefer_server_ciphers on; ssl_dhparam dhparam4096.pem; server { listen [::]:443 ssl spdy default_server ipv6only=off; server_name a.example.com; root /var/www/a; ssl_certificate ecc.crt; ssl_certificate_key ecc.key; } server { listen [::]:443 ssl spdy; server_name b.example.com; root /var/www/b; ssl_certificate rsa.crt; ssl_certificate_key rsa.key; } } Am I doing something wrong or is this a bug? Regards, Markus nginx version: nginx/1.6.0 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.6.0/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.6.0/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.6.0/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.6.0/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.6.0/debian/modules/ngx_http_substitutions_filter_module From kirpit at gmail.com Sun May 4 15:42:39 2014 From: kirpit at gmail.com (kirpit) Date: Sun, 4 May 2014 18:42:39 +0300 Subject: $memcached_key doesn't fetch unicode url Message-ID: Hi, I'm doing some sort of downstream cache that I save every entire page into memcached from the application with urls such as: www.example.com/file/path/?query=1 then I'm fetching them from nginx if available with the config: location / { # try to fetch from memcached set $memcached_key "$host$request_uri"; memcached_pass localhost:11211; expires 10m; # not fetched from memcached, fallback error_page 404 405 502 = @fallback; } This works perfectly fine for latin char urls. However, it fails to catch unicode urls such as: www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1 The unicode char "%C4%B0" appears same in the nginx logs, application cache setting key (that is actually taken from raw REQUEST_URI what nginx gives). 
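One way to see the exact key nginx looks up is to log $memcached_key next to the response status; a small debugging sketch (the log name and format are arbitrary and not part of the configuration above):

    # log_format belongs in the http{} block
    log_format memkey '$remote_addr "$memcached_key" $status';
    access_log /var/log/nginx/memcached_key.log memkey;

Comparing that log against the key the application stored makes it easy to spot a byte-level difference in how the %C4%B0 sequence ends up encoded on each side.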
The example url content also exist in the memcached itself when if I try: "get www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1 " However, nginx cannot fetch anything from memcached, @fallbacks every time. I'm using version 1.6.0. Any help is much appreciated. Roy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvrv7575 at yahoo.com Sun May 4 19:04:22 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Mon, 5 May 2014 03:04:22 +0800 (SGT) Subject: proxy_buffer_size values are honored even if proxy_buffering is off Message-ID: <1399230262.26409.YahooMailNeo@web193506.mail.sg3.yahoo.com> At the http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size, the documentation implies that the configuration proxy_buffer_size , proxy_buffers and proxy_busy_buffers will be honored only when proxy_buffering is turned on. I had been seeing truncated responses of files which went away when I increased proxy_buffer_size even though proxy_buffering was turned off. I am running nginx1.5.8. Is this expected behavior ? Thanks for any answers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishal.mestri at cloverinfotech.com Mon May 5 06:51:50 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Mon, 05 May 2014 12:21:50 +0530 (IST) Subject: Issue nginx In-Reply-To: <98a5d84d-3707-4ee6-9575-128f918522bc@mail.cloverinfotech.com> Message-ID: I am facing issue with nginx. Its working on Chrome. But not on IE10 and firefox. I am using proxy pass, please find attached nginx.conf file attached along with. OUTPUT OF command: [root at erttrepsg63 ~]# /usr/sbin/nginx -V nginx version: nginx/1.6.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' Thanks & Regards, Vishal Mestri -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx.conf Type: application/octet-stream Size: 2658 bytes Desc: not available URL: From steve at greengecko.co.nz Mon May 5 07:15:27 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 05 May 2014 19:15:27 +1200 Subject: Issue nginx In-Reply-To: References: Message-ID: <1399274127.24481.478.camel@steve-new> Hi, On Mon, 2014-05-05 at 12:21 +0530, Vishal Mestri wrote: > I am facing issue with nginx. > > > Its working on Chrome. But not on IE10 and firefox. > > > I am using proxy pass, please find attached nginx.conf file attached > along with. > > > > > OUTPUT OF command: > [root at erttrepsg63 ~]# /usr/sbin/nginx -V > nginx version: nginx/1.6.0 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_auth_request_module > --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 > --with-http_spdy_module --with-cc-opt='-O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > > > Thanks & Regards, > > Vishal Mestri I'd've said that the code being delivered to each browser is exactly the same ( ok, not necessarily true but you have to force it to be different ), so would look at the accuracy of the html / javascript on the page as a first step. Try a basic hello world page and see if that is displayed ok... Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From vishal.mestri at cloverinfotech.com Mon May 5 07:17:04 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Mon, 05 May 2014 12:47:04 +0530 (IST) Subject: Issue nginx In-Reply-To: <1399274127.24481.478.camel@steve-new> Message-ID: Thanks for your immediate reply steve. But I just want to know whether we can log request and response headers in nginx, for more debugging. As far as code is concerned , we are using , it is running if we don't use nginx. Only when we use proxy pass, it is failing. One important point , we are using ajax on port 6400 and on port 6401 we have done ssl enablement. Can you please help me to debug this issue. Thanks in advance. Regards, Vishal ----- Original Message ----- From: "Steve Holdoway" To: nginx at nginx.org Sent: Monday, May 5, 2014 12:45:27 PM Subject: Re: Issue nginx Hi, On Mon, 2014-05-05 at 12:21 +0530, Vishal Mestri wrote: > I am facing issue with nginx. > > > Its working on Chrome. But not on IE10 and firefox. > > > I am using proxy pass, please find attached nginx.conf file attached > along with. 
> > > > > OUTPUT OF command: > [root at erttrepsg63 ~]# /usr/sbin/nginx -V > nginx version: nginx/1.6.0 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_auth_request_module > --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 > --with-http_spdy_module --with-cc-opt='-O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > > > Thanks & Regards, > > Vishal Mestri I'd've said that the code being delivered to each browser is exactly the same ( ok, not necessarily true but you have to force it to be different ), so would look at the accuracy of the html / javascript on the page as a first step. Try a basic hello world page and see if that is displayed ok... Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishal.mestri at cloverinfotech.com Mon May 5 07:56:15 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Mon, 05 May 2014 13:26:15 +0530 (IST) Subject: Issue nginx In-Reply-To: Message-ID: <66317da2-f299-4cfd-9e40-9252c742c82f@mail.cloverinfotech.com> Hi Steve, We are getting below error in log file 2014/05/05 07:50:29 [error] 8800#0: *21 upstream prematurely closed connection while reading upstream, client: 14.97.74.48, server: 168.189.9.09, request: "GET /Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085 HTTP/1.1", upstream: "http://168.189.9.09:6400/Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085", host: "168.189.9.09:6401", referrer: "https://168.189.9.09/Login.do" ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org, steve at greengecko.co.nz Sent: Monday, May 5, 2014 12:47:04 PM Subject: Re: Issue nginx Thanks for your immediate reply steve. But I just want to know whether we can log request and response headers in nginx, for more debugging. As far as code is concerned , we are using , it is running if we don't use nginx. Only when we use proxy pass, it is failing. One important point , we are using ajax on port 6400 and on port 6401 we have done ssl enablement. Can you please help me to debug this issue. Thanks in advance. 
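Request and response headers can be logged with a custom log_format; a minimal sketch of the sort of thing being asked about (the chosen variables and file name are only examples):

    # log_format belongs in the http{} block; $http_<name> exposes request
    # headers, $upstream_http_<name> the headers returned by the backend
    log_format hdr_debug '$remote_addr "$request" $status '
                         'ua="$http_user_agent" xff="$http_x_forwarded_for" '
                         'upstream_status=$upstream_status '
                         'upstream_ct="$upstream_http_content_type"';
    access_log /var/log/nginx/header_debug.log hdr_debug;

For full header dumps in both directions, the debug log (error_log ... debug, with nginx built --with-debug) records every client and upstream header as it is read.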
Regards, Vishal ----- Original Message ----- From: "Steve Holdoway" To: nginx at nginx.org Sent: Monday, May 5, 2014 12:45:27 PM Subject: Re: Issue nginx Hi, On Mon, 2014-05-05 at 12:21 +0530, Vishal Mestri wrote: > I am facing issue with nginx. > > > Its working on Chrome. But not on IE10 and firefox. > > > I am using proxy pass, please find attached nginx.conf file attached > along with. > > > > > OUTPUT OF command: > [root at erttrepsg63 ~]# /usr/sbin/nginx -V > nginx version: nginx/1.6.0 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_auth_request_module > --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 > --with-http_spdy_module --with-cc-opt='-O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > > > Thanks & Regards, > > Vishal Mestri I'd've said that the code being delivered to each browser is exactly the same ( ok, not necessarily true but you have to force it to be different ), so would look at the accuracy of the html / javascript on the page as a first step. Try a basic hello world page and see if that is displayed ok... Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From iptablez at yahoo.com Mon May 5 08:05:53 2014 From: iptablez at yahoo.com (Indo Php) Date: Mon, 5 May 2014 01:05:53 -0700 (PDT) Subject: ngx_cache_purge + query string Message-ID: <1399277153.11325.YahooMailNeo@web142305.mail.bf1.yahoo.com> Hi, is ngx_cache_purge support to purge file with query string? I've tried with no success Our example page http://www.example.com/images/file.jpg?v=1.0 Is there any additional config I have to put? -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Mon May 5 09:19:52 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 5 May 2014 02:19:52 -0700 Subject: ngx_cache_purge + query string In-Reply-To: <1399277153.11325.YahooMailNeo@web142305.mail.bf1.yahoo.com> References: <1399277153.11325.YahooMailNeo@web142305.mail.bf1.yahoo.com> Message-ID: Hello, > is ngx_cache_purge support to purge file with query string? Yes. > I've tried with no success > > Our example page > > http://www.example.com/images/file.jpg?v=1.0 > > Is there any additional config I have to put? 
You didn't provide your current config, so no one can tell. Best regards, Piotr Sikora From iptablez at yahoo.com Mon May 5 10:28:30 2014 From: iptablez at yahoo.com (Indo Php) Date: Mon, 5 May 2014 03:28:30 -0700 (PDT) Subject: ngx_cache_purge + query string In-Reply-To: References: <1399277153.11325.YahooMailNeo@web142305.mail.bf1.yahoo.com> Message-ID: <1399285710.83908.YahooMailNeo@web142305.mail.bf1.yahoo.com> Hi, Below is my config location ~ /purge(/.*) { allow 127.0.0.1; allow 10.10.0.0/24; deny all; proxy_cache_purge one backend$1; } On Monday, May 5, 2014 4:20 PM, Piotr Sikora wrote: Hello, > is ngx_cache_purge support to purge file with query string? Yes. > I've tried with no success > > Our example page > > http://www.example.com/images/file.jpg?v=1.0 > > Is there any additional config I have to put? You didn't provide your current config, so no one can tell. Best regards, Piotr Sikora _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Mon May 5 10:35:21 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 5 May 2014 03:35:21 -0700 Subject: ngx_cache_purge + query string In-Reply-To: <1399285710.83908.YahooMailNeo@web142305.mail.bf1.yahoo.com> References: <1399277153.11325.YahooMailNeo@web142305.mail.bf1.yahoo.com> <1399285710.83908.YahooMailNeo@web142305.mail.bf1.yahoo.com> Message-ID: Hello, > Below is my config > > location ~ /purge(/.*) { > allow 127.0.0.1; > allow 10.10.0.0/24; > deny all; > proxy_cache_purge one backend$1; > } $1 doesn't contain query strings, you should use: proxy_cache_purge one backend$1$is_args$args; or alternatively, use "same location" configuration. Please refer to the documentation for details. Best regards, Piotr Sikora From iptablez at yahoo.com Mon May 5 10:55:27 2014 From: iptablez at yahoo.com (Indo Php) Date: Mon, 5 May 2014 03:55:27 -0700 (PDT) Subject: ngx_cache_purge + query string In-Reply-To: References: <1399277153.11325.YahooMailNeo@web142305.mail.bf1.yahoo.com> <1399285710.83908.YahooMailNeo@web142305.mail.bf1.yahoo.com> Message-ID: <1399287327.47050.YahooMailNeo@web142304.mail.bf1.yahoo.com> Thanks! On Monday, May 5, 2014 5:35 PM, Piotr Sikora wrote: Hello, > Below is my config > > location ~ /purge(/.*) { > allow 127.0.0.1; > allow 10.10.0.0/24; > deny all; > proxy_cache_purge one backend$1; > } $1 doesn't contain query strings, you should use: proxy_cache_purge one backend$1$is_args$args; or alternatively, use "same location" configuration. Please refer to the documentation for details. Best regards, Piotr Sikora _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon May 5 16:09:13 2014 From: nginx-forum at nginx.us (zvn) Date: Mon, 05 May 2014 12:09:13 -0400 Subject: Mono MVC Timeout In-Reply-To: References: Message-ID: <93ee4dc9a9830130c46b0b45ce087eb8.NginxMailingListEnglish@forum.nginx.org> Hello, OpenBSD grey.my.domain 5.5 GENERIC.MP # mono --version Mono JIT compiler version 2.10.9 # nginx -V nginx version: nginx/1.4.4 i have exactly the same problem, i run the aspx with a very explicit path: # fastcgi-mono-server4 /applications=grey:/index.aspx:./moon/index.aspx /socket=tcp:127.0.0.1:9000 /loglevels=Debug /verbose=True /root=/var/www/ I use this nginx http conf : ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- server { server_name grey; root /var/www/moon/; index index.html index.htm index.aspx default.aspx; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { fastcgi_index index.aspx; fastcgi_pass 127.0.0.1:9000; include fastcgi_params; fastcgi_param PATH_INFO ""; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #try_files $uri $uri/ /index.aspx; } # Fighting with ImageCache? This little gem is amazing. location ~ ^/sites/.*/files/imagecache/ { try_files $uri $uri/ @rewrite; } } ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Then i copied-pasta the MVC application, expecting the /bin to be detected etc... # fastcgi-mono-server4 /applications=grey:/:./moon/MonoWeb /socket=tcp:127.0.0.1:9000 /loglevels=Debug /verbose=True /root=/var/www/ i also ran as root and using tcp socket to just test the xsp. The fastcgi-mono-server4 just say nothing , nor reply the request. Best regards. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239541,249838#msg-249838 From sarah at nginx.com Mon May 5 17:22:36 2014 From: sarah at nginx.com (Sarah Novotny) Date: Mon, 5 May 2014 10:22:36 -0700 Subject: NGINX 2014 survey: I know you have opinions. In-Reply-To: <74ccbf3f7e314c9cc6eb5d6f522bdffc@none.at> References: <74ccbf3f7e314c9cc6eb5d6f522bdffc@none.at> Message-ID: Hi Aleks, We have a team analyzing the results over the next few weeks. (There was an amazing number of responses.) I?ll post a summary on our blog[0] when we finish that analysis. sarah [0] http://nginx.com/blog/ On May 4, 2014, at 1:38 AM, Aleksandar Lazic wrote: > Dear Sarah. > > Due to the fact that the survey is over now, please can you tell us what the outcome was? > > Best regards > Aleks > > Am 23-04-2014 18:32, schrieb Sarah Novotny: >> Hello! >> As Valentin mentioned in another thread, it?s that time of year again >> when we want to tune up our strategy; see how NGINX is used; what you >> the community thinks of us; where we can improve our products, >> communications or community; and so on. Please take a moment and fill >> out this survey[1] and let us know how we can be more valuable in your >> organization?s future. >> Happy spring. >> Sarah >> (on behalf of the Nginx team globally.) 
>> [1] https://www.surveymonkey.com/s/L5B6MVH >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From jdorfman at netdna.com Mon May 5 19:14:48 2014 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 5 May 2014 12:14:48 -0700 Subject: Query strings duplicating on 301 redirect Message-ID: Hey All, I am trying to redirect (301) all HTTP request to TLS (HTTPS) and I keep getting duplicate query strings added to the uri. e.g.: curl -I "http://foo.bar.example.com/foobar.css?v=2" HTTP/1.1 301 Moved Permanently [clipped] Location: http://foo.bar.example.com/foobar.css?v=2?v=2 Nginx config: location / { if ($scheme = http) { rewrite ^ https://$http_host$request_uri permanent; } Any ideas? Thanks in advance. Regards, Justin Dorfman -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon May 5 20:11:25 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 5 May 2014 21:11:25 +0100 Subject: Query strings duplicating on 301 redirect In-Reply-To: References: Message-ID: <20140505201125.GX16942@daoine.org> On Mon, May 05, 2014 at 12:14:48PM -0700, Justin Dorfman wrote: Hi there, > I am trying to redirect (301) all HTTP request to TLS (HTTPS) and I keep > getting duplicate query strings added to the uri. e.g.: http://nginx.org/r/rewrite Second last paragraph looks like it should fix it. f -- Francis Daly francis at daoine.org From jdorfman at netdna.com Mon May 5 20:54:49 2014 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 5 May 2014 13:54:49 -0700 Subject: Query strings duplicating on 301 redirect In-Reply-To: <20140505201125.GX16942@daoine.org> References: <20140505201125.GX16942@daoine.org> Message-ID: Thanks Francis, worked perfectly. Regards, Justin Dorfman Director of Developer Relations MaxCDN Email / IM: jdorfman at maxcdn.com Mobile: 818.485.1458 Twitter: @jdorfman On Mon, May 5, 2014 at 1:11 PM, Francis Daly wrote: > On Mon, May 05, 2014 at 12:14:48PM -0700, Justin Dorfman wrote: > > Hi there, > > > I am trying to redirect (301) all HTTP request to TLS (HTTPS) and I keep > > getting duplicate query strings added to the uri. e.g.: > > http://nginx.org/r/rewrite > > Second last paragraph looks like it should fix it. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Mon May 5 21:08:43 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 5 May 2014 23:08:43 +0200 Subject: Query strings duplicating on 301 redirect In-Reply-To: References: <20140505201125.GX16942@daoine.org> Message-ID: Just a note, I think the preferred way to do this is with "return". It's much simpler (no rewrite / PCRE overhead): location / { if ($scheme = http) { return 301 https://$http_host$request_uri; } On Mon, May 5, 2014 at 10:54 PM, Justin Dorfman wrote: > Thanks Francis, worked perfectly. 
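The paragraph referred to above describes the trailing question mark on a rewrite replacement; applied to the earlier config it would look roughly like this (an illustrative sketch, not quoted from the thread):

    location / {
        if ($scheme = http) {
            # $request_uri already carries "?v=2"; the trailing "?" tells nginx
            # not to append the original arguments again, so the Location header
            # no longer ends up as "?v=2?v=2"
            rewrite ^ https://$http_host$request_uri? permanent;
        }
    }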
> > > Regards, > > Justin Dorfman > > Director of Developer Relations > MaxCDN > > Email / IM: jdorfman at maxcdn.com > Mobile: 818.485.1458 > Twitter: @jdorfman > > > On Mon, May 5, 2014 at 1:11 PM, Francis Daly wrote: > >> On Mon, May 05, 2014 at 12:14:48PM -0700, Justin Dorfman wrote: >> >> Hi there, >> >> > I am trying to redirect (301) all HTTP request to TLS (HTTPS) and I keep >> > getting duplicate query strings added to the uri. e.g.: >> >> http://nginx.org/r/rewrite >> >> Second last paragraph looks like it should fix it. >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 5 22:38:45 2014 From: nginx-forum at nginx.us (dcaillibaud) Date: Mon, 05 May 2014 18:38:45 -0400 Subject: misunderstood regex location Message-ID: Hi, I understood that prefix location was read, then regex location and it stops on a ^~ match Why with this config location / { location ~ .+\.(js|css|ico|png|gif|jpg|jpeg|pdf|zip|html|htm)$ { expires 25h; } } location ^~ /banniere_rotative/.+\.(js|css)$ { expires 30d; } https://ssl.sesamath.net/banniere_rotative/_writable/slides_20140503c.min.css match the first one prefix location and not the second regex one ? After deeper tests, I saw that location /banniere_rotative/ { location ~ \.(js|css)$ { expires 30d; } } match but location ^~ "/banniere_rotative/.*\.css$" { doesn't (inside prefix location or not) What's the obvious mistake I made in my regex ? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249846,249846#msg-249846 From jdorfman at netdna.com Mon May 5 23:25:07 2014 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 5 May 2014 16:25:07 -0700 Subject: Query strings duplicating on 301 redirect In-Reply-To: References: <20140505201125.GX16942@daoine.org> Message-ID: @Richard Interesting. I shall give that a try. Regards, Justin Dorfman Director of Developer Relations MaxCDN Email / IM: jdorfman at maxcdn.com Mobile: 818.485.1458 Twitter: @jdorfman On Mon, May 5, 2014 at 2:08 PM, Richard Stanway wrote: > Just a note, I think the preferred way to do this is with "return". It's > much simpler (no rewrite / PCRE overhead): > > location / { > if ($scheme = http) { > return 301 https://$http_host$request_uri; > } > > > On Mon, May 5, 2014 at 10:54 PM, Justin Dorfman wrote: > >> Thanks Francis, worked perfectly. >> >> >> Regards, >> >> Justin Dorfman >> >> Director of Developer Relations >> MaxCDN >> >> Email / IM: jdorfman at maxcdn.com >> Mobile: 818.485.1458 >> Twitter: @jdorfman >> >> >> On Mon, May 5, 2014 at 1:11 PM, Francis Daly wrote: >> >>> On Mon, May 05, 2014 at 12:14:48PM -0700, Justin Dorfman wrote: >>> >>> Hi there, >>> >>> > I am trying to redirect (301) all HTTP request to TLS (HTTPS) and I >>> keep >>> > getting duplicate query strings added to the uri. e.g.: >>> >>> http://nginx.org/r/rewrite >>> >>> Second last paragraph looks like it should fix it. 
>>> >>> f >>> -- >>> Francis Daly francis at daoine.org >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon May 5 23:39:59 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 6 May 2014 00:39:59 +0100 Subject: misunderstood regex location In-Reply-To: References: Message-ID: <20140505233959.GY16942@daoine.org> On Mon, May 05, 2014 at 06:38:45PM -0400, dcaillibaud wrote: Hi there, > location ^~ "/banniere_rotative/.*\.css$" { > > doesn't (inside prefix location or not) > > What's the obvious mistake I made in my regex ? ^~ is a prefix match, not a regex match. http://nginx.org/r/location f -- Francis Daly francis at daoine.org From al-nginx at none.at Tue May 6 00:07:34 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 06 May 2014 02:07:34 +0200 Subject: NGINX 2014 survey: I know you have opinions. In-Reply-To: References: <74ccbf3f7e314c9cc6eb5d6f522bdffc@none.at> Message-ID: <0844f7900fbfe937e239eef260634b80@none.at> Hi Sarah. Great, thanks. BR Aleks Am 05-05-2014 19:22, schrieb Sarah Novotny: > Hi Aleks, > > We have a team analyzing the results over the next few weeks. (There > was an amazing number of responses.) > > I?ll post a summary on our blog[0] when we finish that analysis. > > sarah > > [0] http://nginx.com/blog/ > > On May 4, 2014, at 1:38 AM, Aleksandar Lazic wrote: > >> Dear Sarah. >> >> Due to the fact that the survey is over now, please can you tell us >> what the outcome was? >> >> Best regards >> Aleks >> >> Am 23-04-2014 18:32, schrieb Sarah Novotny: >>> Hello! >>> As Valentin mentioned in another thread, it?s that time of year again >>> when we want to tune up our strategy; see how NGINX is used; what you >>> the community thinks of us; where we can improve our products, >>> communications or community; and so on. Please take a moment and >>> fill >>> out this survey[1] and let us know how we can be more valuable in >>> your >>> organization?s future. >>> Happy spring. >>> Sarah >>> (on behalf of the Nginx team globally.) >>> [1] https://www.surveymonkey.com/s/L5B6MVH >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue May 6 01:58:35 2014 From: nginx-forum at nginx.us (lovekmla) Date: Mon, 05 May 2014 21:58:35 -0400 Subject: How does the Proxy Cache Key Lookup actually happen? Message-ID: I am using part of the request_body as the Cache_key in setting up the Proxy_cache_key and I was wondering how the actual lookup / matching of the Cache would occur? >From the documentation, it looks like it's a MD5 encryption of the Cache Key that I set. Does that mean the cache_key lookup would be done after constructing the cache key for the Incoming request? Is it possible that it would try to do a lookup with the default URI first and failover to construct the cache key?? 
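For reference, the lookup is a single step: nginx builds the configured key string for the incoming request, takes its MD5, and checks the keys_zone entry and cache file named after that hash; there is no preliminary lookup on a plain URI with a later fallback to the full key. A rough sketch of how the on-disk name is derived, assuming an illustrative key and a levels=1:2 cache path (neither is taken from the post):

    # proxy_cache_path /data/cache levels=1:2 keys_zone=one:10m ...;
    # proxy_cache_key  "$request_uri|$request_body";
    printf '%s' '/api/search|{"q":"example"}' | md5sum
    # the 32-character hash is the cache file name; with levels=1:2 it is stored as
    #   /data/cache/<last hash char>/<previous two chars>/<full hash>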
Context of asking this is I am seeing some inconsistency in the Cache Hit Serving time for huge size of request_body Params even though when it's served from the cache. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249851,249851#msg-249851 From vishal.mestri at cloverinfotech.com Tue May 6 03:23:56 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Tue, 06 May 2014 08:53:56 +0530 (IST) Subject: Issue nginx In-Reply-To: <66317da2-f299-4cfd-9e40-9252c742c82f@mail.cloverinfotech.com> Message-ID: <0c6cfec7-2940-431f-ad46-93efb51fbe4b@mail.cloverinfotech.com> Hi All, please go through below email and help me resolve issue as i am new to nginx. Regards, Vishal ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org, steve at greengecko.co.nz Sent: Monday, May 5, 2014 1:26:15 PM Subject: Re: Issue nginx Hi Steve, We are getting below error in log file 2014/05/05 07:50:29 [error] 8800#0: *21 upstream prematurely closed connection while reading upstream, client: 14.97.74.48, server: 168.189.9.09, request: "GET /Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085 HTTP/1.1", upstream: "http://168.189.9.09:6400/Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085", host: "168.189.9.09:6401", referrer: "https://168.189.9.09/Login.do" ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org, steve at greengecko.co.nz Sent: Monday, May 5, 2014 12:47:04 PM Subject: Re: Issue nginx Thanks for your immediate reply steve. But I just want to know whether we can log request and response headers in nginx, for more debugging. As far as code is concerned , we are using , it is running if we don't use nginx. Only when we use proxy pass, it is failing. One important point , we are using ajax on port 6400 and on port 6401 we have done ssl enablement. Can you please help me to debug this issue. Thanks in advance. Regards, Vishal ----- Original Message ----- From: "Steve Holdoway" To: nginx at nginx.org Sent: Monday, May 5, 2014 12:45:27 PM Subject: Re: Issue nginx Hi, On Mon, 2014-05-05 at 12:21 +0530, Vishal Mestri wrote: > I am facing issue with nginx. > > > Its working on Chrome. But not on IE10 and firefox. > > > I am using proxy pass, please find attached nginx.conf file attached > along with. 
> > > > > OUTPUT OF command: > [root at erttrepsg63 ~]# /usr/sbin/nginx -V > nginx version: nginx/1.6.0 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_auth_request_module > --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 > --with-http_spdy_module --with-cc-opt='-O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > > > Thanks & Regards, > > Vishal Mestri I'd've said that the code being delivered to each browser is exactly the same ( ok, not necessarily true but you have to force it to be different ), so would look at the accuracy of the html / javascript on the page as a first step. Try a basic hello world page and see if that is displayed ok... Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 2658 bytes Desc: not available URL: From iptablez at yahoo.com Tue May 6 04:11:28 2014 From: iptablez at yahoo.com (Indo Php) Date: Mon, 5 May 2014 21:11:28 -0700 (PDT) Subject: Image Filter Error Message-ID: <1399349488.58506.YahooMailNeo@web142302.mail.bf1.yahoo.com> Hi When doing resizing on the image, I got the error below gd-png: ?fatal libpng error: IDAT: CRC error gd-png error: setjmp returns error condition 22014/05/06 10:45:55 [error] 6137#0: *4879 gdImageCreateFromPngPtr() failed while sending to client, client: x.x.x.x, server: my.hostname.com, request: "GET /r/5270146_201306300555490402.png HTTP/1.0", upstream: "http://172.16.0.78:80/5270146_201306300555490402.png", host: "backend", referrer: "referer page" My configuration? ? ? ? ? ? ? ? ? expires 1y; ? ? ? ? ? ? ? ? image_filter_buffer 5M; ? ? ? ? ? ? ? ? image_filter resize $1 $2; ? ? ? ? ? ? ? ? proxy_pass http://backend1/$3; ? ? ? ? ? ? ? ? error_page ? ? 415 ? = /empty; Please kindly help Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Tue May 6 04:39:11 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Tue, 6 May 2014 10:09:11 +0530 Subject: Facing content-type issue with try_files. In-Reply-To: References: Message-ID: Hello, Could someone help me with my previous query? 
Actually the problem is that when image file doesn't have an extension it is not being served with proper content type. Instead it is served with "application/octet-stream" content-type. Due to this browser try to download image instead of displaying it. Is it necessary to have an extension to image file to set proper content-type for Nginx? Couldn't web server set the content-type from *file type* ? Thanks, On Fri, May 2, 2014 at 10:42 AM, Makailol Charls wrote: > Hello, > > To serve static contents i.e. images I use try_files directive of Nginx. > My configuration location block is as below. > > location ~* \.(jpg|jpeg|png|gif)$ { > try_files $request_uri @missingImg; > } > > @missingImg is named location block with proxy_* directive. > > The above configuration works fine if the image file name ends with jpg, > gif, jpeg, png extension in disk. When image file name(stored in disk) > includes query string like "example.jpg?a=123" then request to such image > is being served with application/octet-stream content-type . Because of the > wrong content type, image is not being displayed and browser prompt to > download image. > > Could someone suggest me what am I doing wrong here? > > Thanks, > Makailol > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Tue May 6 06:55:48 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 6 May 2014 10:55:48 +0400 Subject: misunderstood regex location In-Reply-To: References: Message-ID: <59F4F1B5-8E2B-43F4-8DBF-91B8F47965CE@sysoev.ru> On May 6, 2014, at 2:38 , dcaillibaud wrote: > Hi, > > I understood that prefix location was read, then regex location and it stops > on a ^~ match > > Why with this config > > location / { > location ~ .+\.(js|css|ico|png|gif|jpg|jpeg|pdf|zip|html|htm)$ { > expires 25h; > } > } > > location ^~ /banniere_rotative/.+\.(js|css)$ { > expires 30d; > } > > https://ssl.sesamath.net/banniere_rotative/_writable/slides_20140503c.min.css > match the first one prefix location and not the second regex one ? > > After deeper tests, I saw that > > location /banniere_rotative/ { > location ~ \.(js|css)$ { > expires 30d; > } > } > > match but > > location ^~ "/banniere_rotative/.*\.css$" { > > doesn't (inside prefix location or not) > > What's the obvious mistake I made in my regex ? As Francis has already mentioned regex location is location ~ ^/banniere... But this will not work as you expect. First nginx will match "location /", then it will search regex location inside this location and finally "location ~ .+\.(js|" will be used. Your solution: location /banniere_rotative/ { is right. -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Tue May 6 08:10:25 2014 From: nginx-forum at nginx.us (dfumagalli) Date: Tue, 06 May 2014 04:10:25 -0400 Subject: Nginx 1.6 under load gives a ton of error 403 Message-ID: Hello, we have used Nginx 1.4.x (the "extras", full featured package) for a year with no major troubles. Only some relatively rare error 403 we can't find the source of, it only happens when our XEN VPS (Ubuntu 12.04 LTS) is under (other sharing customers) load and our website is itself under high load. I have upgraded to 1.6, the (Ondrej PPA). I had to downgrade to 1.4.7 VERY fast because error 403 (access forbidden) were literally spammed every other page just with a smidge of load. The same content shows not a single error is used under light load so it's not a basic "wrong permissions" or similar issue. 
I'd love if somebody could give me pointers about how to find out the real source of those errors 403. I can't replicate the issue on a basic "mule" computer, our setup uses a lot of features including varnish 3, GeoIP both at Nginx and PHP-fpm levels and so on. The odd thing is, version 1.4.7 very seldom shows that spurious error, whereas to 1.6 it is a showstopper. To worsen things, sometimes varnish caches those 403 pages so unrelated users start seeing those errors as well. I think the issue might be caused by too small buffers or something. The config file is massively huge so I don't know if it's a good idea to post it (nobody would care to read it all). So I am just looking for pointers and ideas about how to pinpoint the problem source. Best regards, dfumagalli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249857,249857#msg-249857 From nginx-forum at nginx.us Tue May 6 08:35:44 2014 From: nginx-forum at nginx.us (dfumagalli) Date: Tue, 06 May 2014 04:35:44 -0400 Subject: Nginx 1.6 under load gives a ton of error 403 In-Reply-To: References: Message-ID: <58e1a6a25d24bdb8e3804b4ba00a7e17.NginxMailingListEnglish@forum.nginx.org> Domain anonymized sample error log: see the first entry returns 200, the next 403 on the same page. 2.139.79.51 - - [06/May/2014:08:13:23 +0000] "GET /flowers/love HTTP/1.1" 200 62 55 "http://dev.domain.com/flowers/friendship" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0" 2.139.79.51 - - [06/May/2014:08:13:28 +0000] "GET / HTTP/1.1" 403 134 "http://de v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gec ko/20100101 Firefox/28.0" 2.139.79.51 - - [06/May/2014:08:15:01 +0000] "GET / HTTP/1.1" 403 134 "http://de v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gec ko/20100101 Firefox/28.0" 2.139.79.51 - - [06/May/2014:08:17:31 +0000] "GET / HTTP/1.1" 403 134 "http://de v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gec ko/20100101 Firefox/28.0" 2.139.79.51 - - [06/May/2014:08:17:52 +0000] "GET / HTTP/1.1" 403 134 "http://de v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gec ko/20100101 Firefox/28.0" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249857,249859#msg-249859 From nginx-forum at nginx.us Tue May 6 09:00:30 2014 From: nginx-forum at nginx.us (Larry) Date: Tue, 06 May 2014 05:00:30 -0400 Subject: X-forward-for question Message-ID: <30917c1caed2c1c0e096a0f404ea1aba.NginxMailingListEnglish@forum.nginx.org> Hello, I plan to set up two servers for a training purpose : one frontend, one database. I created a tcp server but I need to be sure of the idea : Be F the frontend and B the backend. when a client will make a request, it will hit F first (nginx + https involved) then will ask B to process the query with adding a x-forward-for header not to lose the real client ip and port. Then my tcp server will retrieve those headers, then ask the script to perform the database connection/query.. via fastcgi. Then what should I do ? Should I just send a request back to nginxwith the afformentioned headers and nginx will set it out properly and re-encrypt the connection ? I am a bit lost here but I need to improve myself on this field. Thanks Is it how it works ? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249860,249860#msg-249860 From r1ch+nginx at teamliquid.net Tue May 6 09:16:27 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 6 May 2014 11:16:27 +0200 Subject: Nginx 1.6 under load gives a ton of error 403 In-Reply-To: <58e1a6a25d24bdb8e3804b4ba00a7e17.NginxMailingListEnglish@forum.nginx.org> References: <58e1a6a25d24bdb8e3804b4ba00a7e17.NginxMailingListEnglish@forum.nginx.org> Message-ID: It is probably your application / backend that is generating the 403, it's unlikely nginx is responsible for this. I guess rate / connection limiting with a custom error code may cause this, but you should know if you configured this. Please show us your config and describe your backend in more detail. On Tue, May 6, 2014 at 10:35 AM, dfumagalli wrote: > Domain anonymized sample error log: see the first entry returns 200, the > next 403 on the same page. > > 2.139.79.51 - - [06/May/2014:08:13:23 +0000] "GET /flowers/love HTTP/1.1" > 200 62 > 55 "http://dev.domain.com/flowers/friendship" "Mozilla/5.0 (Windows NT > 6.3; > WOW64; rv:28.0) Gecko/20100101 Firefox/28.0" > 2.139.79.51 - - [06/May/2014:08:13:28 +0000] "GET / HTTP/1.1" 403 134 > "http://de > v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) > Gec > ko/20100101 Firefox/28.0" > 2.139.79.51 - - [06/May/2014:08:15:01 +0000] "GET / HTTP/1.1" 403 134 > "http://de > v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) > Gec > ko/20100101 Firefox/28.0" > 2.139.79.51 - - [06/May/2014:08:17:31 +0000] "GET / HTTP/1.1" 403 134 > "http://de > v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) > Gec > ko/20100101 Firefox/28.0" > 2.139.79.51 - - [06/May/2014:08:17:52 +0000] "GET / HTTP/1.1" 403 134 > "http://de > v.domain.com/flowers/love" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) > Gec > ko/20100101 Firefox/28.0" > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249857,249859#msg-249859 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 6 09:58:07 2014 From: nginx-forum at nginx.us (kay) Date: Tue, 06 May 2014 05:58:07 -0400 Subject: nginx rewrites $request_method on error In-Reply-To: <20140430140923.GF34696@mdounin.ru> References: <20140430140923.GF34696@mdounin.ru> Message-ID: <4c12012bf15ed0f246e495bb0369f694.NginxMailingListEnglish@forum.nginx.org> Actually I suppose that this is a bug, as it is not possible to make filter by $request_method Also some external modules like https://github.com/openresty/memc-nginx-module have strange behavior when client passes TRACE and nginx gets GET. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,249862#msg-249862 From nginx-forum at nginx.us Tue May 6 10:30:23 2014 From: nginx-forum at nginx.us (dfumagalli) Date: Tue, 06 May 2014 06:30:23 -0400 Subject: Nginx 1.6 under load gives a ton of error 403 In-Reply-To: References: Message-ID: <81c0c2b73c7f16c2dc3d5cc4077a115d.NginxMailingListEnglish@forum.nginx.org> This is also what I thought. I have searched the whole nginx etc directory for 403 and deny /etc/nginx# grep -r '403' . and the results I got are these snippets: # Deny bad Referers if ($http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen)) { return 403; } ... 
# deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\. { access_log off; log_not_found off; deny all; } # Wordpress uses the robots.txt location = /robots.txt { access_log off; log_not_found off; } location = /favicon.ico { access_log off; log_not_found off; } location ~ ~$ { access_log off; log_not_found off; deny all; } ... The apps are several, they all follow the "index.php is the controller" paradygm. # Make sure files with the following extensions do not get loaded by nginx because nginx would display the source code, and these files can contain PASSWORDS! location ~* \.(engine|inc|ini|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_ { deny all; } location ~ /config.php { deny all; } It's not the UFW firewall as well, because the error shows up even with UFW disabled. So the potential culprit may be php-fpm or some weird nginx option. Here's the master conf file, the others don't specify anything but location Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249857,249863#msg-249863 From r1ch+nginx at teamliquid.net Tue May 6 11:09:12 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 6 May 2014 13:09:12 +0200 Subject: Nginx 1.6 under load gives a ton of error 403 In-Reply-To: <81c0c2b73c7f16c2dc3d5cc4077a115d.NginxMailingListEnglish@forum.nginx.org> References: <81c0c2b73c7f16c2dc3d5cc4077a115d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Your config is returning a 403 from any referrer containing "love" any you have such URLs on your own site according to your log excerpt. I would not recommend such referrer matching, it's unlikely to help in any case. On Tue, May 6, 2014 at 12:30 PM, dfumagalli wrote: > This is also what I thought. I have searched the whole nginx etc directory > for 403 and deny > > /etc/nginx# grep -r '403' . > > and the results I got are these snippets: > > # Deny bad Referers > if ($http_referer ~* > (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen)) { > return 403; > } > > ... > > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > location ~ /\. { access_log off; log_not_found off; deny all; } > > # Wordpress uses the robots.txt > location = /robots.txt { access_log off; log_not_found off; } > location = /favicon.ico { access_log off; log_not_found off; } > location ~ ~$ { access_log off; log_not_found off; deny > all; } > > ... > > The apps are several, they all follow the "index.php is the controller" > paradygm. > > > # Make sure files with the following extensions do not get loaded > by > nginx because nginx would display the source code, and these files can > contain PASSWORDS! > location ~* > > \.(engine|inc|ini|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_ > { > deny all; > } > > location ~ /config.php { > deny all; > } > > > > > It's not the UFW firewall as well, because the error shows up even with UFW > disabled. So the potential culprit may be php-fpm or some weird nginx > option. 
Here's the master conf file, the others don't specify anything but > location > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249857,249863#msg-249863 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 6 12:26:20 2014 From: nginx-forum at nginx.us (dfumagalli) Date: Tue, 06 May 2014 08:26:20 -0400 Subject: Nginx 1.6 under load gives a ton of error 403 In-Reply-To: References: Message-ID: Thank you so much. I am pretty sure the 403 comes from another source as well (as it does not happen under light http server load) but the 'love' regex was really a good show of having a nice hawk eye from yours :D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249857,249866#msg-249866 From vishal.mestri at cloverinfotech.com Tue May 6 12:18:21 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Tue, 06 May 2014 17:48:21 +0530 (IST) Subject: Issue nginx - ajax In-Reply-To: <0c6cfec7-2940-431f-ad46-93efb51fbe4b@mail.cloverinfotech.com> Message-ID: <9ca69da5-d8a9-4711-923e-cfc1330a87cb@mail.cloverinfotech.com> Hi All, I have enabled debug option on the nginx now. After that I have connected using chrome , its working fine. But I am facing issue on IE. I have attached along with two different error logs with debug enabled. ssl_1_chrome.txt -> log generated when we connected using chrome successfully. ssl_1_ie.txt -> log generated when we connected using ie , but not sucess. I hope this is maximum level of inputs I can provide. Requesting all nginx experts to provide help. Thanks in advance. Regards, Vishal ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org, steve at greengecko.co.nz Sent: Tuesday, May 6, 2014 8:53:56 AM Subject: Re: Issue nginx Hi All, please go through below email and help me resolve issue as i am new to nginx. Regards, Vishal ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org, steve at greengecko.co.nz Sent: Monday, May 5, 2014 1:26:15 PM Subject: Re: Issue nginx Hi Steve, We are getting below error in log file 2014/05/05 07:50:29 [error] 8800#0: *21 upstream prematurely closed connection while reading upstream, client: 14.97.74.48, server: 168.189.9.09, request: "GET /Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085 HTTP/1.1", upstream: "http://168.189.9.09:6400/Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085", host: "168.189.9.09:6401", referrer: "https://168.189.9.09/Login.do" ----- Original Message ----- From: "Vishal Mestri" To: nginx at nginx.org, steve at greengecko.co.nz Sent: Monday, May 5, 2014 12:47:04 PM Subject: Re: Issue nginx Thanks for your immediate reply steve. But I just want to know whether we can log request and response headers in nginx, for more debugging. As far as code is concerned , we are using , it is running if we don't use nginx. Only when we use proxy pass, it is failing. One important point , we are using ajax on port 6400 and on port 6401 we have done ssl enablement. Can you please help me to debug this issue. Thanks in advance. Regards, Vishal ----- Original Message ----- From: "Steve Holdoway" To: nginx at nginx.org Sent: Monday, May 5, 2014 12:45:27 PM Subject: Re: Issue nginx Hi, On Mon, 2014-05-05 at 12:21 +0530, Vishal Mestri wrote: > I am facing issue with nginx. > > > Its working on Chrome. But not on IE10 and firefox. 
> > > I am using proxy pass, please find attached nginx.conf file attached > along with. > > > > > OUTPUT OF command: > [root at erttrepsg63 ~]# /usr/sbin/nginx -V > nginx version: nginx/1.6.0 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_auth_request_module > --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 > --with-http_spdy_module --with-cc-opt='-O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > > > Thanks & Regards, > > Vishal Mestri I'd've said that the code being delivered to each browser is exactly the same ( ok, not necessarily true but you have to force it to be different ), so would look at the accuracy of the html / javascript on the page as a first step. Try a basic hello world page and see if that is displayed ok... Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ssl_1_ie.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ssl_1_chrome.txt URL: From nginx-forum at nginx.us Tue May 6 15:03:56 2014 From: nginx-forum at nginx.us (dcaillibaud) Date: Tue, 06 May 2014 11:03:56 -0400 Subject: misunderstood regex location In-Reply-To: <59F4F1B5-8E2B-43F4-8DBF-91B8F47965CE@sysoev.ru> References: <59F4F1B5-8E2B-43F4-8DBF-91B8F47965CE@sysoev.ru> Message-ID: <89c7e266a708d940ae3043ed8da94bd4.NginxMailingListEnglish@forum.nginx.org> I understand my mistake, thanks to both of you. May I suggest to insist on this in http://nginx.org/en/docs/http/ngx_http_core_module.html#location, with a remark on the fact that ^~ is not usable with a regex, for example with syntax: location [ = | ^~ ] uri { ... } location ~ | ~* regexUri { ... } location @name { ... 
} and adding there Igor advice (repeated many times in this forum like in http://forum.nginx.org/read.php?2,247529,247718 or http://forum.nginx.org/read.php?2,174517,174534#msg-174534), that it's a better practice to have a prefix location list at first level then put regex location within Reviewing all my server definitions, I'm starting to imagine what a nightmare could be, with on the first level a mix of standard prefix location, ^~ and ~ ^/ ;-) I was also wondering wich has precedence for /images/foo.jpeg in these case, 1) (I guess D, because ^/ before prefix shortcut regex) location ^~ /images/ { location ~* \.(gif|jpg|jpeg)$ { [ configuration D ] } } location ~* \.(gif|jpg|jpeg)$ { [ configuration E ] } 2) (I guess E but I'm not so sure), this can also be a pitfall example (how complicating things can lead to unexpected behaviour) location /images/ { location ~* \.(gif|jpg|jpeg)$ { [ configuration D ] } } location ~* \.(gif|jpg|jpeg)$ { [ configuration E ] } 2bis) this makes 2) understandable (more predictable) location /images/ { location ~* \.(gif|jpg|jpeg)$ { [ configuration D ] } location ~* \.svg$ { [ configuration D2 ] } } location / { location ~* \.(gif|jpg|jpeg|svg|pdf)$ { // static stuff not in /images/ [ configuration E ] } } PS: is there a way to preserve space in this forum to make code easier to read ? (I tried
,  and [code])

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249846,249868#msg-249868


From reallfqq-nginx at yahoo.fr  Tue May  6 17:25:09 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 6 May 2014 19:25:09 +0200
Subject: misunderstood regex location
In-Reply-To: <89c7e266a708d940ae3043ed8da94bd4.NginxMailingListEnglish@forum.nginx.org>
References: <59F4F1B5-8E2B-43F4-8DBF-91B8F47965CE@sysoev.ru>
 <89c7e266a708d940ae3043ed8da94bd4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On Tue, May 6, 2014 at 5:03 PM, dcaillibaud  wrote:

> I understand my mistake, thanks to both of you.
>
> May I suggest to insist on this in
> http://nginx.org/en/docs/http/ngx_http_core_module.html#location, with a
> remark on the fact that ^~ is not usable with a regex, for example with
>
> syntax:
> location [ = | ^~ ] uri { ... }
> location ~ | ~*  regexUri { ... }
> location @name { ... }
>
> and adding there Igor advice (repeated many times in this forum like in
> http://forum.nginx.org/read.php?2,247529,247718 or
> http://forum.nginx.org/read.php?2,174517,174534#msg-174534), that it's a
> better practice to have a prefix location list at first level then put
> regex
> location within
>

The docs must stay as concise as possible, so I do not think Igor's advice
should be there, even if it is very interesting information.
A 'best practice / performance enhancement' page could host it, though.

That written, I think that yes, the docs could be clearer, by inserting:
"If the longest matching prefix location has the "^~" modifier then regular
expressions are not checked."
inside the third paragraph, which takes the process step-by-step but is
incomplete (one could understand that every request processes regexes).
This sentence, isolated at the end of the section, produces a leap backward
in the reasoning flow and loses the reader in his attempt to understand how
things work. Definitely not user-friendly.



>
> Reviewing all my server definitions, I'm starting to imagine what a
> nightmare could be, with on the first level a mix of standard prefix
> location, ^~ and ~ ^/ ;-)
>
> I was also wondering wich has precedence for /images/foo.jpeg in these
> case,
>
>
> 1) (I guess D, because ^/ before prefix shortcut regex)
>

> location ^~ /images/ {
>     location ~* \.(gif|jpg|jpeg)$ {
>       [ configuration D ]
>     }
> }
>
> location ~* \.(gif|jpg|jpeg)$ {
>     [ configuration E ]
> }
>

Right. That is, however, exactly the example provided by the docs:
"If the longest matching prefix location has the "^~" modifier then regular
expressions are not checked."
The regex is not even checked against the input request string.


> 2) (I guess E but I'm not so sure), this can also be a pitfall example (how
> complicating things can lead to unexpected behaviour)
>
> location /images/ {
>     location ~* \.(gif|jpg|jpeg)$ {
>       [ configuration D ]
>     }
> }
>
> location ~* \.(gif|jpg|jpeg)$ {
>     [ configuration E ]
> }
>

"Among [prefix locations], the location with the longest matching prefix is
selected and remembered. Then regular expressions are checked, in the order
of their appearance in the configuration file."
The prefix string matches, it is remembered, but then the regex is checked
and also matches.
"The search of regular expressions terminates on the first match, and the
corresponding configuration is used."
The request processing stops on the first matching regex.


> 2bis) this makes 2) understandable (more predictable)
>
> location /images/ {
>     location ~* \.(gif|jpg|jpeg)$ {
>       [ configuration D ]
>     }
>     location ~* \.svg$ {
>       [ configuration D2 ]
>     }
> }
>
> location / {
>   location ~* \.(gif|jpg|jpeg|svg|pdf)$ {
>       // static stuff not in /images/
>       [ configuration E ]
>   }
> }
>

Right IMHO.
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Tue May  6 18:39:18 2014
From: nginx-forum at nginx.us (dcaillibaud)
Date: Tue, 06 May 2014 14:39:18 -0400
Subject: misunderstood regex location
In-Reply-To: 
References: 
Message-ID: <42e1c0bb6230396e587d51049ebe2ac8.NginxMailingListEnglish@forum.nginx.org>

> > 2) (I guess E but I'm not so sure), this can also be a pitfall
> example (how
> > complicating things can lead to unexpected behaviour)
> >
> > location /images/ {
> >     location ~* \.(gif|jpg|jpeg)$ {
> >       [ configuration D ]
> >     }
> > }
> >
> > location ~* \.(gif|jpg|jpeg)$ {
> >     [ configuration E ]
> > }


> The request processing stops on the first matching regex.

Which is ?

I guess E, but I'm not sure, since the doc doesn't say whether the inner regex
is read before or after the next first-level one. I understand that all
first-level locations are evaluated to see if one branch needs to be explored
deeper, but it's not so obvious.

Anyway, this example is more of a school case and should be avoided in real
life, so the answer is not that important. If I need to answer such a question,
I'm probably doing something the wrong way.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249846,249873#msg-249873


From nginx-forum at nginx.us  Tue May  6 18:53:09 2014
From: nginx-forum at nginx.us (dcaillibaud)
Date: Tue, 06 May 2014 14:53:09 -0400
Subject: misunderstood regex location
In-Reply-To: <42e1c0bb6230396e587d51049ebe2ac8.NginxMailingListEnglish@forum.nginx.org>
References: 
 <42e1c0bb6230396e587d51049ebe2ac8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2873d5810a3485662701120c57b8cb11.NginxMailingListEnglish@forum.nginx.org>

this sentence is not so clear...

> I understand
> that all first level are evaluated to see if one branch need to be
> explored deeper, but it's not so obvious. 

...I mean:

I understand that all first-level selectors are compared to see which block
needs to be evaluated deeper, but it's not so obvious.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249846,249874#msg-249874


From jeroen.ooms at stat.ucla.edu  Tue May  6 22:16:09 2014
From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms)
Date: Tue, 6 May 2014 15:16:09 -0700
Subject: How to limit POST request per ip ?
In-Reply-To: <20140405220755.GV34696@mdounin.ru>
References: <499900be49fc454f4c05473093a2b793.NginxMailingListEnglish@forum.nginx.org>
 <20140405220755.GV34696@mdounin.ru>
Message-ID: 

On Sat, Apr 5, 2014 at 3:07 PM, Maxim Dounin  wrote:
>
> we need something like
>
>     limit_req_zone $limit zone=one:10m rate=1r/s;
>
> where the $limit variables is empty for non-POST requests (as we
> don't want to limit them), and evaluates to $binary_remote_addr
> for POST requests.

A follow-up question: are requests that hit the cache counted in the
limit_req_zone? I would like to enforce a limit on the POST requests
that actually hit the back-end; I don't mind additional requests that
hit the cache.
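
(For later readers of the archive: a minimal sketch of the map Maxim describes,
with the zone size, rate and burst values purely illustrative -- it limits only
POST requests, since requests with an empty key are not accounted. Whether a
request served from the cache is still counted is exactly the open question
above.)

    map $request_method $limit {
        default "";
        POST    $binary_remote_addr;
    }

    limit_req_zone $limit zone=one:10m rate=1r/s;

    server {
        location / {
            limit_req  zone=one burst=5;
            proxy_pass http://backend;
        }
    }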


From agentzh at gmail.com  Tue May  6 22:28:52 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 6 May 2014 15:28:52 -0700
Subject: nginx rewrites $request_method on error
In-Reply-To: <4c12012bf15ed0f246e495bb0369f694.NginxMailingListEnglish@forum.nginx.org>
References: <20140430140923.GF34696@mdounin.ru>
 <4c12012bf15ed0f246e495bb0369f694.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello!

On Tue, May 6, 2014 at 2:58 AM, kay wrote:
> Actually I suppose that this is a bug, as it is not possible to make filter
> by $request_method
>
> Also some external modules like
> https://github.com/openresty/memc-nginx-module have strange behavior when
> client passes TRACE and nginx gets GET.
>

Could you elaborate on your problem and use case? Preferably with a
minimal but still complete example to demonstrate your intention and
problem. As the author of the ngx_memc module, I'd like to have a
look (and find a solution for you) :)

Regards,
-agentzh


From nginx-forum at nginx.us  Wed May  7 01:49:09 2014
From: nginx-forum at nginx.us (sam.gu)
Date: Tue, 06 May 2014 21:49:09 -0400
Subject: nginx 1.4.2 [error] 2516#12636:  CreateFile()
Message-ID: 

I added the cache configuration to my nginx.conf, like this:

location ~ .*\.(js|css|ico|jpg|jpeg|png|gif)$
{
    expires 7d;
    root cache;
    proxy_store on;
}

But I couldn't log in to my app. The error log is:
2014/05/07 09:27:13 [error] 2516#12636: *11 CreateFile()
"D:/myapp/nginx-1.4.2/cache/myapp/static/msgengine/msgengine.js" failed (3:
The system cannot find the path specified), client: 192.168.3.112, server:
0.0.0.0, request: "GET /myapp/static/msgengine/msgengine.js HTTP/1.1", host:
"10.1.1.7", referrer: "http://10.1.1.7/myapp/index.action"

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249879,249879#msg-249879


From nginx-forum at nginx.us  Wed May  7 05:38:04 2014
From: nginx-forum at nginx.us (Kirill K.)
Date: Wed, 07 May 2014 01:38:04 -0400
Subject: using $upstream* variables inside map directive
Message-ID: <4e056bd7463a363b3f17844a7741df18.NginxMailingListEnglish@forum.nginx.org>

Hello,
I'm trying to avoid caching of small responses from upstreams using map:

map $upstream_http_content_length $dontcache {
    default 0;
    ~^\d\d$ 1;
    ~^\d$   1;
}

Unfortunately, nginx seems to ignore $upstream_* variables at the map
processing stage, hence variables like $upstream_http_content_length or
$upstream_response_length stay empty when the map directive is processed (this
can be observed in the debug log as the "http map started" message). If I use
non-upstream-related variables, the map works as expected.

Question: is there any way to use $upstream_* vars inside the map directive,
or maybe someone can offer an alternative way to detect a small upstream
response in order to bypass the cache?

Thank you.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249880,249880#msg-249880


From nginx-forum at nginx.us  Wed May  7 06:30:57 2014
From: nginx-forum at nginx.us (kay)
Date: Wed, 07 May 2014 02:30:57 -0400
Subject: nginx rewrites $request_method on error
In-Reply-To: 
References: 
Message-ID: <82d7deabcc08ccfcad92aa0611fc5688.NginxMailingListEnglish@forum.nginx.org>

Sure, you can use nginx.conf from my previous message and this server
config:

server {
    listen       80;

    rewrite_by_lua '
        local res = ngx.location.capture("/memc?cmd=get&key=test")
        ngx.say(res.body)
    ';

    location / {
        root /etc/nginx/www;
    }

    location /memc {
        internal;
        access_log              /var/log/nginx/memc_log main;
        log_subrequest          on;
        set                     $memc_key $arg_key;
        set                     $memc_cmd $arg_cmd;
        memc_cmds_allowed       get;
        memc_pass               vmembase-2:11211;
    }
}

If you enable "error_page 405 /error.html;" nginx will hang on "curl -X
TRACE localhost"
If you disable "error_page 405 /error.html;" - everything will be fine.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,249881#msg-249881


From nginx-forum at nginx.us  Wed May  7 06:48:01 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Wed, 07 May 2014 02:48:01 -0400
Subject: nginx 1.4.2 [error] 2516#12636: CreateFile()
In-Reply-To: 
References: 
Message-ID: <799d8926dbd89bba410473a396bc7a61.NginxMailingListEnglish@forum.nginx.org>

sam.gu Wrote:
-------------------------------------------------------
>               root cache;

> "D:/myapp/nginx-1.4.2/cache/myapp/static/msgengine/msgengine.js"

A relative root is resolved against the nginx prefix (root = nginx base +
root value), so it looks for your file exactly where you told it to look.
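
For comparison, the documented proxy_store pattern pairs a writable store path
with an internal fetch location. A minimal sketch -- the upstream name
"backend" and the /data/static paths are placeholders, not taken from the
original config:

    location ~ \.(js|css|ico|jpg|jpeg|png|gif)$ {
        root        /data/static;          # must exist and be writable by nginx
        expires     7d;
        error_page  404 = @fetch;          # miss on disk -> fetch from upstream
    }

    location @fetch {
        internal;
        proxy_pass         http://backend;
        proxy_store        on;             # save the fetched file under root
        proxy_store_access user:rw group:rw all:r;
        root               /data/static;   # same tree we serve from above
    }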

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249879,249882#msg-249882


From nginx-forum at nginx.us  Wed May  7 07:25:17 2014
From: nginx-forum at nginx.us (dompz)
Date: Wed, 07 May 2014 03:25:17 -0400
Subject: Debugging symbols for nginx-1.4.7-1.el6.ngx.x86_64.rpm
In-Reply-To: <0F585B6D-7C3B-4C22-BEE6-4C196B99F8E7@nginx.com>
References: <0F585B6D-7C3B-4C22-BEE6-4C196B99F8E7@nginx.com>
Message-ID: <63bceb85a65664b5383f865e7c1f88d8.NginxMailingListEnglish@forum.nginx.org>

Hi Sergey,

I just checked the repository again and there is indeed a debuginfo package
available now for the latest build (1.6). This really helps. 

Thanks a lot,
Dominik

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248527,249884#msg-249884


From contact at jpluscplusm.com  Wed May  7 10:31:10 2014
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Wed, 7 May 2014 11:31:10 +0100
Subject: using $upstream* variables inside map directive
In-Reply-To: <4e056bd7463a363b3f17844a7741df18.NginxMailingListEnglish@forum.nginx.org>
References: <4e056bd7463a363b3f17844a7741df18.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On 7 May 2014 06:38, Kirill K.  wrote:
> Hello,
> I'm trying to avoid caching of small responses from upstreams using map:
> map $upstream_http_content_length $dontcache {
> default 0;
> ~^\d\d$ 1;
> ~^\d$ 1;
> }
>
> Unfortunatelly, nginx seems to ignore $upstream* variables at the map
> processing stage, hence variables like $upstream_http_content_length or
> $upstream_response_length stay empty when map directive is processed (this
> can be observed in debug log as "http map started" message). In case I use
> non-upstream related variables, a map works as expected.
>
> Question: is there any way to use $upstream* vars inside the map directive,
> or maybe someone can offer alternative way to detect small upstream response
> in order to bypass cache?

I don't know exactly how to achieve what you're trying to do, but I
seem to recall mention on this list that a map's value gets stuck
(per-request) the first time it's evaluated. I might be
misremembering, but this does ring a bell.

So - is your map somehow being evaluated /before/ the upstream vars
are available? Does your config perhaps cause it to be evaluated when
the initial request arrives, to see if the response should be served
from cache; then the request is proxy_pass'd, then after receiving a
response the caching bypass config is examined but to no avail as the
map has "stuck" with the initially set value?

Sorry I can't be more specific - I'm sure others can help more definitively!

J


From vishal.mestri at cloverinfotech.com  Wed May  7 10:38:17 2014
From: vishal.mestri at cloverinfotech.com (Vishal Mestri)
Date: Wed, 07 May 2014 16:08:17 +0530 (IST)
Subject: Issue nginx - ajax
In-Reply-To: <9ca69da5-d8a9-4711-923e-cfc1330a87cb@mail.cloverinfotech.com>
Message-ID: 

Hi All, 


Today I removed the SSL configuration and tested with plain HTTP settings only.

That is, we receive requests on port 443 and forward them to port 80 using nginx.
Further, we also receive requests on port 6401 and forward them to port 6400 using nginx.

The application worked successfully in Chrome, but we faced the issue in IE again.

Please find attached the logs for IE as well as Chrome, generated by nginx.

I guess this issue is related to some HTTP header not being passed, which might be causing the issue.

Note: communication from 443 to 80 is working fine. But when we use ajax to call port 6401, we face the issue.

Please let me know if anyone needs any more details.


I request all Nginx master/experts to help me. 


Regards, 
Vishal 

----- Original Message -----

From: "Vishal Mestri"  
To: nginx at nginx.org, steve at greengecko.co.nz 
Sent: Tuesday, May 6, 2014 5:48:21 PM 
Subject: Re: Issue nginx - ajax 


Hi All, 


I have enabled debug option on the nginx now. 


After that I have connected using chrome , its working fine. 
But I am facing issue on IE. 


I have attached along with two different error logs with debug enabled. 
ssl_1_chrome.txt -> log generated when we connected using chrome successfully. 
ssl_1_ie.txt -> log generated when we connected using ie , but not sucess. 


I hope this is maximum level of inputs I can provide. 
Requesting all nginx experts to provide help. 


Thanks in advance. 




Regards, 
Vishal 

----- Original Message -----

From: "Vishal Mestri"  
To: nginx at nginx.org, steve at greengecko.co.nz 
Sent: Tuesday, May 6, 2014 8:53:56 AM 
Subject: Re: Issue nginx 


Hi All, 


please go through below email and help me resolve issue as i am new to nginx. 




Regards, 
Vishal 

----- Original Message -----

From: "Vishal Mestri"  
To: nginx at nginx.org, steve at greengecko.co.nz 
Sent: Monday, May 5, 2014 1:26:15 PM 
Subject: Re: Issue nginx 


Hi Steve, 


We are getting below error in log file 



2014/05/05 07:50:29 [error] 8800#0: *21 upstream prematurely closed connection while reading upstream, client: 14.97.74.48, server: 168.189.9.09, request: "GET /Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085 HTTP/1.1", upstream: "http://168.189.9.09:6400/Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399276681085", host: "168.189.9.09:6401", referrer: "https://168.189.9.09/Login.do" 






----- Original Message -----

From: "Vishal Mestri"  
To: nginx at nginx.org, steve at greengecko.co.nz 
Sent: Monday, May 5, 2014 12:47:04 PM 
Subject: Re: Issue nginx 


Thanks for your immediate reply steve. 


But I just want to know whether we can log request and response headers in nginx, for more debugging. 


As far as code is concerned , we are using , it is running if we don't use nginx. 
Only when we use proxy pass, it is failing. 


One important point , we are using ajax on port 6400 and on port 6401 we have done ssl enablement. 




Can you please help me to debug this issue. 


Thanks in advance. 


Regards, 
Vishal 

----- Original Message -----

From: "Steve Holdoway"  
To: nginx at nginx.org 
Sent: Monday, May 5, 2014 12:45:27 PM 
Subject: Re: Issue nginx 

Hi, 

On Mon, 2014-05-05 at 12:21 +0530, Vishal Mestri wrote: 
> I am facing issue with nginx. 
> 
> 
> Its working on Chrome. But not on IE10 and firefox. 
> 
> 
> I am using proxy pass, please find attached nginx.conf file attached 
> along with. 
> 
> 
> 
> 
> OUTPUT OF command: 
> [root at erttrepsg63 ~]# /usr/sbin/nginx -V 
> nginx version: nginx/1.6.0 
> built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) 
> TLS SNI support enabled 
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx 
> --conf-path=/etc/nginx/nginx.conf 
> --error-log-path=/var/log/nginx/error.log 
> --http-log-path=/var/log/nginx/access.log 
> --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock 
> --http-client-body-temp-path=/var/cache/nginx/client_temp 
> --http-proxy-temp-path=/var/cache/nginx/proxy_temp 
> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp 
> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp 
> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx 
> --group=nginx --with-http_ssl_module --with-http_realip_module 
> --with-http_addition_module --with-http_sub_module 
> --with-http_dav_module --with-http_flv_module --with-http_mp4_module 
> --with-http_gunzip_module --with-http_gzip_static_module 
> --with-http_random_index_module --with-http_secure_link_module 
> --with-http_stub_status_module --with-http_auth_request_module 
> --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 
> --with-http_spdy_module --with-cc-opt='-O2 -g -pipe 
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
> --param=ssp-buffer-size=4 -m64 -mtune=generic' 
> 
> 
> 
> Thanks & Regards, 
> 
> Vishal Mestri 

I'd've said that the code being delivered to each browser is exactly the same ( ok, not necessarily true but you have to force it to be different ), so would look at the accuracy of the html / javascript on the page as a first step. 

Try a basic hello world page and see if that is displayed ok... 

Steve 
-- 
Steve Holdoway BSc(Hons) MIITP 
http://www.greengecko.co.nz 
Linkedin: http://www.linkedin.com/in/steveholdoway 
Skype: sholdowa 

_______________________________________________ 
nginx mailing list 
nginx at nginx.org 
http://mailman.nginx.org/mailman/listinfo/nginx 





_______________________________________________ 
nginx mailing list 
nginx at nginx.org 
http://mailman.nginx.org/mailman/listinfo/nginx 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: http_1_ie.txt
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: http_1_chrome.txt
URL: 

From nginx-forum at nginx.us  Wed May  7 11:44:59 2014
From: nginx-forum at nginx.us (Kirill K.)
Date: Wed, 07 May 2014 07:44:59 -0400
Subject: using $upstream* variables inside map directive
In-Reply-To: 
References: 
Message-ID: <3ac930dd2873c068c31ac4a585741e6b.NginxMailingListEnglish@forum.nginx.org>

Probably that's the case, and I'm not sure if there's a way to use map
inside upstream {...} or other context apart from http {...}, which makes
your theory sound correct.
What confuses me most: I googled a bit, and using map w/
$upstream_response_length is the most common way offered to avoid caching of
small (or zero-sized) responses, yet it just does not work in a real life
scenario...

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249880,249892#msg-249892


From nginx-forum at nginx.us  Wed May  7 11:47:30 2014
From: nginx-forum at nginx.us (nginxsantos)
Date: Wed, 07 May 2014 07:47:30 -0400
Subject: nginx with userland tcp
Message-ID: 

Has anyone started looking at nginx with a userspace TCP stack? Is there any
open-source TCP stack available which can work well with nginx?
Sandstorm seems to work in user space and claims to handle more CPS
than nginx.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249893,249893#msg-249893


From ru at nginx.com  Wed May  7 12:44:24 2014
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 7 May 2014 16:44:24 +0400
Subject: using $upstream* variables inside map directive
In-Reply-To: <4e056bd7463a363b3f17844a7741df18.NginxMailingListEnglish@forum.nginx.org>
References: <4e056bd7463a363b3f17844a7741df18.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140507124424.GB46973@lo0.su>

On Wed, May 07, 2014 at 01:38:04AM -0400, Kirill K. wrote:
> Hello,
> I'm trying to avoid caching of small responses from upstreams using map:
> map $upstream_http_content_length $dontcache {
> default 0;
> ~^\d\d$ 1;
> ~^\d$ 1;
> }
> 
> Unfortunatelly, nginx seems to ignore $upstream* variables at the map
> processing stage, hence variables like $upstream_http_content_length or
> $upstream_response_length stay empty when map directive is processed (this
> can be observed in debug log as "http map started" message). In case I use
> non-upstream related variables, a map works as expected.
> 
> Question: is there any way to use $upstream* vars inside the map directive,
> or maybe someone can offer alternative way to detect small upstream response
> in order to bypass cache?

If you use $dontcache with proxy_cache_bypass, then it's expected
behavior.  At the time proxy_cache_bypass is evaluated, there's no
response yet, so the $upstream_http_* do not exist.

If you try to use $dontcache with proxy_no_cache ONLY, it'll work,
because the latter is evaluated _after_ obtaining a response.

If you use it both with proxy_cache_bypass and proxy_no_cache,
please realize that using it with proxy_cache_bypass makes no
sense, and then the fact that "map" creates the so-called
cacheable variables plays its role.

I have a patch for "map" that makes map variables "volatile".
If you absolutely need such a "map" behavior, I can send it
to you for testing, but better limit the use of $upstream_http_*
to only proxy_no_cache.
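
In other words (the zone name is just an example):

    proxy_cache          my_zone;
    proxy_no_cache       $dontcache;   # evaluated after the response: $upstream_* is set
    # proxy_cache_bypass $dontcache;   # evaluated before proxying: $upstream_* is still empty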


From nginx-forum at nginx.us  Wed May  7 12:53:56 2014
From: nginx-forum at nginx.us (Kirill K.)
Date: Wed, 07 May 2014 08:53:56 -0400
Subject: using $upstream* variables inside map directive
In-Reply-To: <20140507124424.GB46973@lo0.su>
References: <20140507124424.GB46973@lo0.su>
Message-ID: <444a8a24f30826fa27964b0cc27cb4e6.NginxMailingListEnglish@forum.nginx.org>

Thanks, Ruslan,
Thing is, I tried to "debug" whether $dontcache is being set at all by
exposing it via response headers (along with content-length), and it shows
that $upstream_response_length is ignored by map completely, i.e. no matter
where I use $dontcache, it will never get any value different from default
(i.e. 0). Even though $upstream_response_length  is validated correctly (and
can be exposed in headers), the map directive just ignores it.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249880,249895#msg-249895


From reallfqq-nginx at yahoo.fr  Wed May  7 13:11:34 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Wed, 7 May 2014 15:11:34 +0200
Subject: misunderstood regex location
In-Reply-To: <42e1c0bb6230396e587d51049ebe2ac8.NginxMailingListEnglish@forum.nginx.org>
References: 
 <42e1c0bb6230396e587d51049ebe2ac8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On Tue, May 6, 2014 at 8:39 PM, dcaillibaud  wrote:

> > > 2) (I guess E but I'm not so sure), this can also be a pitfall
> > example (how
> > > complicating things can lead to unexpected behaviour)
> > >
> > > location /images/ {
> > >     location ~* \.(gif|jpg|jpeg)$ {
> > >       [ configuration D ]
> > >     }
> > > }
> > >
> > > location ~* \.(gif|jpg|jpeg)$ {
> > >     [ configuration E ]
> > > }
>
>
> > The request processing stops on the first matching regex.
>
> Which is ?
>

Every request is processed through one location block at a given level.
Said otherwise, every time there is a choice to be made between several
location blocks, only one is picked and entered. That process is
recursive.

In the provided example, there is only one regex at the root of the tree.
Since the docs I quoted in my last message say that regexes have higher
precedence, E will be chosen over D if both match. The regex inside the
prefix location '/images/' has no impact/use in the current case since it
is on the 2nd level of the tree, not at its root, which was being considered.
One step at a time :o)

Since regexes have higher precedence, and since it is preferred to use
prefix locations (because they are more 'natural' and efficient, speaking
about performance), it is advised either:
- to avoid mixing prefix and regex locations at the same level, thus
encapsulating regex locations inside prefix ones first, or
- to use the ^~ modifier to force a prefix location to be chosen over the
regex ones.
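
A compact restatement of those two alternative layouts, reusing the examples
already given in this thread (pick one; the expires values are illustrative):

    # option 1: no regex at the first level; regexes only live inside prefixes
    location /images/ {
        location ~* \.(gif|jpg|jpeg)$ { expires 30d; }
    }
    location / {
        location ~* \.(gif|jpg|jpeg|pdf)$ { expires 24h; }
    }

    # option 2: ^~ stops the first-level regex search for that prefix
    location ^~ /images/ {
        location ~* \.(gif|jpg|jpeg)$ { expires 30d; }
    }
    location ~* \.(gif|jpg|jpeg)$ { expires 24h; }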
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr  Wed May  7 13:35:02 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Wed, 7 May 2014 15:35:02 +0200
Subject: Issue nginx - ajax
In-Reply-To: 
References: <9ca69da5-d8a9-4711-923e-cfc1330a87cb@mail.cloverinfotech.com>
 
Message-ID: 

>
> I request all Nginx master/experts to help me.
>

That looks like an odd way to *ask* for help.
If you want to *request* help, consider getting paid support. But even then,
that phrasing is rude...

What the error says is that the '*backend* closed the connection
prematurely'... You should look into what the backend does when processing
the failing request.

Based on the logs, for that request, nginx received different headers
depending on using either IE or Chrome:
*IE*
"GET /Stream.html?s=0&d=%22168.189.9.09%22&p=450&t=1399381478365 HTTP/1.0
Host: 168.189.9.09:6400
Connection: close
Accept: text/html, application/xhtml+xml, */*
Referer:
http://168.189.9.09:443/Login.do;jsessionid=23A0D509FE7135EC4AB85B757E8BC62E
Accept-Language: en-US
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like
Gecko
Accept-Encoding: gzip, deflate
DNT: 1

"

*Chrome*
"GET /Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399381806838 HTTP/1.0
Host: 168.189.9.09:6400
Connection: close
Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML,
like Gecko) Chrome/34.0.1847.131 Safari/537.36
Referer: http://168.189.9.09:443/Login.do
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,id;q=0.6
Cookie: JSESSIONID=4965E0227151FBACCBF61DB65CDF9F9A

"

Cookie? :o)
Look into your backend application, which does not answer under some
conditions; the problem does not seem to come from nginx, which merely
forwards requests to your backend.
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ru at nginx.com  Wed May  7 14:39:48 2014
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 7 May 2014 18:39:48 +0400
Subject: using $upstream* variables inside map directive
In-Reply-To: <444a8a24f30826fa27964b0cc27cb4e6.NginxMailingListEnglish@forum.nginx.org>
References: <20140507124424.GB46973@lo0.su>
 <444a8a24f30826fa27964b0cc27cb4e6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140507143948.GC46973@lo0.su>

On Wed, May 07, 2014 at 08:53:56AM -0400, Kirill K. wrote:
> Thanks, Ruslan,
> Thing is, I tried to "debug" whether $dontcache is being set at all by
> exposing it via response headers (along with content-length), and it shows
> that $upstream_response_length is ignored by map completely, i.e. no matter
> where I use $dontcache, it will never get any value different from default
> (i.e. 0). Even though $upstream_response_length  is validated correctly (and
> can be exposed in headers), the map directive just ignores it.

I tested that your map works when used ONLY in proxy_no_cache,
and it indeed DTRT: responses with content length less than
100 aren't cached.

Here's the config snippet:

: http {
:     proxy_cache_path proxy_cache keys_zone=proxy_cache:10m;
: 
:     map $upstream_http_content_length $dontcache {
:         default 0;
:         ~^\d\d$ 1;
:         ~^\d$ 1;
:     }
: 
:     server {
:         listen 8000;
: 
:         location / {
:             proxy_pass http://127.0.0.1:8001;
:             proxy_cache proxy_cache;
:             proxy_cache_valid 300m;
:             proxy_no_cache $dontcache;
: 
:             add_header X-DontCache $dontcache;
:         }
:     }
: 
:     server {
:         listen 8001;
: 
:         return 200 "ok";
:     }
: }

$ curl -i http://127.0.0.1:8000/test
HTTP/1.1 200 OK
Server: nginx/1.7.1
Date: Wed, 07 May 2014 14:34:19 GMT
Content-Type: text/plain
Content-Length: 2
Connection: keep-alive
X-DontCache: 1

ok

And there will be no file in "proxy_cache" directory.


From thijskoerselman at gmail.com  Wed May  7 14:51:35 2014
From: thijskoerselman at gmail.com (Thijs Koerselman)
Date: Wed, 7 May 2014 16:51:35 +0200
Subject: Cache revalidate modified timezone mismatch
Message-ID: 

I have a backend returning a last modified header in CEST.

I'm using a 1sec proxy that revalidates as described here:
http://whitequark.org/blog/2014/04/05/page-caching-with-nginx/

My problem is that when nginx makes the request to the backend it changes
CEST into GMT, without adjusting the time. Therefore the backend returns a
304 for 1sec + 1 hour, instead of 1sec :(

I tried to do something like set TZ and then start nginx, but this doesn't
seem to make a difference.

export TZ="Europe/Amsterdam"
sudo nginx

What's going on and how can I fix this?

Thijs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thijskoerselman at gmail.com  Wed May  7 15:06:20 2014
From: thijskoerselman at gmail.com (Thijs Koerselman)
Date: Wed, 7 May 2014 17:06:20 +0200
Subject: Cache revalidate modified timezone mismatch
In-Reply-To: 
References: 
Message-ID: 

I'm using version 1.5.12 btw.


On Wed, May 7, 2014 at 4:51 PM, Thijs Koerselman
wrote:

> I have a backend returning a last modified header in CEST.
>
> I'm using a 1sec proxy that revalidates as described here:
> http://whitequark.org/blog/2014/04/05/page-caching-with-nginx/
>
> My problem is that when nginx makes the request to the backend it changes
> CEST into GMT, without adjusting the time. Therefore the backend returns a
> 304 for 1sec + 1 hour, instead of 1sec :(
>
> I tried to do something like set TZ and then start nginx, but this doesn't
> seem to make a difference.
>
> export TZ="Europe/Amsterdam"
> sudo nginx
>
> What's going on and how can I fix this?
>
> Thijs
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at jpluscplusm.com  Wed May  7 15:34:15 2014
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Wed, 7 May 2014 16:34:15 +0100
Subject: Cache revalidate modified timezone mismatch
In-Reply-To: 
References: 
Message-ID: 

On 7 May 2014 15:51, Thijs Koerselman  wrote:
> I have a backend returning a last modified header in CEST.

RFC 2616 says "All HTTP date/time stamps MUST be represented in
Greenwich Mean Time (GMT), without exception".

I'm not very surprised nginx isn't doing what you expect. Fix your backend.

> What's going on

I suspect nginx is merely munging the header into something that
matches the HTTP spec, and (arguably correctly) isn't looking at the
actual meaning of the header being modified.

> and how can I fix this?

Fix your backend.
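
In practice that means the backend should emit the timestamp already converted
to GMT, e.g. something like:

    Last-Modified: Wed, 07 May 2014 13:51:35 GMT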

J


From nginx-forum at nginx.us  Wed May  7 18:05:26 2014
From: nginx-forum at nginx.us (rodrigo.aiello)
Date: Wed, 07 May 2014 14:05:26 -0400
Subject: Nginx receiving bytes from Amazon ELB (Performance Issue)
Message-ID: <7aab890c637b1ff9801f33991cf5e5e2.NginxMailingListEnglish@forum.nginx.org>

I have a service in my Java application that receives the id of an image
plus a height and width. I resize the image according to the provided
dimensions and return the bytes. As I am using nginx in front of the
application server (Tomcat), requests are taking too long to process because
nginx is on a different server than Tomcat. Nginx receives the bytes and then
delivers them. I wonder if anyone has run into this problem and found a
better solution.

Infrastructure (Amazon)

1 EC2 Micro (Nginx)

1 Elastic Load Balance

2 EC2 Medium (Java Application)

Nginx send requests to ELB.

I believe the problem is the transfer of bytes between the load balancer and
the nginx instance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249904,249904#msg-249904


From contact at jpluscplusm.com  Wed May  7 18:11:49 2014
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Wed, 7 May 2014 19:11:49 +0100
Subject: Nginx receiving bytes from Amazon ELB (Performance Issue)
In-Reply-To: <7aab890c637b1ff9801f33991cf5e5e2.NginxMailingListEnglish@forum.nginx.org>
References: <7aab890c637b1ff9801f33991cf5e5e2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On 7 May 2014 19:05, rodrigo.aiello  wrote:
> requests are taking too long to process
[snip]
> 1 EC2 Micro (Nginx)

I found your problem. Micro instances are a false economy. Stop using them.


From nginx-forum at nginx.us  Wed May  7 18:17:41 2014
From: nginx-forum at nginx.us (rodrigo.aiello)
Date: Wed, 07 May 2014 14:17:41 -0400
Subject: Nginx receiving bytes from Amazon ELB (Performance Issue)
In-Reply-To: 
References: 
Message-ID: <8f84aa505a069d4ca2c3fc592e2a8a0c.NginxMailingListEnglish@forum.nginx.org>

Jonathan Matthews Wrote:
-------------------------------------------------------
> On 7 May 2014 19:05, rodrigo.aiello  wrote:
> > requests are taking too long to process
> [snip]
> > 1 EC2 Micro (Nginx)
> 
> I found your problem. Micro instances are a false economy. Stop using
> them.
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

I will test with a better EC2 instance. Thank you.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249904,249906#msg-249906


From moseleymark at gmail.com  Wed May  7 18:29:55 2014
From: moseleymark at gmail.com (Mark Moseley)
Date: Wed, 7 May 2014 11:29:55 -0700
Subject: Issue from forum: SSL: error:1408F119:SSL
 routines:SSL3_GET_RECORD:decryption failed or bad record mac
In-Reply-To: 
References: 
 
 
 
Message-ID: 

On Wed, Apr 30, 2014 at 12:55 AM, Lukas Tribus  wrote:

> Hi,
>
>
> >> The fix is already in OpenBSD [4], Debian and Ubuntu will probably ship
> the
> >> patch soon, also see [5] and [6].
> >
> > Oh, cool, that's good news that it's upstream then. Getting the patch
> > to apply is a piece of cake. I was more worried about what would happen
> > for the next libssl update. Hopefully Ubuntu will pick that update up.
> > Thanks!
>
> FYI, debian already ships this since April, 17th:
> https://lists.debian.org/debian-security-announce/2014/msg00083.html
>
> Ubuntu not yet, as it seems.
>


Looks like it's hit Ubuntu now. Since I've updated, I've not seen a single
one of these errors, which is great. I was seeing at least a handful per
hour before, so that's a pretty good sign.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From markus at gekmihesg.de  Wed May  7 18:30:37 2014
From: markus at gekmihesg.de (Markus Weippert)
Date: Wed, 07 May 2014 20:30:37 +0200
Subject: Problem with ECC certificates
In-Reply-To: <53665437.3060004@gekmihesg.de>
References: <53665437.3060004@gekmihesg.de>
Message-ID: <536A7BCD.1080509@gekmihesg.de>

On 04.05.2014 16:52, Markus Weippert wrote:

> I'm having some strange issues using nginx 1.6 with ECC certs.
> Handshakes fail for clients using TLSv1.2 and SNI but only if the
> requested server block is not the default_server.

Had a further look into that. The problem seems to occur if nginx is
built against the openssl shipped with Ubuntu 12.04. The official repository
version of nginx is also affected.
Building nginx against the latest upstream OpenSSL release works as expected.
Also, no problems on Ubuntu 13.10.


From luky-37 at hotmail.com  Wed May  7 18:42:12 2014
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Wed, 7 May 2014 20:42:12 +0200
Subject: Issue from forum: SSL: error:1408F119:SSL
 routines:SSL3_GET_RECORD:decryption failed or bad record mac
In-Reply-To: 
References: ,
 ,
 ,
 ,
 
Message-ID: 

Hi Mark,

> Looks like it's hit Ubuntu now. Since I've updated, I've not seen a  
> single one of these errors, which is great. I was seeing at least a  
> handful per hour before, so that's a pretty good sign. 

Confirmed: USN-2192-1 [1] provides the fix for CVE-2010-5298.



Regards,

Lukas


[1] http://www.ubuntu.com/usn/usn-2192-1/


 		 	   		  

From shuxinyang.oss at gmail.com  Wed May  7 19:38:00 2014
From: shuxinyang.oss at gmail.com (Shuxin Yang)
Date: Wed, 07 May 2014 12:38:00 -0700
Subject: Question about slab allocator
Message-ID: <536A8B98.5030500@gmail.com>

Hi,

    I'm an nginx newbie. I'm reading src/core/ngx_slab.c, and am confused
as to the purpose of NGX_SLAB_PAGE_START.

    As far as I can understand, when allocating a block, the most significant
bit (MSB) of the first page's corresponding ngx_slab_page_s::slab is set to 1,
as we can see from:

    cat -n ngx_slab.c at pristine-1.7-release
    644    page->slab = pages | NGX_SLAB_PAGE_START;

   However, the MSB is cleared later on:
     362         } else if (shift == ngx_slab_exact_shift) {
     363
     364             page->slab = 1;

   So, what is the purpose of NGX_SLAB_PAGE_START (i.e. the MSB of the
slab field)?

   If we really meant to keep the MSB, I guess there is another bug here:
the condition at line 514 would always be true, and hence we would never get
a chance to free the empty page.

   512             page->slab &= ~m;
   513
   514             if (page->slab) {
   515                 goto done;
   516             }
   517
   518             ngx_slab_free_pages(pool, page, 1);

   Thanks
   Shuxin


From nginx-forum at nginx.us  Wed May  7 19:44:05 2014
From: nginx-forum at nginx.us (Kirill K.)
Date: Wed, 07 May 2014 15:44:05 -0400
Subject: using $upstream* variables inside map directive
In-Reply-To: <20140507143948.GC46973@lo0.su>
References: <20140507143948.GC46973@lo0.su>
Message-ID: <63bfa49c262468d9900ac36f996400d4.NginxMailingListEnglish@forum.nginx.org>

Ruslan, you're a hero! 
I just commented the following line in my existing config
#proxy_cache_bypass $dontcache;
and everything works now!

I would never have understood this nginx behaviour without your help; greatly
appreciated.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249880,249912#msg-249912


From thijskoerselman at gmail.com  Wed May  7 20:13:32 2014
From: thijskoerselman at gmail.com (Thijs Koerselman)
Date: Wed, 7 May 2014 22:13:32 +0200
Subject: Cache revalidate modified timezone mismatch
In-Reply-To: 
References: 
 
Message-ID: 

Aha good to know! I'm not responsible for this backend, so I'll harass my
colleague about it :) Thanks.

Thijs


On Wed, May 7, 2014 at 5:34 PM, Jonathan Matthews
wrote:

> On 7 May 2014 15:51, Thijs Koerselman  wrote:
> > I have a backend returning a last modified header in CEST.
>
> RF2616 says "All HTTP date/time stamps MUST be represented in
> Greenwich Mean Time (GMT), without exception".
>
> I'm not very surprised nginx isn't doing what you expect. Fix your backend.
>
> > What's going on
>
> I suspect nginx is merely munging the header into something that
> matches the HTTP spec, and (arguably correctly) isn't looking at the
> actual meaning of the header being modified.
>
> > and how can I fix this?
>
> Fix your backend.
>
> J
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From agentzh at gmail.com  Wed May  7 21:40:55 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Wed, 7 May 2014 14:40:55 -0700
Subject: nginx rewrites $request_method on error
In-Reply-To: <82d7deabcc08ccfcad92aa0611fc5688.NginxMailingListEnglish@forum.nginx.org>
References: 
 <82d7deabcc08ccfcad92aa0611fc5688.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello!

On Tue, May 6, 2014 at 11:30 PM, kay wrote:
> Sure, you can use nginx.conf from my previous message and this server
> config:
>

I've noticed 2 obvious mistakes in your config. See below.

> server {
>     listen       80;
>
> rewrite_by_lua '
> local res = ngx.location.capture("/memc?cmd=get&key=test")
> ngx.say(res.body)
> ';
>

1. It is not recommended to use the rewrite_by_lua directive directly
in the server {} block. Because this rewrite_by_lua directive will
just get inherited by every location in that server {} block by
default, including 1) your location /memc, which will create a
subrequest loop because you initiate a subrequest in your
rewrite_by_lua code to /memc, and 2) your error pages like
/error.html. And I'm sure this is not what you want.

2. If you generate a response in rewrite_by_lua by calls like
"ngx.say()", then you should really terminate the current request
immediately, via "ngx.exit(200)" or something like that, otherwise
nginx will continue to run later phases like the "content phase"
(which is supposed to generate a response) and you will face duplicate
responses for the same request.
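
For instance, a minimal sketch of both points combined (the location name is
just an example, and this is untested) could look like:

    location /t {
        rewrite_by_lua '
            local res = ngx.location.capture("/memc?cmd=get&key=test")
            ngx.say(res.body)
            ngx.exit(ngx.HTTP_OK)  -- stop here, do not run the content phase
        ';
    }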

Another suggestion is to check out your nginx error logs for hints (if
any). If the existing info is not enough, you can further enable
nginx's debugging logs: http://nginx.org/en/docs/debugging_log.html

Finally, ensure your version of ngx_lua, ngx_memc, and the nginx core
are recent enough.

Good luck!
-agentzh


From nginx-forum at nginx.us  Thu May  8 00:49:13 2014
From: nginx-forum at nginx.us (shinsterneck)
Date: Wed, 07 May 2014 20:49:13 -0400
Subject: autoindex hiding dot folders
In-Reply-To: <20111015214550.GH1137@mdounin.ru>
References: <20111015214550.GH1137@mdounin.ru>
Message-ID: <79de7d71274d1d35ce7d0d6c54b3b830.NginxMailingListEnglish@forum.nginx.org>

I realize that this is an old topic, but I bumped into this issue as well
and submitted a patch, which introduces an additional option to control
whether hidden files are shown or not by the autoindex module.

http://trac.nginx.org/nginx/ticket/557

Best Regards,
Shin

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,216751,249918#msg-249918


From n.sherlock at gmail.com  Thu May  8 02:47:47 2014
From: n.sherlock at gmail.com (Nicholas Sherlock)
Date: Thu, 8 May 2014 14:47:47 +1200
Subject: Nginx receiving bytes from Amazon ELB (Performance Issue)
In-Reply-To: <7aab890c637b1ff9801f33991cf5e5e2.NginxMailingListEnglish@forum.nginx.org>
References: <7aab890c637b1ff9801f33991cf5e5e2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On 8 May 2014 06:05, rodrigo.aiello  wrote:

> Nginx is receiving the bytes and then
> delivering. I wonder if anyone has gone through this problem and found a
> better solution.


Indeed, Nginx reads the whole response from the backend before a single
byte is sent to the client. That can add latency if your response is very
large. This is controlled by the proxy_buffering setting, so try setting it
to "off":

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
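
Something like this (a rough sketch; the upstream name is a placeholder):

    location / {
        proxy_pass      http://your_backend;  # e.g. the ELB endpoint
        proxy_buffering off;
    }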

Cheers,
Nicholas Sherlock
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Thu May  8 03:59:32 2014
From: nginx-forum at nginx.us (kay)
Date: Wed, 07 May 2014 23:59:32 -0400
Subject: nginx rewrites $request_method on error
In-Reply-To: 
References: 
Message-ID: <4caf03ddafd74320b58b5730832a8753.NginxMailingListEnglish@forum.nginx.org>

> 1. It is not recommended to use the rewrite_by_lua directive directly

You can do the same with access_by_lua

> Finally, ensure your version of ngx_lua, ngx_memc, and the nginx core are
recent enough.

They are recent.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,249921#msg-249921


From n.sherlock at gmail.com  Thu May  8 04:39:18 2014
From: n.sherlock at gmail.com (Nicholas Sherlock)
Date: Thu, 8 May 2014 16:39:18 +1200
Subject: Facing content-type issue with try_files.
In-Reply-To: 
References: 
 
Message-ID: 

On 6 May 2014 16:39, Makailol Charls  wrote:

> Is it necessary to have an extension to image file to set proper
> content-type for Nginx? Couldn't web server set the content-type from *file
> type* ?
>

That would require Nginx to inspect the file contents and make a guess as
to what file type it is. I don't know if it could do that.

You could try just using:

add_header Content-Type image/jpeg;

And hope that the user's web browser will automatically work out the
correct Content-Type, if the image was actually a PNG or something.

Cheers,
Nicholas Sherlock
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From n.sherlock at gmail.com  Thu May  8 04:43:17 2014
From: n.sherlock at gmail.com (Nicholas Sherlock)
Date: Thu, 8 May 2014 16:43:17 +1200
Subject: Image Filter Error
In-Reply-To: <1399349488.58506.YahooMailNeo@web142302.mail.bf1.yahoo.com>
References: <1399349488.58506.YahooMailNeo@web142302.mail.bf1.yahoo.com>
Message-ID: 

On 6 May 2014 16:11, Indo Php  wrote:

> Hi
>
> When doing resizing on the image, I got the error below
> gd-png:  fatal libpng error: IDAT: CRC error
>
>
I'm pretty sure this means that your PNG image file is damaged (the CRC
isn't correct). Try opening and resaving your PNG in an image editor to
regenerate the CRC.

Cheers,
Nicholas Sherlock
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Thu May  8 07:12:44 2014
From: nginx-forum at nginx.us (abstein2)
Date: Thu, 08 May 2014 03:12:44 -0400
Subject: Trying to Understand Upstream Keepalive
Message-ID: <9812cc7c3c328792aa8c6a38ca41a438.NginxMailingListEnglish@forum.nginx.org>

I'm trying to better wrap my head around the keepalive functionality in the
upstream module as when enabling keepalive, I'm seeing little to no
performance benefits using the FOSS version of nginx.

My upstream block is:

upstream upstream_test_1 { server 1.1.1.1 max_fails=0; keepalive 50; }

With a proxy block of:

proxy_set_header X-Forwarded-For $IP;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_pass http://upstream_test_1;

1) How can I tell whether there are any connections currently in the
keepalive pool for the upstream block? My origin server has keepalive
enabled and I see that there are some connections in a keepalive state,
however not the 50 defined and all seem to close much quicker than the
keepalive timeout for the backend server. (I am using the Apache server
status module to view this which is likely part of the problem)

2) Are upstream blocks shared across workers? So in this situation, would
all 4 workers I have share the same upstream keepalive pool, or would each
worker have its own pool of 50?

3) How is the length of the keepalive determined? The origin server's
keepalive settings? Do the origin server's keepalive settings factor in at
all?

4) If no traffic comes across this upstream for an extended period of time,
will the connections be closed automatically or will they stay open
indefinitely?

5) Are the connections in keepalive shared across visitors to the proxy? For
example, if I have three visitors to the proxy one after the other, would
the expectation be that they use the same connection via keepalive or would
a new connection be opened for each of them?

6) Is there any common level of performance benefit I should be seeing from
enabling keepalive compared to just performing a proxy_pass directly to the
origin server with no upstream block?

Thanks for any insight!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249924,249924#msg-249924


From nginx-forum at nginx.us  Thu May  8 07:23:50 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Thu, 08 May 2014 03:23:50 -0400
Subject: autoindex hiding dot folders
In-Reply-To: <79de7d71274d1d35ce7d0d6c54b3b830.NginxMailingListEnglish@forum.nginx.org>
References: <20111015214550.GH1137@mdounin.ru>
 <79de7d71274d1d35ce7d0d6c54b3b830.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <3a12e814cc57865225fe570c0726c85f.NginxMailingListEnglish@forum.nginx.org>

Very useful patch, but the naming is not correct: . and .. are not your
typical hidden type; autoindex_show_dot_folders would make more sense.

And "if (ngx_de_name(&dir)[0] == '.') {" will exclude files which start with
a dot, 

if (ngx_de_name(&dir) == '.') or (ngx_de_name(&dir) == '..') {, would be
better IMHO.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,216751,249925#msg-249925


From nginx-forum at nginx.us  Thu May  8 08:19:47 2014
From: nginx-forum at nginx.us (dfumagalli)
Date: Thu, 08 May 2014 04:19:47 -0400
Subject: Nginx 1.6 under load gives a ton of error 403
In-Reply-To: 
References: 
 
Message-ID: <688fdaad996cca08023cb1b1bd0e2555.NginxMailingListEnglish@forum.nginx.org>

For future reference:
empirical observation suggests that 1.6 is a bit heavier than 1.4.7.
I have raised some php-fpm.conf daemon settings and timeouts and the
situation improved:

emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249857,249927#msg-249927


From nginx-forum at nginx.us  Thu May  8 08:45:18 2014
From: nginx-forum at nginx.us (JSurf)
Date: Thu, 08 May 2014 04:45:18 -0400
Subject: Proxy buffering
In-Reply-To: <20131219191515.GQ2924@reaktio.net>
References: <20131219191515.GQ2924@reaktio.net>
Message-ID: <0a700b459407088a28f7bedad1f94697.NginxMailingListEnglish@forum.nginx.org>

> I'll plan to work on this and related problems at the start of
> next year.
>

Hi, is this still somewhere on the priority list ? 
The upload_buffer patch attached to this thread does not apply to 1.6.x
without changes.

It would be a great addition to this cool server.
When dealing with big file uploads there are a lot of problems to deal with
if this is not implemented (client timeouts while nginx copies the request to
the proxy, session timeouts during uploads on the backend server, since it
does not know a transfer is going on, ...).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244680,249929#msg-249929


From al-nginx at none.at  Thu May  8 10:09:41 2014
From: al-nginx at none.at (Aleksandar Lazic)
Date: Thu, 08 May 2014 12:09:41 +0200
Subject: High performance rsyslog setup
Message-ID: 

Dear list members.

this description page was posted on the rsyslog list:

http://www.rsyslog.com/performance-tuning-elasticsearch/

Mail from Radu Gheorghe

http://lists.adiscon.net/pipermail/rsyslog/2014-April/037219.html

Maybe some Plus customers are interested in this setup.

Best regards
Aleks


From nginx-forum at nginx.us  Thu May  8 13:56:07 2014
From: nginx-forum at nginx.us (shinsterneck)
Date: Thu, 08 May 2014 09:56:07 -0400
Subject: autoindex hiding dot folders
In-Reply-To: <3a12e814cc57865225fe570c0726c85f.NginxMailingListEnglish@forum.nginx.org>
References: <20111015214550.GH1137@mdounin.ru>
 <79de7d71274d1d35ce7d0d6c54b3b830.NginxMailingListEnglish@forum.nginx.org>
 <3a12e814cc57865225fe570c0726c85f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Thanks for your feedback itpp2012!

>> Very useful patch but the naming is not correct, . and .. or not your
typical hidden type, autoindex_show_dot_folders would make more sense.


You are right, the wording "hidden" is not really good here because "files"
starting with a dot would primarily be hidden on unix/linux type
systems and not, e.g., on Windows. However, since in linux terminology
everything is treated as a file (e.g. device files, pipes, directories), I
think 'autoindex_show_dot_files' would make more sense than
'show_dot_folders'.


>> And "if (ngx_de_name(&dir)[0] == '.') {" will exclude files which start
with a dot,
>> if (ngx_de_name(&dir) == '.') or (ngx_de_name(&dir) == '..') {, would be
better IMHO.


I wanted to target all "files" this way, including the entry referring to
the current directory ("./") as well as the parent directory ("../"). The
autoindex module would (without this patch) add an HTML reference to
the parent directory anyway.

Hence I took your advice, with a little twist, and changed the option name
from "autoindex_show_hidden_files" to
"autoindex_show_dot_files" ;-) Hope this makes sense for you and everyone
else.

I will post an update to this patch in the bug tracker.

Thanks,
Shin

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,216751,249931#msg-249931


From webmaster at cosmicperl.com  Thu May  8 14:11:24 2014
From: webmaster at cosmicperl.com (Lyle)
Date: Thu, 08 May 2014 15:11:24 +0100
Subject: CGI support - Sorry to bring it up
Message-ID: <536B908C.3030907@cosmicperl.com>

Hi All,
   We've built a new EC2 server based on Virtualmin + Nginx. I've seen 
Nginx recommended a lot over the years so thought if we are moving to 
the cloud, and want things to be optimal, then it's time to give it a 
go. Before, our setup was Virtualmin + Apache (with suexec and fcgid).

For some of our old Perl CGI scripts we've hit the issue I'm sure most 
of you are familiar with. I've searched for solutions and have found a 
number, all of which have various caveats. It's unclear what the 
best way to deal with this is. Along with plain CGI (and fastcgi), suexec 
is an important security feature to ensure that compromised scripts 
don't have permission to wreak havoc on other user accounts, and to run 
things with tight permissions (along with sorting out the FTP script upload 
issues you can have).

There are various workarounds for suexec-style behaviour; I haven't yet 
figured out how they can work with the CGI workarounds.

It seems like this is such a common demand that there should be an 
established (efficient and reliable) solution to deal with it by now?

Any pointers would be greatly appreciated.


Lyle

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From roberto at unbit.it  Thu May  8 14:20:29 2014
From: roberto at unbit.it (Roberto De Ioris)
Date: Thu, 8 May 2014 16:20:29 +0200
Subject: CGI support - Sorry to bring it up
In-Reply-To: <536B908C.3030907@cosmicperl.com>
References: <536B908C.3030907@cosmicperl.com>
Message-ID: <1cf7161a4de02ec9359eb744f37f8d03.squirrel@manage.unbit.it>


> Hi All,
>    We've built a new EC2 server based on Virtualmin + Nginx. I've seen
> Nginx recommended a lot over the years so thought if we are moving to
> the cloud, and want things to be optimal, then it's time to give it a
> go. Before our setup has been Virtualmin + Apache (with suexec and fcgid).
>
> For some of our old Perl CGI scripts we've hit the issue I'm sure most
> of you are familiar with. I've searched for solutions and have found a
> number, all of which have various caveats. It's unclear as to what they
> best way to deal with this is. Along with plain CGI (and fastcgi) suexec
> is an important security feature to ensure that compromised scripts
> don't have permission to wreak havoc on other user accounts, and run
> things with tight permissions (along with sorting our FTP script upload
> issues you can have).
>
> There are various hack arounds for suexec style behaviour, I haven't
> figured yet how they can work with the CGI workarounds.
>
> It seems like this is such a common demand that there should be an
> established (efficient and reliable) solution to deal with it by now?
>
> Any pointers would be greatly appreciated.
>
>
> Lyle
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

uWSGI has full (and solid) CGI support, and it pairs perfectly with nginx:

http://uwsgi-docs.readthedocs.org/en/latest/CGI.html

recent releases (>= 2.0.2) support async modes, so you can spawn multiple
CGI scripts without the need for a 1:1 mapping with a thread.


You will find the plugin exposes features not available in apache
(including accelerators).
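
A rough nginx-side sketch (the socket address and path are placeholders, and
the modifier value should be double-checked against the CGI docs above):

    location /cgi-bin/ {
        include uwsgi_params;
        uwsgi_modifier1 9;            # hand the request to the uWSGI CGI plugin
        uwsgi_pass 127.0.0.1:3031;    # wherever uWSGI is listening
    }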

-- 
Roberto De Ioris
http://unbit.it


From nginx-forum at nginx.us  Thu May  8 19:12:38 2014
From: nginx-forum at nginx.us (ura)
Date: Thu, 08 May 2014 15:12:38 -0400
Subject: configuring for video seeking? - using projekktor media player
Message-ID: <63f1987c2b8b855edfea6a176ba3a59e.NginxMailingListEnglish@forum.nginx.org>

i am creating a plugin for the elgg open source social networking framework,
that adds the projekktor media player (http://www.projekktor.com/) to elgg.
i am so far unable to get projekktor to seek video/audio files on the
webserver (tested using nginx 1.5.13 + 1.7).
i have asked on the projekktor forum and did not find a resolution there.

essentially, i have the mp4 and flv add-ons activated for nginx and i have
added the following to my site's config:

	# streamable mp4
    location ~ .mp4$	
	{
		mp4;
		mp4_buffer_size 4M;
		mp4_max_buffer_size 20M;
		gzip off;
		gzip_static off;
		limit_rate_after 10m;
		limit_rate 1m;
	}
	
	# streamable flv
    location ~ .flv$	
	{
		flv;
	}	

...

i also looked at enabling the pseudostreaming option for projekktor, however i
was informed via the projekktor forum that i didn't need to do that and that
if the server supported the appropriate streaming method then the player
would use it.

does anyone know what i am missing here?

thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249939,249939#msg-249939


From alan at chandlerfamily.org.uk  Thu May  8 19:16:13 2014
From: alan at chandlerfamily.org.uk (Alan Chandler)
Date: Thu, 08 May 2014 20:16:13 +0100
Subject: Struggling with configuration
Message-ID: <536BD7FD.10903@chandlerfamily.org.uk>

Hi

I am porting some stuff that I had working under Apache to now run under 
Nginx and I have a particular case that I don't know how to deal with.

I have a physical directory structure like this

dev/
dev/myapp/
dev/myapp/web/

in this directory is an index.php file with the following early in its 
processing
require_once($_SERVER['DOCUMENT_ROOT'].'/forum/SSI.php');

dev/test-base/
dev/test-base/forum/

In this directory is an smf forum, and there is an SSI.php file in here

my nginx configuration for this

server {
     listen  80;
     server_name apps.home;
     root /home/alan/dev/test-base/;
     error_log /var/log/nginx/app.error.log debug;
     rewrite_log on;
     location / {
         try_files $uri $uri/ =404;
         index index.html;
     }

     location /forum {
         try_files $uri /forum/index.php;
     }

     location /myapp {
         alias /home/alan/dev/myapp/web;
         try_files $uri /myapp/index.php;
         location ~* ^/myapp/(.+\.php)$ {
                fastcgi_pass unix:/var/run/php5-fpm-alan.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME 
$document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
         }

     }
     include php.conf;
     include common.conf;
}

where php.conf is

# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
     try_files $uri =404;

     fastcgi_split_path_info ^(.+\.php)(/.+)$;
     #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

     include fastcgi_params;
     fastcgi_index index.php;
     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#    fastcgi_intercept_errors on;
     fastcgi_pass unix:/var/run/php5-fpm-alan.sock;
}

What seems to be happening is that myapp/index.php is being called with 
$_SERVER['DOCUMENT_ROOT'] pointing to the alias location and NOT the 
location defined by the root directive.  Yet the directive reference for 
the alias directive says that the document root doesn't change.

What is the correct approach for solving this problem? I don't have a 
physical directory structure that maps neatly onto the url space, but I 
need my applications to be able to reference files relative to the 
base directory for the site.

*PS: I just saw that there is a bug with alias and try_files, so I am 
doing it wrong by using them together.  My main question still remains: what 
is the correct approach for solving this type of problem?


-- 
Alan Chandler
http://www.chandlerfamily.org.uk


From agentzh at gmail.com  Thu May  8 19:28:19 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Thu, 8 May 2014 12:28:19 -0700
Subject: nginx rewrites $request_method on error
In-Reply-To: <4caf03ddafd74320b58b5730832a8753.NginxMailingListEnglish@forum.nginx.org>
References: 
 <4caf03ddafd74320b58b5730832a8753.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello!

On Wed, May 7, 2014 at 8:59 PM, kay wrote:
>> 1. It is not recommended to use the rewrite_by_lua directive directly
>
> You can do the same with access_by_lua
>

Please do not cut my original sentence and just pick the first half.
The full sentence is "it is not recommended to use the rewrite_by_lua
directive directly in the server {} block." and the reason follows
that. The same statement also applies to access_by_lua.

Also, please read the full text of my previous email and correct all
the things I listed there.

Regards,
-agentzh


From alan at chandlerfamily.org.uk  Thu May  8 19:31:43 2014
From: alan at chandlerfamily.org.uk (Alan Chandler)
Date: Thu, 08 May 2014 20:31:43 +0100
Subject: Wordpress Multi-Site Converting Apache to Nginx
In-Reply-To: 
References: <1398925582.24481.366.camel@steve-new>
 
Message-ID: <536BDB9F.7040900@chandlerfamily.org.uk>

On 01/05/14 08:02, nrahl wrote:
> This entire configuration was 100% functional using Apache2. 

I just saw this thread for the first time, and I am wondering if it's the 
same problem I hit when I moved my apache2 configuration over to nginx.  
It turned out to have nothing to do with nginx, but was a problem with 
super-cache misbehaving with php-apc.


Super-cache does something like

if (!class_exists('...')) {
    class ... {
        ...
    }
}

in a file called wp-cache-base.php

adding to php.ini
[apc]
apc.filters = wp-cache-base

solved the problem

The symptoms I had were: with display_errors off, php5-fpm was 
returning a 500; with display_errors on, php5-fpm was returning 200. When 
it returned 200 there was a blank page.



-- 
Alan Chandler
http://www.chandlerfamily.org.uk


From nginx-forum at nginx.us  Thu May  8 19:50:16 2014
From: nginx-forum at nginx.us (kafonek)
Date: Thu, 08 May 2014 15:50:16 -0400
Subject: Multiple reverse proxies that read from /static/ ?
Message-ID: 

Hello,

Sorry for the beginner question, but I can't seem to find an example
anywhere that shows how to handle nginx hosting more than one reverse proxy
(such as a Django app and an Ipython notebook) where both systems have
static files in different directories.

The relevant parts of my nginx conf look something like this - 

location /django {
    #gunicorn server running on localhost port 8000
    proxy_pass http://127.0.0.1:8000/;
}

location /ipython {
    #ipython notebook server (tornado) running on localhost port 9999
    proxy_pass http://127.0.0.1:9999/;
}

location /static {
    alias /opt/django/django_project/static/;
    #alias /usr/lib/python2.6/site-packages/IPython/html/static/; 
}


Is there a way to specify two different directories of static files, or to nest
a location /static/ inside the other blocks?  Intuitively, nesting doesn't
work, because the CSS requests /static, not /ipython/static or
/django/static.

My end goal would be to go to http://mysite.com/django/ to see my Django
pages and http://mysite.com/ipython/ to get to my ipython notebook server. 
Either one works right now if I point location /static {} to the appropriate
alias, but I don't know how to get both working at the same time.

Thanks in advance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249943,249943#msg-249943


From nginx-forum at nginx.us  Thu May  8 21:54:56 2014
From: nginx-forum at nginx.us (jakubp)
Date: Thu, 08 May 2014 17:54:56 -0400
Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz"
In-Reply-To: <20140403113411.GJ34696@mdounin.ru>
References: <20140403113411.GJ34696@mdounin.ru>
Message-ID: 

> It does so - if an allocation of a cache node fails, this will 
> trigger a forced expiration of a cache node, and then tries to 
> allocate a node again.  This is more an emergency mechanism 
> though (and not guaranteed to work, as another allocation may 
> fail, too), hence alerts are logged in such cases.
> 

Thanks for the information. It indeed helps my use case. For now I'll
disable the messages.

Kuba

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237829,249945#msg-249945


From nginx-forum at nginx.us  Thu May  8 22:12:03 2014
From: nginx-forum at nginx.us (jakubp)
Date: Thu, 08 May 2014 18:12:03 -0400
Subject: Age header support
Message-ID: <6cc98bca6995462d1a3722935753ad86.NginxMailingListEnglish@forum.nginx.org>

Hi

Is Age header support on the roadmap for the foreseeable future? I am mainly
looking at the upstream side (and I saw there was a discussion in the
developer zone a few months back) but it would be great to have full-blown
support.

Regards,
Kuba

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249946,249946#msg-249946


From shahzaib.cb at gmail.com  Thu May  8 22:14:35 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Fri, 9 May 2014 03:14:35 +0500
Subject: nginx hotlinking protection issue with wildcards !!
Message-ID: 

hello,

     I am using hotlinking protection for mp4 files but found that whenever
i use a wildcard in the hot-linking config, it doesn't work properly and
videos even play on domains where they are not supposed to.

My mp4 config :

location ~ \.(mp4)$ {
                mp4;
                root /var/www/html/tunefiles;
                expires 7d;
       valid_referers none blocked server_names mydomain *.mydomain *.
facebook.com *.twitter.com *.seconddomain.com *.thirddomain.com
fourthdomain.com *.fourthdomain.com fifthdomain.com www.fifthdomain.com
embed.fifthdomain.com;
              if ($invalid_referer) {
                    return   403;
}
}

The mp4 file plays on the following domain, which is not supposed to
be in the valid_referers list.

http://www.tusnovelas.net/telenovela/lo-que-la-vida-me-robo-capitulo-138

Nginx-1.4.7

Help will be highly appreciated.

Regards.
Shahzaib
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Fri May  9 00:59:12 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 04:59:12 +0400
Subject: Cache Hit Latency for large responses.
In-Reply-To: 
References: 
Message-ID: <20140509005911.GD1849@mdounin.ru>

Hello!

On Thu, May 01, 2014 at 06:27:19PM -0400, lovekmla wrote:

> Context: 
> I am currently using nginx to serve as a Response Cache Proxy in order to
> shim-out (isolate) network latency while running some performance related
> tests.
> 
> The Perf Tests are comprised of 50 repetitions of the same set of requests.
> => I have been able to successfully set-up the proxy_cache so that it would
> Cache the requests. 
> (I'm using the request_uri and request_body as the cache_key, since I need
> to Cache POST Requests as well)
> (Our POST Requests are not necessarily Write Requests since our API is using
> the POST Params for cases where we need to specify longer Params that exceed
> the limit on GET)
> => I am logging the Cache HIT Status and Request_time in the access_log and
> basically, I am seeing a variance between 500 ms ~ 2 seconds for the Same
> Request being Served from the Cache even when it's both a HIT. The Size of
> the Response is around 170K, and the request_body length is 65k. Most of the
> other HITS are like 0~100ms for request_time and only the ones that are made
> around that huge request seems to have some latency.
> 
> Question:
> #1 I'm using proxy_cache_path. Is it possible that disk I/O is causing this
> latency? I thought the Filesystem Cache was able to handle these Cache HITs
> in memory.

This depends on the OS and filesystem you are using, as well as the amount 
of memory you have.

> #2 When I set the cache_key to be a combination of the request_body and
> request_uri itself, when it does the lookup, would it automatically pull
> those and to the lookup as expected?

ENOPARSE, but note that the $request_body variable only works if the 
request body is small enough to fit in client_body_buffer_size. 
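
As an illustration only (names and sizes are placeholders), a setup along
these lines keeps $request_body usable as part of the cache key:

    proxy_cache_path /var/cache/nginx keys_zone=perfcache:10m;

    server {
        location / {
            proxy_pass http://backend;
            proxy_cache perfcache;
            proxy_cache_methods GET HEAD POST;
            proxy_cache_key "$request_uri|$request_body";
            # must be large enough for the 65k bodies mentioned above,
            # otherwise $request_body will be empty
            client_body_buffer_size 128k;
        }
    }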

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 01:06:08 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 05:06:08 +0400
Subject: Problem with ECC certificates
In-Reply-To: <536A7BCD.1080509@gekmihesg.de>
References: <53665437.3060004@gekmihesg.de>
 <536A7BCD.1080509@gekmihesg.de>
Message-ID: <20140509010608.GE1849@mdounin.ru>

Hello!

On Wed, May 07, 2014 at 08:30:37PM +0200, Markus Weippert wrote:

> On 04.05.2014 16:52, Markus Weippert wrote:
> 
> > I'm having some strange issues using nginx 1.6 with ECC certs.
> > Handshakes fail for clients using TLSv1.2 and SNI but only if the
> > requested server block is not the default_server.
> 
> Had a further look into that. The problem seems to occur if nginx is
> built against openssl shipped with Ubuntu 12.04. The official repository
> version of nginx is also affected.
> Compiling nginx with the latest upstream release works as expected.
> Also, no problems on Ubuntu 13.10.

The "SSL3_SEND_SERVER_KEY_EXCHANGE:internal error" message comes 
from OpenSSL, so it looks like the problem is OpenSSL version 
used.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 01:13:26 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 05:13:26 +0400
Subject: $memcached_key doesn't fetch unicode url
In-Reply-To: 
References: 
Message-ID: <20140509011326.GF1849@mdounin.ru>

Hello!

On Sun, May 04, 2014 at 06:42:39PM +0300, kirpit wrote:

> Hi,
> 
> I'm doing some sort of downstream cache that I save every entire page into
> memcached from the application with urls such as:
> 
> www.example.com/file/path/?query=1
> 
> then I'm fetching them from nginx if available with the config:
>     location / {
>         # try to fetch from memcached
>         set                 $memcached_key "$host$request_uri";
>         memcached_pass      localhost:11211;
>         expires             10m;
>         # not fetched from memcached, fallback
>         error_page          404 405 502 = @fallback;
>     }
> 
> 
> This works perfectly fine for latin char urls. However, it fails to catch
> unicode urls such as:
> 
> www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1
> 
> The unicode char "%C4%B0" appears same in the nginx logs, application cache
> setting key (that is actually taken from raw REQUEST_URI what nginx gives).
> 
> The example url content also exist in the memcached itself when if I try:
> "get
> www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1
> "
> 
> However, nginx cannot fetch anything from memcached, @fallbacks every time.
> 
> I'm using version 1.6.0. Any help is much appreciated.

The value of $memcached_key is escaped to make sure memcached protocol 
constraints are not violated - most notably, the space and "%" 
characters are escaped into "%20" and "%25", respectively (the space is 
escaped as it's not allowed in memcached keys, and "%" to make the 
escaping reversible).

Given the

www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1

$memcached_key, nginx will try to fetch

www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%25C4%25B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1

from memcached.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 01:25:34 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 05:25:34 +0400
Subject: proxy_buffer_size values are honored even if proxy_buffering is
 off
In-Reply-To: <1399230262.26409.YahooMailNeo@web193506.mail.sg3.yahoo.com>
References: <1399230262.26409.YahooMailNeo@web193506.mail.sg3.yahoo.com>
Message-ID: <20140509012533.GG1849@mdounin.ru>

Hello!

On Mon, May 05, 2014 at 03:04:22AM +0800, Rv Rv wrote:

> At the http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size, the documentation implies that 
> the configuration proxy_buffer_size , proxy_buffers and proxy_busy_buffers will be honored only when proxy_buffering is turned on.

The "proxy_buffering off" implies that response will not be 
buffered - that is, everything received from a backend is 
immediately sent to a client.

At least one buffer is still required though (nginx has to store 
data from a backend somewhere before it is able to send it 
to a client), and proxy_buffer_size defines the size of this buffer.

See http://nginx.org/r/proxy_buffering for details.
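
For example (a minimal sketch; the upstream name is a placeholder):

    location / {
        proxy_pass        http://backend;
        proxy_buffering   off;
        # still used with buffering switched off: the single buffer that
        # holds the response header and the first part of the body
        proxy_buffer_size 16k;
    }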

> I had been seeing truncated responses of files which went away 
> when I increased proxy_buffer_size even though proxy_buffering 
> was turned off. I am running nginx1.5.8. Is this expected 
> behavior ?

No, it's not.

First of all, I would recommend you to take a look into error
logs - it might have an answer.

If it doesn't help, some more debugging hints can be found at 
http://wiki.nginx.org/Debugging.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 01:46:46 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 05:46:46 +0400
Subject: configuring for video seeking? - using projekktor media player
In-Reply-To: <63f1987c2b8b855edfea6a176ba3a59e.NginxMailingListEnglish@forum.nginx.org>
References: <63f1987c2b8b855edfea6a176ba3a59e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140509014646.GH1849@mdounin.ru>

Hello!

On Thu, May 08, 2014 at 03:12:38PM -0400, ura wrote:

> i am creating a plugin for the elgg open source social networking framework,
> that adds the projekktor media player (http://www.projekktor.com/) to elgg.
> i am so far unable to get projekktor to seek video/audio files on the
> webserver (tested using nginx 1.5.13 + 1.7).
> i have asked on the projekktor forum and did not find a resolution there.
> 
> essentially, i have the mp4 and flv add-ons activated for nginx and i have
> added the following to my site's config:
> 
> 	# streamable mp4
>     location ~ .mp4$	
> 	{
> 		mp4;
> 		mp4_buffer_size 4M;
> 		mp4_max_buffer_size 20M;
> 		gzip off;
> 		gzip_static off;
> 		limit_rate_after 10m;
> 		limit_rate 1m;
> 	}
> 	
> 	# streamable flv
>     location ~ .flv$	
> 	{
> 		flv;
> 	}	
> 
> ...
> 
> i also looked enabling the pseudostreaming option for projekktor, however i
> was informed that i didn't need to do that via the projekktor forum and that
> if the server supported the appropriate streaming method then the player
> would use it.
> 
> does anyone know what i am missing here?

A quick look suggests projekktor is HTML5-based, and it doesn't need 
(and can't use) flash pseudo-streaming helpers.  As long as the 
browser is able to use byte-range requests for HTML5 video 
seeking, it will do so.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 02:07:46 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 06:07:46 +0400
Subject: Trying to Understand Upstream Keepalive
In-Reply-To: <9812cc7c3c328792aa8c6a38ca41a438.NginxMailingListEnglish@forum.nginx.org>
References: <9812cc7c3c328792aa8c6a38ca41a438.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140509020746.GI1849@mdounin.ru>

Hello!

On Thu, May 08, 2014 at 03:12:44AM -0400, abstein2 wrote:

> I'm trying to better wrap my head around the keepalive functionality in the
> upstream module as when enabling keepalive, I'm seeing little to no
> performance benefits using the FOSS version of nginx.
> 
> My upstream block is:
> 
> upstream upstream_test_1 { server 1.1.1.1 max_fails=0; keepalive 50; }
> 
> With a proxy block of:
> 
> proxy_set_header X-Forwarded-For $IP;
> proxy_set_header Host $http_host;
> proxy_http_version 1.1;
> proxy_set_header Connection "";
> proxy_pass http://upstream_test_1;
> 
> 1) How can I tell whether there are any connections currently in the
> keepalive pool for the upstream block? My origin server has keepalive
> enabled and I see that there are some connections in a keepalive state,
> however not the 50 defined and all seem to close much quicker than the
> keepalive timeout for the backend server. (I am using the Apache server
> status module to view this which is likely part of the problem)

As long as load is even enough, don't expect to see many keepalive 
connections on the backend - new connections will only be opened if 
there are no idle connections in the cache of a worker process.

> 2) Are upstream blocks shared across workers? So in this situation, would
> all 4 workers I have shared the same upstream keepalive pool or would each
> worker have it's own block of 50?

It's per worker, see http://nginx.org/r/keepalive.

> 3) How is the length of the keepalive determined? The origin server's
> keepalive settings? Do the origin server's keepalive settings factor in at
> all?

Connections are kept in the cache till the origin server closes them.

> 4) If no traffic comes across this upstream for an extended period of time,
> will the connections be closed automatically or will they stay open
> infinitely?

See above.

> 5) Are the connections in keepalive shared across visitors to the proxy? For
> example, if I have three visitors to the proxy one after the other, would
> the expectation be that they use the same connection via keepalive or would
> a new connection be opened for each of them?

Connections in the cache are shared for all uses of the upstream.  
As long as a connection is idle (and hence in the cache), it can 
be used for any request by any visitor.

> 6) Is there any common level of performance benefit I should be seeing from
> enabling keepalive compared to just performing a proxy_pass directly to the
> origin server with no upstream block?

No.

There are two basic cases when keeping connections alive is 
really beneficial:

- Fast backends, which produce responses in a very short time, 
  comparable to a TCP handshake.

- Distant backends, when a TCP handshake takes a long time, 
  comparable to a backend response time.

There are also some bonus side effects (reducing number of sockets 
in TIME-WAIT state, less work for OS to establish new connections, 
less packets on a network), but these are unlikely to result in 
measurable performance benefits in a typical setup.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 02:12:02 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 06:12:02 +0400
Subject: Age header support
In-Reply-To: <6cc98bca6995462d1a3722935753ad86.NginxMailingListEnglish@forum.nginx.org>
References: <6cc98bca6995462d1a3722935753ad86.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140509021202.GJ1849@mdounin.ru>

Hello!

On Thu, May 08, 2014 at 06:12:03PM -0400, jakubp wrote:

> Hi
> 
> Is Age header support on a roadmap for the forseeable future? I am mainly
> looking at the upstream side (and I saw there was a discussion in the
> developer zone a few months back) but it would be great to have full-blown
> support.

It's something considered, but no ETA.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 02:25:53 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 06:25:53 +0400
Subject: nginx hotlinking protection issue with wildcards !!
In-Reply-To: 
References: 
Message-ID: <20140509022553.GK1849@mdounin.ru>

Hello!

On Fri, May 09, 2014 at 03:14:35AM +0500, shahzaib shahzaib wrote:

> hello,
> 
>      I am using hotlinking protection for mp4 files but found that whenever
> i use wildcard in hot-linking, it doesn't work properly and videos even
> play on the domain where it doesn't suppose to be buffer.
> 
> My mp4 config :
> 
> location ~ \.(mp4)$ {
>                 mp4;
>                 root /var/www/html/tunefiles;
>                 expires 7d;
>        valid_referers none blocked server_names mydomain *.mydomain *.
> facebook.com *.twitter.com *.seconddomain.com *.thirddomain.com
> fourthdomain.com *.fourthdomain.com fifthdomain.com www.fifthdomain.com
> embed.fifthdomain.com;
>               if ($invalid_referer) {
>                     return   403;
> }
> }
> 
> The mp4 file is buffering on the following domain which doesn't suppose to
> be in valid_referers variable.
> 
> http://www.tusnovelas.net/telenovela/lo-que-la-vida-me-robo-capitulo-138
> 
> Nginx-1.4.7
> 
> Help will be highly appreciated.

Try looking into Referer header of a request, as well as at server 
names of a particular server{} block (as you use "valid_referers 
server_names").

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 02:46:46 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 06:46:46 +0400
Subject: Nginx receiving bytes from Amazon ELB (Performance Issue)
In-Reply-To: 
References: <7aab890c637b1ff9801f33991cf5e5e2.NginxMailingListEnglish@forum.nginx.org>
 
Message-ID: <20140509024646.GL1849@mdounin.ru>

Hello!

On Thu, May 08, 2014 at 02:47:47PM +1200, Nicholas Sherlock wrote:

> On 8 May 2014 06:05, rodrigo.aiello  wrote:
> 
> > Nginx is receiving the bytes and then
> > delivering. I wonder if anyone has gone through this problem and found a
> > better solution.

As already suggested, the original problem is likely to be related 
to the micro instance.  In particular, there were reports that the network 
is very slow on such instances.

> Indeed, Nginx reads the whole response from the backend before a single
> byte is sent to the client. That can add latency if your response is very

This is not true.

> large. This is controlled by the proxy_buffering setting, so try setting it
> to "no":
> 
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering

Using buffering allows nginx to read a response from a backend as fast as 
possible, while sending the response to the client.  This helps to 
free up expensive backend processes while serving large responses 
to slow clients.

It doesn't imply reading "the whole response from the backend 
before a single byte is sent to the client" - as long as at least 
one buffer is full, it will be sent to a client.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 04:07:51 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 08:07:51 +0400
Subject: Proxy buffering
In-Reply-To: <0a700b459407088a28f7bedad1f94697.NginxMailingListEnglish@forum.nginx.org>
References: <20131219191515.GQ2924@reaktio.net>
 <0a700b459407088a28f7bedad1f94697.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140509040751.GQ1849@mdounin.ru>

Hello!

On Thu, May 08, 2014 at 04:45:18AM -0400, JSurf wrote:

> > I'll plan to work on this and related problems at the start of
> > next year.
> >
> 
> Hi, is this still somewhere on the priority list ? 

Yes, it's still in the list.

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Fri May  9 04:23:18 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 08:23:18 +0400
Subject: How to limit POST request per ip ?
In-Reply-To: 
References: <499900be49fc454f4c05473093a2b793.NginxMailingListEnglish@forum.nginx.org>
 <20140405220755.GV34696@mdounin.ru>
 
Message-ID: <20140509042318.GS1849@mdounin.ru>

Hello!

On Tue, May 06, 2014 at 03:16:09PM -0700, Jeroen Ooms wrote:

> On Sat, Apr 5, 2014 at 3:07 PM, Maxim Dounin  wrote:
> >
> > we need something like
> >
> >     limit_req_zone $limit zone=one:10m rate=1r/s;
> >
> > where the $limit variables is empty for non-POST requests (as we
> > don't want to limit them), and evaluates to $binary_remote_addr
> > for POST requests.
> 
> A follow-up question: are requests that hit the cache counted in the
> limit_req_zone? I would like to enforce a limit on the POST requests
> that actually hit the back-end; I don't mind additional requests that
> hit the cache.

Limits are checked (and counted) before a request is passed to a 
content handler, hence all requests are counted, both cached and 
not.  If you want to limit only requests which aren't cached, you 
may do so, e.g., by adding an additional proxy layer with 
limit_req.
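
For completeness, the $limit variable from the advice quoted above can be 
produced with a map, roughly like this (zone name, rate and burst are just 
examples):

    map $request_method $limit {
        default "";
        POST    $binary_remote_addr;
    }

    limit_req_zone $limit zone=one:10m rate=1r/s;

    server {
        location / {
            limit_req  zone=one burst=5;
            proxy_pass http://backend;
        }
    }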

-- 
Maxim Dounin
http://nginx.org/


From shahzaib.cb at gmail.com  Fri May  9 05:36:40 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Fri, 9 May 2014 10:36:40 +0500
Subject: nginx hotlinking protection issue with wildcards !!
In-Reply-To: <20140509022553.GK1849@mdounin.ru>
References: 
 <20140509022553.GK1849@mdounin.ru>
Message-ID: 

Hello Maxim,

               I am not using server_names, it was just a google stuff i
put in order to test if hotlinking works or not. One of my teammates
informed me that the link is embedded on the website and there's no
hot-linking protection issue regarding it.

@Maxim, i think embedded videos are different from remotely playable
videos. Of course, you'd have the better knowledge :)






On Fri, May 9, 2014 at 7:25 AM, Maxim Dounin  wrote:

> Hello!
>
> On Fri, May 09, 2014 at 03:14:35AM +0500, shahzaib shahzaib wrote:
>
> > hello,
> >
> >      I am using hotlinking protection for mp4 files but found that
> whenever
> > i use wildcard in hot-linking, it doesn't work properly and videos even
> > play on the domain where it doesn't suppose to be buffer.
> >
> > My mp4 config :
> >
> > location ~ \.(mp4)$ {
> >                 mp4;
> >                 root /var/www/html/tunefiles;
> >                 expires 7d;
> >        valid_referers none blocked server_names mydomain *.mydomain *.
> > facebook.com *.twitter.com *.seconddomain.com *.thirddomain.com
> > fourthdomain.com *.fourthdomain.com fifthdomain.com www.fifthdomain.com
> > embed.fifthdomain.com;
> >               if ($invalid_referer) {
> >                     return   403;
> > }
> > }
> >
> > The mp4 file is buffering on the following domain which doesn't suppose
> to
> > be in valid_referers variable.
> >
> > http://www.tusnovelas.net/telenovela/lo-que-la-vida-me-robo-capitulo-138
> >
> > Nginx-1.4.7
> >
> > Help will be highly appreciated.
>
> Try looking into Referer header of a request, as well as at server
> names of a particular server{} block (as you use "valid_referers
> server_names").
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shahzaib.cb at gmail.com  Fri May  9 05:45:08 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Fri, 9 May 2014 10:45:08 +0500
Subject: nginx hotlinking protection issue with wildcards !!
In-Reply-To: 
References: 
 <20140509022553.GK1849@mdounin.ru>
 
Message-ID: 

Please guide me: should hot-linking protection prevent a stream that is
embedded from being playable?


On Fri, May 9, 2014 at 10:36 AM, shahzaib shahzaib wrote:

> Hello Maxim,
>
>                I am not using server_names, it was just a google stuff i
> put in order to test if hotlinking works or not. One of my teammates
> informed me that the link is embedded on the website and there's no
> hot-linking protection issue regarding it.
>
> @Maxim, i think embedded videos are different than remotely playable
> videos. Offcourse, you'd have the better knowledge :)
>
>
>
>
>
>
> On Fri, May 9, 2014 at 7:25 AM, Maxim Dounin  wrote:
>
>> Hello!
>>
>> On Fri, May 09, 2014 at 03:14:35AM +0500, shahzaib shahzaib wrote:
>>
>> > hello,
>> >
>> >      I am using hotlinking protection for mp4 files but found that
>> whenever
>> > i use wildcard in hot-linking, it doesn't work properly and videos even
>> > play on the domain where it doesn't suppose to be buffer.
>> >
>> > My mp4 config :
>> >
>> > location ~ \.(mp4)$ {
>> >                 mp4;
>> >                 root /var/www/html/tunefiles;
>> >                 expires 7d;
>> >        valid_referers none blocked server_names mydomain *.mydomain *.
>> > facebook.com *.twitter.com *.seconddomain.com *.thirddomain.com
>> > fourthdomain.com *.fourthdomain.com fifthdomain.com www.fifthdomain.com
>> > embed.fifthdomain.com;
>> >               if ($invalid_referer) {
>> >                     return   403;
>> > }
>> > }
>> >
>> > The mp4 file is buffering on the following domain which doesn't suppose
>> to
>> > be in valid_referers variable.
>> >
>> >
>> http://www.tusnovelas.net/telenovela/lo-que-la-vida-me-robo-capitulo-138
>> >
>> > Nginx-1.4.7
>> >
>> > Help will be highly appreciated.
>>
>> Try looking into Referer header of a request, as well as at server
>> names of a particular server{} block (as you use "valid_referers
>> server_names").
>>
>> --
>> Maxim Dounin
>> http://nginx.org/
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Fri May  9 06:05:25 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 10:05:25 +0400
Subject: nginx hotlinking protection issue with wildcards !!
In-Reply-To: 
References: 
 <20140509022553.GK1849@mdounin.ru>
 
Message-ID: <20140509060525.GX1849@mdounin.ru>

Hello!

On Fri, May 09, 2014 at 10:36:40AM +0500, shahzaib shahzaib wrote:

> Hello Maxim,
> 
>                I am not using server_names, it was just a google stuff i
> put in order to test if hotlinking works or not. One of my teammates
> informed me that the link is embedded on the website and there's no
> hot-linking protection issue regarding it.

Please start with reading (and understanding) documentation here:

http://nginx.org/en/docs/http/ngx_http_referer_module.html

If there are any question remain after this, feel free to ask 
again.

-- 
Maxim Dounin
http://nginx.org/


From alan at chandlerfamily.org.uk  Fri May  9 06:30:17 2014
From: alan at chandlerfamily.org.uk (Alan Chandler)
Date: Fri, 09 May 2014 07:30:17 +0100
Subject: Struggling with configuration
In-Reply-To: <536BD7FD.10903@chandlerfamily.org.uk>
References: <536BD7FD.10903@chandlerfamily.org.uk>
Message-ID: <536C75F9.609@chandlerfamily.org.uk>

On 08/05/14 20:16, Alan Chandler wrote:
> Hi
>
> I am porting some stuff that I had working under Apache to now run 
> under Nginx and I have a particular case that I don't know how to deal 
> with.
>
> I have a physical directory structure like this
>
> dev/
> dev/myapp/
> dev/myapp/web/
>
> in this directory is an index.php file with the following early in its 
> processing
> require_once($_SERVER['DOCUMENT_ROOT'].'/forum/SSI.php');
>
> dev/test-base/
> dev/test-base/forum/
>
> In this directory is an smf forum, and there is an SSI.php file in here
>
> my nginx configuration for this
>
> ...

>     location /myapp {
>         alias /home/alan/dev/myapp/web;
>         try_files $uri /myapp/index.php;
>         location ~* ^/myapp/(.+\.php)$ {
>                fastcgi_pass unix:/var/run/php5-fpm-alan.sock;
>                fastcgi_index index.php;
>                fastcgi_param SCRIPT_FILENAME 
> $document_root$fastcgi_script_name;
>                include /etc/nginx/fastcgi_params;
>         }
>
>     }
>     include php.conf
> ...
I eventually found a solution.  Whether it is the right one I don't 
know, but I have to redefine document root after I have included the 
common fastcgi_params file.

     location = /myapp {
         rewrite ^ /myapp/ permanent;
     }

     location /myapp/ {
         alias /home/alan/dev/myapp/web/;
         index index.php;
     }

     location ~ ^/myapp/(.*\.php)$ {
         alias /home/alan/dev/myapp/web/$1;
         include fastcgi_params;
         fastcgi_param DOCUMENT_ROOT /home/alan/dev/test-base;
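         # DOCUMENT_ROOT is redefined here, after the include, so that PHP's
         # $_SERVER['DOCUMENT_ROOT'] points at the tree containing /forum/SSI.php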
         fastcgi_index index.php;
     #    fastcgi_intercept_errors on;
         fastcgi_pass unix:/var/run/php5-fpm-alan.sock;
     }


-- 
Alan Chandler
http://www.chandlerfamily.org.uk


From nginx-forum at nginx.us  Fri May  9 06:36:53 2014
From: nginx-forum at nginx.us (beatnut)
Date: Fri, 09 May 2014 02:36:53 -0400
Subject: fastcgi cache path keys zone=name:size
Message-ID: 

Hi all
I have a simple question, but unfortunately I can't find any information.

How many entries can 1MB of memory hold, as configured by the size in
keys_zone=name:size?

Thanks in advance for your reply

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249973,249973#msg-249973


From techgarry75 at yahoo.com  Fri May  9 06:49:55 2014
From: techgarry75 at yahoo.com (M. G.)
Date: Thu, 8 May 2014 23:49:55 -0700 (PDT)
Subject: proxying of POST requests based on $args not working
Message-ID: <1399618195.91874.YahooMailNeo@web125905.mail.ne1.yahoo.com>

Hi,

Proxying of GET requests based on $args (i.e. feedid=293634) goes to server2 properly.

But proxying of POST requests based on $args (i.e. feedid=293634) always goes to server1 instead of server2.

The following is the configuration:

1. /etc/nginx/nginx.conf contains

user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/proxy.conf;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_names_hash_bucket_size 64;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 70;
include /tmp/routing.conf ;
}

2. /etc/nginx/proxy.conf contains

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 100m;
client_body_buffer_size 100m;
proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffers 32 4k;
proxy_next_upstream error timeout http_500 http_502 http_503 http_504 http_404 invalid_header;
log_format postdata $request_body;


3. /tmp/routing.conf contains

upstream server1 {
    server 192.168.20.202:8090;
}
upstream server2 {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name mysubdomain.domain.co.in;

    location / {
        proxy_pass http://server1;

        if ( $args ~ 'feedid=293634' ) {
            proxy_pass http://server2;
        }
        if ( $request_method = POST ) {
            set $test P;
        }
        if ( $args ~ 'feedid=293634' ) {
            set $test "${test}C";
        }
        if ( $test = PC) {
            proxy_pass http://server2;
        }
    }
}

Any suggestions?

Thanks,
M. G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Fri May  9 07:06:36 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 May 2014 11:06:36 +0400
Subject: proxying of POST requests based on $args not working
In-Reply-To: <1399618195.91874.YahooMailNeo@web125905.mail.ne1.yahoo.com>
References: <1399618195.91874.YahooMailNeo@web125905.mail.ne1.yahoo.com>
Message-ID: <20140509070636.GD1849@mdounin.ru>

Hello!

On Thu, May 08, 2014 at 11:49:55PM -0700, M. G. wrote:

> Hi,
> 
> The proxying of GET requests on $args i.e. feedid=293634 goes to server2 properly.
> 
> But proxying of POST requests on $args i.e. feedid=293634 always goes to server1 instead of server2.

The $args variable is "arguments in the request line", see 
http://nginx.org/r/$args.  It is not expected to contain any data 
from POST request body.
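
For illustration only, a rough, untested sketch; it assumes the application
can be changed so that feedid is also sent in the URL query string of the
POST request (the body itself cannot be inspected this way). A map in the
http{} context could then select the backend:

map "$request_method:$args" $feed_backend {
    default                   http://server1;
    "~^POST:.*feedid=293634"  http://server2;
}

server {
    listen 80;
    server_name mysubdomain.domain.co.in;

    location / {
        proxy_pass $feed_backend;
    }
}

Since server1 and server2 are defined as upstream{} groups, proxy_pass can
take the variable without a resolver.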

-- 
Maxim Dounin
http://nginx.org/


From techgarry75 at yahoo.com  Fri May  9 07:28:25 2014
From: techgarry75 at yahoo.com (M. G.)
Date: Fri, 9 May 2014 00:28:25 -0700 (PDT)
Subject: proxying of POST requests based on $args not working
In-Reply-To: <20140509070636.GD1849@mdounin.ru>
References: <1399618195.91874.YahooMailNeo@web125905.mail.ne1.yahoo.com>
 <20140509070636.GD1849@mdounin.ru>
Message-ID: <1399620505.70837.YahooMailNeo@web125903.mail.ne1.yahoo.com>

Hi,

> On Friday, 9 May 2014 12:36 PM, Maxim Dounin wrote:
>
> The $args variable is "arguments in the request line", see http://nginx.org/r/$args. It is not expected to contain any data from the POST request body.

Firstly thank you for your quick response.

We need to proxy a specific request, e.g. when feedid=293634 and the request method is POST.

How can we achieve this? If possible, please provide a sample configuration for our reference.

Thanks,

M. G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kirpit at gmail.com  Fri May  9 08:57:25 2014
From: kirpit at gmail.com (kirpit)
Date: Fri, 9 May 2014 11:57:25 +0300
Subject: $memcached_key doesn't fetch unicode url
In-Reply-To: <20140509011326.GF1849@mdounin.ru>
References: 
 <20140509011326.GF1849@mdounin.ru>
Message-ID: 

A note for the archives that is good to know.

Thanks Maxim, even though we chose not to use unicode URLs to keep things
less complicated.


On Fri, May 9, 2014 at 4:13 AM, Maxim Dounin  wrote:

> Hello!
>
> On Sun, May 04, 2014 at 06:42:39PM +0300, kirpit wrote:
>
> > Hi,
> >
> > I'm doing some sort of downstream cache that I save every entire page
> into
> > memcached from the application with urls such as:
> >
> > www.example.com/file/path/?query=1
> >
> > then I'm fetching them from nginx if available with the config:
> >     location / {
> >         # try to fetch from memcached
> >         set                 $memcached_key "$host$request_uri";
> >         memcached_pass      localhost:11211;
> >         expires             10m;
> >         # not fetched from memcached, fallback
> >         error_page          404 405 502 = @fallback;
> >     }
> >
> >
> > This works perfectly fine for latin char urls. However, it fails to catch
> > unicode urls such as:
> >
> >
> www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1
> >
> > The unicode char "%C4%B0" appears same in the nginx logs, application
> cache
> > setting key (that is actually taken from raw REQUEST_URI what nginx
> gives).
> >
> > The example url content also exist in the memcached itself when if I try:
> > "get
> >
> www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1
> > "
> >
> > However, nginx cannot fetch anything from memcached, @fallbacks every
> time.
> >
> > I'm using version 1.6.0. Any help is much appreciated.
>
> Value of $memcached_key is escaped to make sure memcached protocol
> constraints are not violated - most notably, space and "%"
> character are escaped into "%20" and "%25", respectively (space is
> escaped as it's not allowed in memcached keys, and "%" to make the
> escaping reversible).
>
> Given the
>
>
> www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%C4%B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1
>
> $memcached_key, nginx will try to fetch
>
>
> www.example.com/flights/istc-lonc-11062014/lonc-istc-19062014/Cheap-return-tickets-from-%25C4%25B0stanbul-to-Londra-in-Haziran.html?class=economy&adult=1
>
> from memcached.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Fri May  9 09:16:46 2014
From: nginx-forum at nginx.us (beatnut)
Date: Fri, 09 May 2014 05:16:46 -0400
Subject: $arg_name as an array
Message-ID: 

Hello,

Is it possible to use $arg_name as an array?
For example,
I have the query string: ?opt[test]=1

I'd like to get the value of opt[test], but $arg_opt[test] doesn't work.

Is there a special syntax for that case?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249982,249982#msg-249982


From nginx-forum at nginx.us  Fri May  9 09:39:24 2014
From: nginx-forum at nginx.us (JSurf)
Date: Fri, 09 May 2014 05:39:24 -0400
Subject: Proxy buffering
In-Reply-To: <20140509040751.GQ1849@mdounin.ru>
References: <20140509040751.GQ1849@mdounin.ru>
Message-ID: 

Great news! Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244680,249983#msg-249983


From contact at jpluscplusm.com  Fri May  9 09:54:44 2014
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Fri, 9 May 2014 10:54:44 +0100
Subject: $arg_name as an array
In-Reply-To: 
References: 
Message-ID: 

On 9 May 2014 10:16, beatnut  wrote:
> Hello,
>
> Does it possible to use $arg_name as an array?
> For example
> I've query string : ?opt[test]=1
>
> I'd like to get value od opt[test] but $arg_opt[test] doesn't work.
>
> Is there special syntax for that case?

Query strings arguments are just strings. Whatever encoding you layer
on top of those strings, in order to be able to treat them as arrays,
is entirely subjective: you could be using any one of a number of
encodings. Nginx has no mechanism to deal with this in its vanilla
setup, as far as I'm aware.

I don't know how its embedded perl or lua modules might be /able/ to
treat them, but you'd almost certainly have to /train/ them (at least
trivially) in how to understand your specific encoding: semi-colon
separated? Brackets of some kind? Some more densely packed binary
encoding which you've then had to base64 encode because of HTTP? Nginx
doesn't know this, and doesn't give you this functionality out of the
box.

J


From matt.gray at 7digital.com  Fri May  9 10:02:16 2014
From: matt.gray at 7digital.com (Matt Gray)
Date: Fri, 9 May 2014 11:02:16 +0100
Subject: $arg_name as an array
In-Reply-To: 
References: 
Message-ID: 

On 9 May 2014 10:16, beatnut  wrote:

>
> Does it possible to use $arg_name as an array?
> For example
> I've query string : ?opt[test]=1
>
> I'd like to get value od opt[test] but $arg_opt[test] doesn't work.
>
> Is there special syntax for that case?
>

This thread might be of interest: http://forum.nginx.org/read.php?11,241016

I haven't tested it but it suggests that the syntax ${arg_opt[test]} might
do what you require.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matt.gray at 7digital.com  Fri May  9 10:04:04 2014
From: matt.gray at 7digital.com (Matt Gray)
Date: Fri, 9 May 2014 11:04:04 +0100
Subject: $arg_name as an array
In-Reply-To: 
References: 
 
Message-ID: 

Note that this is not "treating $arg_name as an array"; it simply
allows you to use the [ and ] characters in a variable name - usually only
a-z, 0-9 and _ are allowed.


On 9 May 2014 11:02, Matt Gray  wrote:

> On 9 May 2014 10:16, beatnut  wrote:
>
>>
>> Does it possible to use $arg_name as an array?
>> For example
>> I've query string : ?opt[test]=1
>>
>> I'd like to get value od opt[test] but $arg_opt[test] doesn't work.
>>
>> Is there special syntax for that case?
>>
>
> This thread might be of interest:
> http://forum.nginx.org/read.php?11,241016
>
> I haven't tested it but it suggests that the syntax ${arg_opt[test]} might
> do what you require.
>
>
>


-- 
*Matt Gray*
API Developer

*T:* +44(0)207 099 7777
*E:* matt.gray at 7digital.com
*W:* www.7digital.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Fri May  9 10:18:52 2014
From: nginx-forum at nginx.us (beatnut)
Date: Fri, 09 May 2014 06:18:52 -0400
Subject: $arg_name as an array
In-Reply-To: 
References: 
Message-ID: <8b4b5ca17e3c8964c7cf89152f5dd3a3.NginxMailingListEnglish@forum.nginx.org>

Thanks for the explanation.
I'll try the example above.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249982,249987#msg-249987


From matt.gray at 7digital.com  Fri May  9 10:23:53 2014
From: matt.gray at 7digital.com (Matt Gray)
Date: Fri, 9 May 2014 11:23:53 +0100
Subject: Multiple reverse proxies that read from /static/ ?
In-Reply-To: 
References: 
Message-ID: 

I think your comment "intuitively nesting doesn't work" is correct - you
wish to merge the URI spaces of Django and IPython into one, which means
that if both apps need (e.g.) /static/css/main.css, they will collide and one
app will always get the wrong file. It might be that you can avoid
collisions of this type by chance, but then you run the risk of a collision
occurring due to an upgrade of either app.

I would either (a) see if you can modify the static files location for the
apps, or (b) serve one or both off a subdomain - notebook.mysite.com or
similar. If you really want to try the "merge" approach, try_files
http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files might be
of use.
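
To make the try_files idea concrete, here is a rough, untested sketch using
the paths from your original post; it only works for as long as the two apps
never need *different* files under the same /static/ URI:

location /static/ {
    root /opt/django/django_project;                     # Django's copy wins when present
    try_files $uri @ipython_static;
}

location @ipython_static {
    root /usr/lib/python2.6/site-packages/IPython/html;  # fall back to IPython's copy
}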


On 8 May 2014 20:50, kafonek  wrote:

> Hello,
>
> Sorry for the beginner question, but I can't seem to find an example
> anywhere that shows how to handle nginx hosting more than one reverse proxy
> (such as a Django app and an Ipython notebook) where both systems have
> static files in different directories.
>
> The relevant parts of my nginx conf look something like this -
>
> location /django {
>     #gunicorn server running on localhost port 8000
>     proxy_pass http://127.0.0.1:8000/;
> }
>
> location /ipython {
>     #ipython notebook server (tornado) running on localhost port 9999
>     proxy_pass http://127.0.0.1:9999/;
> }
>
> location /static {
>     alias /opt/django/django_project/static/;
>     #alias /usr/lib/python2.6/site-packages/IPython/html/static/;
> }
>
>
> Is there a way to specify two different directories of static files or nest
> a location /static/ inside the other blocks?  Intuitively nesting doesn't
> work, because the css is calling /static, not /ipython/static or
> /django/static.
>
> My end goal would be to go to http://mysite.com/django/ to see my Django
> pages and http://mysite.com/ipython/ to get to my ipython notebook server.
> Either one works right now if I point location /static {} to the
> appropriate
> alias, but I don't know how to get both working at the same time.
>
> Thanks in advance.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249943,249943#msg-249943
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
*Matt Gray*
API Developer

*T:* +44(0)207 099 7777
*E:* matt.gray at 7digital.com
*W:* www.7digital.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From me at tommehm.com  Fri May  9 12:36:48 2014
From: me at tommehm.com (Tom McLoughlin)
Date: Fri, 09 May 2014 13:36:48 +0100
Subject: subs filter error
Message-ID: <536CCBE0.9090703@tommehm.com>

I'm running a TPB proxy on nginx, using subs_filter to monetize the proxy
with ads, and I keep getting this error every time someone loads a page:

subs filter header ignored, this may be a compressed response. while
reading response header from upstream, client: xx.xx.xx.xx, server: ,
request: "GET /search/sharepoint/0/7/0 HTTP/1.1", upstream:
"http://194.71.107.80:80/search/sharepoint/0/7/0", host: "tpb.rtbt.me",
referrer: "http://tpb.rtbt.me/search/sharepoint/0/99/"

My configuration is available at,
http://p.ngx.cc/d7eacc9934caa82a


From nginx-forum at nginx.us  Fri May  9 13:20:23 2014
From: nginx-forum at nginx.us (kafonek)
Date: Fri, 09 May 2014 09:20:23 -0400
Subject: Multiple reverse proxies that read from /static/ ?
In-Reply-To: 
References: 
Message-ID: 

Yep, thanks Matt.

In case anyone else runs across this post, Django static url prefixes are
configured in your django project settings.py
(https://docs.djangoproject.com/en/1.6/ref/settings/#std:setting-STATIC_URL).
 Ipython notebook static url prefixes are configured in the
NotebookApp.webapp settings
(http://ipython.org/ipython-doc/rel-1.1.0/interactive/public_server.html#running-with-a-different-url-prefix).


I was definitely trying to tackle the problem the wrong way.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249943,249996#msg-249996


From shahzaib.cb at gmail.com  Fri May  9 14:58:44 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Fri, 9 May 2014 19:58:44 +0500
Subject: Caching servers in Local ISPs !!
Message-ID: 

Hello,

      We're running a high-traffic website similar to youtube.com. Due to
high bandwidth utilization over the network, we're in contact with a
local ISP about putting a caching server in place to reduce bandwidth
utilization for file streaming. Our main front-end content servers (nginx)
are located in the U.S. and we want to put caching servers in Asia, as most
of the traffic originates from Asia.

We've no idea how this caching would work. Would the caching servers be
configured and deployed by the local ISP, or are we required to do some work
in our application code?

I know it is a bit off topic on the nginx forum, but this is the only forum
that is intensively active and helpful.

Please guide me a bit; I am new to this caching environment.

Shahzaib
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rainer at ultra-secure.de  Fri May  9 17:18:43 2014
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Fri, 9 May 2014 19:18:43 +0200
Subject: Caching servers in Local ISPs !!
In-Reply-To: 
References: 
Message-ID: <728DF542-48F0-4EB1-9717-7A41B5FF84DD@ultra-secure.de>


On 09.05.2014 at 16:58, shahzaib shahzaib wrote:

> Hello,
> 
>       We're running a high traffic website similar to youtube.com. Due to high bandwidth utilization over the network, we're in contact with the local ISP in order to put caching server to reduce bandwidth utilization for file streaming. Our main front end content servers (nginx) are located in U.S and we want to put caching servers in ASIA as most of the traffic is originating from asia.
> 
> We've no idea how this caching would work. Would the caching servers will be configured and deployed by Local ISP ? or we're required to do some work with our application coding ?
> 
> I know it is bit off topic on nginx forum. But this is the only forum that is intensively active and helpful.
> 
> Please guide me a bit, i am new to this caching environment.
> 



I think you would need to move your DNS to somebody like easydns.com or dyn.com (just off the top of my head, there are probably lots more) and use their geo-location feature.

That way, somebody with an IP from Asia will receive the IP of your Asian server when it asks DNS for the IP of your domain name.

Maybe somebody who has done this before can comment - I've looked into it, but never seriously.

;-)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shahzaib.cb at gmail.com  Fri May  9 17:44:22 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Fri, 9 May 2014 22:44:22 +0500
Subject: Caching servers in Local ISPs !!
In-Reply-To: <728DF542-48F0-4EB1-9717-7A41B5FF84DD@ultra-secure.de>
References: 
 <728DF542-48F0-4EB1-9717-7A41B5FF84DD@ultra-secure.de>
Message-ID: 

@Rainer, we're already in contact with one of our country's ISPs (80% of the
country's users are on that ISP's services), so they can do a much better
job than the DNS providers you mentioned, because we only require caching for
our country.


On Fri, May 9, 2014 at 10:18 PM, Rainer Duffner wrote:

>
> On 09.05.2014 at 16:58, shahzaib shahzaib wrote:
>
> Hello,
>
>       We're running a high traffic website similar to youtube.com. Due to
> high bandwidth utilization over the network, we're in contact with the
> local ISP in order to put caching server to reduce bandwidth utilization
> for file streaming. Our main front end content servers (nginx) are located
> in U.S and we want to put caching servers in ASIA as most of the traffic is
> originating from asia.
>
> We've no idea how this caching would work. Would the caching servers will
> be configured and deployed by Local ISP ? or we're required to do some work
> with our application coding ?
>
> I know it is bit off topic on nginx forum. But this is the only forum that
> is intensively active and helpful.
>
> Please guide me a bit, i am new to this caching environment.
>
>
>
>
> I think you would need to move your DNS to somebody like easydns.com or
> dyn.com (just of the top of my head, there are probably lots more) and
> use their geo-location feature.
>
> That way, somebody with an IP from Asia will receive the  IP of your Asian
> server when it asks DNS about the IP of your domain-name.
>
> Maybe somebody who has done this before can comment - I've looked into it,
> but never seriously.
>
> ;-)
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Fri May  9 18:49:33 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 09 May 2014 14:49:33 -0400
Subject: Caching servers in Local ISPs !!
In-Reply-To: 
References: 
Message-ID: 

It's quite simple; think of it this way: a DNS entry does not have to point
to the same IP everywhere.

Place your cache machines at an ISP, have them assign an IP to your
preferred DNS name, and that's about it.

The rest, like distribution, works like a reverse riverbed setup with a master
mirror, rsync or the like.

And of course this can all be done with nginx at all locations.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250000#msg-250000


From shahzaib.cb at gmail.com  Fri May  9 19:22:30 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Sat, 10 May 2014 00:22:30 +0500
Subject: Caching servers in Local ISPs !!
In-Reply-To: 
References: 
 
Message-ID: 

@itpp thanks for replying.

So, on an easy note, I would have to assign those machines the preferred DNS
name and use rsync on a regular basis in order to keep the data identical
between the local caching machines and the main front-end content servers?

What if a client requests a video which is not on the local caching server?
Does nginx have a configuration to check for the file locally and then
forward the request to the main content servers if the requested file is not
cached locally?

I need a bit of guidance in order to configure nginx this way.

Shahzaib



On Fri, May 9, 2014 at 11:49 PM, itpp2012  wrote:

> Its quite simple, think of it this way, a DNS entry does not have to point
> to the same IP everywhere.
>
> Place your cache machines at a ISP, have them assign its IP to your
> preferred dns name, thats about it.
>
> The rest like distribution works like a reverse riverbed with a master
> mirror, rsync or the likes.
>
> And of course this can all be done with nginx at all locations.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249997,250000#msg-250000
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Fri May  9 20:01:46 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 09 May 2014 16:01:46 -0400
Subject: Caching servers in Local ISPs !!
In-Reply-To: 
References: 
Message-ID: 

> So on easy note, i would have to assign those machines the preferred
> dns
> and use rsync on regular basis in order to make identical data 
> between
> local caching machines and main front end content servers ?

Yep.

> What if a client request a video which is not in local caching server
> ?

You need to maintain a cache index on each cache machine in order to
determine what is available to the users. For most content you need to do
this anyway, since not all content can legally be everywhere, and you may
also want to customize what you present for each region.

> Does nginx has the configuration for it to check the files locally and
> then
> forward the request to main content servers if requested file is not
> cached
> locally ?

There are many ways to do this with nginx and Lua, but an independent cache
index would be much better; with it you can do much more, like redirecting to
a content source elsewhere depending on load and demand. You simply feed
nginx the cache index. A very simplistic cache index system is to abuse a
local (local to nginx) DNS server: assign local IPs to resources and change
them according to load and demand. Again, with a local DNS you can assign
whatever you want to a DNS name; with a local TTL of 15 seconds and nginx
load balancing between 4 regional resources it will be peanuts to change the
load based on demand (provided you have monitoring in place which can act on
such data). Basically a DIY BGP :)
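
As a minimal, nginx-only sketch of the "check locally, otherwise forward"
part quoted above (paths and hostnames are placeholders, and this does not
implement the cache-index or DNS ideas):

location /videos/ {
    root /data/local_mirror;               # the rsync'ed local copy
    try_files $uri @origin;                # serve it locally if present...
}

location @origin {
    proxy_pass http://origin.example.com;  # ...otherwise forward to the US origin
}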

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250002#msg-250002


From contact at jpluscplusm.com  Fri May  9 22:05:41 2014
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Fri, 9 May 2014 23:05:41 +0100
Subject: subs filter error
In-Reply-To: <536CCBE0.9090703@tommehm.com>
References: <536CCBE0.9090703@tommehm.com>
Message-ID: 

On 9 May 2014 13:36, Tom McLoughlin  wrote:
> I keep getting this error every time someone loads a page.
> subs filter header ignored, this may be a compressed response. while
> reading response header from upstream, client: xx.xx.xx.xx, server: ,
> request: "GET /search/sharepoint/0/7/0 HTTP/1.1", upstream:
> "http://194.71.107.80:80/search/sharepoint/0/7/0", host: "tpb.rtbt.me",
> referrer: "http://tpb.rtbt.me/search/sharepoint/0/99/"

So why not stop the upstream responding with a compressed response?

I know how to do this for TPB, having written a *14* line nginx config
to do exactly the same thing, reverse proxying TPB for .. academic
reasons. But you're trying to make money off them, so I don't feel
like sharing. I'll let you figure it out. It's really not difficult.
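
For the archive, the generic technique - only a hedged sketch, not
necessarily the config referred to above - is to clear the Accept-Encoding
header that nginx forwards, so the upstream replies uncompressed and the
substitution filter can see plain text:

location / {
    proxy_set_header Accept-Encoding "";   # upstream then responds uncompressed
    proxy_pass http://upstream_host;       # placeholder upstream name
}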

J


From steve at greengecko.co.nz  Sat May 10 04:24:57 2014
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Sat, 10 May 2014 16:24:57 +1200
Subject: Caching servers in Local ISPs !!
In-Reply-To: 
References: 
 
 
Message-ID: <1399695897.24481.647.camel@steve-new>

You might want to look at lsyncd - a GSoC project - to ease the
synchronisation. I have had good results with it.

Steve
On Sat, 2014-05-10 at 00:22 +0500, shahzaib shahzaib wrote:
> @itpp thanks for replying. 
> 
> 
> So on easy note, i would have to assign those machines the preferred
> dns and use rsync on regular basis in order to make identical data
> between local caching machines and main front end content servers ?
> 
> 
> What if a client request a video which is not in local caching
> server ? Does nginx has the configuration for it to check the files
> locally and then forward the request to main content servers if
> requested file is not cached locally ?
> 
> 
> I need a bit of guidance in order to configure nginx this way. 
> 
> 
> Shahzaib
> 
> 
> 
> 
> On Fri, May 9, 2014 at 11:49 PM, itpp2012 
> wrote:
>         Its quite simple, think of it this way, a DNS entry does not
>         have to point
>         to the same IP everywhere.
>         
>         Place your cache machines at a ISP, have them assign its IP to
>         your
>         preferred dns name, thats about it.
>         
>         The rest like distribution works like a reverse riverbed with
>         a master
>         mirror, rsync or the likes.
>         
>         And of course this can all be done with nginx at all
>         locations.
>         
>         Posted at Nginx Forum:
>         http://forum.nginx.org/read.php?2,249997,250000#msg-250000
>         
>         _______________________________________________
>         nginx mailing list
>         nginx at nginx.org
>         http://mailman.nginx.org/mailman/listinfo/nginx
>         
> 
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa


From shahzaib.cb at gmail.com  Sat May 10 09:19:37 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Sat, 10 May 2014 14:19:37 +0500
Subject: Caching servers in Local ISPs !!
In-Reply-To: <1399695897.24481.647.camel@steve-new>
References: 
 
 
 <1399695897.24481.647.camel@steve-new>
Message-ID: 

Thanks for replying, guys.

Can I use nginx as origin and edge servers, as in the question at the following link?

http://stackoverflow.com/questions/10024981/distributed-cached-mp4-pseudostreaming-seeking-with-nginx

If I use the origin-and-edge method, I think I'll change my application
code to redirect local country traffic to the edge webservers (the ISP caching
servers for video files); the edge server will check whether the requested
file is in its cache and, if not, fetch the requested video file from the
origin web server located in the U.S. and cache it locally.

For this procedure,

I'll have to configure DNS A entries for the local ISP caching servers and
put those DNS names into my application code to stream videos from those LOCAL
CACHING SERVERS for the specific country.

Please correct me if I am wrong.
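
A very rough sketch of what such an nginx edge could look like (hostnames,
paths and sizes are only placeholders, and mp4 pseudo-streaming with ?start=
offsets or Range requests needs extra care beyond this):

proxy_cache_path /var/cache/nginx/videos levels=1:2 keys_zone=videos:100m
                 max_size=200g inactive=7d;

server {
    listen 80;
    server_name edge.example.com;                  # the edge host at the ISP

    location / {
        proxy_cache       videos;
        proxy_cache_key   $scheme$host$uri;
        proxy_cache_valid 200 7d;
        proxy_pass        http://origin.example.com;   # the US origin
    }
}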






On Sat, May 10, 2014 at 9:24 AM, Steve Holdoway wrote:

> You might want to look at lsyncd - a GZSOC project - to ease the
> synchronisation. I have had good results with it.
>
> Steve
> On Sat, 2014-05-10 at 00:22 +0500, shahzaib shahzaib wrote:
> > @itpp thanks for replying.
> >
> >
> > So on easy note, i would have to assign those machines the preferred
> > dns and use rsync on regular basis in order to make identical data
> > between local caching machines and main front end content servers ?
> >
> >
> > What if a client request a video which is not in local caching
> > server ? Does nginx has the configuration for it to check the files
> > locally and then forward the request to main content servers if
> > requested file is not cached locally ?
> >
> >
> > I need a bit of guidance in order to configure nginx this way.
> >
> >
> > Shahzaib
> >
> >
> >
> >
> > On Fri, May 9, 2014 at 11:49 PM, itpp2012 
> > wrote:
> >         Its quite simple, think of it this way, a DNS entry does not
> >         have to point
> >         to the same IP everywhere.
> >
> >         Place your cache machines at a ISP, have them assign its IP to
> >         your
> >         preferred dns name, thats about it.
> >
> >         The rest like distribution works like a reverse riverbed with
> >         a master
> >         mirror, rsync or the likes.
> >
> >         And of course this can all be done with nginx at all
> >         locations.
> >
> >         Posted at Nginx Forum:
> >         http://forum.nginx.org/read.php?2,249997,250000#msg-250000
> >
> >         _______________________________________________
> >         nginx mailing list
> >         nginx at nginx.org
> >         http://mailman.nginx.org/mailman/listinfo/nginx
> >
> >
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> --
> Steve Holdoway BSc(Hons) MIITP
> http://www.greengecko.co.nz
> Linkedin: http://www.linkedin.com/in/steveholdoway
> Skype: sholdowa
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From me at tommehm.com  Sat May 10 09:53:17 2014
From: me at tommehm.com (Tom McLoughlin)
Date: Sat, 10 May 2014 10:53:17 +0100
Subject: subs filter error
In-Reply-To: 
References: <536CCBE0.9090703@tommehm.com>
 
Message-ID: <536DF70D.2050302@tommehm.com>

That's the only upstream I'm aware of that works with proxies.

On 09/05/2014 23:05, Jonathan Matthews wrote:
> On 9 May 2014 13:36, Tom McLoughlin  wrote:
>> I keep getting this error every time someone loads a page. subs
>> filter header ignored, this may be a compressed response. while 
>> reading response header from upstream, client: xx.xx.xx.xx,
>> server: , request: "GET /search/sharepoint/0/7/0 HTTP/1.1",
>> upstream: "http://194.71.107.80:80/search/sharepoint/0/7/0",
>> host: "tpb.rtbt.me", referrer:
>> "http://tpb.rtbt.me/search/sharepoint/0/99/"
> 
> So why not stop the upstream responding with a compressed
> response?
> 
> I know how to do this for TPB, having written a *14* line nginx
> config to do exactly the same thing, reverse proxying TPB for ..
> academic reasons. But you're trying to make money off them, so I
> don't feel like sharing. I'll let you figure it out. It's really
> not difficult.
> 
> J
> 
> _______________________________________________ nginx mailing list 
> nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
> 


From nginx-forum at nginx.us  Sat May 10 10:39:55 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sat, 10 May 2014 06:39:55 -0400
Subject: Caching servers in Local ISPs !!
In-Reply-To: 
References: 
Message-ID: <461b89f745fbc7e6c616c28d3fa6a39f.NginxMailingListEnglish@forum.nginx.org>

See http://en.wikipedia.org/wiki/Content_delivery_network
and http://en.wikipedia.org/wiki/File:Akamaiprocess.png

Make yourself an HLD (high-level design) before getting into the technology.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249997,250007#msg-250007


From reallfqq-nginx at yahoo.fr  Sat May 10 18:59:42 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 10 May 2014 20:59:42 +0200
Subject: Strange advisory
Message-ID: 

I just saw something strange on http://nginx.org/en/security_advisories.html
:
"An error log data are not sanitized
Severity: none
CVE-2009-4487 
Not vulnerable: none
Vulnerable: all"

Severity is labelled as 'None', though the CVE talks, among other stuff,
about 'arbitrary commands and file write'.
Is your advisories page wrong? Is the CVE wrong? Has this been solved?
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kurt at x64architecture.com  Sat May 10 19:41:27 2014
From: kurt at x64architecture.com (Kurt Cancemi)
Date: Sat, 10 May 2014 15:41:27 -0400
Subject: Strange advisory
In-Reply-To: 
References: 
Message-ID: 

Hello,

This has not been fixed in current nginx releases, and it is not
directly related to nginx either; the problem is that outdated terminal
emulators would interpret potentially malicious escape sequences in the log
file. This answer http://unix.stackexchange.com/a/15210 explains it
better.

---
Regards,
Kurt Cancemi


On Sat, May 10, 2014 at 2:59 PM, B.R.  wrote:
> I just saw something strange on
> http://nginx.org/en/security_advisories.html:
> "An error log data are not sanitized
> Severity: none
> CVE-2009-4487
> Not vulnerable: none
> Vulnerable: all"
>
> Severity is labelled as 'None', though the CVE talks, among other stuff,
> about 'arbitrary commands and file write'.
> Is your advisories page wrong? Is the CVE wrong? Has this been solved?
> ---
> B. R.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


From luky-37 at hotmail.com  Sat May 10 19:45:14 2014
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Sat, 10 May 2014 21:45:14 +0200
Subject: Strange advisory
In-Reply-To: 
References: 
Message-ID: 

Hi!


> I just saw something strange on
> http://nginx.org/en/security_advisories.html:
> 
> 
> "An error log data are not sanitized
> Severity: none
> CVE-2009-4487
> Not vulnerable: none
> Vulnerable: all"
> 
> 
> 
> Severity is labelled as 'None', though the CVE talks, among other stuff,
> about 'arbitrary commands and file write'.
> Is your advisories page wrong? Is the CVE wrong? Has this been solved?

Afaik the nginx developers didn't agree with this CVE advisory, because it's
actually a terminal problem. Nginx itself cannot be exploited, but the user's
terminal can be when looking at the log files.

Read the advisory for details [1].



Regards,

Lukas


[1] http://www.ush.it/team/ush/hack_httpd_escape/adv.txt 		 	   		  


From nginx-forum at nginx.us  Sat May 10 22:11:33 2014
From: nginx-forum at nginx.us (scgm11)
Date: Sat, 10 May 2014 18:11:33 -0400
Subject: WSS Proxy to a Jetty AppServer
Message-ID: <0cc44c1bf4602253c9ac1af83594a9d5.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm trying to proxy wss (secure WebSockets) to a Jetty server.

I have the Jetty server listening on 8085 (http).

I've made the SSL proxy to 8085 work fine.
I've made the ws proxy to Jetty work OK - web sockets connect and
transmit data -
but wss is not working.
nginx 1.6.0

default:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=UCONTACT:50m
max_size=100m;
server {


        server_name         localhost;
        listen              80;
        listen              443 ssl;
        ssl_certificate     /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        ssl_protocols       SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;




 location / {
        set $no_cache "";
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        proxy_pass http://localhost:8085;
        proxy_cache UCONTACT;
        proxy_cache_key $scheme$host$request_method$request_uri;
        proxy_cache_valid 200 302 1s;
        proxy_cache_valid 301 1s;
        proxy_cache_valid any 1s;
        proxy_cache_use_stale updating;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 1M;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

 location /records/ {
        alias /var/spool/asterisk/monitor/;
 }

 location /agent {
        alias /etc/IntegraServer/web/agent/;
 }

 location /portal {
        alias /etc/IntegraServer/web/portal/;
 }



}


Any idea if my config is wrong?
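
For reference, the WebSocket proxying example in the nginx documentation
uses a map so that Connection is only set to "upgrade" when the client
actually asks for an upgrade (whether that is related to the wss failure
here is only a guess):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# and, inside the location that proxies to Jetty:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;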

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250012,250012#msg-250012


From reallfqq-nginx at yahoo.fr  Sun May 11 04:25:53 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 11 May 2014 06:25:53 +0200
Subject: Strange advisory
In-Reply-To: 
References: 
 
Message-ID: 

I read the StackOverflow thread and it seems there are 2 camps ping-ponging
the problem:
- One says that it is a terminal problem and that control and escape
sequences should not be executed
- The other says that those features are useful and that log files are
supposed to be text-only, and thus safely readable in a terminal (no control
character should be there)

The advisory stands from the second point of view, which I tend to agree
with. If logs, which are supposed to be filled with text, cannot be trusted,
then everything around monitoring (reading, parsing, copying) becomes
a nightmare.

What is the benefit of having those unescaped control characters in a log
file? Escaping them allows you to warn about their presence safely... and
that warning is directly usable by anything, once again safely.
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Sun May 11 09:12:34 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sun, 11 May 2014 05:12:34 -0400
Subject: Strange advisory
In-Reply-To: 
References: 
Message-ID: 

"One man's data is another man's code"

If this happened on Windows you'd scream murder, yet in 2014 you are
advocating an insecure workspace by allowing foreign control sequences to do
out-of-band things.

Anything and anyone can create a file which contains anything; what happens
is the responsibility of whatever views or reads the file, not of its
creator.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250008,250014#msg-250014


From reallfqq-nginx at yahoo.fr  Sun May 11 19:13:38 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 11 May 2014 21:13:38 +0200
Subject: Nginx cache and default error pages
Message-ID: 

Quick question:

Are hardcoded default error pages being cached when *_cache directives
specify their HTTP error code? Or does it only apply to pages specified
with the error_page directive?
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr  Sun May 11 19:16:49 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 11 May 2014 21:16:49 +0200
Subject: SCGI and uwsgi modules docs
Message-ID: 

Those modules seem to be part of nginx since 0.8, but I
could not find any docs on the uwsgi module on the official website.
Why is that?
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iptablez at yahoo.com  Mon May 12 02:05:53 2014
From: iptablez at yahoo.com (Indo Php)
Date: Sun, 11 May 2014 19:05:53 -0700 (PDT)
Subject: Image Filter Error
In-Reply-To: 
References: <1399349488.58506.YahooMailNeo@web142302.mail.bf1.yahoo.com>
 
Message-ID: <1399860353.64261.YahooMailNeo@web142303.mail.bf1.yahoo.com>

Hi,

Thanks for your answer. However, if I resize it to a bigger size, it works fine.


On Thursday, May 8, 2014 11:44 AM, Nicholas Sherlock  wrote:
 
On 6 May 2014 16:11, Indo Php  wrote:

Hi
>
>
>When doing resizing on the image, I got the error below
>gd-png: fatal libpng error: IDAT: CRC error
>
>

I'm pretty sure this means that your PNG image file is damaged (the CRC isn't correct). Try opening and resaving your PNG in an image editor to regenerate the CRC.

Cheers,
Nicholas Sherlock
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From techgarry75 at yahoo.com  Mon May 12 07:29:07 2014
From: techgarry75 at yahoo.com (M. G.)
Date: Mon, 12 May 2014 00:29:07 -0700 (PDT)
Subject: Proxying POST requests for the given conditions
Message-ID: <1399879747.27493.YahooMailNeo@web125902.mail.ne1.yahoo.com>

Hi,

For all POST requests to nginx, we want to proxy_pass to
http://192.168.1.1:8000 if test=1, or to http://192.168.1.2:8080 if test=0.

How can we configure this?

Thanks,
M. G.
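Here is a minimal sketch of one way to do it, assuming "test" arrives as a
query-string argument (nginx cannot inspect the POST body itself without
extra modules such as the Lua module):

    # map the query argument to a backend; default to the test=0 backend
    map $arg_test $post_backend {
        default http://192.168.1.2:8080;
        1       http://192.168.1.1:8000;
    }

    server {
        listen 80;

        location / {
            # only POST requests are routed to the mapped backend
            if ($request_method = POST) {
                proxy_pass $post_backend;
            }
            # non-POST handling goes here
        }
    }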
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shahzaib.cb at gmail.com  Mon May 12 13:01:46 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Mon, 12 May 2014 18:01:46 +0500
Subject: http error on upload !!
Message-ID: 

We're receiving an http error most of the time when uploading files of around
500MB. We're using nginx+php-fpm.

nginx client_max_body_size is set to 3000M.

php.ini settings are:

upload_max_filesize 3000M
post_max_size       3000M
max_execution_time  6400
max_input_time      6400
memory_limit        2048M

php-fpm timeout setting is 60sec.

What more do I need to check?

Regards.
Shahzaib
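For reference, a sketch of the nginx-side directives that usually matter for
large uploads to php-fpm (values below are illustrative, not recommendations):

    client_max_body_size  3000m;
    client_body_timeout   300s;
    # when passing the upload to php-fpm over FastCGI:
    fastcgi_send_timeout  300s;
    fastcgi_read_timeout  300s;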
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From myselfasunder at gmail.com  Mon May 12 14:41:47 2014
From: myselfasunder at gmail.com (Dustin Oprea)
Date: Mon, 12 May 2014 10:41:47 -0400
Subject: SSL Client Authentication
Message-ID: 

I have the following *server* configuration for client-authentication:

    ssl on;
    ssl_certificate     /.../deploy_api_certificate.pem;
    ssl_certificate_key /.../deploy_api_private.pem;

    ssl_client_certificate /.../ca_cert.pem;
    ssl_verify_client on;
    ssl_verify_depth 1;


It looks like I get a "Bad Request" (400) when I use a certificate signed
by a different CA. So, what's the point of the *ssl_client_verify* variable?



Dustin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vbart at nginx.com  Mon May 12 16:58:40 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 12 May 2014 20:58:40 +0400
Subject: SCGI and uwsgi modules docs
In-Reply-To: 
References: 
Message-ID: <1958266.EVH2WeNTVk@vbart-workstation>

On Sunday 11 May 2014 21:16:49 B.R. wrote:
> Those modules seem to be part of nginx since
> 0.8but I
> could not find any docs on the uwsgi module on the official website.
> Why that?

Work in progress, coming soon...

btw, http://hg.nginx.org/nginx.org/

  wbr, Valentin V. Bartenev


From mdounin at mdounin.ru  Mon May 12 17:29:47 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 12 May 2014 21:29:47 +0400
Subject: SSL Client Authentication
In-Reply-To: 
References: 
Message-ID: <20140512172947.GK1849@mdounin.ru>

Hello!

On Mon, May 12, 2014 at 10:41:47AM -0400, Dustin Oprea wrote:

> I have the following *server* configuration for client-authentication:
> 
>     ssl on;
>     ssl_certificate     /.../deploy_api_certificate.pem;
>     ssl_certificate_key /.../deploy_api_private.pem;
> 
>     ssl_client_certificate /.../ca_cert.pem;
>     ssl_verify_client on;
>     ssl_verify_depth 1;
> 
> 
> It looks like I get a "Bad Request" (400) when I use a certificate signed
> by a different CA. So, what's the point of the *ssl_client_verify* variable?

It's mostly useful with "ssl_verify_client optional", see 
http://nginx.org/r/ssl_verify_client for details.
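For illustration, a sketch of the "optional" pattern (the upstream name and
the header are assumptions, not part of the original configuration):

    ssl_verify_client optional;

    location / {
        # $ssl_client_verify is "SUCCESS", "FAILED" or "NONE"
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
        proxy_set_header X-Client-Verify $ssl_client_verify;
        proxy_pass http://backend;
    }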

-- 
Maxim Dounin
http://nginx.org/


From mdounin at mdounin.ru  Mon May 12 18:35:55 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 12 May 2014 22:35:55 +0400
Subject: Nginx cache and default error pages
In-Reply-To: 
References: 
Message-ID: <20140512183555.GL1849@mdounin.ru>

Hello!

On Sun, May 11, 2014 at 09:13:38PM +0200, B.R. wrote:

> Quick question:
> 
> Are hardcoded default error pages being cached when *_cache directives
> specify their HTTP error code? Or does it only apply to pages specified
> with the error_page directive?

By default, nginx will cache pages as returned by an upstream 
server, regardless of status codes.

If you use proxy_intercept_errors to overwrite error pages 
returned by the upstream, this applies to error codes with 
corresponding error_page set only.
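For illustration (the cache name and error page are placeholders):

    proxy_cache            my_cache;
    proxy_cache_valid      200 404 10m;   # upstream 404s are cached like other responses
    proxy_intercept_errors on;
    error_page 404 /custom_404.html;      # only codes listed in error_page are intercepted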

-- 
Maxim Dounin
http://nginx.org/


From lists at ruby-forum.com  Mon May 12 22:46:37 2014
From: lists at ruby-forum.com (Peter B.)
Date: Tue, 13 May 2014 00:46:37 +0200
Subject: Dropped https client connection doesn't drop backend proxy_pass
 connection
In-Reply-To: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com>
References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com>
Message-ID: 

We're affected by this issue as well. Is there an open ticket where we 
can track progress at http://trac.nginx.org/?

-- 
Posted via http://www.ruby-forum.com/.


From francis at daoine.org  Mon May 12 23:34:30 2014
From: francis at daoine.org (Francis Daly)
Date: Tue, 13 May 2014 00:34:30 +0100
Subject: CGI support - Sorry to bring it up
In-Reply-To: <536B908C.3030907@cosmicperl.com>
References: <536B908C.3030907@cosmicperl.com>
Message-ID: <20140512233430.GC16942@daoine.org>

On Thu, May 08, 2014 at 03:11:24PM +0100, Lyle wrote:

Hi there,

> For some of our old Perl CGI scripts we've hit the issue I'm sure
> most of you are familiar with. I've searched for solutions and have
> found a number, all of which have various caveats. It's unclear as
> to what they best way to deal with this is. Along with plain CGI
> (and fastcgi) suexec is an important security feature to ensure that
> compromised scripts don't have permission to wreak havoc on other
> user accounts, and run things with tight permissions (along with
> sorting our FTP script upload issues you can have).

I may be being slow here, but: what's the specific issue you're concerned
about?

suexec is a way for a (CGI) script-processing server to run scripts
under a separate user account.

nginx doesn't do CGI.

nginx does most kinds of "active" content by being a client to another
server which actually does the work. That server could run suexec,
I suppose, or it could run everything under a separate user account.
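If the goal is simply to keep serving legacy CGI scripts behind nginx, one
common approach (not part of nginx itself) is a small FastCGI-to-CGI wrapper
such as fcgiwrap; the paths below are assumptions:

    location /cgi-bin/ {
        gzip off;
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/lib/cgi-bin$fastcgi_script_name;
        fastcgi_pass  unix:/var/run/fcgiwrap.socket;
    }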

Cheers,

	f
-- 
Francis Daly        francis at daoine.org


From francis at daoine.org  Mon May 12 23:41:30 2014
From: francis at daoine.org (Francis Daly)
Date: Tue, 13 May 2014 00:41:30 +0100
Subject: http error on upload !!
In-Reply-To: 
References: 
Message-ID: <20140512234130.GD16942@daoine.org>

On Mon, May 12, 2014 at 06:01:46PM +0500, shahzaib shahzaib wrote:

Hi there,

> We're receiving http error most of the time when uploading files with
> 500MB. We're using nginx+php-fpm.

> What more i need to check ?

I'd suggest checking what http error you are getting, and what the log
files say about it.
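For example (the URL and file name are placeholders), something like this
shows the exact response code, and the error log often indicates which limit
was hit:

    curl -v -o /dev/null -F "file=@big-500M.bin" http://example.com/upload.php
    tail -n 50 /var/log/nginx/error.log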

Good luck with it,

	f
-- 
Francis Daly        francis at daoine.org


From gaoping at richinfo.cn  Tue May 13 03:25:18 2014
From: gaoping at richinfo.cn (gaoping at richinfo.cn)
Date: Tue, 13 May 2014 11:25:18 +0800
Subject: Compile nginx 1.4.7  --with-http_rewrite_module Error
Message-ID: <2014051311251824902813@richinfo.cn>

hi:
     I compiled nginx 1.4.7 with the --with-pcre=/root/nginx/pcre-8.34 and --with-http_rewrite_module options, and got this error: ./configure: error: invalid option "--with-http_rewrite_module". Can anyone tell me what the problem is?
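As the replies below note, the rewrite module is built by default, so there is
no --with flag for it; dropping the flag avoids the invalid-option error:

    ./configure --with-pcre=/root/nginx/pcre-8.34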



gaoping at richinfo.cn
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Tue May 13 04:08:59 2014
From: nginx-forum at nginx.us (justink101)
Date: Tue, 13 May 2014 00:08:59 -0400
Subject: Return JSON for 404 error instead of html
Message-ID: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org>

How can I return a custom JSON body on 404, instead of the default html of:

    <html>
    <head><title>404 Not Found</title></head>
    <body bgcolor="white">
    <center><h1>404 Not Found</h1></center>
    <hr><center>nginx</center>
    </body>
    </html>
nginx
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250033,250033#msg-250033 From steve at greengecko.co.nz Tue May 13 05:09:13 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 13 May 2014 17:09:13 +1200 Subject: Compile nginx 1.4.7 --with-http_rewrite_module Error In-Reply-To: <2014051311251824902813@richinfo.cn> References: <2014051311251824902813@richinfo.cn> Message-ID: <1399957753.24481.735.camel@steve-new> On Tue, 2014-05-13 at 11:25 +0800, gaoping at richinfo.cn wrote: > hi: > I compiled nginx1.4.7,add > to --with-pcre=/root/nginx/pcre-8.34 --with-http_rewrite_module parameter;An error occurred :./configure: error: invalid option "--with-http_rewrite_module" ,Trouble who can talk about what is the problem > It's on by default. You can switch it off with the --without-http_rewrite_module flag. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From gaoping at richinfo.cn Tue May 13 05:34:11 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Tue, 13 May 2014 13:34:11 +0800 Subject: Compile nginx 1.4.7 --with-http_rewrite_module Error References: <2014051311251824902813@richinfo.cn>, <1399957753.24481.735.camel@steve-new> Message-ID: <2014051313341052208819@richinfo.cn> Think you But If I turn off the words,I have no way is to use 'if' order.An error occurred : [emerg] 14499#0: unknown directive "if($remote_addr" in /usr/local/nginx1/conf/nginx.conf:46 gaoping at richinfo.cn From: Steve Holdoway Date: 2014-05-13 13:09 To: nginx Subject: Re: Compile nginx 1.4.7 --with-http_rewrite_module Error On Tue, 2014-05-13 at 11:25 +0800, gaoping at richinfo.cn wrote: > hi: > I compiled nginx1.4.7,add > to --with-pcre=/root/nginx/pcre-8.34 --with-http_rewrite_module parameter;An error occurred :./configure: error: invalid option "--with-http_rewrite_module" ,Trouble who can talk about what is the problem > It's on by default. You can switch it off with the --without-http_rewrite_module flag. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lordnynex at gmail.com Tue May 13 06:42:03 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Mon, 12 May 2014 23:42:03 -0700 Subject: Return JSON for 404 error instead of html In-Reply-To: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Justink101, Using the echo module error_page 404 @404; location @404 { echo '{"status": "Not Found"}'; } On Mon, May 12, 2014 at 9:08 PM, justink101 wrote: > How can I return a custom JSON body on 404, instead of the default html of: > > > > > 404 Not Found > > > >
>

404 Not Found

>
>
>
nginx
> > > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250033,250033#msg-250033 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 13 07:15:18 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 May 2014 11:15:18 +0400 Subject: Compile nginx 1.4.7 --with-http_rewrite_module Error In-Reply-To: <2014051313341052208819@richinfo.cn> References: <2014051311251824902813@richinfo.cn> <1399957753.24481.735.camel@steve-new> <2014051313341052208819@richinfo.cn> Message-ID: <2153409.9PdBoxgDVe@vbart-workstation> On Tuesday 13 May 2014 13:34:11 gaoping at richinfo.cn wrote: > Think you > But If I turn off the words,I have no way is to use 'if' order.An error > occurred : [emerg] 14499#0: unknown directive "if($remote_addr" in > /usr/local/nginx1/conf/nginx.conf:46 There is no directive "if($remote_addr" in nginx. Evidently you have missed a space after the "if" directive. wbr, Valentin V. Bartenev From gaoping at richinfo.cn Tue May 13 07:20:26 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Tue, 13 May 2014 15:20:26 +0800 Subject: Compile nginx 1.4.7 --with-http_rewrite_module Error References: <2014051311251824902813@richinfo.cn>, <1399957753.24481.735.camel@steve-new>, <2014051313341052208819@richinfo.cn>, <2153409.9PdBoxgDVe@vbart-workstation> Message-ID: <2014051315202631471322@richinfo.cn> ngx_http_rewrite_module syntax:if (condition) { ... } default:? context:server, location I use in location', gaoping at richinfo.cn From: Valentin V. Bartenev Date: 2014-05-13 15:15 To: nginx Subject: Re: Compile nginx 1.4.7 --with-http_rewrite_module Error On Tuesday 13 May 2014 13:34:11 gaoping at richinfo.cn wrote: > Think you > But If I turn off the words,I have no way is to use 'if' order.An error > occurred : [emerg] 14499#0: unknown directive "if($remote_addr" in > /usr/local/nginx1/conf/nginx.conf:46 There is no directive "if($remote_addr" in nginx. Evidently you have missed a space after the "if" directive. wbr, Valentin V. Bartenev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 13 07:51:22 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 May 2014 11:51:22 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> Message-ID: <2649366.51uPRMXEoH@vbart-workstation> On Tuesday 13 May 2014 00:46:37 Peter B. wrote: > We're affected by this issue as well. Is there an open ticket where we > can track progress at http://trac.nginx.org/? What version of nginx and what operating system do you use? Nginx 1.5.5+ has been able to reliably detect close of connection on linux 2.6.17+, see ticket: http://trac.nginx.org/nginx/ticket/320 wbr, Valentin V. Bartenev From vbart at nginx.com Tue May 13 08:22:55 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 May 2014 12:22:55 +0400 Subject: Strange advisory In-Reply-To: References: Message-ID: <3716371.61USSp04B3@vbart-workstation> On Sunday 11 May 2014 06:25:53 B.R. wrote: [..] > What is the benefit of having those unescaped control characters in a log > file? 
Escaping them allows you to warn about their presence safely... and > that is directly exploitable by anything, once again safely. The benefit is that you can easily find in error/debug log exactly what a client has sent with binary precision, and therefore better diagnose a problem. And this actually is the main purpose of error log (normally it's just empty). wbr, Valentin V. Bartenev From gaoping at richinfo.cn Tue May 13 08:35:19 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Tue, 13 May 2014 16:35:19 +0800 Subject: Compile nginx 1.4.7 --with-http_rewrite_module Error References: <2014051311251824902813@richinfo.cn>, <1399957753.24481.735.camel@steve-new>, <2014051313341052208819@richinfo.cn>, <2153409.9PdBoxgDVe@vbart-workstation> Message-ID: <2014051316351869417132@richinfo.cn> Think you The problem has been resolved.Because 'if($remote_addr' , 'If' and '(' a space is required between gaoping at richinfo.cn From: Valentin V. Bartenev Date: 2014-05-13 15:15 To: nginx Subject: Re: Compile nginx 1.4.7 --with-http_rewrite_module Error On Tuesday 13 May 2014 13:34:11 gaoping at richinfo.cn wrote: > Think you > But If I turn off the words,I have no way is to use 'if' order.An error > occurred : [emerg] 14499#0: unknown directive "if($remote_addr" in > /usr/local/nginx1/conf/nginx.conf:46 There is no directive "if($remote_addr" in nginx. Evidently you have missed a space after the "if" directive. wbr, Valentin V. Bartenev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ml-nginx at zu-con.org Tue May 13 08:44:21 2014 From: ml-nginx at zu-con.org (Matthias Rieber) Date: Tue, 13 May 2014 10:44:21 +0200 (CEST) Subject: Return JSON for 404 error instead of html In-Reply-To: References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Mon, 12 May 2014, Lord Nynex wrote: > Justink101, > > Using the echo module > > error_page 404 @404; > location @404 { echo '{"status": "Not Found"}'; } if your using proxy_pass, don't forget: proxy_intercept_errors on; Matthias From nginx-forum at nginx.us Tue May 13 08:48:56 2014 From: nginx-forum at nginx.us (beatnut) Date: Tue, 13 May 2014 04:48:56 -0400 Subject: fastcgi cache path keys zone=name:size In-Reply-To: References: Message-ID: <466c2e5cb32eb98f1dc27934dffa31a5.NginxMailingListEnglish@forum.nginx.org> I found similar information about : limit_req_zone $binary_remote_addr zone=one:1m "One megabyte zone can keep about 16 thousand 64-byte states. " or ssl_session_cache shared:SSL:1m; " The cache size is specified in bytes; one megabyte can store about 4000 sessions." but there is no info about fastcgi_cache_path keys_zone shared memory. I was searching any tip in the source code of fastcgi module with no success. I'm trying to adjust my setup. There will be about 30k of cached files. How big should be size of shared memory ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249973,250043#msg-250043 From vbart at nginx.com Tue May 13 09:01:08 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 13 May 2014 13:01:08 +0400 Subject: Return JSON for 404 error instead of html In-Reply-To: References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2294204.GMTc7Oo0dH@vbart-workstation> On Monday 12 May 2014 23:42:03 Lord Nynex wrote: > Justink101, > > Using the echo module > > error_page 404 @404; > location @404 { echo '{"status": "Not Found"}'; } > [..] Instead of using 3rd-party echo module, you can utilize the return directive for the same purpose: return 200 '{"status": "Not Found"}'; Reference: http://nginx.org/r/return wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue May 13 11:50:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 May 2014 15:50:55 +0400 Subject: fastcgi cache path keys zone=name:size In-Reply-To: <466c2e5cb32eb98f1dc27934dffa31a5.NginxMailingListEnglish@forum.nginx.org> References: <466c2e5cb32eb98f1dc27934dffa31a5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140513115055.GN1849@mdounin.ru> Hello! On Tue, May 13, 2014 at 04:48:56AM -0400, beatnut wrote: > I found similar information about : > > limit_req_zone $binary_remote_addr zone=one:1m "One megabyte zone can > keep about 16 thousand 64-byte states. " > > or > > ssl_session_cache shared:SSL:1m; " The cache size is specified in bytes; > one megabyte can store about 4000 sessions." > > but there is no info about fastcgi_cache_path keys_zone shared memory. > > I was searching any tip in the source code of fastcgi module with no > success. > > I'm trying to adjust my setup. There will be about 30k of cached files. How > big should be size of shared memory ? Each cache node currently takes about 128 bytes of memory. So for 30k cached files you'll need about 4 megabytes. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue May 13 12:17:25 2014 From: nginx-forum at nginx.us (beatnut) Date: Tue, 13 May 2014 08:17:25 -0400 Subject: fastcgi cache path keys zone=name:size In-Reply-To: <20140513115055.GN1849@mdounin.ru> References: <20140513115055.GN1849@mdounin.ru> Message-ID: This is what I wanted to know. Thank you very much Maxim. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250046,250047#msg-250047 From reallfqq-nginx at yahoo.fr Tue May 13 13:30:56 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 13 May 2014 15:30:56 +0200 Subject: Return JSON for 404 error instead of html In-Reply-To: <2294204.GMTc7Oo0dH@vbart-workstation> References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <2294204.GMTc7Oo0dH@vbart-workstation> Message-ID: > > Instead of using 3rd-party echo module, you can utilize the return > directive > for the same purpose: > > return 200 '{"status": "Not Found"}'; > > Reference: http://nginx.org/r/return > > wbr, Valentin V. Bartenev > ?I would have intuitively written code 404 rather than 200 on this one since the aim is to send a 404 error answer.? Am I wrong? Would that loop? ? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue May 13 13:43:08 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 13 May 2014 15:43:08 +0200 Subject: Strange advisory In-Reply-To: <3716371.61USSp04B3@vbart-workstation> References: <3716371.61USSp04B3@vbart-workstation> Message-ID: Thanks to both of you for precisions about your point of view. 
Having thought more about it, it seems indeed strane to *interpret* log file content to *execute* script snippet in order to change window title or alike, following the link Kurt provided. It seems that old-fahion habits have taken advantage of backward-compatible features in modern emulated terminals. Switching to the fa that emulator vendors should correct this, who to contact for it? I suppose it has nothing to do with the kernel, but rather with multiple GNU libraries around it. --- *B. R.* On Tue, May 13, 2014 at 10:22 AM, Valentin V. Bartenev wrote: > On Sunday 11 May 2014 06:25:53 B.R. wrote: > [..] > > What is the benefit of having those unescaped control characters in a log > > file? Escaping them allows you to warn about their presence safely... and > > that is directly exploitable by anything, once again safely. > > The benefit is that you can easily find in error/debug log exactly what > a client has sent with binary precision, and therefore better diagnose > a problem. And this actually is the main purpose of error log (normally > it's just empty). > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Tue May 13 14:19:19 2014 From: lists at ruby-forum.com (Peter B.) Date: Tue, 13 May 2014 16:19:19 +0200 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> References: <1363321351.3854.140661204587653.70CC51E2@webmail.messagingengine.com> Message-ID: Ubuntu 12.04 + Nginx 1.4.7 I'm not 100% sure this is the same issue, but the symptoms are the same. Nginx as SSL termination to an eventsource backend upstream. Nginx active connections continuously go up until a restart. I just upgraded to 1.6.0 to see if it resolves the issue. Thanks for your response! -- Posted via http://www.ruby-forum.com/. From vbart at nginx.com Tue May 13 14:37:09 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 May 2014 18:37:09 +0400 Subject: Return JSON for 404 error instead of html In-Reply-To: References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <2294204.GMTc7Oo0dH@vbart-workstation> Message-ID: <7529762.SnrjvCGF1d@vbart-workstation> On Tuesday 13 May 2014 15:30:56 B.R. wrote: > > Instead of using 3rd-party echo module, you can utilize the return > > directive > > > > for the same purpose: > > return 200 '{"status": "Not Found"}'; > > > > Reference: http://nginx.org/r/return > > > > wbr, Valentin V. Bartenev > > ?I would have intuitively written code 404 rather than 200 on this one > since the aim is to send a 404 error answer.? > Am I wrong? Would that loop? > I wrote an equivalent of "echo". The logic is that in this handler we provide the page for 404 which actually exists. wbr, Valentin V. Bartenev From jdorfman at netdna.com Tue May 13 15:48:50 2014 From: jdorfman at netdna.com (Justin Dorfman) Date: Tue, 13 May 2014 08:48:50 -0700 Subject: Return JSON for 404 error instead of html In-Reply-To: <7529762.SnrjvCGF1d@vbart-workstation> References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <2294204.GMTc7Oo0dH@vbart-workstation> <7529762.SnrjvCGF1d@vbart-workstation> Message-ID: Out of curiosity, would the mime/content type show up as application/json or text/plain? 
Regards, Justin Dorfman Director of Developer Relations MaxCDN Email / IM: jdorfman at maxcdn.com Mobile: 818.485.1458 Twitter: @jdorfman On Tue, May 13, 2014 at 7:37 AM, Valentin V. Bartenev wrote: > On Tuesday 13 May 2014 15:30:56 B.R. wrote: > > > Instead of using 3rd-party echo module, you can utilize the return > > > directive > > > > > > for the same purpose: > > > return 200 '{"status": "Not Found"}'; > > > > > > Reference: http://nginx.org/r/return > > > > > > wbr, Valentin V. Bartenev > > > > ?I would have intuitively written code 404 rather than 200 on this one > > since the aim is to send a 404 error answer.? > > Am I wrong? Would that loop? > > > > I wrote an equivalent of "echo". The logic is that in this handler we > provide > the page for 404 which actually exists. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 13 15:53:29 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 May 2014 19:53:29 +0400 Subject: Return JSON for 404 error instead of html In-Reply-To: References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <7529762.SnrjvCGF1d@vbart-workstation> Message-ID: <4685187.UEz6sIrlFe@vbart-workstation> On Tuesday 13 May 2014 08:48:50 Justin Dorfman wrote: > Out of curiosity, would the mime/content type show up as application/json > or text/plain? > It depends on the default_type directive: http://nginx.org/r/default_type wbr, Valentin V. Bartenev From jdorfman at netdna.com Tue May 13 16:06:45 2014 From: jdorfman at netdna.com (Justin Dorfman) Date: Tue, 13 May 2014 09:06:45 -0700 Subject: Return JSON for 404 error instead of html In-Reply-To: <4685187.UEz6sIrlFe@vbart-workstation> References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <7529762.SnrjvCGF1d@vbart-workstation> <4685187.UEz6sIrlFe@vbart-workstation> Message-ID: application/octet-stream it is =p Regards, Justin Dorfman Director of Developer Relations MaxCDN Email / IM: jdorfman at maxcdn.com Mobile: 818.485.1458 Twitter: @jdorfman On Tue, May 13, 2014 at 8:53 AM, Valentin V. Bartenev wrote: > On Tuesday 13 May 2014 08:48:50 Justin Dorfman wrote: > > Out of curiosity, would the mime/content type show up as application/json > > or text/plain? > > > > It depends on the default_type directive: > http://nginx.org/r/default_type > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue May 13 19:15:16 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 13 May 2014 21:15:16 +0200 Subject: Return JSON for 404 error instead of html In-Reply-To: <7529762.SnrjvCGF1d@vbart-workstation> References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <2294204.GMTc7Oo0dH@vbart-workstation> <7529762.SnrjvCGF1d@vbart-workstation> Message-ID: I understand the logic, but when using that handler through error_page 404 @404, won't the handler's 200 status overload the original 404 one? --- *B. R.* On Tue, May 13, 2014 at 4:37 PM, Valentin V. Bartenev wrote: > On Tuesday 13 May 2014 15:30:56 B.R. 
wrote: > > > Instead of using 3rd-party echo module, you can utilize the return > > > directive > > > > > > for the same purpose: > > > return 200 '{"status": "Not Found"}'; > > > > > > Reference: http://nginx.org/r/return > > > > > > wbr, Valentin V. Bartenev > > > > ?I would have intuitively written code 404 rather than 200 on this one > > since the aim is to send a 404 error answer.? > > Am I wrong? Would that loop? > > > > I wrote an equivalent of "echo". The logic is that in this handler we > provide > the page for 404 which actually exists. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue May 13 20:13:53 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 13 May 2014 21:13:53 +0100 Subject: Return JSON for 404 error instead of html In-Reply-To: References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <2294204.GMTc7Oo0dH@vbart-workstation> <7529762.SnrjvCGF1d@vbart-workstation> Message-ID: <20140513201353.GE16942@daoine.org> On Tue, May 13, 2014 at 09:15:16PM +0200, B.R. wrote: > I understand the logic, but when using that handler through error_page 404 > @404, won't the handler's 200 status overload the original 404 one? http://nginx.org/r/error_page indicates that it won't unless "=" is used. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 13 21:07:46 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 13 May 2014 22:07:46 +0100 Subject: subs filter error In-Reply-To: <536CCBE0.9090703@tommehm.com> References: <536CCBE0.9090703@tommehm.com> Message-ID: <20140513210746.GF16942@daoine.org> On Fri, May 09, 2014 at 01:36:48PM +0100, Tom McLoughlin wrote: Hi there, > subs filter header ignored, this may be a compressed response. while > My configuration is available at, > http://p.ngx.cc/d7eacc9934caa82a There appears to be no configuration there. If it matters to the question, could you include it in the mail? That aside: what does the documentation for whatever directive you are using say about the error message? f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Wed May 14 00:51:39 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 14 May 2014 02:51:39 +0200 Subject: Return JSON for 404 error instead of html In-Reply-To: <20140513201353.GE16942@daoine.org> References: <14c22d4fefeec8d925616abeb3500e7f.NginxMailingListEnglish@forum.nginx.org> <2294204.GMTc7Oo0dH@vbart-workstation> <7529762.SnrjvCGF1d@vbart-workstation> <20140513201353.GE16942@daoine.org> Message-ID: Many thanks to both of you! Another step in deeper understanding on how to use nginx. :o) --- *B. R.* On Tue, May 13, 2014 at 10:13 PM, Francis Daly wrote: > On Tue, May 13, 2014 at 09:15:16PM +0200, B.R. wrote: > > I understand the logic, but when using that handler through error_page > 404 > > @404, won't the handler's 200 status overload the original 404 one? > > http://nginx.org/r/error_page indicates that it won't unless "=" is used. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed May 14 06:52:19 2014 From: nginx-forum at nginx.us (kay) Date: Wed, 14 May 2014 02:52:19 -0400 Subject: nginx rewrites $request_method on error In-Reply-To: References: Message-ID: <40f255ce3d8a09e5b5a46ab5216bfba1.NginxMailingListEnglish@forum.nginx.org> Yichun Zhang (agentzh) Wrote: ------------------------------------------------------- > Hello! > > On Wed, May 7, 2014 at 8:59 PM, kay wrote: > >> 1. It is not recommended to use the rewrite_by_lua directive > directly > > > > You can do the same with access_by_lua > > > > Please do not cut my original sentence and just pick the first half. > The full sentence is "it is not recommended to use the rewrite_by_lua > directive directly in the server {} block." and the reason follows > that. The same statement also applies to access_by_lua. > > Also, please read the full text of my previous email and correct all > the things I listed there. > > Regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx This config produces the same nginx hang. server { listen 80; location / { access_by_lua ' local res = ngx.location.capture("/memc?cmd=get&key=test") return '; root /etc/nginx/www; } location /memc { internal; access_log /var/log/nginx/memc_log main; log_subrequest on; set $memc_key $arg_key; set $memc_cmd $arg_cmd; memc_cmds_allowed get; memc_pass localhost:11211; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,250070#msg-250070 From dcoutadeur at linagora.com Wed May 14 14:07:46 2014 From: dcoutadeur at linagora.com (David Coutadeur) Date: Wed, 14 May 2014 16:07:46 +0200 Subject: embedded perl for nginx official release ? Message-ID: <537378B2.2000705@linagora.com> Hi all, I am searching for information about the embedded perl Nginx roadmap... Is it planned to replace standard nginx by this one ? Or to integrate embedded perl into nginx official release ? Or maybe the two projects are too divergent ? Thank you in advance for helping. David From vbart at nginx.com Wed May 14 15:35:29 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 14 May 2014 19:35:29 +0400 Subject: embedded perl for nginx official release ? In-Reply-To: <537378B2.2000705@linagora.com> References: <537378B2.2000705@linagora.com> Message-ID: <1842722.EMkbFWhVGL@vbart-workstation> On Wednesday 14 May 2014 16:07:46 David Coutadeur wrote: > Hi all, > > I am searching for information about the embedded perl Nginx roadmap... > > Is it planned to replace standard nginx by this one ? Or to integrate > embedded perl into nginx official release ? Or maybe the two projects > are too divergent ? > > Thank you in advance for helping. > There was a thread about it: http://mailman.nginx.org/pipermail/nginx-devel/2011-October/001361.html wbr, Valentin V. Bartenev From dcoutadeur at linagora.com Wed May 14 15:54:34 2014 From: dcoutadeur at linagora.com (David Coutadeur) Date: Wed, 14 May 2014 17:54:34 +0200 Subject: embedded perl for nginx official release ? In-Reply-To: <1842722.EMkbFWhVGL@vbart-workstation> References: <537378B2.2000705@linagora.com> <1842722.EMkbFWhVGL@vbart-workstation> Message-ID: <537391BA.7030904@linagora.com> Thank you Valentin, If I understand well, this is the message at the origin of the embedded perl Nginx ? My question was more actual : now that there is a fork, is there a plan to inject embedded perl in the standard nginx ? 
Do somebody know anything about the position of the developpers now ? I am curious about the perenity and roadmap of each project concerning perl support. Thank you in advance, David Le 14/05/2014 17:35, Valentin V. Bartenev a ?crit : > On Wednesday 14 May 2014 16:07:46 David Coutadeur wrote: >> Hi all, >> >> I am searching for information about the embedded perl Nginx roadmap... >> >> Is it planned to replace standard nginx by this one ? Or to integrate >> embedded perl into nginx official release ? Or maybe the two projects >> are too divergent ? >> >> Thank you in advance for helping. >> > > There was a thread about it: > http://mailman.nginx.org/pipermail/nginx-devel/2011-October/001361.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Wed May 14 16:29:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 May 2014 20:29:01 +0400 Subject: embedded perl for nginx official release ? In-Reply-To: <537391BA.7030904@linagora.com> References: <537378B2.2000705@linagora.com> <1842722.EMkbFWhVGL@vbart-workstation> <537391BA.7030904@linagora.com> Message-ID: <20140514162901.GD1849@mdounin.ru> Hello! On Wed, May 14, 2014 at 05:54:34PM +0200, David Coutadeur wrote: > > Thank you Valentin, > > If I understand well, this is the message at the origin of the embedded perl > Nginx ? > > My question was more actual : now that there is a fork, is there a plan to > inject embedded perl in the standard nginx ? > Do somebody know anything about the position of the developpers now ? > > I am curious about the perenity and roadmap of each project concerning perl > support. The embedded perl module (see [1]) is available in nginx since its introduction in nginx 0.3.21 and works fine. An attempt of Alexandr Gomoliako to add various features to the embedded perl module had major problems from our (nginx team) point of view, see the thread linked by Valentin. Instead of trying to understand and fix the problems pointed out, Alexandr chose to maintain his own fork. As far as I understand, the fork is mostly dead now. [1] http://nginx.org/en/docs/http/ngx_http_perl_module.html -- Maxim Dounin http://nginx.org/ From dcoutadeur at linagora.com Wed May 14 16:33:05 2014 From: dcoutadeur at linagora.com (David Coutadeur) Date: Wed, 14 May 2014 18:33:05 +0200 Subject: embedded perl for nginx official release ? In-Reply-To: <20140514162901.GD1849@mdounin.ru> References: <537378B2.2000705@linagora.com> <1842722.EMkbFWhVGL@vbart-workstation> <537391BA.7030904@linagora.com> <20140514162901.GD1849@mdounin.ru> Message-ID: <53739AC1.9070604@linagora.com> Ok, thank you for your very complete answer ! Le 14/05/2014 18:29, Maxim Dounin a ?crit : > Hello! > > On Wed, May 14, 2014 at 05:54:34PM +0200, David Coutadeur wrote: > >> >> Thank you Valentin, >> >> If I understand well, this is the message at the origin of the embedded perl >> Nginx ? >> >> My question was more actual : now that there is a fork, is there a plan to >> inject embedded perl in the standard nginx ? >> Do somebody know anything about the position of the developpers now ? >> >> I am curious about the perenity and roadmap of each project concerning perl >> support. > > The embedded perl module (see [1]) is available in nginx since its > introduction in nginx 0.3.21 and works fine. 
> > An attempt of Alexandr Gomoliako to add various features to the > embedded perl module had major problems from our (nginx team) > point of view, see the thread linked by Valentin. Instead of > trying to understand and fix the problems pointed out, Alexandr > chose to maintain his own fork. As far as I understand, the fork > is mostly dead now. > > [1] http://nginx.org/en/docs/http/ngx_http_perl_module.html > From dcoutadeur at linagora.com Wed May 14 16:47:26 2014 From: dcoutadeur at linagora.com (David Coutadeur) Date: Wed, 14 May 2014 18:47:26 +0200 Subject: perl module for nginx Message-ID: <53739E1E.5020503@linagora.com> Hello, I am trying to build a perl access handler module into nginx, thanks to this API : http://nginx.org/en/docs/http/ngx_http_perl_module.html but I have some difficulties to do so. Indeed, I would need some tools like - a mean to share variables between handlers, - a mean to call my handler at access phase. For example, I have made a basic handler in lUA with this more complete API : http://wiki.nginx.org/HttpLuaModule Does anybody have ideas on how to achieve the two functionnalities in the current perl API ? Will this lack of features for perl API be filled in the future ? In a near future ? Or do you advise other actions ? Many thanks, David From mdounin at mdounin.ru Wed May 14 16:58:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 May 2014 20:58:42 +0400 Subject: perl module for nginx In-Reply-To: <53739E1E.5020503@linagora.com> References: <53739E1E.5020503@linagora.com> Message-ID: <20140514165842.GG1849@mdounin.ru> Hello! On Wed, May 14, 2014 at 06:47:26PM +0200, David Coutadeur wrote: > > Hello, > > I am trying to build a perl access handler module into nginx, thanks to this > API : > http://nginx.org/en/docs/http/ngx_http_perl_module.html > but I have some difficulties to do so. > > Indeed, I would need some tools like > - a mean to share variables between handlers, > - a mean to call my handler at access phase. > > For example, I have made a basic handler in lUA with this more complete API > : > http://wiki.nginx.org/HttpLuaModule > > Does anybody have ideas on how to achieve the two functionnalities in the > current perl API ? > Will this lack of features for perl API be filled in the future ? In a near > future ? Or do you advise other actions ? Sharing variables (I believe you really want to do so between requests) can be done using normal Perl things like global variables (or, if you want many workers to share the same data, using shared memory, see various perl modules available on CPAN). Calling the handler at access phase isn't something you can do without modifications of the code. On the other hand, it may be a good idea to use auth request module[1] instead, and write a subrequest handler in embedded perl. [1] http://nginx.org/en/docs/http/ngx_http_auth_request_module.html -- Maxim Dounin http://nginx.org/ From dcoutadeur at linagora.com Wed May 14 17:06:39 2014 From: dcoutadeur at linagora.com (David Coutadeur) Date: Wed, 14 May 2014 19:06:39 +0200 Subject: perl module for nginx In-Reply-To: <20140514165842.GG1849@mdounin.ru> References: <53739E1E.5020503@linagora.com> <20140514165842.GG1849@mdounin.ru> Message-ID: <5373A29F.9010103@linagora.com> Le 14/05/2014 18:58, Maxim Dounin a ?crit : > Hello! 
> > On Wed, May 14, 2014 at 06:47:26PM +0200, David Coutadeur wrote: > >> >> Hello, >> >> I am trying to build a perl access handler module into nginx, thanks to this >> API : >> http://nginx.org/en/docs/http/ngx_http_perl_module.html >> but I have some difficulties to do so. >> >> Indeed, I would need some tools like >> - a mean to share variables between handlers, >> - a mean to call my handler at access phase. >> >> For example, I have made a basic handler in lUA with this more complete API >> : >> http://wiki.nginx.org/HttpLuaModule >> >> Does anybody have ideas on how to achieve the two functionnalities in the >> current perl API ? >> Will this lack of features for perl API be filled in the future ? In a near >> future ? Or do you advise other actions ? > > Sharing variables (I believe you really want to do so between > requests) can be done using normal Perl things like global > variables (or, if you want many workers to share the same data, > using shared memory, see various perl modules available on CPAN). Yes, this is what I want to do. For example, loading some parameters from a config file, and store them in a shared memory place. Do you have some advice on a CPAN module which works best with nginx ? > > Calling the handler at access phase isn't something you can do > without modifications of the code. On the other hand, it may be a > good idea to use auth request module[1] instead, and write a > subrequest handler in embedded perl. > > [1] http://nginx.org/en/docs/http/ngx_http_auth_request_module.html > Ok, thank you, I will try this ! David From agentzh at gmail.com Wed May 14 20:16:03 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 14 May 2014 13:16:03 -0700 Subject: nginx rewrites $request_method on error In-Reply-To: <40f255ce3d8a09e5b5a46ab5216bfba1.NginxMailingListEnglish@forum.nginx.org> References: <40f255ce3d8a09e5b5a46ab5216bfba1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Tue, May 13, 2014 at 11:52 PM, kay wrote: > This config produces the same nginx hang. > > server { > listen 80; > location / { > access_by_lua ' > local res = ngx.location.capture("/memc?cmd=get&key=test") > return > '; > root /etc/nginx/www; > } > location /memc { > internal; > access_log /var/log/nginx/memc_log main; > log_subrequest on; > set $memc_key $arg_key; > set $memc_cmd $arg_cmd; > memc_cmds_allowed get; > memc_pass localhost:11211; > } > } I've tried your nginx config snippet on my side with nginx 1.7.0 + ngx_memc 0.14 + ngx_lua 0.9.7. And I cannot reproduce any hang on my side. Without further details I'm afraid I cannot really help. Several suggestions: 1. check out your nginx error log for any hints regarding your configuration issues or memcached backend issues or something. 2. try to construct a minimal but still *complete* example that can help reproducing the issue you're seeing in others' boxes, preferably with precise steps. 3. try to enable the nginx debugging logs for more details for your problematic request: http://nginx.org/en/docs/debugging_log.html If you do not understand the debugging logs, you can put it somewhere on the web (like GitHub Gist) and provide the link here. 
Regards, -agentzh From nginx-forum at nginx.us Wed May 14 21:26:34 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 14 May 2014 17:26:34 -0400 Subject: Return JSON for 404 error instead of html In-Reply-To: <2294204.GMTc7Oo0dH@vbart-workstation> References: <2294204.GMTc7Oo0dH@vbart-workstation> Message-ID: <2e7a0da14d81693c054792ca70fcdf1b.NginxMailingListEnglish@forum.nginx.org> Thanks for the replies and sorry about the delay in responding. This is what we ended up using: error_page 404 = @four_o_four; location @four_o_four { internal; more_set_headers "X-Host: web4.ourdomain.com"; more_set_headers "Content-Type: application/json"; return 404 '{"status":"Not Found"}'; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250033,250095#msg-250095 From nginx-forum at nginx.us Wed May 14 22:38:45 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 14 May 2014 18:38:45 -0400 Subject: Return JSON for 404 error instead of html In-Reply-To: References: Message-ID: Noticed that the proxy request response headers are being thrown away in our 404 block. Note that the proxied request is returning 404. If I try and fetch a header that I know is being returned from the proxy it is undefined. location @four_o_four { internal; more_set_headers "X-Host: $sent_http_x_host"; return 404 '{ "error": { "status_code": 404, "status": "Not Found", "message": "The requested resource does not exist." } }'; } In our block example,x-host is being returned from the proxy response, but not visible in the @four_o_four location block. Any idea why? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250033,250096#msg-250096 From nginx-forum at nginx.us Thu May 15 00:01:34 2014 From: nginx-forum at nginx.us (SAH62) Date: Wed, 14 May 2014 20:01:34 -0400 Subject: Unexpected SSL Behavior with Virtual Hosts Message-ID: <95ebca7a6ba23c5eb88bbcba5863648a.NginxMailingListEnglish@forum.nginx.org> Sorry for posting this twice. I posted it in the "How to" forum last week, there haven't been any replies, so I thought I'd try again. I'm using nginx for multiple virtual hosts on the same physical server. The issue I'm having is that a browser request for https://www.domain1.org/ is being answered with a certificate for a different domain. 
Here's what the slices from my config files look like: domain1.conf: (note that there's no listen directive for port 443) server { listen 80; server_name domain1.org www.domain1.org domain1.com www.domain1.com domain1.net www.domain1.net domain1.us www.domain1.us domain1.info www.domain1.info; root /home/domain1/public_html; # more stuff } domain2.conf: server { listen 80; server_name domain2 www.domain2; root /home/domain2/public_html; # more stuff } server { ## SSL config for domain2 listen 443 ssl; ssl_certificate /etc/ssl/certs/domain2-chained.crt; ssl_certificate_key /etc/ssl/private/domain2.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; ssl_prefer_server_ciphers on; server_name domain2 www.domain2; root /home/domain2/public_html; # more stuff } server { listen 80; server_name domain3 www.domain3; root /var/www; access_log /var/log/nginx/access-domain3.log; error_log /var/log/nginx/error-domain3.log; return 301 https://$host$request_uri; } server { ## SSL config for domain3 listen 443 ssl; ssl_certificate /etc/ssl/certs/domain3-chained.crt; ssl_certificate_key /etc/ssl/private/server.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; ssl_prefer_server_ciphers on; root /var/www; index index.php index.html index.htm; access_log /var/log/nginx/access-domain3-ssl.log; error_log /var/log/nginx/error-domain3-ssl.log; rewrite_log on; server_name www.domain3 domain3; # more stuff } A browser request for https://www.domain1.org/ returns the certificate for domain 2 and the content found in the root for domain2. Why is that and how can I get the server to redirect to http://www.domain1.org/ instead? Thank you... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250097,250097#msg-250097 From steve at greengecko.co.nz Thu May 15 00:25:45 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 15 May 2014 12:25:45 +1200 Subject: Unexpected SSL Behavior with Virtual Hosts In-Reply-To: <95ebca7a6ba23c5eb88bbcba5863648a.NginxMailingListEnglish@forum.nginx.org> References: <95ebca7a6ba23c5eb88bbcba5863648a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1400113545.24481.892.camel@steve-new> Hi! On Wed, 2014-05-14 at 20:01 -0400, SAH62 wrote: > Sorry for posting this twice. I posted it in the "How to" forum last week, > there haven't been any replies, so I thought I'd try again. > > I'm using nginx for multiple virtual hosts on the same physical server. The > issue I'm having is that a browser request for https://www.domain1.org/ is > being answered with a certificate for a different domain. 
Here's what the > slices from my config files look like: > > domain1.conf: (note that there's no listen directive for port 443) > server { > listen 80; > server_name domain1.org www.domain1.org domain1.com www.domain1.com > domain1.net www.domain1.net domain1.us www.domain1.us domain1.info > www.domain1.info; > root /home/domain1/public_html; > > # more stuff > } > > domain2.conf: > server { > listen 80; > > server_name domain2 www.domain2; > root /home/domain2/public_html; > > # more stuff > } > > server { ## SSL config for domain2 > listen 443 ssl; > > ssl_certificate /etc/ssl/certs/domain2-chained.crt; > ssl_certificate_key /etc/ssl/private/domain2.key; > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > ssl_prefer_server_ciphers on; > > server_name domain2 www.domain2; > root /home/domain2/public_html; > > # more stuff > } > > server { > listen 80; > > server_name domain3 www.domain3; > root /var/www; > > access_log /var/log/nginx/access-domain3.log; > error_log /var/log/nginx/error-domain3.log; > > return 301 https://$host$request_uri; > } > > server { ## SSL config for domain3 > listen 443 ssl; > > ssl_certificate /etc/ssl/certs/domain3-chained.crt; > ssl_certificate_key /etc/ssl/private/server.key; > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > ssl_prefer_server_ciphers on; > > root /var/www; > index index.php index.html index.htm; > > access_log /var/log/nginx/access-domain3-ssl.log; > error_log /var/log/nginx/error-domain3-ssl.log; > rewrite_log on; > > server_name www.domain3 domain3; > > # more stuff > } > > A browser request for https://www.domain1.org/ returns the certificate for > domain 2 and the content found in the root for domain2. Why is that and how > can I get the server to redirect to http://www.domain1.org/ instead? Thank > you... If you don't specify a default browser for https, then it uses the first one it comes across. You have to specifically redirect domain1 https to http: - this *may* require a valid cert for domain 1... server { listen 443 ssl; server_name domain1.com www.domain1.com; ssl_certificate domain1.com.crt; ssl_certificate_key domain1.com.key; return 301 http://domain1.com$request_uri; } BTW I find that combining http and https: stuff for server definitions to be much simpler. I also dump as much of the SSL settings as possible in the http {} block. Both of these approaches make a setup that I find simpler to administer. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From reallfqq-nginx at yahoo.fr Thu May 15 00:58:31 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 15 May 2014 02:58:31 +0200 Subject: Directly redirecting prefix location to named one Message-ID: I am considering the following locations: location / { proxy_pass http://upstream; } location /documents/ { try_files $uri @upstream; } location @upstream { proxy_pass http://upstream; } I would like to have a single named location to handle all fallbacks to upstream (to avoid duplication: maintenance will be easier!). How does one redirect a prefix location directly to a named one? Using try_files might expose documents which are not supposed to be served outside of the /documents/ tree. Using rewrite? I learned here to avoid it as much as possible... --- *B. 
R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu May 15 06:19:03 2014 From: nginx-forum at nginx.us (kay) Date: Thu, 15 May 2014 02:19:03 -0400 Subject: nginx rewrites $request_method on error In-Reply-To: References: Message-ID: Here is nginx version: nginx -V nginx version: nginx/1.6.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) TLS SNI support enabled configure arguments: --add-module=ngx_devel_kit-0.2.19 --add-module=lua-nginx-module-0.9.7 --add-module=memc-nginx-module-0.14 --user=apache --group=apache --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-pcre=pcre-8.35 --with-pcre-jit --with-debug --with-md5-asm --with-sha1-asm --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_ssl_module --with-http_realip_module --with-http_gzip_static_module --with-http_stub_status_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' Here is minimal config: worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; } http { error_page 405 /error.html; error_page 400 /error.html; error_page 403 /error.html; server { listen 8080; location / { access_by_lua ' local res = ngx.location.capture("/memc?cmd=get&key=test") return '; root /etc/nginx/www; } location /memc { internal; set $memc_key $arg_key; set $memc_cmd $arg_cmd; memc_pass localhost:11211; } } } /etc/nginx/www content: ls /etc/nginx/www error.html curl -X GET localhost works fine (processes 403) curl -X POST localhost works fine (processes 403) Here is debug log on curl -X TRACE localhost or curl -X anythinghere localhost: 2014/05/15 09:59:17 [debug] 15900#0: epoll: fd:6 ev:0001 d:00007FC4FBE03010 2014/05/15 09:59:17 [debug] 15900#0: accept on 0.0.0.0:80, ready: 0 2014/05/15 09:59:17 [debug] 15900#0: posix_memalign: 0000000000D4AD30:256 @16 2014/05/15 09:59:17 [debug] 15900#0: *3 accept: 127.0.0.1 fd:3 2014/05/15 09:59:17 [debug] 15900#0: posix_memalign: 0000000000D4AE90:256 @16 2014/05/15 09:59:17 [debug] 15900#0: *3 event timer add: 3: 60000:1400133617621 2014/05/15 09:59:17 [debug] 15900#0: *3 reusable connection: 1 2014/05/15 09:59:17 [debug] 15900#0: *3 epoll add event: fd:3 op:1 ev:80002001 2014/05/15 09:59:17 [debug] 15900#0: timer delta: 6573 2014/05/15 09:59:17 [debug] 15900#0: posted events 0000000000000000 2014/05/15 09:59:17 [debug] 15900#0: worker cycle 2014/05/15 09:59:17 [debug] 15900#0: epoll timer: 60000 2014/05/15 09:59:17 [debug] 15900#0: epoll: fd:3 ev:0001 d:00007FC4FBE031C1 2014/05/15 09:59:17 [debug] 15900#0: *3 http wait request handler 2014/05/15 09:59:17 [debug] 15900#0: *3 malloc: 0000000000D653D0:1024 2014/05/15 09:59:17 [debug] 15900#0: *3 recv: fd:3 78 of 1024 2014/05/15 09:59:17 [debug] 15900#0: *3 reusable connection: 0 2014/05/15 09:59:17 [debug] 15900#0: *3 posix_memalign: 0000000000D571B0:4096 @16 2014/05/15 09:59:17 [debug] 15900#0: *3 http process request line 2014/05/15 09:59:17 [debug] 15900#0: *3 http request line: "TRACE / HTTP/1.1" 2014/05/15 09:59:17 [debug] 15900#0: *3 http uri: "/" 
2014/05/15 09:59:17 [debug] 15900#0: *3 http args: "" 2014/05/15 09:59:17 [debug] 15900#0: *3 http exten: "" 2014/05/15 09:59:17 [debug] 15900#0: *3 http process request header line 2014/05/15 09:59:17 [debug] 15900#0: *3 http header: "User-Agent: curl/7.26.0" 2014/05/15 09:59:17 [debug] 15900#0: *3 http header: "Host: localhost" 2014/05/15 09:59:17 [debug] 15900#0: *3 http header: "Accept: */*" 2014/05/15 09:59:17 [debug] 15900#0: *3 http header done 2014/05/15 09:59:17 [info] 15900#0: *3 client sent TRACE method while reading client request headers, client: 127.0.0.1, server: , request: "TRACE / HTTP/1.1", host: "localhost" 2014/05/15 09:59:17 [debug] 15900#0: *3 http finalize request: 405, "/?" a:1, c:1 2014/05/15 09:59:17 [debug] 15900#0: *3 event timer del: 3: 1400133617621 2014/05/15 09:59:17 [debug] 15900#0: *3 http special response: 405, "/?" 2014/05/15 09:59:17 [debug] 15900#0: *3 internal redirect: "/error.html?" 2014/05/15 09:59:17 [debug] 15900#0: *3 rewrite phase: 1 2014/05/15 09:59:17 [debug] 15900#0: *3 test location: "/" 2014/05/15 09:59:17 [debug] 15900#0: *3 test location: "memc" 2014/05/15 09:59:17 [debug] 15900#0: *3 using configuration "/" 2014/05/15 09:59:17 [debug] 15900#0: *3 http cl:-1 max:1048576 2014/05/15 09:59:17 [debug] 15900#0: *3 rewrite phase: 3 2014/05/15 09:59:17 [debug] 15900#0: *3 post rewrite phase: 4 2014/05/15 09:59:17 [debug] 15900#0: *3 generic phase: 5 2014/05/15 09:59:17 [debug] 15900#0: *3 generic phase: 6 2014/05/15 09:59:17 [debug] 15900#0: *3 generic phase: 7 2014/05/15 09:59:17 [debug] 15900#0: *3 access phase: 8 2014/05/15 09:59:17 [debug] 15900#0: *3 access phase: 9 2014/05/15 09:59:17 [debug] 15900#0: *3 access phase: 10 2014/05/15 09:59:17 [debug] 15900#0: *3 lua access handler, uri:"/error.html" c:2 2014/05/15 09:59:17 [debug] 15900#0: *3 posix_memalign: 0000000000D4F2B0:4096 @16 2014/05/15 09:59:17 [debug] 15900#0: *3 lua creating new thread 2014/05/15 09:59:17 [debug] 15900#0: *3 lua reset ctx 2014/05/15 09:59:17 [debug] 15900#0: *3 http cleanup add: 0000000000D58118 2014/05/15 09:59:17 [debug] 15900#0: *3 lua run thread, top:0 c:2 2014/05/15 09:59:17 [debug] 15900#0: *3 lua location capture, uri:"/error.html" c:2 2014/05/15 09:59:17 [debug] 15900#0: *3 http subrequest "/memc?cmd=get&key=test" 2014/05/15 09:59:17 [debug] 15900#0: *3 posix_memalign: 0000000000D502C0:4096 @16 2014/05/15 09:59:17 [debug] 15900#0: *3 lua resume returned 1 2014/05/15 09:59:17 [debug] 15900#0: *3 lua thread yielded 2014/05/15 09:59:17 [debug] 15900#0: *3 http finalize request: -4, "/error.html?" a:1, c:3 2014/05/15 09:59:17 [debug] 15900#0: *3 http request count:3 blk:0 2014/05/15 09:59:17 [debug] 15900#0: timer delta: 0 2014/05/15 09:59:17 [debug] 15900#0: posted events 0000000000000000 2014/05/15 09:59:17 [debug] 15900#0: worker cycle 2014/05/15 09:59:17 [debug] 15900#0: epoll timer: -1 <<<< Here nginx hangs on Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,250101#msg-250101 From francis at daoine.org Thu May 15 07:17:58 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 15 May 2014 08:17:58 +0100 Subject: Return JSON for 404 error instead of html In-Reply-To: References: Message-ID: <20140515071758.GG16942@daoine.org> On Wed, May 14, 2014 at 06:38:45PM -0400, justink101 wrote: Hi there, > If I try and > fetch a header that I know is being returned from the proxy it is > undefined. 
> > location @four_o_four { > internal; > more_set_headers "X-Host: $sent_http_x_host"; What value do you think $sent_http_x_host has, and why do you think that? Compare $http_name and $sent_http_name in http://nginx.org/en/docs/http/ngx_http_core_module.html#variables with $upstream_http_... in http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables and then use $upstream_http_x_host. f -- Francis Daly francis at daoine.org From igor at sysoev.ru Thu May 15 09:59:00 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 15 May 2014 13:59:00 +0400 Subject: Unexpected SSL Behavior with Virtual Hosts In-Reply-To: <95ebca7a6ba23c5eb88bbcba5863648a.NginxMailingListEnglish@forum.nginx.org> References: <95ebca7a6ba23c5eb88bbcba5863648a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3A04BADA-8AF9-474E-85C7-5A8451A62815@sysoev.ru> On 15 May 2014, at 04:01, SAH62 wrote: > Sorry for posting this twice. I posted it in the "How to" forum last week, > there haven't been any replies, so I thought I'd try again. > > I'm using nginx for multiple virtual hosts on the same physical server. The > issue I'm having is that a browser request for https://www.domain1.org/ is > being answered with a certificate for a different domain. Here's what the > slices from my config files look like: > > domain1.conf: (note that there's no listen directive for port 443) > server { > listen 80; > server_name domain1.org www.domain1.org domain1.com www.domain1.com > domain1.net www.domain1.net domain1.us www.domain1.us domain1.info > www.domain1.info; > root /home/domain1/public_html; > > # more stuff > } > > domain2.conf: > server { > listen 80; > > server_name domain2 www.domain2; > root /home/domain2/public_html; > > # more stuff > } > > server { ## SSL config for domain2 > listen 443 ssl; > > ssl_certificate /etc/ssl/certs/domain2-chained.crt; > ssl_certificate_key /etc/ssl/private/domain2.key; > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > ssl_prefer_server_ciphers on; > > server_name domain2 www.domain2; > root /home/domain2/public_html; > > # more stuff > } > > server { > listen 80; > > server_name domain3 www.domain3; > root /var/www; > > access_log /var/log/nginx/access-domain3.log; > error_log /var/log/nginx/error-domain3.log; > > return 301 https://$host$request_uri; > } > > server { ## SSL config for domain3 > listen 443 ssl; > > ssl_certificate /etc/ssl/certs/domain3-chained.crt; > ssl_certificate_key /etc/ssl/private/server.key; > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > ssl_prefer_server_ciphers on; > > root /var/www; > index index.php index.html index.htm; > > access_log /var/log/nginx/access-domain3-ssl.log; > error_log /var/log/nginx/error-domain3-ssl.log; > rewrite_log on; > > server_name www.domain3 domain3; > > # more stuff > } > > A browser request for https://www.domain1.org/ returns the certificate for > domain 2 and the content found in the root for domain2. Why is that and how > can I get the server to redirect to http://www.domain1.org/ instead? Thank > you? 
http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Thu May 15 15:23:19 2014 From: nginx-forum at nginx.us (salsaj) Date: Thu, 15 May 2014 11:23:19 -0400 Subject: Mail proxy with SNI In-Reply-To: References: Message-ID: Is there any news on this? I would be interested to know if there are plans to include this in nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237967,250113#msg-250113 From nginx-forum at nginx.us Thu May 15 16:05:04 2014 From: nginx-forum at nginx.us (samgujrat1984) Date: Thu, 15 May 2014 12:05:04 -0400 Subject: How does the Proxy Cache Key Lookup actually happen? In-Reply-To: References: Message-ID: <87fd31640c07e56b9759094f13320773.NginxMailingListEnglish@forum.nginx.org> Hi, I am also trying to cache a url have this query but still no luck any suggestion http://x.x.x.x/aaa/splashTF=100%25&US=A&AR=D&TD=%2450&BR=T&TS=12515f542140&OR=%2410%2F100MB&DN=BAQGBgEEBwQAAw%3D%3D&ET=TD&BE=1332d9a&BN=CS&AL=2000 I tired these three different proxy_cache_key , I am not sure where I am doing wrong proxy_cache_key $scheme$host$request_method$request_uri; 1- #proxy_cache_key "$host$uri?TF=100%25&US=A&AR=D&TD=%2450&BR=T&TS=12515f542140&OR=%2410%2F100MB&DN=BAQGBgEEBwQAAw%3D%3D&ET=TD&BE=1332d9a&BN=CS&AL=2000&LG=E"; 2- #proxy_cache_key "$host$uri?TF=100%25&US=A&AR=D&TD=%2450&BR=T&TS=12515f542140&OR=%2410%2F100MB&DN=BAQGBgEEBwQAAw%3D%3D&ET=TD&BE=1332d9a&BN=CS&AL=2000"; 3- #proxy_cache_key "$scheme://$host$uri$is_args"; proxy_cache_key $uri$args_is?TF=$arg_TF &US=$arg_US &AR=$arg_AR &TD=$arg_TD&BR=$arg_BR&TS=$arg_TS&OR=$arg_OR&DN=$arg_DN&ET=$arg_ET&BE=$arg_BE&BN=$arg_BN&AL=$arg_AL; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249851,250125#msg-250125 From nginx-forum at nginx.us Thu May 15 16:23:06 2014 From: nginx-forum at nginx.us (samgujrat1984) Date: Thu, 15 May 2014 12:23:06 -0400 Subject: Unable to cache long cache url Message-ID: <766c713521511ad0d15088c137117309.NginxMailingListEnglish@forum.nginx.org> Hello Every One, I am trying to cache a long string query url with proxy_cache_key. I am not sure where I am doing wrong. I tried different proxy_cache_key patterns. Not sure where I am doing wrong. 
If some one can suggest me please http://x.x.x.x/uri/splash?TF=100%25&US=A&AR=D&TD=%2450&BR=T&TS=12515f542140&OR=%2410%2F100MB&DN=BAQGBgEEBwQAAw%3D%3D&ET=TD&BE=1332d9a&BN=CS&AL=2000 ____________ Configs _____________ location /keystone/splash { proxy_pass http://x.x.x.x:8080/keystone/splash; set $no_cache ""; if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/"; add_header X-Microcachable "0"; } if ($http_cookie ~* "_mcnc") { set $no_cache "1"; } proxy_no_cache $no_cache; proxy_cache_bypass $no_cache; proxy_cache webapp; #proxy_cache_key $scheme$host$request_method$request_uri; #proxy_cache_key "$host$uri?TF=100%25&US=A&AR=D&TD=%2450&BR=T&TS=12515f542140&OR=%2410%2F100MB&DN=BAQGBgEEBwQAAw%3D%3D&ET=TD&BE=1332d9a&BN=CS&AL=2000&LG=E"; #proxy_cache_key "$host$uri?TF=100%25&US=A&AR=D&TD=%2450&BR=T&TS=12515f542140&OR=%2410%2F100MB&DN=BAQGBgEEBwQAAw%3D%3D&ET=TD&BE=1332d9a&BN=CS&AL=2000"; #proxy_cache_key "$scheme://$host$uri$is_args"; proxy_cache_key $uri$is_args?TF=$arg_TF &US=$arg_US &AR=$arg_AR &TD=$arg_TD&BR=$arg_BR&TS=$arg_TS&OR=$arg_OR&DN=$arg_DN&ET=$arg_ET&BE=$arg_BE&BN=$arg_BN&AL=$arg_AL; proxy_cache_valid 200 302 60s; proxy_cache_valid 301 60s; proxy_cache_valid any 60s; proxy_cache_use_stale updating; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 1M; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250126,250126#msg-250126 From mdounin at mdounin.ru Thu May 15 16:34:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 May 2014 20:34:45 +0400 Subject: Mail proxy with SNI In-Reply-To: References: Message-ID: <20140515163445.GP1849@mdounin.ru> Hello! On Thu, May 15, 2014 at 11:23:19AM -0400, salsaj wrote: > Is there any news on this? I would be interested to know if there are plans > to include this in nginx? As of now, there are no plans. -- Maxim Dounin http://nginx.org/ From agentzh at gmail.com Thu May 15 19:20:46 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 15 May 2014 12:20:46 -0700 Subject: nginx rewrites $request_method on error In-Reply-To: References: Message-ID: Hello! On Wed, May 14, 2014 at 11:19 PM, kay wrote: > http { > error_page 405 /error.html; > error_page 400 /error.html; > error_page 403 /error.html; > Okay, I can reproduce your request hang on my side now and I see what is going on here. Basically the 405 error is thrown so early in the processing flow of your TRACE request within the nginx core that you cannot do complicated processing like initiating a subrequest in your error page target (because the request state has not been fully initialized). Several suggestions: 1. prevent complicated logic (like subrequests and etc) in your error page target location, 2. avoid using error_page directives in big scope as server {} or even http {} because *every* location will inherit your error_page configurations, including your location /memc for subrequests, which can create an access dead loop. Regards, -agentzh From nginx-forum at nginx.us Thu May 15 19:59:54 2014 From: nginx-forum at nginx.us (pbrunnen) Date: Thu, 15 May 2014 15:59:54 -0400 Subject: The patch of Nginx SSL: PEM pass phrase problem In-Reply-To: References: Message-ID: <3e91da9e1b9598c64ba44d6184b0bdd4.NginxMailingListEnglish@forum.nginx.org> Has this patch still not been considered for production?? This is an important component for key security... 
How can we vote on such requests? Thanks! -Cheers, Peter. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,214641,250131#msg-250131 From al-nginx at none.at Thu May 15 20:14:55 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 15 May 2014 22:14:55 +0200 Subject: embedded perl for nginx official release ? In-Reply-To: <20140514162901.GD1849@mdounin.ru> References: <537378B2.2000705@linagora.com> <1842722.EMkbFWhVGL@vbart-workstation> <537391BA.7030904@linagora.com> <20140514162901.GD1849@mdounin.ru> Message-ID: <69b337e42a9032772e191f2e5fd641ad@none.at> Hi. Am 14-05-2014 18:29, schrieb Maxim Dounin: > Hello! > [snipp] > An attempt of Alexandr Gomoliako to add various features to the > embedded perl module had major problems from our (nginx team) > point of view, see the thread linked by Valentin. Instead of > trying to understand and fix the problems pointed out, Alexandr > chose to maintain his own fork. As far as I understand, the fork > is mostly dead now. > > [1] http://nginx.org/en/docs/http/ngx_http_perl_module.html Maybe David could mean the offical nginx from http://nginx.org/en/linux_packages.html ### /usr/sbin/nginx -c /etc/nginx/nginx.conf -V 2>&1|perl -MData::Dumper -ane 'print Dumper(@F),"\n";' $VAR1 = 'nginx'; $VAR2 = 'version:'; $VAR3 = 'nginx/1.7.0'; $VAR1 = 'built'; $VAR2 = 'by'; $VAR3 = 'gcc'; $VAR4 = '4.6.3'; $VAR5 = '(Ubuntu/Linaro'; $VAR6 = '4.6.3-1ubuntu5)'; $VAR1 = 'TLS'; $VAR2 = 'SNI'; $VAR3 = 'support'; $VAR4 = 'enabled'; $VAR1 = 'configure'; $VAR2 = 'arguments:'; $VAR3 = '--prefix=/etc/nginx'; $VAR4 = '--sbin-path=/usr/sbin/nginx'; $VAR5 = '--conf-path=/etc/nginx/nginx.conf'; $VAR6 = '--error-log-path=/var/log/nginx/error.log'; $VAR7 = '--http-log-path=/var/log/nginx/access.log'; $VAR8 = '--pid-path=/var/run/nginx.pid'; $VAR9 = '--lock-path=/var/run/nginx.lock'; $VAR10 = '--http-client-body-temp-path=/var/cache/nginx/client_temp'; $VAR11 = '--http-proxy-temp-path=/var/cache/nginx/proxy_temp'; $VAR12 = '--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp'; $VAR13 = '--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp'; $VAR14 = '--http-scgi-temp-path=/var/cache/nginx/scgi_temp'; $VAR15 = '--user=nginx'; $VAR16 = '--group=nginx'; $VAR17 = '--with-http_ssl_module'; $VAR18 = '--with-http_realip_module'; $VAR19 = '--with-http_addition_module'; $VAR20 = '--with-http_sub_module'; $VAR21 = '--with-http_dav_module'; $VAR22 = '--with-http_flv_module'; $VAR23 = '--with-http_mp4_module'; $VAR24 = '--with-http_gunzip_module'; $VAR25 = '--with-http_gzip_static_module'; $VAR26 = '--with-http_random_index_module'; $VAR27 = '--with-http_secure_link_module'; $VAR28 = '--with-http_stub_status_module'; $VAR29 = '--with-http_auth_request_module'; $VAR30 = '--with-mail'; $VAR31 = '--with-mail_ssl_module'; $VAR32 = '--with-file-aio'; $VAR33 = '--with-http_spdy_module'; $VAR34 = '--with-cc-opt=\'-g'; $VAR35 = '-O2'; $VAR36 = '-fstack-protector'; $VAR37 = '--param=ssp-buffer-size=4'; $VAR38 = '-Wformat'; $VAR39 = '-Wformat-security'; $VAR40 = '-Wp,-D_FORTIFY_SOURCE=2\''; $VAR41 = '--with-ld-opt=\'-Wl,-Bsymbolic-functions'; $VAR42 = '-Wl,-z,relro'; $VAR43 = '-Wl,--as-needed\''; $VAR44 = '--with-ipv6'; ### /usr/sbin/nginx -c /etc/nginx/nginx.conf -V 2>&1|perl -MData::Dumper -ane 'print Dumper(@F),"\n";'|egrep -i perl BR Aleks From eodgooch at gmail.com Thu May 15 21:01:13 2014 From: eodgooch at gmail.com (Aaron Gooch) Date: Thu, 15 May 2014 17:01:13 -0400 Subject: invalid URL prefix errors - auth_request with proxy pass to https Message-ID: I want to authorize requests 
using a remote server that is using ssl. When I make requests with https I get nginx errors but when I use http it works. Now that I am writing this I'm thinking the issue is that the site isn't using ssl so that could cause proxy pass fails. Thanks in advance! Aaron $ tail /var/log/nginx/error.log 2014/05/15 20:49:52 [error] 19355#0: *1 invalid URL prefix in " https://iam.ids.enernoc.net/api/v1/key/validation?permissions=dataset_DATQUAL1_read", client: 10.100.1.157, server: localhost, request: "GET /api/v1/dataset/DATQUAL1?ids=17228629&start_dttm=1382486700&end_dttm=1382573100&gran=fivemin&ts_format=iso-8601&resp_format=json HTTP/1.1", subrequest: "/iams_auth", host: "10.160.1.52" 2014/05/15 20:49:52 [error] 19355#0: *1 auth request unexpected status: 500 while sending response to client, client: 10.100.1.157, server: localhost, request: "GET /api/v1/dataset/DATQUAL1?ids=17228629&start_dttm=1382486700&end_dttm=1382573100&gran=fivemin&ts_format=iso-8601&resp_format=json HTTP/1.1", host: "10.160.1.52" Ubuntu 14 LTS Nginx info $ /opt/nginx-1.6.0/sbin/nginx -V nginx version: nginx/1.6.0 built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) configure arguments: --prefix=/opt/nginx-1.6.0 --conf-path=/etc/nginx/nginx.conf --sbin-path=/opt/nginx-1.6.0/sbin/nginx --with-http_auth_request_module server block: server { listen 80; ## listen for ipv4; this line is default and implied server_name localhost; gzip on; # authorization key to use with iam. set this to a default valid key. set $valid_key "Basic ZjNqejZNZlZTVDZuNWpjQjhLcEVkWXd3TnJqeng1VnJQQ0FYYU03V3pCY2dMU0F4Og=="; set $iams_server "https://iam.ids.enernoc.net/api/v1/key/validation" location ~ ^/api/v1/dataset { if ($request_method != GET) { set $auth_request_uri "?permissions=create_dataset"; } if ($request_method = GET) { set $auth_request_uri "?permissions=list_dataset"; } auth_request /iams_auth; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header Server-Addr $server_addr; proxy_pass http://app_server; } location /iams_auth { resolver 10.160.0.2; proxy_pass $iams_server$auth_request_uri; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; # We would like to use authentication but not enforce it upon our users immediately, therefore... # If the user does not provide basic authorization we will use the default valid key variable. # If the user does provide basic auth, pass that value along instead of the default valid key. if ($remote_user != ''){ set $valid_key $http_authorization; } proxy_set_header Authorization $valid_key; proxy_pass_request_headers on; } } upstream app_server { server unix:/tmp/ids-api.sock; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 15 21:57:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 May 2014 01:57:06 +0400 Subject: invalid URL prefix errors - auth_request with proxy pass to https In-Reply-To: References: Message-ID: <20140515215706.GS1849@mdounin.ru> Hello! On Thu, May 15, 2014 at 05:01:13PM -0400, Aaron Gooch wrote: > I want to authorize requests using a remote server that is using ssl. When > I make requests with https I get nginx errors but when I use http it works. > Now that I am writing this I'm thinking the issue is that the site isn't > using ssl so that could cause proxy pass fails. [...] 
> $ /opt/nginx-1.6.0/sbin/nginx -V > nginx version: nginx/1.6.0 > built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) > configure arguments: --prefix=/opt/nginx-1.6.0 > --conf-path=/etc/nginx/nginx.conf --sbin-path=/opt/nginx-1.6.0/sbin/nginx > --with-http_auth_request_module SSL support in proxy module requires nginx to be compiled with ngx_http_ssl_module. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu May 15 22:01:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 May 2014 02:01:43 +0400 Subject: The patch of Nginx SSL: PEM pass phrase problem In-Reply-To: <3e91da9e1b9598c64ba44d6184b0bdd4.NginxMailingListEnglish@forum.nginx.org> References: <3e91da9e1b9598c64ba44d6184b0bdd4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140515220142.GT1849@mdounin.ru> Hello! On Thu, May 15, 2014 at 03:59:54PM -0400, pbrunnen wrote: > Has this patch still not been considered for production?? This is an > important component for key security... > How can we vote on such requests? http://mailman.nginx.org/pipermail/nginx/2014-April/043281.html -- Maxim Dounin http://nginx.org/ From lordnynex at gmail.com Fri May 16 00:35:53 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Thu, 15 May 2014 17:35:53 -0700 Subject: perl module for nginx In-Reply-To: <5373A29F.9010103@linagora.com> References: <53739E1E.5020503@linagora.com> <20140514165842.GG1849@mdounin.ru> <5373A29F.9010103@linagora.com> Message-ID: I'm a bit pressed for time but want to throw out a few incomplete thoughts on this. You will have a very difficult time doing IPC between worker processes unless you do IPC over a unix domain socket (and even then it sorta sucks). IMHO this is sub-optimal as this module has the ability to block the entire webserver on long running operations. An example to frame this in would be preserving session state without a third party process like redis or memcached. Since you have LUA built, you may see better performance and footprint if you do the following: - Create a shared dictionary at nginx init (example, perl-session-store). This dictionary will be shared across all nginx worker processes in memory. - Create an internal only LUA handler for setting/getting data out of the shared dictionary. - Implement some sort of locking mechanism in the above handler. - Use something like $r->internal_redirect to get/set data in and out of the above lua handler. In this way you can preserve something like session data across all workers. You could use something like Thaw and Freeze to serialize perl data types as well. I haven't thoroughly thought through this design so there may be some 'gotchas' but my hope is to save you a few days of Perl IPC Frustration. The obvious caveat here is avoiding memory bloat by being conservative in the amount of data you jam into shared dicts. I foresee you will have issues using internal_redirect (I dont have the docs in front of me), so you may need to be creative here as the API is limited. I have a soft spot for perl but this module leaves a lot to be desired. You must be thoughtful in your design as to what will and will not block. As a final thought, have you looked into http://zzzcpan.github.io/nginx-perl/ ? this project appears to be in need of love and if memory serves makes some changes to nginx core which prevents upgrades, however, this project seems to support a lot of async perl operations that would make life easier. On Wed, May 14, 2014 at 10:06 AM, David Coutadeur wrote: > Le 14/05/2014 18:58, Maxim Dounin a ?crit : > > Hello! 
>> >> On Wed, May 14, 2014 at 06:47:26PM +0200, David Coutadeur wrote: >> >> >>> Hello, >>> >>> I am trying to build a perl access handler module into nginx, thanks to >>> this >>> API : >>> http://nginx.org/en/docs/http/ngx_http_perl_module.html >>> but I have some difficulties to do so. >>> >>> Indeed, I would need some tools like >>> - a mean to share variables between handlers, >>> - a mean to call my handler at access phase. >>> >>> For example, I have made a basic handler in lUA with this more complete >>> API >>> : >>> http://wiki.nginx.org/HttpLuaModule >>> >>> Does anybody have ideas on how to achieve the two functionnalities in the >>> current perl API ? >>> Will this lack of features for perl API be filled in the future ? In a >>> near >>> future ? Or do you advise other actions ? >>> >> >> Sharing variables (I believe you really want to do so between >> requests) can be done using normal Perl things like global >> variables (or, if you want many workers to share the same data, >> using shared memory, see various perl modules available on CPAN). >> > > Yes, this is what I want to do. For example, loading some parameters from > a config file, and store them in a shared memory place. Do you have some > advice on a CPAN module which works best with nginx ? > > > >> Calling the handler at access phase isn't something you can do >> without modifications of the code. On the other hand, it may be a >> good idea to use auth request module[1] instead, and write a >> subrequest handler in embedded perl. >> >> [1] http://nginx.org/en/docs/http/ngx_http_auth_request_module.html >> >> > Ok, thank you, I will try this ! > > David > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lordnynex at gmail.com Fri May 16 00:52:13 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Thu, 15 May 2014 17:52:13 -0700 Subject: Directly redirecting prefix location to named one In-Reply-To: References: Message-ID: http://wiki.nginx.org/HttpEchoModule#echo_exec I use this model in some parts of my configs. I, however, use openresty and I'm not clear if there are any functionality differences between them. On Wed, May 14, 2014 at 5:58 PM, B.R. wrote: > I am considering the following locations: > > location / { > proxy_pass http://upstream; > } > > location /documents/ { > try_files $uri @upstream; > } > > location @upstream { > proxy_pass http://upstream; > } > > I would like to have a single named location to handle all fallbacks to > upstream (to avoid duplication: maintenance will be easier!). > > How does one redirect a prefix location directly to a named one? > Using try_files might expose documents which are not supposed to be served > outside of the /documents/ tree. > > Using rewrite? I learned here to avoid it as much as possible... > --- > *B. R.* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri May 16 02:38:39 2014 From: nginx-forum at nginx.us (adambenayoun) Date: Thu, 15 May 2014 22:38:39 -0400 Subject: Serving wordpress from subdomain Message-ID: <9d00ab06e37a2c021820b7db0094c073.NginxMailingListEnglish@forum.nginx.org> My setup is nginx + php-fpm. 
I am running a web application on domain.com and I am serving a wordpress configuration from domain.com/blog. The problem is that the web app is served from /var/www/domain/html/http and wordpress is located outside of the root directory: /var/www/domain/html/blog How can I serve the blog effectively? I did try to symlink /blog from within the root but I feel this is a bad solution. Here is my configuration file: server { listen *:80; listen 443 ssl; server_name domain; error_log /var/www/domain/logs/error_log warn; ssl_certificate /etc/nginx/certs/domain.crt; ssl_certificate_key /etc/nginx/certs/domain.key; ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; root /var/www/domain/html/http; index index.php; client_max_body_size 250m; location / { try_files $uri $uri/ /index.php?$args; } location /blog { root /var/www/domain/html; try_files $uri $uri/ /blog/index.php?q=$1; } location ~ \.php(/|$) { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SERVER_NAME $http_host; fastcgi_pass 127.0.0.1:9000; } location ~* ^.+\.(ht|svn)$ { deny all; } # Static files location location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ { expires max; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250139,250139#msg-250139 From myselfasunder at gmail.com Fri May 16 04:37:44 2014 From: myselfasunder at gmail.com (Dustin Oprea) Date: Fri, 16 May 2014 00:37:44 -0400 Subject: SSL Authentication: $ssl_client_verify Message-ID: I have the following server configuration for client-authentication: ssl on; ssl_certificate /.../certificate.pem; ssl_certificate_key /.../private.pem; ssl_client_certificate /.../ca_cert.pem; ssl_verify_client on; ssl_verify_depth 1; It looks like I get a "Bad Request" (400) when I use a certificate signed by a different CA. So, what's the point of the ssl_client_verify variable? >From Nginx's SSL module documentation ( http://nginx.org/en/docs/http/ngx_http_ssl_module.html): $ssl_client_verify returns the result of client certificate verification: ?SUCCESS?, ?FAILED?, and ?NONE? if a certificate was not present; Dustin -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan at chandlerfamily.org.uk Fri May 16 05:11:43 2014 From: alan at chandlerfamily.org.uk (Alan Chandler) Date: Fri, 16 May 2014 06:11:43 +0100 Subject: Serving wordpress from subdomain In-Reply-To: <9d00ab06e37a2c021820b7db0094c073.NginxMailingListEnglish@forum.nginx.org> References: <9d00ab06e37a2c021820b7db0094c073.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53759E0F.3090207@chandlerfamily.org.uk> On 16/05/14 03:38, adambenayoun wrote: > My setup is nginx + php-fpm. I am running a web application on domain.com > and I am serving a wordpress configuration from domain.com/blog. > > The problem is that the web app is served from /var/www/domain/html/http and > wordpress is located outside of the root directory: > /var/www/domain/html/blog > ... I have been wrestling with a similar problem - except mine is worse, in that the applications running in a separate physical directory were php applications which required $_SERVER['DOCUMENT_ROOT'] to point to the actual physical directory for the real document root. 
I > location /blog { > root /var/www/domain/html; > try_files $uri $uri/ /blog/index.php?q=$1; > } > location ~ \.php(/|$) { > try_files $uri =404; > include /etc/nginx/fastcgi_params; > fastcgi_split_path_info ^(.+\.php)(.*)$; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_script_name; > fastcgi_param SERVER_NAME $http_host; > fastcgi_pass 127.0.0.1:9000; The problem here is that as nginx redirects to the second location block when it can't find the file with try_files and goes to /blog/index.php, it changes root back to the default The way I solved the problem is location = /blog { rewrite ^ /blog/ permanent; } location /blog/ { alias /home/alan/dev/blog/web/; index index.php; } location ~ ^/blog/(.*\.php)$ { alias /home/alan/dev/blog/web/$1; include php-apps.conf; } and the include file (used becaise I have several similar apps in this area and each one is treated the same) include fastcgi_params; fastcgi_param DOCUMENT_ROOT /home/alan/dev/test-base; fastcgi_index index.php; fastcgi_intercept_errors on; fastcgi_pass unix:/var/run/php5-fpm-alan.sock; There is a problem - which I don't really understand - that try_files does not work with alias. so I am not using it in my sub location blocks I do have location / { try_files $uri $uri/ =404; } and with the whole thing in place I do get 404s when somebody attempts to access a non existent page. I think in your situation, your location path equals the final elements of the physical filesystem path, so you could use root instead of alias and then include the try_files bits you have. -- Alan Chandler http://www.chandlerfamily.org.uk From nginx-forum at nginx.us Fri May 16 06:56:07 2014 From: nginx-forum at nginx.us (kay) Date: Fri, 16 May 2014 02:56:07 -0400 Subject: nginx rewrites $request_method on error In-Reply-To: References: Message-ID: <4aba1b94facde69e294a4c8f35685db5.NginxMailingListEnglish@forum.nginx.org> Don't you think this is a bug? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,250144#msg-250144 From nginx-forum at nginx.us Fri May 16 07:16:54 2014 From: nginx-forum at nginx.us (kay) Date: Fri, 16 May 2014 03:16:54 -0400 Subject: nginx rewrites $request_method on error In-Reply-To: References: Message-ID: I mean that I know how to avoid these errors, but I think that a better way to fix the issue is to fix it in low level. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,250145#msg-250145 From gaoping at richinfo.cn Fri May 16 07:29:31 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Fri, 16 May 2014 15:29:31 +0800 Subject: nginx1.4.7 ip_hash load balancing strategy Message-ID: <2014051615293074075711@richinfo.cn> hi When my nginx is below this structure,Nginx access to the IP proxy server by IP, then nginx ip_hash load balancing, will all requests are routed to the same tomcat, do not know if you have what good method to solve the. The ip_hash function can be introduced into a custom IP. Prerequisite: can not change the existing structure, and nginx access to the IP is only aproxy server, and IP client request is not true, gaoping at richinfo.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx??.png Type: image/png Size: 12535 bytes Desc: not available URL: From renenglish at gmail.com Fri May 16 08:14:36 2014 From: renenglish at gmail.com (Shafreeck Sea) Date: Fri, 16 May 2014 16:14:36 +0800 Subject: nginx1.4.7 ip_hash load balancing strategy In-Reply-To: <2014051615293074075711@richinfo.cn> References: <2014051615293074075711@richinfo.cn> Message-ID: why ip_hash ? 2014-05-16 15:29 GMT+08:00 gaoping at richinfo.cn : > hi > When my nginx is below this structure,Nginx access to the IP proxy > server by IP, then nginx ip_hash load balancing, will all requests are routed > to the same tomcat, do not know if you have what good method to solve the. > The ip_hash function can be introduced into a custom IP. > Prerequisite: can not change the existing structure, and nginx > access to the IP is only aproxy server, and IP client request is not true, > > ------------------------------ > gaoping at richinfo.cn > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx??.png Type: image/png Size: 12535 bytes Desc: not available URL: From gaoping at richinfo.cn Fri May 16 08:17:05 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Fri, 16 May 2014 16:17:05 +0800 Subject: nginx1.4.7 ip_hash load balancing strategy References: <2014051615293074075711@richinfo.cn>, Message-ID: <2014051616170478294713@richinfo.cn> ngx_http_upstream_module Load balancing algorithm upstream the instruction ip_hash; gaoping at richinfo.cn From: Shafreeck Sea Date: 2014-05-16 16:14 To: nginx at nginx.org Subject: Re: nginx1.4.7 ip_hash load balancing strategy why ip_hash ? 2014-05-16 15:29 GMT+08:00 gaoping at richinfo.cn : hi When my nginx is below this structure,Nginx access to the IP proxy server by IP, then nginx ip_hash load balancing, will all requests are routed to the same tomcat, do not know if you have what good method to solve the. The ip_hash function can be introduced into a custom IP. Prerequisite: can not change the existing structure, and nginx access to the IP is only aproxy server, and IP client request is not true, gaoping at richinfo.cn _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx??(05-16-16-15-19).png Type: image/png Size: 12535 bytes Desc: not available URL: From renenglish at gmail.com Fri May 16 10:55:58 2014 From: renenglish at gmail.com (Shafreeck Sea) Date: Fri, 16 May 2014 18:55:58 +0800 Subject: nginx1.4.7 ip_hash load balancing strategy In-Reply-To: <2014051616170478294713@richinfo.cn> References: <2014051615293074075711@richinfo.cn> <2014051616170478294713@richinfo.cn> Message-ID: ???ip_hash ? ?????????????? 2014-05-16 16:17 GMT+08:00 gaoping at richinfo.cn : > ngx_http_upstream_module Load balancing algorithm upstream the > instruction ip_hash; > > ------------------------------ > gaoping at richinfo.cn > > > *From:* Shafreeck Sea > *Date:* 2014-05-16 16:14 > *To:* nginx at nginx.org > *Subject:* Re: nginx1.4.7 ip_hash load balancing strategy > why ip_hash ? 
> > > 2014-05-16 15:29 GMT+08:00 gaoping at richinfo.cn : > >> hi >> When my nginx is below this structure,Nginx access to the IP proxy >> server by IP, then nginx ip_hash load balancing, will all requests are routed >> to the same tomcat, do not know if you have what good method to solve the >> . >> The ip_hash function can be introduced into a custom IP. >> Prerequisite: can not change the existing structure, and nginx >> access to the IP is only aproxy server, and IP client request is not true >> , >> >> ------------------------------ >> gaoping at richinfo.cn >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx??(05-16-16-15-19).png Type: image/png Size: 12535 bytes Desc: not available URL: From mdounin at mdounin.ru Fri May 16 11:04:35 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 May 2014 15:04:35 +0400 Subject: SSL Authentication: $ssl_client_verify In-Reply-To: References: Message-ID: <20140516110435.GU1849@mdounin.ru> Hello! On Fri, May 16, 2014 at 12:37:44AM -0400, Dustin Oprea wrote: > I have the following server configuration for client-authentication: > > ssl on; > ssl_certificate /.../certificate.pem; > ssl_certificate_key /.../private.pem; > > ssl_client_certificate /.../ca_cert.pem; > ssl_verify_client on; > ssl_verify_depth 1; > > It looks like I get a "Bad Request" (400) when I use a certificate signed > by a different CA. So, what's the point of the ssl_client_verify variable? > > From Nginx's SSL module documentation ( > http://nginx.org/en/docs/http/ngx_http_ssl_module.html): > > $ssl_client_verify > > returns the result of client certificate verification: ?SUCCESS?, > ?FAILED?, and ?NONE? if a certificate was not present; Answer was already given to your previous message 4 days ago, see here: http://mailman.nginx.org/pipermail/nginx/2014-May/043552.html -- Maxim Dounin http://nginx.org/ From al-nginx at none.at Fri May 16 11:08:56 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 16 May 2014 13:08:56 +0200 Subject: nginx1.4.7 ip_hash load balancing strategy In-Reply-To: <2014051616170478294713@richinfo.cn> References: <2014051615293074075711@richinfo.cn>, <2014051616170478294713@richinfo.cn> Message-ID: <9dd98e158ac6a5a109be6cf60e73706c@none.at> Maybe you could use the sticky function http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky "This directive is available as part of our commercial subscription" [2] cheers Aleks Am 16-05-2014 10:17, schrieb gaoping at richinfo.cn: > ngx_http_upstream_module Load balancing algorithm upstream the instruction ip_hash; > > ------------------------- > > gaoping at richinfo.cn > > FROM: Shafreeck Sea > DATE: 2014-05-16 16:14 > TO: nginx at nginx.org > SUBJECT: Re: nginx1.4.7 ip_hash load balancing strategy > > why ip_hash ? > > 2014-05-16 15:29 GMT+08:00 gaoping at richinfo.cn : > > hi > When my nginx is below this structure,Nginx access to the IP proxy server by IP, then nginx ip_hash load balancing, will all requests are routed to the same tomcat, do not know if you have what good method to solve the. > The ip_hash function can be introduced into a custom IP. 
> Prerequisite: can not change the existing structure, and nginx access to the IP is only aproxy server, and IP client request is not true, > > ------------------------- > > gaoping at richinfo.cn > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx [1] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx [1] Links: ------ [1] http://mailman.nginx.org/mailman/listinfo/nginx [2] http://nginx.com/products/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx??????(05-16-16-15-19).png Type: image/png Size: 12535 bytes Desc: not available URL: From mdounin at mdounin.ru Fri May 16 11:22:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 May 2014 15:22:26 +0400 Subject: nginx1.4.7 ip_hash load balancing strategy In-Reply-To: <2014051615293074075711@richinfo.cn> References: <2014051615293074075711@richinfo.cn> Message-ID: <20140516112226.GV1849@mdounin.ru> Hello! On Fri, May 16, 2014 at 03:29:31PM +0800, gaoping at richinfo.cn wrote: > hi > When my nginx is below this structure,Nginx access to the IP > proxy server by IP, then nginx ip_hash load balancing, will all > requests are routed to the same tomcat, do not know if you have > what good method to solve the. > The ip_hash function can be introduced into a custom IP. > Prerequisite: can not change the existing structure, and nginx > access to the IP is only aproxy server, and IP client request is > not true, As long as upstream{} blocks on both frontends are identical, and client IP stays the same, the same upstream server will be selected by ip_hash. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri May 16 13:22:55 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Fri, 16 May 2014 09:22:55 -0400 Subject: Memcached Message-ID: <6ed32b27e8d2fce16bcd6912d0225612.NginxMailingListEnglish@forum.nginx.org> Hi: Have anyone used the thirdparty "memc-nginx-module" module for the memcached operation. I am interested for a memcached module. So, I am evaluating which one to use and their differences and the stability. I also came across another third party module ngx_http_enhanced_memcached_module . Can anyone suggest me a suitable memcached module which is used in production sites? Many thanks... Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250155,250155#msg-250155 From nginx-forum at nginx.us Fri May 16 13:37:12 2014 From: nginx-forum at nginx.us (SAH62) Date: Fri, 16 May 2014 09:37:12 -0400 Subject: Unexpected SSL Behavior with Virtual Hosts In-Reply-To: <3A04BADA-8AF9-474E-85C7-5A8451A62815@sysoev.ru> References: <3A04BADA-8AF9-474E-85C7-5A8451A62815@sysoev.ru> Message-ID: Igor Sysoev Wrote: ------------------------------------------------------- > On 15 May 2014, at 04:01, SAH62 wrote: > > > Sorry for posting this twice. I posted it in the "How to" forum last > week, > > there haven't been any replies, so I thought I'd try again. > > > > I'm using nginx for multiple virtual hosts on the same physical > server. The > > issue I'm having is that a browser request for > https://www.domain1.org/ is > > being answered with a certificate for a different domain. 
Here's > what the > > slices from my config files look like: > > > > domain1.conf: (note that there's no listen directive for port 443) > > server { > > listen 80; > > server_name domain1.org www.domain1.org domain1.com www.domain1.com > > domain1.net www.domain1.net domain1.us www.domain1.us domain1.info > > www.domain1.info; > > root /home/domain1/public_html; > > > > # more stuff > > } > > > > domain2.conf: > > server { > > listen 80; > > > > server_name domain2 www.domain2; > > root /home/domain2/public_html; > > > > # more stuff > > } > > > > server { ## SSL config for domain2 > > listen 443 ssl; > > > > ssl_certificate /etc/ssl/certs/domain2-chained.crt; > > ssl_certificate_key /etc/ssl/private/domain2.key; > > ssl_session_cache shared:SSL:10m; > > ssl_session_timeout 10m; > > ssl_protocols SSLv3 TLSv1; > > ssl_ciphers > ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > > ssl_prefer_server_ciphers on; > > > > server_name domain2 www.domain2; > > root /home/domain2/public_html; > > > > # more stuff > > } > > > > server { > > listen 80; > > > > server_name domain3 www.domain3; > > root /var/www; > > > > access_log /var/log/nginx/access-domain3.log; > > error_log /var/log/nginx/error-domain3.log; > > > > return 301 https://$host$request_uri; > > } > > > > server { ## SSL config for domain3 > > listen 443 ssl; > > > > ssl_certificate /etc/ssl/certs/domain3-chained.crt; > > ssl_certificate_key /etc/ssl/private/server.key; > > ssl_session_cache shared:SSL:10m; > > ssl_session_timeout 10m; > > ssl_protocols SSLv3 TLSv1; > > ssl_ciphers > ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > > ssl_prefer_server_ciphers on; > > > > root /var/www; > > index index.php index.html index.htm; > > > > access_log /var/log/nginx/access-domain3-ssl.log; > > error_log /var/log/nginx/error-domain3-ssl.log; > > rewrite_log on; > > > > server_name www.domain3 domain3; > > > > # more stuff > > } > > > > A browser request for https://www.domain1.org/ returns the > certificate for > > domain 2 and the content found in the root for domain2. Why is that > and how > > can I get the server to redirect to http://www.domain1.org/ instead? > Thank > > you? > > http://nginx.org/en/docs/http/configuring_https_servers.html#name_base > d_https_servers OK, that explains why nginx returns the default certificate. It's listening on 443, it gets a request, and it doesn't know which domain the HTTP request is for so it responds with the default certificate. Why is it sending back the content for domain2, though? Scott Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250097,250156#msg-250156 From mdounin at mdounin.ru Fri May 16 13:42:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 May 2014 17:42:02 +0400 Subject: Unexpected SSL Behavior with Virtual Hosts In-Reply-To: References: <3A04BADA-8AF9-474E-85C7-5A8451A62815@sysoev.ru> Message-ID: <20140516134202.GW1849@mdounin.ru> Hello! On Fri, May 16, 2014 at 09:37:12AM -0400, SAH62 wrote: > Igor Sysoev Wrote: > ------------------------------------------------------- > > On 15 May 2014, at 04:01, SAH62 wrote: > > > > > Sorry for posting this twice. I posted it in the "How to" forum last > > week, > > > there haven't been any replies, so I thought I'd try again. > > > > > > I'm using nginx for multiple virtual hosts on the same physical > > server. The > > > issue I'm having is that a browser request for > > https://www.domain1.org/ is > > > being answered with a certificate for a different domain. 
Here's > > what the > > > slices from my config files look like: > > > > > > domain1.conf: (note that there's no listen directive for port 443) > > > server { > > > listen 80; > > > server_name domain1.org www.domain1.org domain1.com www.domain1.com > > > domain1.net www.domain1.net domain1.us www.domain1.us domain1.info > > > www.domain1.info; > > > root /home/domain1/public_html; > > > > > > # more stuff > > > } > > > > > > domain2.conf: > > > server { > > > listen 80; > > > > > > server_name domain2 www.domain2; > > > root /home/domain2/public_html; > > > > > > # more stuff > > > } > > > > > > server { ## SSL config for domain2 > > > listen 443 ssl; > > > > > > ssl_certificate /etc/ssl/certs/domain2-chained.crt; > > > ssl_certificate_key /etc/ssl/private/domain2.key; > > > ssl_session_cache shared:SSL:10m; > > > ssl_session_timeout 10m; > > > ssl_protocols SSLv3 TLSv1; > > > ssl_ciphers > > ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > > > ssl_prefer_server_ciphers on; > > > > > > server_name domain2 www.domain2; > > > root /home/domain2/public_html; > > > > > > # more stuff > > > } > > > > > > server { > > > listen 80; > > > > > > server_name domain3 www.domain3; > > > root /var/www; > > > > > > access_log /var/log/nginx/access-domain3.log; > > > error_log /var/log/nginx/error-domain3.log; > > > > > > return 301 https://$host$request_uri; > > > } > > > > > > server { ## SSL config for domain3 > > > listen 443 ssl; > > > > > > ssl_certificate /etc/ssl/certs/domain3-chained.crt; > > > ssl_certificate_key /etc/ssl/private/server.key; > > > ssl_session_cache shared:SSL:10m; > > > ssl_session_timeout 10m; > > > ssl_protocols SSLv3 TLSv1; > > > ssl_ciphers > > ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > > > ssl_prefer_server_ciphers on; > > > > > > root /var/www; > > > index index.php index.html index.htm; > > > > > > access_log /var/log/nginx/access-domain3-ssl.log; > > > error_log /var/log/nginx/error-domain3-ssl.log; > > > rewrite_log on; > > > > > > server_name www.domain3 domain3; > > > > > > # more stuff > > > } > > > > > > A browser request for https://www.domain1.org/ returns the > > certificate for > > > domain 2 and the content found in the root for domain2. Why is that > > and how > > > can I get the server to redirect to http://www.domain1.org/ instead? > > Thank > > > you? > > > > http://nginx.org/en/docs/http/configuring_https_servers.html#name_base > > d_https_servers > > OK, that explains why nginx returns the default certificate. It's listening > on 443, it gets a request, and it doesn't know which domain the HTTP request > is for so it responds with the default certificate. Why is it sending back > the content for domain2, though? Because it's the default server for the listening socket on port 443. See here for details: http://nginx.org/en/docs/http/request_processing.html -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Fri May 16 14:07:24 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 16 May 2014 18:07:24 +0400 Subject: Memcached In-Reply-To: <6ed32b27e8d2fce16bcd6912d0225612.NginxMailingListEnglish@forum.nginx.org> References: <6ed32b27e8d2fce16bcd6912d0225612.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6670789.VI4g7X3L0S@vbart-workstation> On Friday 16 May 2014 09:22:55 nginxsantos wrote: > Hi: > > Have anyone used the thirdparty "memc-nginx-module" module for the memcached > operation. I am interested for a memcached module. 
So, I am evaluating > which one to use and their differences and the stability. > > I also came across another third party module > ngx_http_enhanced_memcached_module . > > Can anyone suggest me a suitable memcached module which is used in > production sites? > > Many thanks... > There is an official memcached module: http://nginx.org/en/docs/http/ngx_http_memcached_module.html wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri May 16 15:14:35 2014 From: nginx-forum at nginx.us (epluntze) Date: Fri, 16 May 2014 11:14:35 -0400 Subject: Issue with nginx_http_push_module Message-ID: <686637688817701f5c846c0f58c345fa.NginxMailingListEnglish@forum.nginx.org> We are using nginx as a web/comet server to provide push notification to our web-app. We have been running fine on nginx 1.0.11 and nginx_http_push_module verson 0.692, on ubuntu 10.04 servers, for years without issue, but re recently upgraded to nginx 1.4.6 and push module .712, on 12.04. We are seeing an issue when the GET call to the subscriber location on nginx is interrupted it creates a CLOSE_WAIT connection that is never cleared up, but once the GET call resolves once it clears up any connections that were created on that publish channel. In our old set up these connections were being cleaned up after several hours but once we switched over to the new server they persist until nginx is restarted. Is there some way I can get the server to clean these connections as before, or close the connections explicitly in the error state of the call? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250161,250161#msg-250161 From agentzh at gmail.com Fri May 16 20:25:57 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 16 May 2014 13:25:57 -0700 Subject: nginx rewrites $request_method on error In-Reply-To: <4aba1b94facde69e294a4c8f35685db5.NginxMailingListEnglish@forum.nginx.org> References: <4aba1b94facde69e294a4c8f35685db5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Thu, May 15, 2014 at 11:56 PM, kay wrote: > Don't you think this is a bug? > I think the NGINX core should prevent bad things from happen when 1. the user configures complicated things in his error_page targets, and 2. the error page is initiated by nginx too early in the request processing lifetime (like when processing the request header). Because it should be done in the nginx core, and I can do very little on my 3rd-party modules side about this. Maxim Dounin: what is your opinion on this? Best regards, -agentzh From agentzh at gmail.com Fri May 16 20:27:02 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 16 May 2014 13:27:02 -0700 Subject: Directly redirecting prefix location to named one In-Reply-To: References: Message-ID: Hello! On Thu, May 15, 2014 at 5:52 PM, Lord Nynex wrote: > http://wiki.nginx.org/HttpEchoModule#echo_exec > > I use this model in some parts of my configs. I, however, use openresty and > I'm not clear if there are any functionality differences between them. > No difference here. OpenResty bundles this ngx_echo module directly. Regards, -agentzh From agentzh at gmail.com Fri May 16 21:24:57 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 16 May 2014 14:24:57 -0700 Subject: Memcached In-Reply-To: <6ed32b27e8d2fce16bcd6912d0225612.NginxMailingListEnglish@forum.nginx.org> References: <6ed32b27e8d2fce16bcd6912d0225612.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! 
On Fri, May 16, 2014 at 6:22 AM, nginxsantos wrote: > Have anyone used the thirdparty "memc-nginx-module" module for the memcached > operation. I am interested for a memcached module. So, I am evaluating > which one to use and their differences and the stability. > As the author of the ngx_memc module, I know a lot of people have been using ngx_memc in production for years. I've also been running ngx_memc's comprehensive test suite frequently against the latest nginx releases, in various different testing modes (valgrind, mockegain, hup reload, etc.), on my Amazon EC2 test cluster for a long time: http://qa.openresty.org This module is also well supported on the openresty-en mailing list: https://groups.google.com/group/openresty-en Regards, -agentzh
From nginx-forum at nginx.us Fri May 16 22:32:32 2014 From: nginx-forum at nginx.us (newnovice) Date: Fri, 16 May 2014 18:32:32 -0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: References: Message-ID: <1014d8b6891298405f377ac2a5895f6c.NginxMailingListEnglish@forum.nginx.org> Does this also affect a 'Connection: close' header coming from the upstream service being sent on as 'Connection: keep-alive' to the requesting client through my nginx SSL reverse proxy? (I am trying to set up nginx as an SSL proxy; when the upstream service sends a Connection: close, I want this to be passed on by nginx to the client requesting pages on the frontend.) webbrowser -> 443-nginx-ssl-offload -> http-connection-upstream_service. I am trying to use 1.2.6 with a few patches.
> > I would truly appreciate your help/advice. I have an upstream_service on the > same host that is sending a connection_close - and I think nginx is sending > back a connection_keepalive out to the client webbrowser. Does this issue > co-relate to the fix in 1.5.5 ? > No, it's not related, and there's nothing to fix. The Connection header is hop-by-hop per RFC. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sat May 17 15:28:33 2014 From: nginx-forum at nginx.us (newnovice) Date: Sat, 17 May 2014 11:28:33 -0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <2503762.FWEJDGKfCu@vbart-laptop> References: <2503762.FWEJDGKfCu@vbart-laptop> Message-ID: Can I proxy_pass on the way back also - and preserve all headers coming from the upstream service? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237386,250172#msg-250172 From agentzh at gmail.com Sat May 17 16:10:46 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 17 May 2014 09:10:46 -0700 Subject: Memcached In-Reply-To: <439d86ddc24cfcb0c19a5a957b04d7ea.NginxMailingListEnglish@forum.nginx.org> References: <439d86ddc24cfcb0c19a5a957b04d7ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Sat, May 17, 2014 at 12:25 AM, nginxsantos wrote: > I think agentzh's "memc-nginx-module" module comes with little more features > than the default "ngx_http_memcached_module" which Valentin suggested. I won't agree with the "little more features" statement. The standard ngx_memcached module only supports the memcached "get" command while ngx_memc supports way more commands like "set", "add", "replace", "append", "prepend", "delete", "incr", "decr", "flush_all", "stats", and "version". See https://github.com/openresty/memc-nginx-module#readme for more details. Regards, -agentzh From nginx-forum at nginx.us Sat May 17 19:00:10 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 17 May 2014 15:00:10 -0400 Subject: [ANN] Windows nginx 1.7.1.3 RedKnight Message-ID: <2247d70e0e4fc9171b9d3215e95bb52d.NginxMailingListEnglish@forum.nginx.org> 17:21 17-5-2014 nginx 1.7.1.3 RedKnight Go ask Alice, I think she'll know, When logic and proportion have fallen dead And the white knight is talking backwards And the red queen's lost her head Remember what the dormouse said Feed your head, feed your head, as the RedKnight rizes again from the dead ! 
The nginx RedKnight release is here /-> Based on nginx 1.7.1 (16-5-2014) with; + Openssl fix for out-of-bounds write in SSL_get_shared_ciphers (#3317) + integration of Mercurial and Git into our crosscompiler this reduces diff sets import and cross checks from 12 to 1 hour + lua-nginx-module v0.9.7 (upgraded 15-5-2014) + Select-boost is out of beta and is now the default + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250176,250176#msg-250176 From francis at daoine.org Sun May 18 08:41:53 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 18 May 2014 09:41:53 +0100 Subject: Unable to cache long cache url In-Reply-To: <766c713521511ad0d15088c137117309.NginxMailingListEnglish@forum.nginx.org> References: <766c713521511ad0d15088c137117309.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140518084153.GH16942@daoine.org> On Thu, May 15, 2014 at 12:23:06PM -0400, samgujrat1984 wrote: Hi there, > I am trying to cache a long string query url with proxy_cache_key. I am not > sure where I am doing wrong. I tried different proxy_cache_key patterns. Not > sure where I am doing wrong. If some one can suggest me please proxy_cache key is to decide whether two requests are the same for caching purposes (as in: can the second request be served with the cached response to the first?). Which two requests do you want to be the same for caching?; and which other requests do you want to be different? >From your mail, it is not obvious to me what you are trying to do. f -- Francis Daly francis at daoine.org From francis at daoine.org Sun May 18 08:50:59 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 18 May 2014 09:50:59 +0100 Subject: Serving wordpress from subdomain In-Reply-To: <9d00ab06e37a2c021820b7db0094c073.NginxMailingListEnglish@forum.nginx.org> References: <9d00ab06e37a2c021820b7db0094c073.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140518085059.GI16942@daoine.org> On Thu, May 15, 2014 at 10:38:39PM -0400, adambenayoun wrote: Hi there, > My setup is nginx + php-fpm. I am running a web application on domain.com > and I am serving a wordpress configuration from domain.com/blog. For info: that's not a subdomain. It's frequently called "non-root" or "sub directory". If you expand your search to include those terms, do you find the config you are looking for? If you don't, can you provide: * the request you make * the response you get * the response you want for some requests that don't do what you want? It may make it clearer which part is failing. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun May 18 10:22:56 2014 From: nginx-forum at nginx.us (shahin-slt) Date: Sun, 18 May 2014 06:22:56 -0400 Subject: problem with php script Message-ID: hi i' using a script on apache and now i want to use it on nginx, but it does not work. hears my codes and config.(i use windows) please help me. 
sorry for my long topic nginx.conf: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 81; server_name localhost; autoindex on; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME E:/nginx/html/$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } download.php code: set_byfile( $filePath ); $av_http->download( $fileData[0] ); } } else{ $av_http->set_byfile( $filePath ); $av_http->use_resume = false; $av_http->speed = is_numeric($av_config['free_dl_speed']) ? 
intval($av_config['free_dl_speed']) : 100; $av_http->download( $fileData[0] ); } } die(); config.php code: null); var $use_resume = true; var $use_autoexit = false; var $use_auth = false; var $filename = null; var $mime = null; var $bufsize = 2048; var $seek_start = 0; var $seek_end = -1; /** * Total bandwidth has been used for this download * @var int */ var $bandwidth = 0; /** * Speed limit * @var float */ var $speed = 0; /*------------------- | Download Function | -------------------*/ /** * Check authentication and get seek position * @return bool **/ function initialize() { global $HTTP_SERVER_VARS; if ($this->use_auth) //use authentication { if (!$this->_auth()) //no authentication { header('WWW-Authenticate: Basic realm="Please enter your username and password"'); header('HTTP/1.0 401 Unauthorized'); header('status: 401 Unauthorized'); if ($this->use_autoexit) exit(); return false; } } if ($this->mime == null) $this->mime = "application/octet-stream"; //default mime if (isset($_SERVER['HTTP_RANGE']) || isset($HTTP_SERVER_VARS['HTTP_RANGE'])) { if (isset($HTTP_SERVER_VARS['HTTP_RANGE'])) $seek_range = substr($HTTP_SERVER_VARS['HTTP_RANGE'] , strlen('bytes=')); else $seek_range = substr($_SERVER['HTTP_RANGE'] , strlen('bytes=')); $range = explode('-',$seek_range); if ($range[0] > 0) { $this->seek_start = intval($range[0]); } if ($range[1] > 0) $this->seek_end = intval($range[1]); else $this->seek_end = -1; if (!$this->use_resume) { $this->seek_start = 0; //header("HTTP/1.0 404 Bad Request"); //header("Status: 400 Bad Request"); //exit; //return false; } else { $this->data_section = 1; } } else { $this->seek_start = 0; $this->seek_end = -1; } return true; } /** * Send download information header **/ function header($size,$seek_start=null,$seek_end=null) { header('Content-type: ' . $this->mime); header('Content-Disposition: attachment; filename="' . $this->filename. '"'); header('Last-Modified: ' . date('D, d M Y H:i:s \G\M\T' , $this->data_mod)); if ($this->data_section && $this->use_resume){ header("HTTP/1.0 206 Partial Content"); header("Status: 206 Partial Content"); header('Accept-Ranges: bytes'); header("Content-Range: bytes $seek_start-$seek_end/$size"); header("Content-Length: " . 
($seek_end - $seek_start + 1)); } else { header("Content-Length: $size"); } } function download_ex($size){ if (!$this->initialize()) return false; ignore_user_abort(true); //Use seek end here if ($this->seek_start > ($size - 1)) $this->seek_start = 0; if ($this->seek_end <= 0) $this->seek_end = $size - 1; $this->header($size,$seek,$this->seek_end); $this->data_mod = time(); return true; } /** * Start download * @return bool **/ function download( $file_name = null ) { if (!$this->initialize()) return false; $seek = $this->seek_start; $speed = $this->speed; $bufsize = $this->bufsize; $packet = 1; //do some clean up @ob_end_clean(); $old_status = ignore_user_abort(true); @set_time_limit(0); $this->bandwidth = 0; $size = $this->data_len; if ($this->data_type == 0){ $size = filesize($this->data); if ($seek > ($size - 1)) $seek = 0; if ( $file_name == null ) $this->filename = basename( $this->data ); else $this->filename = $file_name; $res = fopen($this->data,'rb'); if ($seek) fseek($res , $seek); if ($this->seek_end < $seek) $this->seek_end = $size - 1; $this->header($size,$seek,$this->seek_end); //always use the last seek $size = $this->seek_end - $seek + 1; while (!(connection_aborted() || connection_status() == 1) && $size > 0) { if ($size < $bufsize) { echo fread($res , $size); $this->bandwidth += $size; } else { echo fread($res , $bufsize); $this->bandwidth += $bufsize; } $size -= $bufsize; flush(); if ($speed > 0 && ($this->bandwidth > $speed*$packet*1024)) { sleep(1); $packet++; } } fclose($res); } elseif ($this->data_type == 1) //download from a string { if ($seek > ($size - 1)) $seek = 0; if ($this->seek_end < $seek) $this->seek_end = $this->data_len - 1; $this->data = substr($this->data , $seek , $this->seek_end - $seek + 1); //if ($this->filename == null) $this->filename = time(); $size = strlen($this->data); $this->header($this->data_len,$seek,$this->seek_end); while (!connection_aborted() && $size > 0) { if ($size < $bufsize) { $this->bandwidth += $size; } else { $this->bandwidth += $bufsize; } echo substr($this->data , 0 , $bufsize); $this->data = substr($this->data , $bufsize); $size -= $bufsize; flush(); if ($speed > 0 && ($this->bandwidth > $speed*$packet*1024)) { sleep(1); $packet++; } } } else if ($this->data_type == 2) { //just send a redirect header header('location: ' . 
$this->data); } if ($this->use_autoexit) exit(); //restore old status ignore_user_abort($old_status); @set_time_limit(ini_get("max_execution_time")); return true; } function set_byfile($dir) { if (is_readable($dir) && is_file($dir)) { $this->data_len = 0; $this->data = $dir; $this->data_type = 0; $this->data_mod = filemtime($dir); return true; } else return false; } function set_bydata($data) { if ($data == '') return false; $this->data = $data; $this->data_len = strlen($data); $this->data_type = 1; $this->data_mod = time(); return true; } function set_byurl($data) { $this->data = $data; $this->data_len = 0; $this->data_type = 2; return true; } function set_lastmodtime($time) { $time = intval($time); if ($time <= 0) $time = time(); $this->data_mod = $time; } /** * Check authentication * @return bool **/ function _auth() { if (!isset($_SERVER['PHP_AUTH_USER'])) return false; if (isset($this->handler['auth']) && function_exists($this->handler['auth'])) { return $this->handler['auth']('auth' , $_SERVER['PHP_AUTH_USER'],$_SERVER['PHP_AUTH_PW']); } else return true; //you must use a handler } } ?> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250179,250179#msg-250179 From nunomagalhaes at eu.ipp.pt Sun May 18 14:53:06 2014 From: nunomagalhaes at eu.ipp.pt (=?UTF-8?Q?Nuno_Magalh=C3=A3es?=) Date: Sun, 18 May 2014 15:53:06 +0100 Subject: problem with php script In-Reply-To: References: Message-ID: Hi, Maybe you should read this first: http://catb.org/~esr/faqs/smart-questions.html This is an nginx forum, not a PHP forum. If you're running copy/pasted code and don't know why if fails, maybe it shouldn't be in a live server? You could at the very least remove all the comments and, perhaps, say why it "does not work". Have you checked the logs (the ones you commented out)? How about an nginx version? Using pastebin? Your location / seems odd and you should consider using try_files (documented in the nginx wiki) instead. You set all indexes there as .htm(l) and then define the PHP pass as index.php - i'm not sure if it matters but it's odd and you don't seem to have any index.html in your mail dump. Nor do i know how your SCRIPT_FILENAME reacts to an E:/. Running quickly through your PHP you seem to use HTTP Basic Auth - without configuring it in nginx. Also documented. Cheers, Nuno From gaoping at richinfo.cn Mon May 19 00:43:04 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Mon, 19 May 2014 08:43:04 +0800 Subject: nginx1.4.7 ip_hash load balancing strategy References: <2014051615293074075711@richinfo.cn>, <20140516112226.GV1849@mdounin.ru> Message-ID: <201405190843041837073@richinfo.cn> Thanks for your reply, Shafreeck Sea Provides solutions that is just what I need. gaoping at richinfo.cn From: Maxim Dounin Date: 2014-05-16 19:22 To: nginx Subject: Re: nginx1.4.7 ip_hash load balancing strategy Hello! On Fri, May 16, 2014 at 03:29:31PM +0800, gaoping at richinfo.cn wrote: > hi > When my nginx is below this structure,Nginx access to the IP > proxy server by IP, then nginx ip_hash load balancing, will all > requests are routed to the same tomcat, do not know if you have > what good method to solve the. > The ip_hash function can be introduced into a custom IP. > Prerequisite: can not change the existing structure, and nginx > access to the IP is only aproxy server, and IP client request is > not true, As long as upstream{} blocks on both frontends are identical, and client IP stays the same, the same upstream server will be selected by ip_hash. 
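For illustration, the "identical upstream{} blocks" mentioned above would take roughly the following shape on both frontends; the upstream name and the backend addresses are placeholders, not values taken from this thread:

    upstream tomcat_backend {
        ip_hash;
        server 192.0.2.11:8080;
        server 192.0.2.12:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://tomcat_backend;
        }
    }

Note that ip_hash keys on the client address as nginx sees it, so if every request arrives through a single intermediate proxy, every request hashes to the same backend until the real client address is restored.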
-- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaoping at richinfo.cn Mon May 19 00:44:23 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Mon, 19 May 2014 08:44:23 +0800 Subject: nginx1.4.7 ip_hash load balancing strategy References: <2014051615293074075711@richinfo.cn>, <20140516112226.GV1849@mdounin.ru> Message-ID: <201405190844234383825@richinfo.cn> sorry, Aleksandar Lazic Provides solutions that is just what I need. gaoping at richinfo.cn From: Maxim Dounin Date: 2014-05-16 19:22 To: nginx Subject: Re: nginx1.4.7 ip_hash load balancing strategy Hello! On Fri, May 16, 2014 at 03:29:31PM +0800, gaoping at richinfo.cn wrote: > hi > When my nginx is below this structure,Nginx access to the IP > proxy server by IP, then nginx ip_hash load balancing, will all > requests are routed to the same tomcat, do not know if you have > what good method to solve the. > The ip_hash function can be introduced into a custom IP. > Prerequisite: can not change the existing structure, and nginx > access to the IP is only aproxy server, and IP client request is > not true, As long as upstream{} blocks on both frontends are identical, and client IP stays the same, the same upstream server will be selected by ip_hash. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaoping at richinfo.cn Mon May 19 00:47:47 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Mon, 19 May 2014 08:47:47 +0800 Subject: nginx1.4.7 ip_hash load balancing strategy References: <2014051615293074075711@richinfo.cn>, , <2014051616170478294713@richinfo.cn>, Message-ID: <201405190847469058069@richinfo.cn> ??????? ??????????????????????????? ??????nginx???????????????????????web?????Aleksandar Lazic ????????????? gaoping at richinfo.cn From: Shafreeck Sea Date: 2014-05-16 18:55 To: nginx at nginx.org Subject: Re: Re: nginx1.4.7 ip_hash load balancing strategy ???ip_hash ? ?????????????? 2014-05-16 16:17 GMT+08:00 gaoping at richinfo.cn : ngx_http_upstream_module Load balancing algorithm upstream the instruction ip_hash; gaoping at richinfo.cn From: Shafreeck Sea Date: 2014-05-16 16:14 To: nginx at nginx.org Subject: Re: nginx1.4.7 ip_hash load balancing strategy why ip_hash ? 2014-05-16 15:29 GMT+08:00 gaoping at richinfo.cn : hi When my nginx is below this structure,Nginx access to the IP proxy server by IP, then nginx ip_hash load balancing, will all requests are routed to the same tomcat, do not know if you have what good method to solve the. The ip_hash function can be introduced into a custom IP. Prerequisite: can not change the existing structure, and nginx access to the IP is only aproxy server, and IP client request is not true, gaoping at richinfo.cn _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx??(05-16-16(05-19-08-44-38).png Type: image/png Size: 12535 bytes Desc: not available URL: From no-reply at alllangin.com Mon May 19 03:37:03 2014 From: no-reply at alllangin.com (Demetr) Date: Mon, 19 May 2014 07:37:03 +0400 Subject: =?UTF-8?B?aGVscCDQutCw0Log0L/QvtC70YPRh9C40YLRjCDRgdC/0LjRgdC+0Log0LrRg9C6?= Message-ID: <53797C5F.2060703@alllangin.com> ??? ????? ?? ???? ????? ?????? ???????????? nginx ? ?????????????? ??? ? ??????????. ?????????? ???????????? ??? ????. ?? ???? ???????? ?????? ??? ? nginx(?? ???????). ?? ??????????? ???????? ? ?????? ???????????. ???????, ???????. worker_processes 4; error_log /etc/nginx/error.log info; events { worker_connections 4096; } http { include mime.types; default_type application/octet-stream; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; client_header_buffer_size 1k; large_client_header_buffers 4 4k; gzip on; gzip_min_length 1100; gzip_buffers 4 8k; gzip_types text/plain; output_buffers 1 32k; postpone_output 1460; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; #proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=cache:30m max_size=1G; #proxy_temp_path /var/lib/nginx/proxy 1 2; #proxy_ignore_headers Expires Cache-Control; #proxy_cache_use_stale error timeout invalid_header http_502; #proxy_cache_bypass $cookie_session; #proxy_no_cache $cookie_session; #keepalive_timeout 65; #limit_zone by_vhost $binary_remote_addr 10m; #limit_conn by_vhost 50; server { listen 80; server_name 127.0.0.1 www.127.0.0.1.com; #proxy_cache_key "$request_method|$http_if_modified_since|$http_if_none_match|$host|$request_uri|$cookie_PHPSESSID|$cookie_JSSESSID"; #proxy_cache_valid 200 301 302 304 5m; #proxy_hide_header "Set-Cookie"; #proxy_ignore_headers "Cache-Control" "Expires"; location / { rewrite ^/blog/(.*)$ /blog.php?pic=$1 last; root /var/www/html/cat; index index.php index.html index.htm; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { rewrite ^/blog/(.*)$ /blog.php?pic=$1 last; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; # proxy_set_header X-Forwarded-For $remote_addr; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; expires max; # add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # proxy_pass_header Set-Cookie; # proxy_set_header Cookie $http_cookie; proxy_pass http://127.0.0.1.com:8080; proxy_redirect default; # proxy_cache cache; client_max_body_size 10m; client_body_buffer_size 128k; client_body_temp_path /home/client_body_temp; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_temp_path /home/proxy_temp; } } } From no-reply at alllangin.com Mon May 19 03:40:31 2014 From: no-reply at alllangin.com (Demetr) Date: Mon, 19 May 2014 07:40:31 +0400 Subject: =?UTF-8?B?aGVscCDQvdGD0LbQvdC+INC/0LXRgNC10LTQsNGC0Ywg0LrRg9C60LggKG5lZWQg?= =?UTF-8?B?dG8gdHJhbnNmZXIgY29va2llcyk=?= In-Reply-To: <53797D07.5070801@alllangin.com> References: <53797D07.5070801@alllangin.com> Message-ID: <53797D2F.4070904@alllangin.com> ??? ????? ?? ???? ????? ?????? ???????????? nginx ? ?????????????? ??? ? ??????????. ?????????? ???????????? ??? ????. ?? ???? ???????? ?????? ??? ? nginx(?? ???????). ?? ??????????? ???????? ? ?????? ???????????. ???????, ???????. What I doing wrong? Give an example nginx configuration with Proxy cookie in the headlines. Need to proxy all cookies. 
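As a purely illustrative sketch of a proxied PHP location that keeps cookies flowing in both directions while still caching; it assumes a proxy_cache_path with keys_zone=cache has been defined, as in the commented-out lines of the posted config, and the backend port and cookie name are placeholders:

    location ~ \.php$ {
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;

        # Cookie (request) and Set-Cookie (response) headers are forwarded by
        # default; do not add "proxy_hide_header Set-Cookie" if clients need them.
        # Responses that carry Set-Cookie are not stored in the cache by default.

        proxy_cache        cache;
        proxy_cache_key    "$scheme$host$request_uri";
        proxy_cache_bypass $cookie_PHPSESSID;
        proxy_no_cache     $cookie_PHPSESSID;
    }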
I can not get a list of cookies in nginx (not statics). Please include in the config cache. Confused, thanks. worker_processes 4; error_log /etc/nginx/error.log info; events { worker_connections 4096; } http { include mime.types; default_type application/octet-stream; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; client_header_buffer_size 1k; large_client_header_buffers 4 4k; gzip on; gzip_min_length 1100; gzip_buffers 4 8k; gzip_types text/plain; output_buffers 1 32k; postpone_output 1460; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; #proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=cache:30m max_size=1G; #proxy_temp_path /var/lib/nginx/proxy 1 2; #proxy_ignore_headers Expires Cache-Control; #proxy_cache_use_stale error timeout invalid_header http_502; #proxy_cache_bypass $cookie_session; #proxy_no_cache $cookie_session; #keepalive_timeout 65; #limit_zone by_vhost $binary_remote_addr 10m; #limit_conn by_vhost 50; server { listen 80; server_name 127.0.0.1 www.127.0.0.1.com; #proxy_cache_key "$request_method|$http_if_modified_since|$http_if_none_match|$host|$request_uri|$cookie_PHPSESSID|$cookie_JSSESSID"; #proxy_cache_valid 200 301 302 304 5m; #proxy_hide_header "Set-Cookie"; #proxy_ignore_headers "Cache-Control" "Expires"; location / { rewrite ^/blog/(.*)$ /blog.php?pic=$1 last; root /var/www/html/cat; index index.php index.html index.htm; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { rewrite ^/blog/(.*)$ /blog.php?pic=$1 last; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; # proxy_set_header X-Forwarded-For $remote_addr; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; expires max; # add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # proxy_pass_header Set-Cookie; # proxy_set_header Cookie $http_cookie; proxy_pass http://127.0.0.1.com:8080; proxy_redirect default; # proxy_cache cache; client_max_body_size 10m; client_body_buffer_size 128k; client_body_temp_path /home/client_body_temp; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_temp_path /home/proxy_temp; } } } From mdounin at mdounin.ru Mon May 19 11:20:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 May 2014 15:20:26 +0400 Subject: =?UTF-8?B?UmU6IGhlbHAg0LrQsNC6INC/0L7Qu9GD0YfQuNGC0Ywg0YHQv9C40YHQvtC6INC6?= =?UTF-8?B?0YPQug==?= In-Reply-To: <53797C5F.2060703@alllangin.com> References: <53797C5F.2060703@alllangin.com> Message-ID: <20140519112026.GC1849@mdounin.ru> Hello! On Mon, May 19, 2014 at 07:37:03AM +0400, Demetr wrote: > ??? ????? ?? ???? First of all, you are using wrong list. This one is English, please don't try to ask questions in Russian here. If you want to write in Russian, please use nginx-ru@ mailing list. See here for details: http://nginx.org/ru/support.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon May 19 11:33:49 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 May 2014 15:33:49 +0400 Subject: nginx rewrites $request_method on error In-Reply-To: References: <4aba1b94facde69e294a4c8f35685db5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140519113349.GE1849@mdounin.ru> Hello! On Fri, May 16, 2014 at 01:25:57PM -0700, Yichun Zhang (agentzh) wrote: > Hello! > > On Thu, May 15, 2014 at 11:56 PM, kay wrote: > > Don't you think this is a bug? 
> > > > I think the NGINX core should prevent bad things from happen when > > 1. the user configures complicated things in his error_page targets, and > 2. the error page is initiated by nginx too early in the request > processing lifetime (like when processing the request header). > > Because it should be done in the nginx core, and I can do very little > on my 3rd-party modules side about this. > > Maxim Dounin: what is your opinion on this? First of all, I think that people should think before specifying error processing, especially complex error processing. It is known that using subrequest for errors generated early may cause request hang (and FengGu tried to provide a patch a while ago on nginx-devel@), and it will be eventually fixed. This is relatively low priority though, see above. -- Maxim Dounin http://nginx.org/ From frank.bonnet at esiee.fr Mon May 19 11:38:15 2014 From: frank.bonnet at esiee.fr (BONNET, Frank) Date: Mon, 19 May 2014 13:38:15 +0200 Subject: nginx VPS based ? Message-ID: Hello For internal purpose ( students projects ) I need to create several VPS LAMP like but running under nginx It needs PHP, MYSQL + LDAP auth I need to have one VPS per project but running on the same nginx instance the MYSQL server must also be shared ( one database per project ) Users must NOT have physical access to the server but must administer the space trhought a web admin interface. Authentication must be done trhu a LDAP auth process ( anonymous bind ) Any info links examples welcome Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon May 19 11:48:50 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 19 May 2014 12:48:50 +0100 Subject: nginx VPS based ? In-Reply-To: References: Message-ID: On 19 May 2014 12:38, "BONNET, Frank" wrote: > > Hello > > For internal purpose ( students projects ) Are /you/ the student in this context? -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at alllangin.com Mon May 19 12:00:49 2014 From: no-reply at alllangin.com (Demetr) Date: Mon, 19 May 2014 16:00:49 +0400 Subject: nginx VPS based ? In-Reply-To: References: Message-ID: <5379F271.3020805@alllangin.com> just need to enable the transfer of cookies. collect lua. in another it is possible? 19.05.2014 15:48, Jonathan Matthews ?????: > > On 19 May 2014 12:38, "BONNET, Frank" > wrote: > > > > Hello > > > > For internal purpose ( students projects ) > > Are /you/ the student in this context? > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.bonnet at esiee.fr Mon May 19 12:53:22 2014 From: frank.bonnet at esiee.fr (BONNET, Frank) Date: Mon, 19 May 2014 14:53:22 +0200 Subject: nginx VPS based ? In-Reply-To: <5379F271.3020805@alllangin.com> References: <5379F271.3020805@alllangin.com> Message-ID: I wrote an error ... I don't need VPS but server block ... sorry 2014-05-19 14:00 GMT+02:00 Demetr : > just need to enable the transfer of cookies. > > collect lua. > > in another it is possible? > > 19.05.2014 15:48, Jonathan Matthews ?????: > > On 19 May 2014 12:38, "BONNET, Frank" wrote: > > > > Hello > > > > For internal purpose ( students projects ) > > Are /you/ the student in this context? 
> > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From myselfasunder at gmail.com Mon May 19 13:46:34 2014 From: myselfasunder at gmail.com (Dustin Oprea) Date: Mon, 19 May 2014 09:46:34 -0400 Subject: nginx VPS based ? In-Reply-To: References: Message-ID: On May 19, 2014 7:38 AM, "BONNET, Frank" wrote: > > Hello > > For internal purpose ( students projects ) I need to create several VPS > LAMP like but running under nginx > > It needs PHP, MYSQL + LDAP auth > > I need to have one VPS per project but running on the same nginx instance > the MYSQL server must also be shared ( one database per project ) > > Users must NOT have physical access to the server but must administer > the space trhought a web admin interface. Authentication must be done > trhu a LDAP auth process ( anonymous bind ) > > Any info links examples welcome > > Thank you > > You can find plenty of links with a Google Search. Dustin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon May 19 14:47:38 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 19 May 2014 18:47:38 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: References: <2503762.FWEJDGKfCu@vbart-laptop> Message-ID: <13733424.ulmuhHEZPB@vbart-workstation> On Saturday 17 May 2014 11:28:33 newnovice wrote: > Can I proxy_pass on the way back also - and preserve all headers coming from > the upstream service? > I can try by using combination of add_header and $upstream_http_* variables. Also, some headers can be preserved by the proxy_pass_header directive. wbr, Valentin V. Bartenev From no-reply at alllangin.com Mon May 19 15:10:06 2014 From: no-reply at alllangin.com (Demetr) Date: Mon, 19 May 2014 19:10:06 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <13733424.ulmuhHEZPB@vbart-workstation> References: <2503762.FWEJDGKfCu@vbart-laptop> <13733424.ulmuhHEZPB@vbart-workstation> Message-ID: <537A1ECE.2000103@alllangin.com> thanks/ use the lua-module it is possible? 19.05.2014 18:47, Valentin V. Bartenev ?????: > I can try by using combination of add_header and $upstream_http_* variables. > > Also, some headers can be preserved by the proxy_pass_header directive. From agentzh at gmail.com Mon May 19 18:48:17 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 19 May 2014 11:48:17 -0700 Subject: nginx rewrites $request_method on error In-Reply-To: <20140519113349.GE1849@mdounin.ru> References: <4aba1b94facde69e294a4c8f35685db5.NginxMailingListEnglish@forum.nginx.org> <20140519113349.GE1849@mdounin.ru> Message-ID: Hello! On Mon, May 19, 2014 at 4:33 AM, Maxim Dounin wrote: > First of all, I think that people should think before specifying > error processing, especially complex error processing. > Unfortunately we cannot control what the users *think* :) Many users just try things away according to their intuition. 
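To make the pattern under discussion concrete, here is an invented, hedged sketch of an error generated early (a 400 raised while the request header is being parsed) whose error_page target itself triggers a subrequest, in this case through the auth_request module; the file path, location names and backend address are assumptions, and no claim is made that this exact combination misbehaves in any particular nginx version:

    error_page 400 /400.html;

    location = /400.html {
        # auth_request issues an access-phase subrequest before the static
        # error page is served: a subrequest inside an error_page target
        auth_request /subcheck;
        root /usr/share/nginx/html;     # assumes a 400.html exists here
    }

    location = /subcheck {
        internal;
        proxy_pass http://127.0.0.1:8080/check;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }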
I'd expect that we'll have to waste more time to explain to other users trying to do this in the near future (before it gets fixed). I already cannot remember exactly how many times my modules' users have reported such things in the past ;) IMHO, if nginx cannot handle a particular case, it should complain about it rather than hang forever. It is about usability :) > It is known that using subrequest for errors generated early may > cause request hang (and FengGu tried to provide a patch > a while ago on nginx-devel@), and it will be eventually fixed. That's cool. Looking forward to that. Regards, -agentzh From webmaster at cosmicperl.com Mon May 19 18:49:56 2014 From: webmaster at cosmicperl.com (Lyle) Date: Mon, 19 May 2014 19:49:56 +0100 Subject: Pretty printer for the Nginx config? Message-ID: <537A5254.80803@cosmicperl.com> Is there a pretty printer for this file? It doesn't seem to be in any particular language. I tried a JS beautifier but that broke things. Lyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt at x64architecture.com Mon May 19 18:59:47 2014 From: kurt at x64architecture.com (Kurt Cancemi) Date: Mon, 19 May 2014 14:59:47 -0400 Subject: Pretty printer for the Nginx config? In-Reply-To: <537A5254.80803@cosmicperl.com> References: <537A5254.80803@cosmicperl.com> Message-ID: Hello, Would thissuit your needs? And the nginx configuration files are written in there own language/syntax. On Mon, May 19, 2014 at 2:49 PM, Lyle wrote: > Is there a pretty printer for this file? It doesn't seem to be in any > particular language. I tried a JS beautifier but that broke things. > > > Lyle > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 19 19:06:06 2014 From: nginx-forum at nginx.us (samingrassia) Date: Mon, 19 May 2014 15:06:06 -0400 Subject: Quick question about using kill -USR1 to recreate access.log Message-ID: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> Thanks to everyone in advance! I have a cron that runs the following: mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME kill -USR1 `cat $NGINX_PID` My questions is during time between the mv and the kill, is there any log writes that are being discarded or are they being stacked in memory and dumped into the new access.log after it is recreated? Thanks! Sam Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250214,250214#msg-250214 From francis at daoine.org Mon May 19 19:36:51 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 19 May 2014 20:36:51 +0100 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140519193651.GJ16942@daoine.org> On Mon, May 19, 2014 at 03:06:06PM -0400, samingrassia wrote: Hi there, > mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME > kill -USR1 `cat $NGINX_PID` > > My questions is during time between the mv and the kill, is there any log > writes that are being discarded or are they being stacked in memory and > dumped into the new access.log after it is recreated? 
What happens when you do mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME and then issue a http request of your nginx server, before the kill? Do you see the log line go into $NGINX_ACCESS_LOG; onto the end of $ACCESS_LOG_DROPBOX/$LOG_FILENAME; or disappear without being written anywhere? I'd expect the first option not to happen; the second option to happen if the "mv" is a "rename"; and the third option to happen if the "mv" is a "copy and delete". So make sure that your "mv" is a "rename", and you'll be fine. Actually, I'd expect the first option to happen if you are using variables in your log file name, according to its documentation. f -- Francis Daly francis at daoine.org From lordnynex at gmail.com Mon May 19 19:53:55 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Mon, 19 May 2014 12:53:55 -0700 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: <20140519193651.GJ16942@daoine.org> References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> <20140519193651.GJ16942@daoine.org> Message-ID: The name of the file is really sort of irrelevant. The file descriptor will still point at $ACCESS_LOG_DROPBOX/$LOG_FILENAME. Any log lines between MV and KILL *should* still be written there. Why not use logrotate? Why not use nginx reload? Why not use HUP? On Mon, May 19, 2014 at 12:36 PM, Francis Daly wrote: > On Mon, May 19, 2014 at 03:06:06PM -0400, samingrassia wrote: > > Hi there, > > > mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME > > kill -USR1 `cat $NGINX_PID` > > > > My questions is during time between the mv and the kill, is there any log > > writes that are being discarded or are they being stacked in memory and > > dumped into the new access.log after it is recreated? > > What happens when you do > > mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME > > and then issue a http request of your nginx server, before the kill? > > Do you see the log line go into $NGINX_ACCESS_LOG; onto the end of > $ACCESS_LOG_DROPBOX/$LOG_FILENAME; or disappear without being written > anywhere? > > I'd expect the first option not to happen; the second option to happen > if the "mv" is a "rename"; and the third option to happen if the "mv" > is a "copy and delete". So make sure that your "mv" is a "rename", > and you'll be fine. > > Actually, I'd expect the first option to happen if you are using variables > in your log file name, according to its documentation. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 19 21:32:07 2014 From: nginx-forum at nginx.us (newnovice) Date: Mon, 19 May 2014 17:32:07 -0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <13733424.ulmuhHEZPB@vbart-workstation> References: <13733424.ulmuhHEZPB@vbart-workstation> Message-ID: I tried with: " proxy_pass_header Connection;" -- which should do this according to documentation: "Permits passing otherwise disabled header fields from a proxied server to a client." (from upstream_server to client) But this did not pass thru the Connection header coming from the upstream_server to the client of this proxy. Valentin, can you please elaborate on how you suggest doing this: "I can try by using combination of add_header and $upstream_http_* variables." 
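For reference, the combination being quoted (add_header together with an $upstream_http_* variable) looks roughly like this; X-Backend-Id is a hypothetical header name used only to show the mechanism, and hop-by-hop headers such as Connection are managed by nginx itself rather than copied this way, which is the point made elsewhere in the thread:

    location / {
        proxy_pass http://127.0.0.1:8080;

        # $upstream_http_<name> holds the matching response header received
        # from the proxied server; add_header re-emits it to the client
        add_header X-Backend-Id $upstream_http_x_backend_id;
    }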
??? are you saying this 'variable' can be developed to pass on the upstream_http_connection header in the response ??? Thanks for your help... Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Saturday 17 May 2014 11:28:33 newnovice wrote: > > Can I proxy_pass on the way back also - and preserve all headers > coming from > > the upstream service? > > > > I can try by using combination of add_header and $upstream_http_* > variables. > > Also, some headers can be preserved by the proxy_pass_header > directive. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237386,250217#msg-250217 From webmaster at cosmicperl.com Mon May 19 23:12:25 2014 From: webmaster at cosmicperl.com (Lyle) Date: Tue, 20 May 2014 00:12:25 +0100 Subject: Pretty printer for the Nginx config? In-Reply-To: References: <537A5254.80803@cosmicperl.com> Message-ID: <537A8FD9.30802@cosmicperl.com> It's kind of along the right lines, but I need to keep comments, etc. Just have consistent whitespace more than anything else. Lyle On 19/05/2014 19:59, Kurt Cancemi wrote: > Hello, > > Would this > > suit your needs? And the nginx configuration files are written in > there own language/syntax. > > > On Mon, May 19, 2014 at 2:49 PM, Lyle > wrote: > > Is there a pretty printer for this file? It doesn't seem to be in > any particular language. I tried a JS beautifier but that broke > things. > > > Lyle > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From myselfasunder at gmail.com Mon May 19 23:17:48 2014 From: myselfasunder at gmail.com (Dustin Oprea) Date: Mon, 19 May 2014 19:17:48 -0400 Subject: Pretty printer for the Nginx config? In-Reply-To: <537A8FD9.30802@cosmicperl.com> References: <537A5254.80803@cosmicperl.com> <537A8FD9.30802@cosmicperl.com> Message-ID: On Mon, May 19, 2014 at 7:12 PM, Lyle wrote: > It's kind of along the right lines, but I need to keep comments, etc. > Just have consistent whitespace more than anything else. > > You might write your own and make it available. Dustin > > Lyle > > > On 19/05/2014 19:59, Kurt Cancemi wrote: > > Hello, > > Would thissuit your needs? And the nginx configuration files are written in there own > language/syntax. > > > On Mon, May 19, 2014 at 2:49 PM, Lyle wrote: > >> Is there a pretty printer for this file? It doesn't seem to be in any >> particular language. I tried a JS beautifier but that broke things. >> >> >> Lyle >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lordnynex at gmail.com Mon May 19 23:37:24 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Mon, 19 May 2014 16:37:24 -0700 Subject: Pretty printer for the Nginx config? In-Reply-To: References: <537A5254.80803@cosmicperl.com> <537A8FD9.30802@cosmicperl.com> Message-ID: I do something similar to this in some backend config management code I wrote awhile back. I've created a gist with some code snippets to help understand how I'm reading/writing the config. https://gist.github.com/lordnynex/1a568e54b5d1af38f549 Obviously this is not what you're looking for, and pretty printing feels less pretty when your code is formatting it, but this sufficiently meets your criteria with a bit of work. Side note, I am not the original author of the grammar. I'd love to give him credit, but it was so long ago I can't find the original. On Mon, May 19, 2014 at 4:17 PM, Dustin Oprea wrote: > On Mon, May 19, 2014 at 7:12 PM, Lyle wrote: > >> It's kind of along the right lines, but I need to keep comments, etc. >> Just have consistent whitespace more than anything else. >> >> > You might write your own and make it available. > > > > Dustin > > > >> >> Lyle >> >> >> On 19/05/2014 19:59, Kurt Cancemi wrote: >> >> Hello, >> >> Would thissuit your needs? And the nginx configuration files are written in there own >> language/syntax. >> >> >> On Mon, May 19, 2014 at 2:49 PM, Lyle wrote: >> >>> Is there a pretty printer for this file? It doesn't seem to be in any >>> particular language. I tried a JS beautifier but that broke things. >>> >>> >>> Lyle >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From myselfasunder at gmail.com Mon May 19 23:43:10 2014 From: myselfasunder at gmail.com (Dustin Oprea) Date: Mon, 19 May 2014 19:43:10 -0400 Subject: Pretty printer for the Nginx config? In-Reply-To: References: <537A5254.80803@cosmicperl.com> <537A8FD9.30802@cosmicperl.com> Message-ID: On Mon, May 19, 2014 at 7:37 PM, Lord Nynex wrote: > I do something similar to this in some backend config management code I > wrote awhile back. I've created a gist with some code snippets to help > understand how I'm reading/writing the config. > > https://gist.github.com/lordnynex/1a568e54b5d1af38f549 > > Obviously this is not what you're looking for, and pretty printing feels > less pretty when your code is formatting it, but this sufficiently meets > your criteria with a bit of work. > > Side note, I am not the original author of the grammar. I'd love to give > him credit, but it was so long ago I can't find the original. > > That's great, @LordNynex (I'm not the OP)... Though I have to admit that memories of my days at a PERL-based transaction-processor came screeching back when I saw the code. > > On Mon, May 19, 2014 at 4:17 PM, Dustin Oprea wrote: > >> On Mon, May 19, 2014 at 7:12 PM, Lyle wrote: >> >>> It's kind of along the right lines, but I need to keep comments, etc. 
>>> Just have consistent whitespace more than anything else. >>> >>> >> You might write your own and make it available. >> >> >> >> Dustin >> >> >> >>> >>> Lyle >>> >>> >>> On 19/05/2014 19:59, Kurt Cancemi wrote: >>> >>> Hello, >>> >>> Would thissuit your needs? And the nginx configuration files are written in there own >>> language/syntax. >>> >>> >>> On Mon, May 19, 2014 at 2:49 PM, Lyle wrote: >>> >>>> Is there a pretty printer for this file? It doesn't seem to be in any >>>> particular language. I tried a JS beautifier but that broke things. >>>> >>>> >>>> Lyle >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 20 04:58:23 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 20 May 2014 08:58:23 +0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: References: <13733424.ulmuhHEZPB@vbart-workstation> Message-ID: <1789409.KmxV9qQg3k@vbart-laptop> On Monday 19 May 2014 17:32:07 newnovice wrote: > I tried with: " proxy_pass_header Connection;" -- which > should do this according to documentation: "Permits passing otherwise > disabled header fields from a proxied server to a client." > (from upstream_server to client) No, it shouldn't. If you will follow the link from "otherwise disabled", you will see the full list of such headers, that can be specified. I wrote in general, not about the "Connection" header. > > But this did not pass thru the Connection header coming from the > upstream_server to the client of this proxy. > > Valentin, can you please elaborate on how you suggest doing this: "I can try Sorry, typo: s/I can/You can/ > by using combination of add_header and $upstream_http_* variables." ??? > > are you saying this 'variable' can be developed to pass on the > upstream_http_connection header in the response ??? http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_http_ wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue May 20 06:04:57 2014 From: nginx-forum at nginx.us (newnovice) Date: Tue, 20 May 2014 02:04:57 -0400 Subject: Dropped https client connection doesn't drop backend proxy_pass connection In-Reply-To: <1789409.KmxV9qQg3k@vbart-laptop> References: <1789409.KmxV9qQg3k@vbart-laptop> Message-ID: <253dfc161590e396b647f549ad434b3c.NginxMailingListEnglish@forum.nginx.org> 1. Where can i get a full list of these disabled headers: "By default, nginx does not pass the header fields ?Date?, ?Server?, ?X-Pad?, and ?X-Accel-...? from the response of a proxied server to a client. The proxy_hide_header directive sets additional fields that will not be passed. 
If, on the contrary, the passing of fields needs to be permitted, the proxy_pass_header directive can be used." 2. Can you suggest how I can accomplish preserving and passing on Connection and Keep-Alive headers from upstream server to client. I am not sure I follow: "using combination of add_header and $upstream_http_* variables" Thanks a lot! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237386,250224#msg-250224 From es12b1001 at iith.ac.in Tue May 20 06:35:31 2014 From: es12b1001 at iith.ac.in (Adarsh Pugalia) Date: Tue, 20 May 2014 12:05:31 +0530 Subject: Can I resuse persistent connection established in server configuration in different location configurations ? Message-ID: Hi, I am new to nginx. I would like to know if its possible to reuse persistent connections in different location modules. Suppose I establish a connection through and object through my server configuration module. Can i use this object in my different location modules? If yes, then how? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue May 20 07:01:45 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 20 May 2014 09:01:45 +0200 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> <20140519193651.GJ16942@daoine.org> Message-ID: On Mon, May 19, 2014 at 9:53 PM, Lord Nynex wrote: > > The name of the file is really sort of irrelevant. The file descriptor > will still point at $ACCESS_LOG_DROPBOX/$LOG_FILENAME. Any log lines > between MV and KILL *should* still be written there. > > Why not use logrotate? > ?Th?ere has been no precision on *why* the user wanted to mv the log file somewhere else. Stating that is for log rotation/storage purpose is speculation. Why not use nginx reload? Why not use HUP? > ?You? ?are repeating yourself, as the first usually equals the second (the service 'reload' command is usually a wrap-up of the -HUP signal)? As Francis suggested, there should be special care about what the moving command will do: The docs abut controlling nginx say that the old log file will remain opened until the -USR1 signal closes file descriptors pointing to it. - A local 'mv' will keep the same inode, thus only renaming the file and keeping opened file descriptors on it intact. You want that since all incoming log messages will continue to be sent to this file until the -USR1 signal switches to a new one. - On the contrary, a remote 'mv' (ie moving file to network storage) will copy and *delete* the file, making the file descriptor opened by nginx invalid. The docs do not say what happens to log messages trying to be sent to invalid fd, although as Francis suggest, it might be simply discarded. Maybe are there some buffers FIFO-like n front of the fd that would retain some of the messages? Pure speculation. I have not had a look to the sources, if someone is around to provide us with that information (or better: put a wuick word on it in the docs), I would be glad. :o) ? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 20 07:22:39 2014 From: nginx-forum at nginx.us (shahin-slt) Date: Tue, 20 May 2014 03:22:39 -0400 Subject: nginx config Message-ID: <16d10d42e5c3ec5cc43aaa4606fe80d2.NginxMailingListEnglish@forum.nginx.org> hi i'm looking for best nginx config that work with php scripts. please help me. 
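There is no single best configuration, but a commonly seen minimal shape for serving PHP through FastCGI is sketched below; the server_name, the root path and the assumption that php-fpm listens on 127.0.0.1:9000 are placeholders to adapt:

    server {
        listen 80;
        server_name example.com;

        root  /var/www/html;
        index index.php index.html;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            try_files $uri =404;       # do not pass non-existent scripts to php-fpm
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }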
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250229,250229#msg-250229 From vbart at nginx.com Tue May 20 07:37:54 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 20 May 2014 11:37:54 +0400 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2112228.7ZY7ayggPC@vbart-workstation> On Tuesday 20 May 2014 09:01:45 B.R. wrote: > On Mon, May 19, 2014 at 9:53 PM, Lord Nynex wrote: > > The name of the file is really sort of irrelevant. The file descriptor > > will still point at $ACCESS_LOG_DROPBOX/$LOG_FILENAME. Any log lines > > between MV and KILL *should* still be written there. > > > > Why not use logrotate? > > ?Th?ere has been no precision on *why* the user wanted to mv the log file > somewhere else. Stating that is for log rotation/storage purpose is > speculation. > > Why not use nginx reload? Why not use HUP? > > > ?You? > > ?are repeating yourself, as the first usually equals the second (the > service 'reload' command is usually a wrap-up of the -HUP signal)? > > As Francis suggested, there should be special care about what the moving > command will do: > The docs abut controlling nginx > say that the old log file > will remain opened until the -USR1 signal closes > file descriptors pointing to it. > - A local 'mv' will keep the same inode, thus only renaming the file and > keeping opened file descriptors on it intact. You want that since all > incoming log messages will continue to be sent to this file until the -USR1 > signal switches to a new one. > - On the contrary, a remote 'mv' (ie moving file to network storage) will > copy and *delete* the file, making the file descriptor opened by nginx > invalid. [..] Deleting a file doesn't make file descriptor "invalid". It's valid and the file actually exists on file system till there is at least one descriptor pointing to that file. wbr, Valentin V. Bartenev From reallfqq-nginx at yahoo.fr Tue May 20 07:49:42 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 20 May 2014 09:49:42 +0200 Subject: nginx config In-Reply-To: <16d10d42e5c3ec5cc43aaa4606fe80d2.NginxMailingListEnglish@forum.nginx.org> References: <16d10d42e5c3ec5cc43aaa4606fe80d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: http://www.catb.org/esr/faqs/smart-questions.html --- *B. R.* On Tue, May 20, 2014 at 9:22 AM, shahin-slt wrote: > hi > i'm looking for best nginx config that work with php scripts. please help > me. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250229,250229#msg-250229 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laursen at oxygen.net Tue May 20 07:52:09 2014 From: laursen at oxygen.net (Lasse Laursen) Date: Tue, 20 May 2014 09:52:09 +0200 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <542E9434-25BB-4A12-BF82-0687C6B030B0@oxygen.net> Alternatively have a look here: http://www.cambus.net/log-rotation-directly-within-nginx-configuration-file/ Kind regards Lasse Laursen > On 19 May 2014, at 21:06, "samingrassia" wrote: > > Thanks to everyone in advance! 
> > I have a cron that runs the following: > > mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME > kill -USR1 `cat $NGINX_PID` > > My question is: during the time between the mv and the kill, are there any log > writes that are being discarded, or are they being stacked in memory and > dumped into the new access.log after it is recreated? > > Thanks! > > Sam > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250214,250214#msg-250214 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From vbart at nginx.com Tue May 20 08:04:33 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 20 May 2014 12:04:33 +0400 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: <542E9434-25BB-4A12-BF82-0687C6B030B0@oxygen.net> References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> <542E9434-25BB-4A12-BF82-0687C6B030B0@oxygen.net> Message-ID: <11772455.IbzMiKDybx@vbart-workstation> On Tuesday 20 May 2014 09:52:09 Lasse Laursen wrote: > Alternatively have a look here: > http://www.cambus.net/log-rotation-directly-within-nginx-configuration-file > / > [..] This is a bad way to solve the problem. 1. It introduces two additional open()/close() system calls per access_log per request (not to mention all the regexp and variable processing); 2. It unnecessarily complicates the nginx configuration, which is intended to stay clear and simple. What is the problem with log rotation tools? Why is someone even trying to avoid using them? wbr, Valentin V. Bartenev From reallfqq-nginx at yahoo.fr Tue May 20 10:49:25 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 20 May 2014 12:49:25 +0200 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: <2112228.7ZY7ayggPC@vbart-workstation> References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> <2112228.7ZY7ayggPC@vbart-workstation> Message-ID: On Tue, May 20, 2014 at 9:37 AM, Valentin V. Bartenev wrote: > Deleting a file doesn't make file descriptor "invalid". It's valid and the > file actually exists on file system till there is at least one descriptor > pointing to that file. > Thanks for that Valentin, I learned something today. I read http://stackoverflow.com/questions/2028874/what-happens-to-an-open-file-handler-on-linux-if-the-pointed-file-gets-moved-de#2031100 As I understand it, all log messages going to a moved file will still be printed into it. So if I move /var/log/nginx/access.log to /foo/bar.log, all log messages will continue to be printed in /foo/bar.log, until I send the USR1 signal to nginx (either master or workers), which will then close the file descriptor (thus effectively deleting /var/log/nginx/access.log, which was marked for deletion until then) and will create a new file in /var/log/nginx/access.log. Am I right? I suspect the file does not really move out of /var/log/nginx if at least one fd is open on it. I suspect then that there is a symlink created in /foo/bar.log pointing to the old file until it is effectively moved. Does the same happen if the file is moved to a remote disk (i.e. does the fact that a file is moved through copy+delete rather than rename have any impact?) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Tue May 20 11:33:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 May 2014 15:33:07 +0400 Subject: Quick question about using kill -USR1 to recreate access.log In-Reply-To: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> References: <65023f2e14bf1cf76f1e530874e319ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140520113307.GS1849@mdounin.ru> Hello! On Mon, May 19, 2014 at 03:06:06PM -0400, samingrassia wrote: > Thanks to everyone in advance! > > I have a cron that runs the following: > > mv $NGINX_ACCESS_LOG $ACCESS_LOG_DROPBOX/$LOG_FILENAME > kill -USR1 `cat $NGINX_PID` > > My question is: during the time between the mv and the kill, are there any log > writes that are being discarded, or are they being stacked in memory and > dumped into the new access.log after it is recreated? Unless you are trying to move logs to a different filesystem, logging will continue to the old file till USR1 is processed. From nginx's point of view, the "mv" command does nothing - as nginx has an open file descriptor, it will continue to write to it, and log lines will appear in the (old) file - the file which now has a new name. After USR1 nginx will reopen the log, and will continue further logging to a new file. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue May 20 15:27:14 2014 From: nginx-forum at nginx.us (samgujrat1984) Date: Tue, 20 May 2014 11:27:14 -0400 Subject: Unable to cache long cache url In-Reply-To: <20140518084153.GH16942@daoine.org> References: <20140518084153.GH16942@daoine.org> Message-ID: <673f604e40bd14323b705cbeca47f8f9.NginxMailingListEnglish@forum.nginx.org> Francis Daly, Yes, I am trying the same functionality as you mentioned. I want to serve the 1st request from the backend weblogic application and after that I want to serve the same request for another hour from the nginx cache. Thanks! Sam Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250126,250241#msg-250241 From es12b1001 at iith.ac.in Wed May 21 09:22:50 2014 From: es12b1001 at iith.ac.in (Adarsh Pugalia) Date: Wed, 21 May 2014 14:52:50 +0530 Subject: Reuse of resources. Message-ID: Hi, I am just beginning with nginx and had a problem. Is there any way to reuse resources allocated across different modules, or across multiple invocations of the same module? For example, suppose I initialize a variable in module1: can I reuse that variable somehow in module2, or maybe reuse that variable in module1 when another request invokes module1? Is it possible? Thanks in advance. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 21 11:57:00 2014 From: nginx-forum at nginx.us (ixos) Date: Wed, 21 May 2014 07:57:00 -0400 Subject: cache manager process - i/o perf Message-ID: <9fbfff5d10a0f4eec7365fad719d64de.NginxMailingListEnglish@forum.nginx.org> I'm having a problem with I/O performance. I'm running nginx as a caching reverse proxy server. When the cache size on disk exceeds max_size the cache manager starts working, but that causes two problems: 1) I/O %util reaches 100% and nginx starts dropping connections 2) the cache manager process doesn't unlink files fast enough to delete old files, so the cache keeps growing until the space on disk runs out. Can you give me an idea how I can solve these problems? Below are some details. #build on 20x 300GB SAS disks with 2 SSDs for Cachecade.
# storcli64 /c0 show VD LIST : ======= ---------------------------------------------------------------- DG/VD TYPE State Access Consist Cache Cac sCC Size Name ---------------------------------------------------------------- 1/2 RAID60 Optl RW Yes RaWBC R ON 4.357 TB 2/1 Cac0 Optl RW Yes RaWTD - ON 557.875 GB ---------------------------------------------------------------- # mount /dev/sdb1 on /cache type ext4 (rw,noatime,data=ordered) # df -h /dev/sdb1 /dev/sdb1 4.3T 3.2T 828G 80% /cache # for pid in `pgrep nginx `;do ionice -p $pid ;done unknown: prio 4 <- master best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 best-effort: prio 0 <- workers idle <- cache manager # grep proxy_cache_path nginx.conf proxy_cache_path /cache zone=my-cache:20000msize=3355443m # netstat -sp|grep -i drop 6335115 SYNs to LISTEN sockets dropped # iostat -dx 1 /dev/sdb |grep ^sdb | awk '{print $14}' 24.40 31.20 26.80 23.60 26.80 16.00 34.80 35.20 29.60 ... 14.40 15.60 11.60 16.00 17.20 18.00 17.20 42.00 90.80 <- cache manager process starts 100.00 100.00 29.20 100.00 100.00 100.00 52.00 100.00 100.00 100.00 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250247,250247#msg-250247 From mdounin at mdounin.ru Wed May 21 12:48:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 May 2014 16:48:31 +0400 Subject: cache manager process - i/o perf In-Reply-To: <9fbfff5d10a0f4eec7365fad719d64de.NginxMailingListEnglish@forum.nginx.org> References: <9fbfff5d10a0f4eec7365fad719d64de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140521124830.GE1849@mdounin.ru> Hello! On Wed, May 21, 2014 at 07:57:00AM -0400, ixos wrote: > I'm having problem with I/O performance. I'm running nginx as caching > reverse proxy server. > When cache size on disk exceeds max_size cache manager starts working, but > it causes two problems occur: > > 1) I/O %util reach 100% and nginx starts dropping connections > 2) cache manager process dosen't unlink files speed enough to delete old > file. So cache becomes bigger util the space on disk ends. > > Can you give me an idea how can I solve those problems. Below are some > details. > > #build on 20x 300GB SAS disks with 2 SSDs for Cachecade. [...] > # grep proxy_cache_path nginx.conf > proxy_cache_path /cache zone=my-cache:20000msize=3355443m The "proxy_cache_path" looks corrupted and incomplete. First of all, I would suggest you to make sure you are using "levels" parameter, see http://nginx.org/r/proxy_cache_path. -- Maxim Dounin http://nginx.org/ From contact at jpluscplusm.com Wed May 21 12:50:25 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 21 May 2014 13:50:25 +0100 Subject: Reuse of resources. In-Reply-To: References: Message-ID: On 21 May 2014 10:22, Adarsh Pugalia wrote: > > Hi, I am just beginning with nginx and had a problem. The rest of your email doesn't describe a problem. What is your problem? From nginx-forum at nginx.us Wed May 21 13:15:16 2014 From: nginx-forum at nginx.us (ixos) Date: Wed, 21 May 2014 09:15:16 -0400 Subject: cache manager process - i/o perf In-Reply-To: <20140521124830.GE1849@mdounin.ru> References: <20140521124830.GE1849@mdounin.ru> Message-ID: <2cee1e8f35b9706968001426d72b6a65.NginxMailingListEnglish@forum.nginx.org> > The "proxy_cache_path" looks corrupted and incomplete. 
First of > all, I would suggest you to make sure you are using "levels" > parameter, see http://nginx.org/r/proxy_cache_path. I didn't paste all of proxy_cache_path directive. Here you have all. proxy_temp_path /cache/tmp; proxy_cache_path /cache levels=2:2 keys_zone=my-cache:20000m max_size=3355443m inactive=7d; And also nginx version if needed: # /usr/local/nginx/sbin/nginx -V nginx version: nginx/1.5.9 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250247,250250#msg-250250 From mdounin at mdounin.ru Wed May 21 13:58:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 May 2014 17:58:46 +0400 Subject: cache manager process - i/o perf In-Reply-To: <2cee1e8f35b9706968001426d72b6a65.NginxMailingListEnglish@forum.nginx.org> References: <20140521124830.GE1849@mdounin.ru> <2cee1e8f35b9706968001426d72b6a65.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140521135846.GH1849@mdounin.ru> Hello! On Wed, May 21, 2014 at 09:15:16AM -0400, ixos wrote: > > The "proxy_cache_path" looks corrupted and incomplete. First of > > all, I would suggest you to make sure you are using "levels" > > parameter, see http://nginx.org/r/proxy_cache_path. > > I didn't paste all of proxy_cache_path directive. Here you have all. > proxy_temp_path /cache/tmp; > proxy_cache_path /cache > levels=2:2 > keys_zone=my-cache:20000m > max_size=3355443m > inactive=7d; See no obvious problems. Try looking into system tuning then, your disk subsystem just can't cope with load. There are number of ways to improve disk i/o performance, starting from nginx tuning (aio, output_buffers etc., see http://nginx.org/r/aio) to OS tuning (in particular, tuning vnode cache may be beneficial, not sure how to do this on Linux), as well as using a RAID configuration which delivers better performance. A number of recommendations can be found in this list, see archives. An obvious workaround is to reduce disk load by using smaller max_size/inactive, and/or with proxy_cache_min_uses (see http://nginx.org/r/proxy_cache_min_uses). > And also nginx version if needed: > > # /usr/local/nginx/sbin/nginx -V > nginx version: nginx/1.5.9 While it may be a good idea to upgrade to a recent and supported version, there shouldn't be a big difference from performance point of view. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed May 21 21:37:41 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 21 May 2014 22:37:41 +0100 Subject: Unable to cache long cache url In-Reply-To: <673f604e40bd14323b705cbeca47f8f9.NginxMailingListEnglish@forum.nginx.org> References: <20140518084153.GH16942@daoine.org> <673f604e40bd14323b705cbeca47f8f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140521213741.GK16942@daoine.org> On Tue, May 20, 2014 at 11:27:14AM -0400, samgujrat1984 wrote: Hi there, > Yes, I am trying the same functionality as you mentioned. I want to serve > 1st request from backend weblogic application and after that I want to serve > same request for another hour from nginx cache. That sounds like it should just work, without any complicated non-default settings. What specific problem are you seeing? As in: what do you do; what do you see; what do you expect to see? 
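For illustration only, a minimal sketch of the "first request from the backend, then one hour from cache" setup discussed in this thread; the cache path, zone name and backend address are assumptions, not values from the poster's configuration:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=1g inactive=1h;

server {
    listen 80;

    location / {
        proxy_cache appcache;
        proxy_cache_valid 200 1h;          # keep successful responses for one hour
        proxy_pass http://127.0.0.1:7001;  # assumed weblogic backend address
    }
}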
f -- Francis Daly francis at daoine.org From hewanxiang at gmail.com Thu May 22 02:12:25 2014 From: hewanxiang at gmail.com (Andy) Date: Thu, 22 May 2014 10:12:25 +0800 Subject: cache manager process - i/o perf In-Reply-To: <9fbfff5d10a0f4eec7365fad719d64de.NginxMailingListEnglish@forum.nginx.org> References: <9fbfff5d10a0f4eec7365fad719d64de.NginxMailingListEnglish@forum.nginx.org> Message-ID: I hit similar problem ... Can I know what is the ingest Gbps into the SSDs when you hit the problem? and How many cached file nodes in cache-manager? i have millions ... On Wed, May 21, 2014 at 7:57 PM, ixos wrote: > I'm having problem with I/O performance. I'm running nginx as caching > reverse proxy server. > When cache size on disk exceeds max_size cache manager starts working, but > it causes two problems occur: > > 1) I/O %util reach 100% and nginx starts dropping connections > 2) cache manager process dosen't unlink files speed enough to delete old > file. So cache becomes bigger util the space on disk ends. > > Can you give me an idea how can I solve those problems. Below are some > details. > > #build on 20x 300GB SAS disks with 2 SSDs for Cachecade. > > # storcli64 /c0 show > VD LIST : > ======= > > ---------------------------------------------------------------- > DG/VD TYPE State Access Consist Cache Cac sCC Size Name > ---------------------------------------------------------------- > 1/2 RAID60 Optl RW Yes RaWBC R ON 4.357 TB > 2/1 Cac0 Optl RW Yes RaWTD - ON 557.875 GB > ---------------------------------------------------------------- > > # mount > /dev/sdb1 on /cache type ext4 (rw,noatime,data=ordered) > > # df -h /dev/sdb1 > /dev/sdb1 4.3T 3.2T 828G 80% /cache > > > # for pid in `pgrep nginx `;do ionice -p $pid ;done > unknown: prio 4 <- master > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 > best-effort: prio 0 <- workers > idle <- cache manager > > # grep proxy_cache_path nginx.conf > proxy_cache_path /cache zone=my-cache:20000msize=3355443m > > # netstat -sp|grep -i drop > 6335115 SYNs to LISTEN sockets dropped > > # iostat -dx 1 /dev/sdb |grep ^sdb | awk '{print $14}' > 24.40 > 31.20 > 26.80 > 23.60 > 26.80 > 16.00 > 34.80 > 35.20 > 29.60 > ... > 14.40 > 15.60 > 11.60 > 16.00 > 17.20 > 18.00 > 17.20 > 42.00 > 90.80 <- cache manager process starts > 100.00 > 100.00 > 29.20 > 100.00 > 100.00 > 100.00 > 52.00 > 100.00 > 100.00 > 100.00 > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250247,250247#msg-250247 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu May 22 07:37:01 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 22 May 2014 12:37:01 +0500 Subject: Php-fpm requests eventually goes to queue !! Message-ID: Hello, We're using nginx + php-fpm. Please check the following configurations in php-fpm and sysctl in order to handle large amount of php-fpm request, but still 1000+ requests are getting into queue every 15min. 
php-fpm.d/stats.conf [stats] listen = 127.0.0.1:9000 user = apache group = apache request_slowlog_timeout = 5s slowlog = /var/log/php-fpm/stats-slow.log listen.allowed_clients = 127.0.0.1 pm = dynamic pm.max_children = 250 pm.start_servers = 40 pm.min_spare_servers = 20 pm.max_spare_servers = 40 pm.max_requests = 40000 listen.backlog = -1 request_terminate_timeout = 300s rlimit_files = 13107200 rlimit_core = unlimited env[HOSTNAME] = $HOSTNAME env[TMP] = /tmp env[TMPDIR] = /tmp env[TEMP] = /tmp pm.status_path = /status The main config parameters of sysctl.conf : vm.overcommit_memory = 1 fs.file-max = 7000000 net.ipv4.tcp_max_syn_backlog = 70000 net.core.netdev_max_backlog = 4096 net.core.somaxconn=65535 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1 net.ipv4.ip_local_port_range = 1024 65000 net.ipv4.tcp_tw_reuse = 1 /etc/security/limits.conf root soft nofile 700000 root hard nofile 700000 We've 72G of Ram and also writing on disk is 100+MB/s on Sas drives which makes high io util% most of the time. Any clue why requests are still getting into php-fpm queue and max children also reached errors occuring, even max_children are 250 * 40000. ?? Php-fpm status : pool: stats process manager: dynamic start time: 22/May/2014:12:17:39 +0500 start since: 1140 accepted conn: 228244 listen queue: 579 max listen queue: 1970 listen queue len: 65535 idle processes: 167 active processes: 9 total processes: 176 max active processes: 250 max children reached: 1 Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu May 22 08:24:31 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 22 May 2014 13:24:31 +0500 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: References: Message-ID: These are the new status for php-fpm now : pool: stats process manager: dynamic start time: 22/May/2014:12:17:39 +0500 start since: 3975 accepted conn: 866645 listen queue: 0 max listen queue: 2163 listen queue len: 65535 idle processes: 153 active processes: 2 total processes: 155 max active processes: 250 max children reached: 4 On Thu, May 22, 2014 at 12:37 PM, shahzaib shahzaib wrote: > Hello, > > We're using nginx + php-fpm. Please check the following > configurations in php-fpm and sysctl in order to handle large amount of > php-fpm request, but still 1000+ requests are getting into queue every > 15min. > > php-fpm.d/stats.conf > > [stats] > listen = 127.0.0.1:9000 > user = apache > group = apache > request_slowlog_timeout = 5s > slowlog = /var/log/php-fpm/stats-slow.log > listen.allowed_clients = 127.0.0.1 > pm = dynamic > pm.max_children = 250 > pm.start_servers = 40 > pm.min_spare_servers = 20 > pm.max_spare_servers = 40 > pm.max_requests = 40000 > listen.backlog = -1 > request_terminate_timeout = 300s > rlimit_files = 13107200 > rlimit_core = unlimited > env[HOSTNAME] = $HOSTNAME > env[TMP] = /tmp > env[TMPDIR] = /tmp > env[TEMP] = /tmp > pm.status_path = /status > > The main config parameters of sysctl.conf : > > vm.overcommit_memory = 1 > fs.file-max = 7000000 > net.ipv4.tcp_max_syn_backlog = 70000 > net.core.netdev_max_backlog = 4096 > net.core.somaxconn=65535 > net.ipv4.tcp_tw_reuse = 1 > net.ipv4.tcp_tw_recycle = 1 > net.ipv4.ip_local_port_range = 1024 65000 > net.ipv4.tcp_tw_reuse = 1 > > /etc/security/limits.conf > root soft nofile 700000 > root hard nofile 700000 > > > We've 72G of Ram and also writing on disk is 100+MB/s on Sas drives which > makes high io util% most of the time. 
> > Any clue why requests are still getting into php-fpm queue and max > children also reached errors occuring, even max_children are 250 * 40000. ?? > > Php-fpm status : > > pool: stats > process manager: dynamic > start time: 22/May/2014:12:17:39 +0500 > start since: 1140 > accepted conn: 228244 > listen queue: 579 > max listen queue: 1970 > listen queue len: 65535 > idle processes: 167 > active processes: 9 > total processes: 176 > max active processes: 250 > max children reached: 1 > > > > Regards. > Shahzaib > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lordnynex at gmail.com Thu May 22 08:27:50 2014 From: lordnynex at gmail.com (Lord Nynex) Date: Thu, 22 May 2014 01:27:50 -0700 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: References: Message-ID: This does not seem like an nginx issue? On Thu, May 22, 2014 at 1:24 AM, shahzaib shahzaib wrote: > These are the new status for php-fpm now : > > pool: stats > process manager: dynamic > start time: 22/May/2014:12:17:39 +0500 > start since: 3975 > accepted conn: 866645 > listen queue: 0 > max listen queue: 2163 > listen queue len: 65535 > idle processes: 153 > active processes: 2 > total processes: 155 > max active processes: 250 > max children reached: 4 > > > > > > On Thu, May 22, 2014 at 12:37 PM, shahzaib shahzaib > wrote: > >> Hello, >> >> We're using nginx + php-fpm. Please check the following >> configurations in php-fpm and sysctl in order to handle large amount of >> php-fpm request, but still 1000+ requests are getting into queue every >> 15min. >> >> php-fpm.d/stats.conf >> >> [stats] >> listen = 127.0.0.1:9000 >> user = apache >> group = apache >> request_slowlog_timeout = 5s >> slowlog = /var/log/php-fpm/stats-slow.log >> listen.allowed_clients = 127.0.0.1 >> pm = dynamic >> pm.max_children = 250 >> pm.start_servers = 40 >> pm.min_spare_servers = 20 >> pm.max_spare_servers = 40 >> pm.max_requests = 40000 >> listen.backlog = -1 >> request_terminate_timeout = 300s >> rlimit_files = 13107200 >> rlimit_core = unlimited >> env[HOSTNAME] = $HOSTNAME >> env[TMP] = /tmp >> env[TMPDIR] = /tmp >> env[TEMP] = /tmp >> pm.status_path = /status >> >> The main config parameters of sysctl.conf : >> >> vm.overcommit_memory = 1 >> fs.file-max = 7000000 >> net.ipv4.tcp_max_syn_backlog = 70000 >> net.core.netdev_max_backlog = 4096 >> net.core.somaxconn=65535 >> net.ipv4.tcp_tw_reuse = 1 >> net.ipv4.tcp_tw_recycle = 1 >> net.ipv4.ip_local_port_range = 1024 65000 >> net.ipv4.tcp_tw_reuse = 1 >> >> /etc/security/limits.conf >> root soft nofile 700000 >> root hard nofile 700000 >> >> >> We've 72G of Ram and also writing on disk is 100+MB/s on Sas drives which >> makes high io util% most of the time. >> >> Any clue why requests are still getting into php-fpm queue and max >> children also reached errors occuring, even max_children are 250 * 40000. ?? >> >> Php-fpm status : >> >> pool: stats >> process manager: dynamic >> start time: 22/May/2014:12:17:39 +0500 >> start since: 1140 >> accepted conn: 228244 >> listen queue: 579 >> max listen queue: 1970 >> listen queue len: 65535 >> idle processes: 167 >> active processes: 9 >> total processes: 176 >> max active processes: 250 >> max children reached: 1 >> >> >> >> Regards. >> Shahzaib >> >> >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shahzaib.cb at gmail.com Thu May 22 08:31:55 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 22 May 2014 13:31:55 +0500 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: References: Message-ID: Following is the nginx config : user nginx; # no need for more workers in the proxy mode worker_processes 16; error_log /var/log/nginx/error.log crit; worker_rlimit_nofile 409600; events { worker_connections 10240; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/xml application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/conf.d/virtual.conf"; } On Thu, May 22, 2014 at 1:27 PM, Lord Nynex wrote: > This does not seem like an nginx issue? > > > On Thu, May 22, 2014 at 1:24 AM, shahzaib shahzaib wrote: > >> These are the new status for php-fpm now : >> >> pool: stats >> process manager: dynamic >> start time: 22/May/2014:12:17:39 +0500 >> start since: 3975 >> accepted conn: 866645 >> listen queue: 0 >> max listen queue: 2163 >> listen queue len: 65535 >> idle processes: 153 >> active processes: 2 >> total processes: 155 >> max active processes: 250 >> max children reached: 4 >> >> >> >> >> >> On Thu, May 22, 2014 at 12:37 PM, shahzaib shahzaib < >> shahzaib.cb at gmail.com> wrote: >> >>> Hello, >>> >>> We're using nginx + php-fpm. Please check the following >>> configurations in php-fpm and sysctl in order to handle large amount of >>> php-fpm request, but still 1000+ requests are getting into queue every >>> 15min. 
>>> >>> php-fpm.d/stats.conf >>> >>> [stats] >>> listen = 127.0.0.1:9000 >>> user = apache >>> group = apache >>> request_slowlog_timeout = 5s >>> slowlog = /var/log/php-fpm/stats-slow.log >>> listen.allowed_clients = 127.0.0.1 >>> pm = dynamic >>> pm.max_children = 250 >>> pm.start_servers = 40 >>> pm.min_spare_servers = 20 >>> pm.max_spare_servers = 40 >>> pm.max_requests = 40000 >>> listen.backlog = -1 >>> request_terminate_timeout = 300s >>> rlimit_files = 13107200 >>> rlimit_core = unlimited >>> env[HOSTNAME] = $HOSTNAME >>> env[TMP] = /tmp >>> env[TMPDIR] = /tmp >>> env[TEMP] = /tmp >>> pm.status_path = /status >>> >>> The main config parameters of sysctl.conf : >>> >>> vm.overcommit_memory = 1 >>> fs.file-max = 7000000 >>> net.ipv4.tcp_max_syn_backlog = 70000 >>> net.core.netdev_max_backlog = 4096 >>> net.core.somaxconn=65535 >>> net.ipv4.tcp_tw_reuse = 1 >>> net.ipv4.tcp_tw_recycle = 1 >>> net.ipv4.ip_local_port_range = 1024 65000 >>> net.ipv4.tcp_tw_reuse = 1 >>> >>> /etc/security/limits.conf >>> root soft nofile 700000 >>> root hard nofile 700000 >>> >>> >>> We've 72G of Ram and also writing on disk is 100+MB/s on Sas drives >>> which makes high io util% most of the time. >>> >>> Any clue why requests are still getting into php-fpm queue and max >>> children also reached errors occuring, even max_children are 250 * 40000. ?? >>> >>> Php-fpm status : >>> >>> pool: stats >>> process manager: dynamic >>> start time: 22/May/2014:12:17:39 +0500 >>> start since: 1140 >>> accepted conn: 228244 >>> listen queue: 579 >>> max listen queue: 1970 >>> listen queue len: 65535 >>> idle processes: 167 >>> active processes: 9 >>> total processes: 176 >>> max active processes: 250 >>> max children reached: 1 >>> >>> >>> >>> Regards. >>> Shahzaib >>> >>> >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaoping at richinfo.cn Thu May 22 08:41:40 2014 From: gaoping at richinfo.cn (gaoping at richinfo.cn) Date: Thu, 22 May 2014 16:41:40 +0800 Subject: nginx1.4.7 Use ngx_http_proxy_module to realize the static resource cache Message-ID: <201405221641401522727@richinfo.cn> hi I use ngx_http_proxy_module to realize the static resource cache,Unable to generate a cache file?Below is the nginx.conf configuration information?Trouble you to help have a look what is the reason? 
worker_processes 4; events { use epoll; worker_connections 1024; } http { include mime.types; default_type application/octet-stream; access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; proxy_temp_path /home/se/nginx_cache/tmp_nc1 1 2; proxy_cache_path /home/se/nginx_cache/nc1 levels=1:2 keys_zone=nc1:50m; upstream bj{ server 192.168.1.107:7111; } upstream noBj{ server 192.168.1.106:7111; } server { listen 50001; server_name localhost; access_log logs/host.access.log main; location / { proxy_set_header Host $host:20001; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_next_upstream http_502 http_504 error timeout invalid_header; if ( $geoip_city != 'Beijing' ){ proxy_pass http://noBj; } if ( $geoip_city = 'Beijing' ){ proxy_pass http://bj; } } location ~*\.(jpg|gif|png|css|swf|js)$ { if ( $geoip_city != 'Beijing' ){ proxy_pass http://noBj; } if ( $geoip_city = 'Beijing' ){ proxy_pass http://bj; } proxy_cache nc1; proxy_cache_valid 200 302 304 1d; proxy_cache_key $host$uri$is_args$args; proxy_cache_lock on; proxy_cache_lock_timeout 2s; expires 1d; } } gaoping at richinfo.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cache.png Type: image/png Size: 49273 bytes Desc: not available URL: From nginx-forum at nginx.us Thu May 22 09:05:21 2014 From: nginx-forum at nginx.us (ixos) Date: Thu, 22 May 2014 05:05:21 -0400 Subject: cache manager process - i/o perf In-Reply-To: References: Message-ID: > Can I know what is the ingest Gbps into the SSDs when you hit the problem? About ~500 Mbps > and How many cached file nodes in cache-manager? I have millions ... Between 7-9 milions Can you tell more about your configuration os/nginx/cache? And how have you tried to solve the problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250247,250273#msg-250273 From shahzaib.cb at gmail.com Thu May 22 11:03:37 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 22 May 2014 16:03:37 +0500 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: References: Message-ID: Anyone ? 
On Thu, May 22, 2014 at 1:31 PM, shahzaib shahzaib wrote: > Following is the nginx config : > > user nginx; > # no need for more workers in the proxy mode > worker_processes 16; > error_log /var/log/nginx/error.log crit; > worker_rlimit_nofile 409600; > events { > worker_connections 10240; # increase for busier servers > use epoll; # you should use epoll here for Linux kernels 2.6.x > } > http { > server_name_in_redirect off; > server_names_hash_max_size 10240; > server_names_hash_bucket_size 1024; > include mime.types; > default_type application/octet-stream; > server_tokens off; > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 5; > gzip on; > gzip_vary on; > gzip_disable "MSIE [1-6]\."; > gzip_proxied any; > gzip_http_version 1.1; > gzip_min_length 1000; > gzip_comp_level 6; > gzip_buffers 16 8k; > # You can remove image/png image/x-icon image/gif image/jpeg if you have > slow CPU > gzip_types text/plain text/xml text/css application/x-javascript > application/xml application/xml+rss text/javascript application/atom+xml; > ignore_invalid_headers on; > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > reset_timedout_connection on; > connection_pool_size 256; > client_header_buffer_size 256k; > large_client_header_buffers 4 256k; > client_max_body_size 200M; > client_body_buffer_size 128k; > request_pool_size 32k; > output_buffers 4 32k; > postpone_output 1460; > client_body_in_file_only on; > log_format bytes_log "$msec $bytes_sent ."; > include "/etc/nginx/conf.d/virtual.conf"; > } > > > > > > On Thu, May 22, 2014 at 1:27 PM, Lord Nynex wrote: > >> This does not seem like an nginx issue? >> >> >> On Thu, May 22, 2014 at 1:24 AM, shahzaib shahzaib > > wrote: >> >>> These are the new status for php-fpm now : >>> >>> pool: stats >>> process manager: dynamic >>> start time: 22/May/2014:12:17:39 +0500 >>> start since: 3975 >>> accepted conn: 866645 >>> listen queue: 0 >>> max listen queue: 2163 >>> listen queue len: 65535 >>> idle processes: 153 >>> active processes: 2 >>> total processes: 155 >>> max active processes: 250 >>> max children reached: 4 >>> >>> >>> >>> >>> >>> On Thu, May 22, 2014 at 12:37 PM, shahzaib shahzaib < >>> shahzaib.cb at gmail.com> wrote: >>> >>>> Hello, >>>> >>>> We're using nginx + php-fpm. Please check the following >>>> configurations in php-fpm and sysctl in order to handle large amount of >>>> php-fpm request, but still 1000+ requests are getting into queue every >>>> 15min. 
>>>> >>>> php-fpm.d/stats.conf >>>> >>>> [stats] >>>> listen = 127.0.0.1:9000 >>>> user = apache >>>> group = apache >>>> request_slowlog_timeout = 5s >>>> slowlog = /var/log/php-fpm/stats-slow.log >>>> listen.allowed_clients = 127.0.0.1 >>>> pm = dynamic >>>> pm.max_children = 250 >>>> pm.start_servers = 40 >>>> pm.min_spare_servers = 20 >>>> pm.max_spare_servers = 40 >>>> pm.max_requests = 40000 >>>> listen.backlog = -1 >>>> request_terminate_timeout = 300s >>>> rlimit_files = 13107200 >>>> rlimit_core = unlimited >>>> env[HOSTNAME] = $HOSTNAME >>>> env[TMP] = /tmp >>>> env[TMPDIR] = /tmp >>>> env[TEMP] = /tmp >>>> pm.status_path = /status >>>> >>>> The main config parameters of sysctl.conf : >>>> >>>> vm.overcommit_memory = 1 >>>> fs.file-max = 7000000 >>>> net.ipv4.tcp_max_syn_backlog = 70000 >>>> net.core.netdev_max_backlog = 4096 >>>> net.core.somaxconn=65535 >>>> net.ipv4.tcp_tw_reuse = 1 >>>> net.ipv4.tcp_tw_recycle = 1 >>>> net.ipv4.ip_local_port_range = 1024 65000 >>>> net.ipv4.tcp_tw_reuse = 1 >>>> >>>> /etc/security/limits.conf >>>> root soft nofile 700000 >>>> root hard nofile 700000 >>>> >>>> >>>> We've 72G of Ram and also writing on disk is 100+MB/s on Sas drives >>>> which makes high io util% most of the time. >>>> >>>> Any clue why requests are still getting into php-fpm queue and max >>>> children also reached errors occuring, even max_children are 250 * 40000. ?? >>>> >>>> Php-fpm status : >>>> >>>> pool: stats >>>> process manager: dynamic >>>> start time: 22/May/2014:12:17:39 +0500 >>>> start since: 1140 >>>> accepted conn: 228244 >>>> listen queue: 579 >>>> max listen queue: 1970 >>>> listen queue len: 65535 >>>> idle processes: 167 >>>> active processes: 9 >>>> total processes: 176 >>>> max active processes: 250 >>>> max children reached: 1 >>>> >>>> >>>> >>>> Regards. >>>> Shahzaib >>>> >>>> >>>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hewanxiang at gmail.com Thu May 22 11:39:11 2014 From: hewanxiang at gmail.com (Andy) Date: Thu, 22 May 2014 19:39:11 +0800 Subject: cache manager process - i/o perf In-Reply-To: References: Message-ID: On Thu, May 22, 2014 at 5:05 PM, ixos wrote: > > Can I know what is the ingest Gbps into the SSDs when you hit the > problem? > About ~500 Mbps > > > and How many cached file nodes in cache-manager? I have millions ... > Between 7-9 milions > > Can you tell more about your configuration os/nginx/cache? And how have you > tried to solve the problem. > No, I didnot find a way to resolve this, I have to make the cached files to a smaller count and add more devices to share the load ... We may need a feature to do disk write admission based on the disk load ... > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250247,250273#msg-250273 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 22 12:09:44 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 22 May 2014 16:09:44 +0400 Subject: Php-fpm requests eventually goes to queue !! 
In-Reply-To: References: Message-ID: <20140522120944.GR1849@mdounin.ru> Hello! On Thu, May 22, 2014 at 04:03:37PM +0500, shahzaib shahzaib wrote: > Anyone ? [...] > > On Thu, May 22, 2014 at 1:27 PM, Lord Nynex wrote: > > > >> This does not seem like an nginx issue? -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu May 22 13:06:21 2014 From: nginx-forum at nginx.us (ixos) Date: Thu, 22 May 2014 09:06:21 -0400 Subject: cache manager process - i/o perf In-Reply-To: References: Message-ID: >No, I didnot find a way to resolve this, I have to make the cached files to a smaller count and add more devices to share the load ... But is this split with samller devices "solve" the problem? I mean how many file could you have in cache? How many devices you have? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250247,250278#msg-250278 From shahzaib.cb at gmail.com Thu May 22 14:34:31 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 22 May 2014 19:34:31 +0500 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: <20140522120944.GR1849@mdounin.ru> References: <20140522120944.GR1849@mdounin.ru> Message-ID: Met you after a long time maxim :) how's everything ? On Thu, May 22, 2014 at 5:09 PM, Maxim Dounin wrote: > Hello! > > On Thu, May 22, 2014 at 04:03:37PM +0500, shahzaib shahzaib wrote: > > > Anyone ? > > [...] > > > > On Thu, May 22, 2014 at 1:27 PM, Lord Nynex > wrote: > > > > > >> This does not seem like an nginx issue? > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu May 22 14:51:26 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 22 May 2014 19:51:26 +0500 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: References: <20140522120944.GR1849@mdounin.ru> Message-ID: You think, issue is not related to nginx but nginx and php-fpm work together, that is the reason i posted the question on nginx forums. On Thu, May 22, 2014 at 7:34 PM, shahzaib shahzaib wrote: > Met you after a long time maxim :) how's everything ? > > > On Thu, May 22, 2014 at 5:09 PM, Maxim Dounin wrote: > >> Hello! >> >> On Thu, May 22, 2014 at 04:03:37PM +0500, shahzaib shahzaib wrote: >> >> > Anyone ? >> >> [...] >> >> > > On Thu, May 22, 2014 at 1:27 PM, Lord Nynex >> wrote: >> > > >> > >> This does not seem like an nginx issue? >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu May 22 14:52:44 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 22 May 2014 17:52:44 +0300 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: References: Message-ID: <30F8A396AF1F431CA655DAF7A6C2C6EF@MasterPC> > pm.max_children = 250 > pm.start_servers = 40 > pm.max_requests = 40000 > Any clue why requests are still getting into php-fpm queue and max > children also reached errors occuring, even max_children are 250 * 40000. > ?? Your math is wrong - there is no such 250 * 40000, because pm.max_requests is just a number of requests after which the php child restarts (to avoid possible memory leak etc (it may also be 0)). 
Your (maximum simultaneously running) concurrent requests are still 250 (~pm.max_children), so bassically if your request takes longer than 1 second and there are more than 250 requests per second you will end up with a queue. You either speed up the php code (and everything besides it) / increase the FPM pool (more children or more pools/fpm backends) or use caching like some frontend cache (varnish/squid etc) or nginx with fastcgi_cache. rr From shahzaib.cb at gmail.com Thu May 22 15:08:44 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 22 May 2014 20:08:44 +0500 Subject: Php-fpm requests eventually goes to queue !! In-Reply-To: <30F8A396AF1F431CA655DAF7A6C2C6EF@MasterPC> References: <30F8A396AF1F431CA655DAF7A6C2C6EF@MasterPC> Message-ID: Thanks, my math is not wrong my concept about php-fpm was wrong. I was thinking each child can handle 40000 requests/sec. :( On Thu, May 22, 2014 at 7:52 PM, Reinis Rozitis wrote: > pm.max_children = 250 >> pm.start_servers = 40 >> pm.max_requests = 40000 >> > > Any clue why requests are still getting into php-fpm queue and max >> children also reached errors occuring, even max_children are 250 * 40000. ?? >> > > > Your math is wrong - there is no such 250 * 40000, because pm.max_requests > is just a number of requests after which the php child restarts (to avoid > possible memory leak etc (it may also be 0)). > > Your (maximum simultaneously running) concurrent requests are still 250 > (~pm.max_children), so bassically if your request takes longer than 1 > second and there are more than 250 requests per second you will end up with > a queue. > > > You either speed up the php code (and everything besides it) / increase > the FPM pool (more children or more pools/fpm backends) or use caching like > some frontend cache (varnish/squid etc) or nginx with fastcgi_cache. > > rr > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phil at allaffiliatepro.co.uk Thu May 22 15:26:54 2014 From: phil at allaffiliatepro.co.uk (Phil Knight) Date: Thu, 22 May 2014 16:26:54 +0100 Subject: passing data to CGI scripts via PATH_INFO Message-ID: <537E173E.8010206@allaffiliatepro.co.uk> Hi We are having an issue passing data to CGI scripts via PATH_INFO environment variable. for example:- http://domain.com/cgi-bin/script.cgi/= On various apache servers this works fine and the PATH_INFO variable will contain "/=", on our nginx server we are getting a 403 forbidden error. We are using fcgiwrap [1] for running CGI and .cgi scripts are executing. Could this be an issue with nginx configuration? Thanks for your time. Regards Phil [1] https://nginx.localdomain.pl/wiki/FcgiWrap From vbart at nginx.com Thu May 22 18:32:52 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 22 May 2014 22:32:52 +0400 Subject: passing data to CGI scripts via PATH_INFO In-Reply-To: <537E173E.8010206@allaffiliatepro.co.uk> References: <537E173E.8010206@allaffiliatepro.co.uk> Message-ID: <2867528.OB7pURQ5X2@vbart-workstation> On Thursday 22 May 2014 16:26:54 Phil Knight wrote: > Hi > > We are having an issue passing data to CGI scripts via PATH_INFO > environment variable. > > for example:- > > http://domain.com/cgi-bin/script.cgi/= > > On various apache servers this works fine and the PATH_INFO variable > will contain "/=", on our nginx server we are getting a 403 forbidden > error. 
We are using fcgiwrap [1] for running CGI and .cgi scripts are > executing. > > Could this be an issue with nginx configuration? [..] Most likely this is an issue with the configuration. Actually nginx knows nothing about CGI and it's environment variables (including PATH_INFO), so you are free to set it any value you think reasonable. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Thu May 22 19:05:22 2014 From: nginx-forum at nginx.us (slowredbike) Date: Thu, 22 May 2014 15:05:22 -0400 Subject: Ubuntu+Nginx packet loss / dropped upstream connections In-Reply-To: References: Message-ID: Did you ever get a response to this...We are seeing the following: no live upstreams while connecting to upstream.... we know the upstream servers are not crashing but are trying to determine how/why they are being deemed as down. The nginx gives no information on this and the servers show no errors either. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244241,250288#msg-250288 From webmaster at cosmicperl.com Thu May 22 21:14:23 2014 From: webmaster at cosmicperl.com (Lyle) Date: Thu, 22 May 2014 22:14:23 +0100 Subject: passing data to CGI scripts via PATH_INFO In-Reply-To: <2867528.OB7pURQ5X2@vbart-workstation> References: <537E173E.8010206@allaffiliatepro.co.uk> <2867528.OB7pURQ5X2@vbart-workstation> Message-ID: <537E68AF.6050401@cosmicperl.com> On 22/05/2014 19:32, Valentin V. Bartenev wrote: > On Thursday 22 May 2014 16:26:54 Phil Knight wrote: >> Hi >> >> We are having an issue passing data to CGI scripts via PATH_INFO >> environment variable. >> >> for example:- >> >> http://domain.com/cgi-bin/script.cgi/= >> >> On various apache servers this works fine and the PATH_INFO variable >> will contain "/=", on our nginx server we are getting a 403 forbidden >> error. We are using fcgiwrap [1] for running CGI and .cgi scripts are >> executing. >> >> Could this be an issue with nginx configuration? > [..] > > Most likely this is an issue with the configuration. I think the relevant part is here: location /cgi-bin/ { root /users/folder; gzip off; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param DOCUMENT_ROOT /users/folder/cgi-bin/; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_connect_timeout 120; fastcgi_send_timeout 120; fastcgi_read_timeout 120; } I could be wrong. Any pointers would be very much appreciated. I've noticed that: http://domain.com/cgi-bin/no_existent_script.cgi Also gives a 403. So I suspect that nginx is looking for a file called = in a folder named api.cgi/ I'm not sure what configuration we need to do to fix this. Lyle > Actually nginx knows nothing about CGI and it's environment variables > (including PATH_INFO), so you are free to set it any value you think > reasonable. > > wbr, Valentin V. 
Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu May 22 21:52:45 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 23 May 2014 01:52:45 +0400 Subject: passing data to CGI scripts via PATH_INFO In-Reply-To: <537E68AF.6050401@cosmicperl.com> References: <537E173E.8010206@allaffiliatepro.co.uk> <2867528.OB7pURQ5X2@vbart-workstation> <537E68AF.6050401@cosmicperl.com> Message-ID: <1644079.UIZ2Qxqo28@vbart-laptop> On Thursday 22 May 2014 22:14:23 Lyle wrote: > On 22/05/2014 19:32, Valentin V. Bartenev wrote: > > On Thursday 22 May 2014 16:26:54 Phil Knight wrote: > >> Hi > >> > >> We are having an issue passing data to CGI scripts via PATH_INFO > >> environment variable. > >> > >> for example:- > >> > >> http://domain.com/cgi-bin/script.cgi/= > >> > >> On various apache servers this works fine and the PATH_INFO variable > >> will contain "/=", on our nginx server we are getting a 403 forbidden > >> error. We are using fcgiwrap [1] for running CGI and .cgi scripts are > >> executing. > >> > >> Could this be an issue with nginx configuration? > > [..] > > > > Most likely this is an issue with the configuration. > > I think the relevant part is here: > > location /cgi-bin/ { > root /users/folder; > gzip off; > fastcgi_pass unix:/var/run/fcgiwrap.socket; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param SERVER_PROTOCOL $server_protocol; > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > fastcgi_param DOCUMENT_ROOT /users/folder/cgi-bin/; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > fastcgi_connect_timeout 120; > fastcgi_send_timeout 120; > fastcgi_read_timeout 120; > } > > I could be wrong. Any pointers would be very much appreciated. > > I've noticed that: > > http://domain.com/cgi-bin/no_existent_script.cgi > > Also gives a 403. So I suspect that nginx is looking for a file called = > in a folder named api.cgi/ > I'm not sure what configuration we need to do to fix this. [..] As I already said, nginx knows nothing about CGI. It doesn't look for any files in your location with "fastcgi_pass". It just passes request and all the FastCGI params that you have configured with the values that you have set. For example, since you have: fastcgi_param PATH_INFO $fastcgi_path_info; and missing the fastcgi_split_path_info directive, then probably nginx pases empty string in PATH_INFO. 
Please, check the docs to figure out how exactly the $fastcgi_path_info variable works and what values it takes: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#var_fastcgi_path_info The whole list of variables with links to descriptions can be found here: http://nginx.org/en/docs/varindex.html Then, probably, you need to check fcgiwrap documentation to figure out what variables it expects and how they are processed. wbr, Valentin V. Bartenev From vishal.mestri at cloverinfotech.com Fri May 23 07:16:46 2014 From: vishal.mestri at cloverinfotech.com (Vishal Mestri) Date: Fri, 23 May 2014 12:46:46 +0530 (IST) Subject: Issue nginx - ajax In-Reply-To: Message-ID: Hi B. R. Thanks for your reply and really sorry for the way we conveyed our request. We tried to use tcpdump command to trace the issue , but still we are not yet resolved the issue. Now we have distributed logging i.e. we are logging differently for port 6401 and we found that request is not forwarded in case of ssl. I am attaching here with logs generated. error-6401.log_chrom_21may - error log generated when we used chrome access.log_chrome_21may - access log generated when we used chrome access.log_ie_21may - access log generated when we used ie error-6401.log_ie_21may - error log generated when we used ie We went through the logs and concluded that , in case of ssl, nginx is not able to forward request to proxy server , when we use IE. But we are not able to understand , what is cause of the same. If any one can guide us , where exactly we are going wrong. Thanks & Regards, Vishal Mestri ----- Original Message ----- From: "B.R." To: "Nginx ML" Sent: Wednesday, May 7, 2014 7:05:02 PM Subject: Re: Issue nginx - ajax I request all Nginx master/experts to help me. ?That looks like an odd way to *ask* for help. If you wanna *request* help, consider getting paid support.? But even doing that, that is rude... What the error says is that the '*backend* closed connection prematurely'... You should look into it when processing the crashing request. Based on the logs, for that request, nginx received different headers depending on using either IE or Chrome: IE "GET /Stream.html?s=0&d=%22168.189.9.09%22&p=450&t=1399381478365 HTTP/1.0 Host: 168.189.9.09:6400 Connection: close Accept: text/html, application/xhtml+xml, */* Referer: http://168.189.9.09:443/Login.do;jsessionid=23A0D509FE7135EC4AB85B757E8BC62E Accept-Language: en-US User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko Accept-Encoding: gzip, deflate DNT: 1 " Chrome "GET /Stream.html?s=0&d=%22168.189.9.09%22&p=0&t=1399381806838 HTTP/1.0 Host: 168.189.9.09:6400 Connection: close Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36 Referer: http://168.189.9.09:443/Login.do Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,id;q=0.6 Cookie: JSESSIONID=4965E0227151FBACCBF61DB65CDF9F9A " Cookie? :o) Look into your application in the backend which does not answer when some conditions, the problem does not seem to come from nginx which forwards requests to yourf backend. --- B. R. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: access.log_ie_21may Type: application/octet-stream Size: 991 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: access.log_chrome_21may Type: application/octet-stream Size: 6742 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error-6401.log_ie_21may Type: application/octet-stream Size: 19631 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error-6401.log_chrom_21may Type: application/octet-stream Size: 173467 bytes Desc: not available URL: From mdounin at mdounin.ru Fri May 23 08:52:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 May 2014 12:52:56 +0400 Subject: Ubuntu+Nginx packet loss / dropped upstream connections In-Reply-To: References: Message-ID: <20140523085256.GB1849@mdounin.ru> Hello! On Thu, May 22, 2014 at 03:05:22PM -0400, slowredbike wrote: > Did you ever get a response to this...We are seeing the following: > > no live upstreams while connecting to upstream.... > > we know the upstream servers are not crashing but are trying to determine > how/why they are being deemed as down. The nginx gives no information on > this and the servers show no errors either. All servers in the upstream block are considered down due to errors encountered while working with the servers previously. Relevant information about errors should be in logs. See max_fails/fail_timeout parameters of the server directive in the documentation for details: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server -- Maxim Dounin http://nginx.org/ From webmaster at cosmicperl.com Fri May 23 09:17:50 2014 From: webmaster at cosmicperl.com (Lyle) Date: Fri, 23 May 2014 10:17:50 +0100 Subject: passing data to CGI scripts via PATH_INFO In-Reply-To: <1644079.UIZ2Qxqo28@vbart-laptop> References: <537E173E.8010206@allaffiliatepro.co.uk> <2867528.OB7pURQ5X2@vbart-workstation> <537E68AF.6050401@cosmicperl.com> <1644079.UIZ2Qxqo28@vbart-laptop> Message-ID: <537F123E.5090705@cosmicperl.com> On 22/05/2014 22:52, Valentin V. Bartenev wrote: > On Thursday 22 May 2014 22:14:23 Lyle wrote: >> ... >> Also gives a 403. So I suspect that nginx is looking for a file called = >> in a folder named api.cgi/ >> I'm not sure what configuration we need to do to fix this. > [..] > > As I already said, nginx knows nothing about CGI. It doesn't look for any > files in your location with "fastcgi_pass". It just passes request and > all the FastCGI params that you have configured with the values that you > have set. > > For example, since you have: > > fastcgi_param PATH_INFO $fastcgi_path_info; > > and missing the fastcgi_split_path_info directive, then probably nginx pases > empty string in PATH_INFO. > > Please, check the docs to figure out how exactly the $fastcgi_path_info > variable works and what values it takes: > http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#var_fastcgi_path_info Ah, I see. It's the FastCGI module that's reading the wrong filename. I think I understand the process now. Nginx is passing variables to the fastcgi module, as defined in the config. One of these being: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; $fastcgi_script_name being everything after http://.../ by default. So FastCGI was trying to open a file that didn't exist. 
The fastcgi_split_path_info variable addresses this issue, by overloading $fastcgi_script_name and $fastcgi_path_info with the split from the regexp. So adding the following: fastcgi_split_path_info ^(.+\.(?:cgi|pl))(.*)$; Before the fastcgi_param instructions has fixed the issue. Hooray! Thanks for your help. Lyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri May 23 11:39:36 2014 From: nginx-forum at nginx.us (Shobhit Mishra) Date: Fri, 23 May 2014 07:39:36 -0400 Subject: Using Domain Names in proxy_pass directive Message-ID: <4df6bf77ea892f7ac01a3efb744434b7.NginxMailingListEnglish@forum.nginx.org> Hi I am using nginx as reverse proxy with FQDN for the backend server . My configuration for the location block looks like this :- location / { set $ustreamsbc sbc.example.com ; proxy_pass HTTPS://$ustreamsbc ; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_connect_timeout 10s ; include /usr/local/nginx/conf/proxy.conf ; proxy_redirect off; } I have a resolver in place for this FQDN and its running fine. My doubt is that if I change the mapped IP for this FQDN in the DNS server , would nginx re-resolve the FQDN to the new IP for all the future requests. Also does nginx honor TTL for all the FQDN stored in variables as shown above ?? Thanks and Regards Shobhit Mishra Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250307,250307#msg-250307 From ru at nginx.com Fri May 23 12:20:10 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 23 May 2014 16:20:10 +0400 Subject: Using Domain Names in proxy_pass directive In-Reply-To: <4df6bf77ea892f7ac01a3efb744434b7.NginxMailingListEnglish@forum.nginx.org> References: <4df6bf77ea892f7ac01a3efb744434b7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140523122010.GJ53922@lo0.su> On Fri, May 23, 2014 at 07:39:36AM -0400, Shobhit Mishra wrote: > Hi > > I am using nginx as reverse proxy with FQDN for the backend server . > > My configuration for the location block looks like this :- > > location / { > > set $ustreamsbc sbc.example.com ; > proxy_pass HTTPS://$ustreamsbc ; > > proxy_next_upstream error timeout invalid_header http_500 > http_502 http_503 http_504; > > proxy_connect_timeout 10s ; > include /usr/local/nginx/conf/proxy.conf ; > > > > proxy_redirect off; > } > > I have a resolver in place for this FQDN and its running fine. > > > My doubt is that if I change the mapped IP for this FQDN in the DNS server , > would nginx re-resolve the FQDN to the new IP for all the future requests. > > Also does nginx honor TTL for all the FQDN stored in variables as shown > above ?? > > Thanks and Regards > > Shobhit Mishra nginx will re-resolve names as configured by the http://nginx.org/r/resolver directive. By default, the TTL of the response is honored but it can be overridden. (Whether the queried server has an up-to-date info is another question.) From nginx-forum at nginx.us Fri May 23 12:41:09 2014 From: nginx-forum at nginx.us (Shobhit Mishra) Date: Fri, 23 May 2014 08:41:09 -0400 Subject: Using Domain Names in proxy_pass directive In-Reply-To: <20140523122010.GJ53922@lo0.su> References: <20140523122010.GJ53922@lo0.su> Message-ID: <0261beb4a0aa78f63e056d60b87eb5ab.NginxMailingListEnglish@forum.nginx.org> Thanks Ruslan for the reply .. I have another query regarding this .. I am planning to use more than one backend servers for supporting Load balancing. I would be using an upstream block for the same. 
My default configuration is as below : upstream us1 { server sbc.example1.com:443 ; server sbc.example2.com:443 ; } and the location block is location / { proxy_pass HTTPS://us1 ; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_connect_timeout 10s ; include /usr/local/nginx/conf/proxy.conf ; proxy_redirect off; } Does nginx honor TTL for the FQDN in the upstream block as well ?? As per my understanding nginx resolves FQDN in upstream blocks only during configuration parsing .. Please suggest on this. Thanks Shobhit Mishra Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250307,250311#msg-250311 From vbart at nginx.com Fri May 23 15:03:58 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 23 May 2014 19:03:58 +0400 Subject: Using Domain Names in proxy_pass directive In-Reply-To: <0261beb4a0aa78f63e056d60b87eb5ab.NginxMailingListEnglish@forum.nginx.org> References: <20140523122010.GJ53922@lo0.su> <0261beb4a0aa78f63e056d60b87eb5ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1415075.DTQo8kr6Sl@vbart-workstation> On Friday 23 May 2014 08:41:09 Shobhit Mishra wrote: [..] > Does nginx honor TTL for the FQDN in the upstream block as well ?? > > As per my understanding nginx resolves FQDN in upstream blocks only during > configuration parsing .. > > Please suggest on this. Look at the "resolve" parameter: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri May 23 16:05:29 2014 From: nginx-forum at nginx.us (slowredbike) Date: Fri, 23 May 2014 12:05:29 -0400 Subject: Ubuntu+Nginx packet loss / dropped upstream connections In-Reply-To: <20140523085256.GB1849@mdounin.ru> References: <20140523085256.GB1849@mdounin.ru> Message-ID: <9b3d2e2f780759dbf95c61cc6a680fcc.NginxMailingListEnglish@forum.nginx.org> Would you recommend any extended/additional debugging that I should enable to help us track this down? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244241,250323#msg-250323 From nginx-forum at nginx.us Fri May 23 16:22:08 2014 From: nginx-forum at nginx.us (sfrazer) Date: Fri, 23 May 2014 12:22:08 -0400 Subject: try_files / if / access_log question Message-ID: I'm trying to create a config that doesn't log the requests from specific user agents. The site has a gunicorn backend that we proxy to, and I'm trying to set up try_files to test for the existence of static local files before proxying to the back-end. The try_files config is the new part, everything was working fine before I added that. 
Here's the nginx.conf I'm using for testing: user nginx; worker_processes 8; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { upstream backend { server 127.0.0.1:8004; } include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; map $http_user_agent $ignore_ua { default 0; "test-agent" 1; } server { listen 80; proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header Accept-Encoding ""; location @gunicorn { proxy_pass http://backend$uri$args; add_header X-Cached $upstream_cache_status; } location / { # if ($ignore_ua) { access_log off; } try_files $uri/index.cchtml @gunicorn; } } } With the "if" statement commented out, the requests work as I expect them: curl -s -D- -A test-agent http://site.ordprofile01.example.net/uri/ | head -n 20 HTTP/1.1 200 OK Server: nginx/1.6.0 Date: Fri, 23 May 2014 16:11:38 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Vary: Cookie /account/container/filename.txt, so the new requst url could be http://swift-proxy/account/container/filename.txt, plus the auth-token. 6. swift proxy sever response the content to nginx, then nginx cache the content and pass the response to the client. Could the above requirement be accomplished by some specific configuration plus some existing nginx modules? Thanks, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 29 12:25:21 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 May 2014 16:25:21 +0400 Subject: Does nginx support openstack swift API? In-Reply-To: References: Message-ID: <20140529122521.GJ1849@mdounin.ru> Hello! On Thu, May 29, 2014 at 07:04:46PM +0800, Andy wrote: > Hello guys, > > I'm trying to find a way to use OpenStack SWIFT with nginx, the below are > request steps: > > 1. nginx is configured as proxy cache > 2. client send a request to nginx for url: http://domain.com/filename.txt > 3. nginx received the request and it is a cache miss, it need to fetch the > content from SWIFT proxy server > 4. nginx send a request to swift proxy server for authentication, the url > looks like http://swift-proxy/auth-account, account information is set in > header, the response from swift proxy server contains a auth-token for that > account if authentication success. > 5. then nginx use this auth-token and put it in a new request header, and > send the new request to the swift proxy server for the original request > content, there could be a map between client request url to the swift proxy > url, for example, /filename.txt --> /account/container/filename.txt, so the > new requst url could be http://swift-proxy/account/container/filename.txt, > plus the auth-token. > 6. swift proxy sever response the content to nginx, then nginx cache the > content and pass the response to the client. > > Could the above requirement be accomplished by some specific configuration > plus some existing nginx modules? Looks like something more or less possible with auth_request, see http://nginx.org/en/docs/http/ngx_http_auth_request_module.html. 
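A very rough, untested sketch of how that flow could be wired together (the upstream address, the /_swift_auth location name, the tempauth-style headers and the account/container mapping below are all assumptions for illustration, not a drop-in config):

    upstream swift_proxy {
        server 192.0.2.10:8080;                      # assumed Swift proxy address
    }

    location = /_swift_auth {
        internal;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Auth-User "account:user";  # placeholder credentials
        proxy_set_header X-Auth-Key  "secret";
        proxy_pass http://swift_proxy/auth/v1.0;      # returns X-Auth-Token on success
    }

    location / {
        auth_request     /_swift_auth;
        # pick up the token returned by the auth subrequest
        auth_request_set $swift_token $upstream_http_x_auth_token;

        proxy_set_header X-Auth-Token $swift_token;
        # map /filename.txt to the storage URL, e.g. /v1/AUTH_account/container/filename.txt
        proxy_pass       http://swift_proxy/v1/AUTH_account/container$uri;

        proxy_cache      swift_cache;                # zone assumed to be defined via proxy_cache_path
    }

Note that this sketch authenticates against Swift on every request; caching or reusing the token is left out for brevity.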
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu May 29 15:18:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 May 2014 19:18:56 +0400 Subject: Header Vary: Accept-Encoding - security risk ? In-Reply-To: <1598516cd9aeb86b92a500060dced305.NginxMailingListEnglish@forum.nginx.org> References: <1598516cd9aeb86b92a500060dced305.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140529151856.GM1849@mdounin.ru> Hello! On Wed, May 28, 2014 at 05:20:54PM -0400, chili_confits wrote: > Dear list, > > I have enabled gzip with > ... > gzip on; > gzip_http_version 1.0; > gzip_vary on; > ... > to satisfy incoming HTTP 1.0 requests. > > In a very similiar setup which got OWASP-evaluated, I read this - marked as > a defect: > "The web server sent a Vary header, which indicates that server-driven > negotiation was done to determine which content should be delivered. This > may indicate that different content is available based on the headers in the > HTTP request." > IMHO this is a false positive ... > > This is what I send: > HTTP/1.1 200 OK > Server: nginx > Date: Tue, 27 May 2014 17:55:23 GMT > Content-Type: text/html; charset=utf-8 > Connection: keep-alive > Vary: Accept-Encoding > X-Content-Type-Options: nosniff > Content-Length: ... > ... > > What do you think ? The Vary header indeed indicates server-driven negotiation, this is what gzip filter does - it returns different content (either gzipped or not) depending on whether a client supports gzip or not. The actual question is "Why it is marked as a defect?", but it's unlikely to be answered here - you'd better ask the person who marked it. -- Maxim Dounin http://nginx.org/ From Iry.Witham at jax.org Thu May 29 15:20:38 2014 From: Iry.Witham at jax.org (Iry Witham) Date: Thu, 29 May 2014 15:20:38 +0000 Subject: Issues with my galaxy server Message-ID: Hi Team, I am writing to gather some insight on how I may need to change my configuration on Nginx. I have been running a galaxy server for 3+ years now and have begun experiencing issues with accessing several of the largest libraries and get the following error: "The page you are looking for is temporarily unavailable. Please try again later." I have read that this could possibly be a configuration issue with Nginx. I am not confident that this is true, but can't leave any resource untouched. Has anyone reported this issue previously? Can you provide me with insight? Additionally, how can I confirm what version of Nginx I am using? I have it running on a linux server. Thanks, __________________________________ Iry T. Witham Scientific Applications Administrator Scientific Computing Group Computational Sciences Dept. The Jackson Laboratory 600 Main Street Bar Harbor, ME 04609 Phone: 207-288-6744 email: iry.witham at jax.org [cid:B396D034-E6BC-42C0-A50E-0799B7C803A0] The information in this email, including attachments, may be confidential and is intended solely for the addressee(s). If you believe you received this email by mistake, please notify the sender by return email as soon as possible. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 372D007A-1B00-4668-BA6B-F0527C1F24BE[34][3].png Type: image/png Size: 8409 bytes Desc: 372D007A-1B00-4668-BA6B-F0527C1F24BE[34][3].png URL: From mdounin at mdounin.ru Thu May 29 15:35:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 May 2014 19:35:28 +0400 Subject: Issues with my galaxy server In-Reply-To: References: Message-ID: <20140529153528.GO1849@mdounin.ru> Hello! On Thu, May 29, 2014 at 03:20:38PM +0000, Iry Witham wrote: > Hi Team, > > I am writing to gather some insight on how I may need to change > my configuration on Nginx. I have been running a galaxy server > for 3+ years now and have begun experiencing issues with > accessing several of the largest libraries and get the following > error: "The page you are looking for is temporarily unavailable. > Please try again later." I have read that this could possibly > be a configuration issue with Nginx. I am not confident that > this is true, but can't leave any resource untouched. Has > anyone reported this issue previously? Can you provide me with > insight? Additionally, how can I confirm what version of Nginx > I am using? I have it running on a linux server. Try looking into logs. -- Maxim Dounin http://nginx.org/ From r at roze.lv Thu May 29 15:36:02 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 29 May 2014 18:36:02 +0300 Subject: Issues with my galaxy server In-Reply-To: References: Message-ID: <105DB05600B44B5392FAE0FA7E0957C3@MasterPC> > "The page you are looking for is temporarily unavailable. Please try > again later." > Can you provide me with insight? The first thing would be to check what the actual HTTP error is (is it some 5xx or 4xx etc) - some browsers (like IE also Chrome) tend to display a "userfriendly" page without actually telling what the error is so it's hard to help from technical aspect. The second thing you should check are the error logs. Usually there should be a reasonable (humanreadable) answer why a particular request has failed. > Additionally, how can I confirm what version of Nginx I am using? I have > it running on a linux server. /path/to/nginx ?v or -V to see what compile time options were added rr From wmark+nginx at hurrikane.de Thu May 29 15:48:21 2014 From: wmark+nginx at hurrikane.de (W-Mark Kubacki) Date: Thu, 29 May 2014 17:48:21 +0200 Subject: Header Vary: Accept-Encoding - security risk ? In-Reply-To: <1598516cd9aeb86b92a500060dced305.NginxMailingListEnglish@forum.nginx.org> References: <1598516cd9aeb86b92a500060dced305.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2014-05-28 23:20 GMT+02:00 chili_confits : > I have enabled gzip with > ... > gzip on; > gzip_http_version 1.0; > gzip_vary on; > ... > to satisfy incoming HTTP 1.0 requests. > > In a very similiar setup which got OWASP-evaluated, I read this - marked as > a defect: > "The web server sent a Vary header, which indicates that server-driven > negotiation was done to determine which content should be delivered. This > may indicate that different content is available based on the headers in the > HTTP request." > IMHO this is a false positive ... Do not suppress header ?Vary? or you will run into problems with proxies, which would otherwise always serve the file gzip-ped regardless of a requester indicating support or lack thereof. Nginx does no content negotiation to the extend which would reveal that ?/config.inc? exists if ?/config? were requested with the intend to get ?/config.css?. As you can see, even this example is far-fetched. 
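The negotiation the report refers to is easy to see from the command line (example.com stands in for the real host here, assuming the gzip/gzip_vary settings quoted above):

    curl -s -D- -o /dev/null http://example.com/
    curl -s -D- -o /dev/null -H 'Accept-Encoding: gzip' http://example.com/

Both responses should carry "Vary: Accept-Encoding", and only the second one comes back with "Content-Encoding: gzip". That is the full extent of the "server-driven negotiation" being flagged.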
-- Mark From Iry.Witham at jax.org Thu May 29 16:00:39 2014 From: Iry.Witham at jax.org (Iry Witham) Date: Thu, 29 May 2014 16:00:39 +0000 Subject: Issues with my galaxy server In-Reply-To: <105DB05600B44B5392FAE0FA7E0957C3@MasterPC> References: <105DB05600B44B5392FAE0FA7E0957C3@MasterPC> Message-ID: I have checked the error.log and the following is what I am seeing: 2014/05/28 15:18:39 [info] 16146#0: *987 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: 10.40.42.12, server: localhost, request: "GET /library_common/browse_library?sort=name&operation=browse&f-description=All &f-name=All&f-deleted=False&cntrller=library_admin&async=false&show_item_ch eckboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page=1 HTTP/1.1", upstream: "http://127.0.0.1:8081/library_common/browse_library?sort=name&operation=br owse&f-description=All&f-name=All&f-deleted=False&cntrller=library_admin&as ync=false&show_item_checkboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page =1", host: "galaxy.jax.org", referrer: "http://galaxy.jax.org/library_admin/browse_libraries" 2014/05/28 15:18:45 [info] 16146#0: *1019 client closed prematurely connection while reading client request line, client: 10.40.42.12, server: localhost 2014/05/28 15:27:03 [error] 16146#0: *1016 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.40.42.12, server: localhost, request: "GET /library_common/browse_library?sort=name&operation=browse&f-description=All &f-name=All&f-deleted=False&cntrller=library_admin&async=false&show_item_ch eckboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page=1 HTTP/1.1", upstream: "http://127.0.0.1:8080/library_common/browse_library?sort=name&operation=br owse&f-description=All&f-name=All&f-deleted=False&cntrller=library_admin&as ync=false&show_item_checkboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page =1", host: "galaxy.jax.org", referrer: "http://galaxy.jax.org/library_admin/browse_libraries" 2014/05/28 16:11:52 [error] 16146#0: *1604 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.20.23.189, server: localhost, request: "GET /library_common/browse_library?sort=name&operation=browse&f-description=All &f-name=All&f-deleted=False&cntrller=library_admin&async=false&show_item_ch eckboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page=1 HTTP/1.1", upstream: "http://127.0.0.1:8081/library_common/browse_library?sort=name&operation=br owse&f-description=All&f-name=All&f-deleted=False&cntrller=library_admin&as ync=false&show_item_checkboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page =1", host: "galaxy", referrer: "http://galaxy/library_admin/browse_libraries" 2014/05/28 16:12:12 [info] 16146#0: *1604 client 10.20.23.189 closed keepalive connection 2014/05/28 16:18:20 [info] 16146#0: *1791 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: 10.1.101.207, server: localhost, request: "GET /library_common/browse_library?sort=name&webapp=galaxy&f-description=All&f- name=All&page=1&cntrller=library&show_item_checkboxes=false&async=false&ope ration=browse&id=e49526e74f52ec39 HTTP/1.1", upstream: "http://127.0.0.1:8081/library_common/browse_library?sort=name&webapp=galax y&f-description=All&f-name=All&page=1&cntrller=library&show_item_checkboxes =false&async=false&operation=browse&id=e49526e74f52ec39", host: "galaxy.jax.org", referrer: "http://galaxy.jax.org/library/browse_libraries" 2014/05/28 
16:22:52 [info] 16146#0: *1869 client 10.1.101.207 closed keepalive connection 2014/05/28 16:23:52 [info] 16146#0: *1888 client 10.1.101.207 closed keepalive connection 2014/05/28 16:24:16 [info] 16146#0: *1901 client 10.1.101.207 closed keepalive connection 2014/05/28 16:26:37 [info] 16146#0: *1950 client closed prematurely connection while reading client request line, client: 10.1.101.207, server: localhost 2014/05/28 16:32:39 [info] 16146#0: *2047 client 209.222.197.239 closed keepalive connection 2014/05/28 22:29:20 [info] 16146#0: *2082 client closed prematurely connection while reading client request line, client: 192.168.253.157, server: localhost 2014/05/28 22:29:20 [info] 16146#0: *2081 client closed prematurely connection while reading client request line, client: 192.168.253.157, server: localhost 2014/05/28 22:29:21 [info] 16146#0: *2083 client closed prematurely connection while reading client request line, client: 192.168.253.157, server: localhost 2014/05/28 22:29:21 [info] 16146#0: *2084 client closed prematurely connection while reading client request line, client: 192.168.253.157, server: localhost 2014/05/28 22:29:21 [info] 16146#0: *2085 client closed prematurely connection while reading client request line, client: 192.168.253.157, server: localhost 2014/05/29 08:53:40 [info] 16146#0: *2087 client closed prematurely connection while reading client request line, client: 10.20.23.189, server: localhost 2014/05/29 08:53:40 [info] 16146#0: *2088 client closed prematurely connection while reading client request line, client: 10.20.23.189, server: localhost 2014/05/29 08:53:40 [info] 16146#0: *2089 client closed prematurely connection while reading client request line, client: 10.20.23.189, server: localhost 2014/05/29 08:53:40 [info] 16146#0: *2090 client closed prematurely connection while reading client request line, client: 10.20.23.189, server: localhost 2014/05/29 08:53:40 [info] 16146#0: *2091 client closed prematurely connection while reading client request line, client: 10.20.23.189, server: localhost 2014/05/29 09:04:22 [error] 16146#0: *2099 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.20.23.189, server: localhost, request: "GET /library_common/browse_library?sort=name&operation=browse&f-description=All &f-name=All&f-deleted=False&cntrller=library_admin&async=false&show_item_ch eckboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page=1 HTTP/1.1", upstream: "http://127.0.0.1:8082/library_common/browse_library?sort=name&operation=br owse&f-description=All&f-name=All&f-deleted=False&cntrller=library_admin&as ync=false&show_item_checkboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page =1", host: "galaxy.jax.org", referrer: "http://galaxy.jax.org/library_admin/browse_libraries" 2014/05/29 09:20:30 [error] 16146#0: *2110 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.20.23.189, server: localhost, request: "GET /library_common/browse_library?sort=name&operation=browse&f-description=All &f-name=All&f-deleted=False&cntrller=library_admin&async=false&show_item_ch eckboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page=1 HTTP/1.1", upstream: "http://127.0.0.1:8082/library_common/browse_library?sort=name&operation=br owse&f-description=All&f-name=All&f-deleted=False&cntrller=library_admin&as ync=false&show_item_checkboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page =1", host: "galaxy", referrer: "http://galaxy/library_admin/browse_libraries" I am currently running the 
following version of Nginx: nginx version: nginx/0.8.53 built by gcc 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) configure arguments: --prefix=/usr/local/nginx-0.8.53 --add-module=/usr/local/src/nginx_upload_module-2.2.0 I am planning an upgrade in the near future, but really need to get this issue resolved. Thanks, Iry On 5/29/14 11:36 AM, "Reinis Rozitis" wrote: >> "The page you are looking for is temporarily unavailable. Please try >> again later." >> Can you provide me with insight? > >The first thing would be to check what the actual HTTP error is (is it >some >5xx or 4xx etc) - some browsers (like IE also Chrome) tend to display a >"userfriendly" page without actually telling what the error is so it's >hard >to help from technical aspect. > >The second thing you should check are the error logs. >Usually there should be a reasonable (humanreadable) answer why a >particular >request has failed. > > > >> Additionally, how can I confirm what version of Nginx I am using? I >>have >> it running on a linux server. > >/path/to/nginx ?v > >or -V to see what compile time options were added > > >rr > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx The information in this email, including attachments, may be confidential and is intended solely for the addressee(s). If you believe you received this email by mistake, please notify the sender by return email as soon as possible. From contact at jpluscplusm.com Thu May 29 16:05:37 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 29 May 2014 17:05:37 +0100 Subject: Issues with my galaxy server In-Reply-To: References: <105DB05600B44B5392FAE0FA7E0957C3@MasterPC> Message-ID: Someone, perhaps you, changed something on your backend (your "galaxy" server, which means absolutely nothing to anyone on this list, do be aware), and they fucked it. Or "introduced an intermittent performance issue which is resulting in a proportion of your proxied requests to time out", if you prefer. Personally, I don't. From r at roze.lv Thu May 29 16:16:51 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 29 May 2014 19:16:51 +0300 Subject: Issues with my galaxy server In-Reply-To: References: <105DB05600B44B5392FAE0FA7E0957C3@MasterPC> Message-ID: > 2014/05/28 16:11:52 [error] 16146#0: *1604 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.20.23.189, server: localhost, request: "GET /library_common/browse_library?sort=name&operation=browse&f-description=All &f-name=All&f-deleted=False&cntrller=library_admin&async=false&show_item_ch eckboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page=1 HTTP/1.1", In general the backend (galaxy) server can't return the response in 60 seconds (default timeout). You can either look at the backend (prefferably) or a quick workarround (if acceptable) would be to increase the proxy_read_timeout ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout ) - eg look into nginx.conf and check if any timeout directives are present and change or add them accordingly. rr From Iry.Witham at jax.org Thu May 29 17:01:52 2014 From: Iry.Witham at jax.org (Iry Witham) Date: Thu, 29 May 2014 17:01:52 +0000 Subject: Issues with my galaxy server In-Reply-To: References: <105DB05600B44B5392FAE0FA7E0957C3@MasterPC> Message-ID: I modified the proxy_read_timeout and that has resolved the issue. Hopefully that will suffice until I upgrade. 
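(For anyone finding this thread later: the change is a one-line addition next to the existing proxy_pass for the galaxy backend, along these lines; the 300s value below is only an example, not the exact figure used here.)

    location / {
        proxy_pass         http://127.0.0.1:8080;   # one of the galaxy backends from the logs
        proxy_read_timeout 300s;
    }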
Thanks, Iry On 5/29/14 12:16 PM, "Reinis Rozitis" wrote: >> 2014/05/28 16:11:52 [error] 16146#0: *1604 upstream timed out (110: >Connection timed out) while reading response header from upstream, client: >10.20.23.189, server: localhost, request: "GET >/library_common/browse_library?sort=name&operation=browse&f-description=Al >l >&f-name=All&f-deleted=False&cntrller=library_admin&async=false&show_item_c >h >eckboxes=false&webapp=galaxy&id=ddd269e1ed2f8b5e&page=1 HTTP/1.1", > > > >In general the backend (galaxy) server can't return the response in 60 >seconds (default timeout). > >You can either look at the backend (prefferably) or a quick workarround >(if >acceptable) would be to increase the proxy_read_timeout ( >http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeou >t >) - eg look into nginx.conf and check if any timeout directives are >present >and change or add them accordingly. > >rr > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx The information in this email, including attachments, may be confidential and is intended solely for the addressee(s). If you believe you received this email by mistake, please notify the sender by return email as soon as possible. From nginx-forum at nginx.us Thu May 29 18:16:31 2014 From: nginx-forum at nginx.us (tulio84z) Date: Thu, 29 May 2014 14:16:31 -0400 Subject: Nginx 1.6.0 + spdy performance testing Message-ID: I need to evaluate if using spdy will be good for my website but i'm having trouble setting everyting up. While using the chrome benchmark extension (1) an error message stating that it was not possible to use spdy apeared. This made me believe that there might be something wrong with my configuration. However, except for two warnings, spdycheck.org tells me that spdy is enabled. In (2) you can see what what those warnings are. I have two questions: 1. is there anything wrong with my configuration? 2. is there any other tool that can be used to run performance tests? (please only suggest tools that u have tested yourself). Ps.: chrome://net-internals/#spdy also shows that spdy 3.1 is enabled. I have posted my configuration file in (2) but for convenience here it is again: server { listen 80; listen 443 ssl spdy; server_name 54.201.32.118; ssl_certificate /etc/nginx/ssl/tulio.crt; ssl_certificate_key /etc/nginx/ssl/tulio.key; if ($ssl_protocol = "") { rewrite ^ https://$server_name$request_uri? permanent; } root /usr/share/nginx/html; index index.html index.htm; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ =404; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } } (1) https://sites.google.com/a/chromium.org/dev/developers/design-documents/extensions/how-the-extension-system-works/chrome-benchmarking-extension (2) http://serverfault.com/questions/599491/enabling-spdy-in-nginx-fails-spdycheck-org/599501#599501 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250485,250485#msg-250485 From reallfqq-nginx at yahoo.fr Thu May 29 18:28:17 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 29 May 2014 20:28:17 +0200 Subject: Header Vary: Accept-Encoding - security risk ? In-Reply-To: References: <1598516cd9aeb86b92a500060dced305.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Thu, May 29, 2014 at 5:48 PM, W-Mark Kubacki wrote: > Do not suppress header ?Vary? 
or you will run into problems with > proxies, which would otherwise always serve the file gzip-ped > regardless of a requester indicating support or lack thereof. > ?Do not worry. Reading Maxim's answer, the only thing questioned here is the 'defect' report? ?itself... ;o)? ? ? --- *B. R.*? -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulnpace at gmail.com Thu May 29 20:31:29 2014 From: paulnpace at gmail.com (Paul N. Pace) Date: Thu, 29 May 2014 13:31:29 -0700 Subject: Just looking for guide to understand query strings Message-ID: My logs have been inundated with hits at example.com/?anything, though in the actual logs 'anything' is a very long string of characters. Log entry: "GET /?anything HTTP/1.1" 200 581 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko" (note there is no location for 'anything') I didn't even know this was possible. I'm still not sure what nginx is doing when it processes this request. If someone could help me out, even just point me to a good explanation of what is happening, that would be great. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Thu May 29 21:11:11 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 30 May 2014 09:11:11 +1200 Subject: Nginx 1.6.0 + spdy performance testing In-Reply-To: References: Message-ID: <1401397871.8888.262.camel@steve-new> I add an extra header, just for those who don't understand spdy... add_header Alternate-Protocol "443:npn-spdy/3.1"; and specifically set up ssl protocols/ciphers and ocsp stapling but I'd suggest that it's a limitation of the benchmark that you're hitting. Using something like webpagetest.org does show pretty waterfalls of the differences that delivering sites over spdy makes. Sorry, pretty useless post really (: Steve On Thu, 2014-05-29 at 14:16 -0400, tulio84z wrote: > I need to evaluate if using spdy will be good for my website but i'm having > trouble setting everyting up. While using the chrome benchmark extension (1) > an error message stating that it was not possible to use spdy apeared. This > made me believe that there might be something wrong with my configuration. > However, except for two warnings, spdycheck.org tells me that spdy is > enabled. In (2) you can see what what those warnings are. > > I have two questions: > > 1. is there anything wrong with my configuration? > 2. is there any other tool that can be used to run performance tests? > (please only suggest tools that u have tested yourself). > > Ps.: chrome://net-internals/#spdy also shows that spdy 3.1 is enabled. > > I have posted my configuration file in (2) but for convenience here it is > again: > > server { > listen 80; > listen 443 ssl spdy; > > server_name 54.201.32.118; > > ssl_certificate /etc/nginx/ssl/tulio.crt; > ssl_certificate_key /etc/nginx/ssl/tulio.key; > > if ($ssl_protocol = "") { > rewrite ^ https://$server_name$request_uri? permanent; > } > > root /usr/share/nginx/html; > index index.html index.htm; > > location / { > # First attempt to serve request as file, then > # as directory, then fall back to displaying a 404. 
> try_files $uri $uri/ =404; > # Uncomment to enable naxsi on this location > # include /etc/nginx/naxsi.rules > } > } > > > (1) > https://sites.google.com/a/chromium.org/dev/developers/design-documents/extensions/how-the-extension-system-works/chrome-benchmarking-extension > > (2) > http://serverfault.com/questions/599491/enabling-spdy-in-nginx-fails-spdycheck-org/599501#599501 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250485,250485#msg-250485 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From reallfqq-nginx at yahoo.fr Thu May 29 23:00:55 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 30 May 2014 01:00:55 +0200 Subject: Just looking for guide to understand query strings In-Reply-To: References: Message-ID: The question mark separates the locations with the arguments, thus the location itself is merely '/'. If you do not have a location set explicitely for '/', you probably have a default location block ('location /') which will serve all unmatched locations, thus resulting in 200. Maybe the intent of this spam is to try to trigger vulnerabilities or default credentials on the index page in backend applications (ie CMS). This is pure speculation. If the spam really takes resources or annoy you very much, you might be willing to either: - filter out those request (blacklist approach), being careful that those could not be legitimate (as you would reduce availability, which is against very basic principles of security) - only accept requests with specific format (white-list approach), being careful that it might be a maintenance nightmare each and everytime you wanna make new format of requests - investigate the source of this spam and see if it might not be possible to filter them out at a lower level (such as a firewall) - introduce requests rate limiting to still allow every request but lower their frequency and thus saving resources by sending back a built-in HTTP error code rather than content when clients exceed rate limits Those are just wild ideas coming in a snap. Pick your choice or think about better ones... ;o) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From es12b1001 at iith.ac.in Fri May 30 06:59:12 2014 From: es12b1001 at iith.ac.in (Adarsh Pugalia) Date: Fri, 30 May 2014 12:29:12 +0530 Subject: Server context Vs Location context Message-ID: Is there any way to check in a command's set function if the command was invoked i.e. the command was called from server context or location context. For example, if I write a command that can be used both in server context and location context, is there a way to differentiate from where it was called in my function that sets up the configuration? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hewanxiang at gmail.com Fri May 30 07:20:25 2014 From: hewanxiang at gmail.com (Andy) Date: Fri, 30 May 2014 15:20:25 +0800 Subject: Does nginx support openstack swift API? In-Reply-To: <20140529122521.GJ1849@mdounin.ru> References: <20140529122521.GJ1849@mdounin.ru> Message-ID: On Thu, May 29, 2014 at 8:25 PM, Maxim Dounin wrote: > Hello! 
> > On Thu, May 29, 2014 at 07:04:46PM +0800, Andy wrote: > > > Hello guys, > > > > I'm trying to find a way to use OpenStack SWIFT with nginx, the below are > > request steps: > > > > 1. nginx is configured as proxy cache > > 2. client send a request to nginx for url: > http://domain.com/filename.txt > > 3. nginx received the request and it is a cache miss, it need to fetch > the > > content from SWIFT proxy server > > 4. nginx send a request to swift proxy server for authentication, the url > > looks like http://swift-proxy/auth-account, account information is set > in > > header, the response from swift proxy server contains a auth-token for > that > > account if authentication success. > > 5. then nginx use this auth-token and put it in a new request header, and > > send the new request to the swift proxy server for the original request > > content, there could be a map between client request url to the swift > proxy > > url, for example, /filename.txt --> /account/container/filename.txt, so > the > > new requst url could be > http://swift-proxy/account/container/filename.txt, > > plus the auth-token. > > 6. swift proxy sever response the content to nginx, then nginx cache the > > content and pass the response to the client. > > > > Could the above requirement be accomplished by some specific > configuration > > plus some existing nginx modules? > > Looks like something more or less possible with auth_request, see > http://nginx.org/en/docs/http/ngx_http_auth_request_module.html. > Thanks, it works after some configuration changes. > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bwellsnc at gmail.com Fri May 30 15:55:32 2014 From: bwellsnc at gmail.com (bwellsnc) Date: Fri, 30 May 2014 11:55:32 -0400 Subject: Nginx Rewrite for Proxy Pass Message-ID: Hello everyone, I have an interesting issue. I am using nginx 1.6.0 to proxy back to my Jira instance. This is working great within my network. This is the issue. I am using a fortigate device to protect my network and I want to use the https connection in the web portal to access my Jira instance. The problem is that jira always expects a https://jira.internal.example.com, because that is what is set in it's base url. The fortigate sends this to nginx: https://vpn.example.com/proxy/https/jira.internal.example.com/secure/Dashboard.jspa Jira see's this as an incorrect path and causing it to not properly. I am wanting to know is there a way to rewrite the path that the fortigate is sending to nginx so that jira believe's its correct. Thanks! Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From chencw1982 at gmail.com Fri May 30 16:08:26 2014 From: chencw1982 at gmail.com (Chuanwen Chen) Date: Sat, 31 May 2014 00:08:26 +0800 Subject: [ANNOUNCE] Tengine-2.0.2 is released In-Reply-To: References: Message-ID: Hi folks, Tengine-2.0.3 (development version) has been released. You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-2.0.3.tar.gz We introduced ngx_http_reqstat_module to provides access to running status information for each domain, url, etc. 
The full changelog is as follows: *) Feature: added support for collecting the running status of Tengine according to specific key (domain, url, etc). (cfsego) *) Feature: added support for generating package of debian/ubuntu format(*.deb). (betetrpm, szepeviktmr) *) Change: merged changes between nginx-1.4.6 and nginx-1.4.7. (chobits) *) Change: optimized the parsing and searching strategy of upstream by using rbtree. (SarahWang) *) Change: updated the copyright. *) Bugfix: fixed bugs of session-sticky module. (dinic) *) Bugfix: fixed compiling and installing issues of DSO modules. (cfsego) *) Bugfix: fixed bugs of SPDY protocol. (chobits) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From chencw1982 at gmail.com Fri May 30 16:09:38 2014 From: chencw1982 at gmail.com (Chuanwen Chen) Date: Sat, 31 May 2014 00:09:38 +0800 Subject: [ANNOUNCE] Tengine-2.0.3 is released Message-ID: Hi folks, Tengine-2.0.3 (development version) has been released. You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-2.0.3.tar.gz We introduced ngx_http_reqstat_module to provides access to running status information for each domain, url, etc. The full changelog is as follows: *) Feature: added support for collecting the running status of Tengine according to specific key (domain, url, etc). (cfsego) *) Feature: added support for generating package of debian/ubuntu format(*.deb). (betetrpm, szepeviktmr) *) Change: merged changes between nginx-1.4.6 and nginx-1.4.7. (chobits) *) Change: optimized the parsing and searching strategy of upstream by using rbtree. (SarahWang) *) Change: updated the copyright. *) Bugfix: fixed bugs of session-sticky module. (dinic) *) Bugfix: fixed compiling and installing issues of DSO modules. (cfsego) *) Bugfix: fixed bugs of SPDY protocol. (chobits) -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Fri May 30 16:53:01 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 30 May 2014 17:53:01 +0100 Subject: Nginx Rewrite for Proxy Pass In-Reply-To: References: Message-ID: On 30 May 2014 16:55, bwellsnc wrote: > Hello everyone, I have an interesting issue. I am using nginx 1.6.0 to > proxy back to my Jira instance. This is working great within my network. > This is the issue. I am using a fortigate device to protect my network and > I want to use the https connection in the web portal to access my Jira > instance. The problem is that jira always expects a > https://jira.internal.example.com, because that is what is set in it's base > url. The fortigate sends this to nginx: > > https://vpn.example.com/proxy/https/jira.internal.example.com/secure/Dashboard.jspa I think someone configured your firewall wrongly. That's not just a device's specific quirk - if someone sold a device that did that *and*only*that* then they'd go out of business. Fix the misconfiguration. > Jira see's this as an incorrect path and causing it to not properly. I am > wanting to know is there a way to rewrite the path that the fortigate is > sending to nginx so that jira believe's its correct. Thanks! 
Not tested, but should push you in the right direction: resolver some.internal.resolver.ip; rewrite ^/proxy/https/jira.internal.example.com/(.*)$ /$1; proxy_set_header Host jira.internal.example.com; proxy_pass https://jira.internal.example.com; But don't do that. Fix your wrongly configured other device. J From francis at daoine.org Fri May 30 16:59:55 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 30 May 2014 17:59:55 +0100 Subject: Nginx Rewrite for Proxy Pass In-Reply-To: References: Message-ID: <20140530165955.GW16942@daoine.org> On Fri, May 30, 2014 at 11:55:32AM -0400, bwellsnc wrote: Hi there, > This is the issue. I am using a fortigate device to protect my network and > I want to use the https connection in the web portal to access my Jira > instance. The problem is that jira always expects a > https://jira.internal.example.com, because that is what is set in it's base > url. The fortigate sends this to nginx: > > https://vpn.example.com/proxy/https/jira.internal.example.com/secure/Dashboard.jspa According to the proxy_pass documentation, location ^~ /proxy/https/jira.internal.example.com/ { proxy_pass https://[jira hostname or ip]/; proxy_set_header Host jira.internal.example.com; } should do what you seem to be asking for. (Replace [this bit].) But I'm not aware of any proxy/firewall devices that do what you say your fortigate is doing. So it may not end up doing what you actually want. f -- Francis Daly francis at daoine.org From bwellsnc at gmail.com Fri May 30 18:37:50 2014 From: bwellsnc at gmail.com (bwellsnc) Date: Fri, 30 May 2014 14:37:50 -0400 Subject: Nginx Rewrite for Proxy Pass In-Reply-To: <20140530165955.GW16942@daoine.org> References: <20140530165955.GW16942@daoine.org> Message-ID: J, It's not a misconfigured device, that is how fortinet does it's webportal vpn access to internal websites. Talked to fortinet, that's how it works. Francis, I will look at that. Thanks for your help! On Fri, May 30, 2014 at 12:59 PM, Francis Daly wrote: > On Fri, May 30, 2014 at 11:55:32AM -0400, bwellsnc wrote: > > Hi there, > > > This is the issue. I am using a fortigate device to protect my network > and > > I want to use the https connection in the web portal to access my Jira > > instance. The problem is that jira always expects a > > https://jira.internal.example.com, because that is what is set in it's > base > > url. The fortigate sends this to nginx: > > > > > https://vpn.example.com/proxy/https/jira.internal.example.com/secure/Dashboard.jspa > > According to the proxy_pass documentation, > > location ^~ /proxy/https/jira.internal.example.com/ { > proxy_pass https://[jira hostname or ip]/; > proxy_set_header Host jira.internal.example.com; > } > > should do what you seem to be asking for. (Replace [this bit].) > > But I'm not aware of any proxy/firewall devices that do what you say your > fortigate is doing. So it may not end up doing what you actually want. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri May 30 23:18:08 2014 From: nginx-forum at nginx.us (SupaIrish) Date: Fri, 30 May 2014 19:18:08 -0400 Subject: Whitelisting Req/Conn Limiting Message-ID: Hello! I'm want to limit req/connections but have certain requests skip or whitelisted from the throttling. 
I've found some prior threads that got me this, which I think is working. Here's just the relevant config. Is this the best/correct way to do this? And if so I don't really understand the 1 "" part of the map block. Can someone explain that? The map docs (http://nginx.org/en/docs/http/ngx_http_map_module.html#map) aren't helping me figure it out. Thanks! http { map $whitelist $limit { default $binary_remote_addr; 1 ""; } limit_conn_zone $limit zone=conn_limit_per_ip:10m; limit_req_zone $limit zone=req_limit_per_ip:10m rate=5r/s; server { set $whitelist ""; if ( $hostname = some_url.com ) { set $whitelist 1; } limit_conn conn_limit_per_ip 10; limit_req zone=req_limit_per_ip burst=30 nodelay; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250510,250510#msg-250510 From nginx-forum at nginx.us Fri May 30 23:18:52 2014 From: nginx-forum at nginx.us (SupaIrish) Date: Fri, 30 May 2014 19:18:52 -0400 Subject: Whitelisting Req/Conn Limiting In-Reply-To: References: Message-ID: Gist for easier reading https://gist.github.com/supairish/748c85552b2f7047a36a Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250510,250511#msg-250511 From reallfqq-nginx at yahoo.fr Sat May 31 02:17:31 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 31 May 2014 04:17:31 +0200 Subject: Whitelisting Req/Conn Limiting In-Reply-To: References: Message-ID: 1?) To me, the map docs are fairly clear... In short, the map directive works as follow: With: map $foo $bar { "test1" "value1" ... } Whenever the value in $foo matches a value of the first column, $bar will be set to the value of the second column, ie: if $foo = "test1", then $bar <- "value1" If $foo matches nothing, then either: - there is a special value *default* in the first column, thus $bar will beset to the corresponding value - there is no *default* value, $bar will be set to an empty string 2?) Now for the best way to write that, here are my thoughts: Considering that: - all servers share 'limit_conn_zone' and 'limit_req_zone usage', except for some hostname - the 'if' directive must be avoided as much as possible - 'limit_conn_zone' and 'limit_req_zone' work in 'http' context I would try the following: - put the 'limit_*_zone' directives at 'http' level, next to the 'map' one - use 'server_name' in 'server' blocks to serve different hostnames - put 'set whitelist 1' in all whitelisted 'server' blocks - if necessary (unsure), put 'set whitelist ""' at 'http' level --- *B. R.* On Sat, May 31, 2014 at 1:18 AM, SupaIrish wrote: > Gist for easier reading > https://gist.github.com/supairish/748c85552b2f7047a36a > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,250510,250511#msg-250511 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat May 31 03:03:42 2014 From: nginx-forum at nginx.us (pba) Date: Fri, 30 May 2014 23:03:42 -0400 Subject: reverse proxy through upstream proxy Message-ID: I'm trying to configure nginx as a reverse proxy where the upstream traffic has to go through another proxy (squid in this case) without success. 
Since nginx does not support (as far as I can tell) passing a proxy for upstream and because I need to reverse proxy only one domain (test.com) I've tried rewriting the URL and using proxy pass: location / { rewrite ^ http://test.com$request_uri; proxy_set_header X-Custom SOMETOKEN; proxy_pass http://my_squid:3128; } Unfortunately since the final URL generated by rewrite starts with http nginx immediately returns a 302 without forwarding the request to the proxy. Any ideas how to solve this or if there are alternative solutions to what I'm trying to achieve ? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250513,250513#msg-250513 From nginx-forum at nginx.us Sat May 31 05:59:18 2014 From: nginx-forum at nginx.us (TECK) Date: Sat, 31 May 2014 01:59:18 -0400 Subject: Nginx 1.7.0: location @php In-Reply-To: <20140525132809.GN16942@daoine.org> References: <20140525132809.GN16942@daoine.org> Message-ID: <008a10b77693ba0453e6be8ec82b4e59.NginxMailingListEnglish@forum.nginx.org> Hi Francis, > Answer #1: what does "does not work" mean? When I process an URI request, it downloads the file instead of executing the PHP code. What I try to achieve is very simple, use @php as location to execute PHP code instead of repeating it over and over in various locations. Here it is a better example of a functional setup: http://pastie.org/private/5gkl1pti1co0onzhgb8jpg What I try to achieve is replace the location ~ \.php$ location contents with @php ones so I only execute try_files: http://pastie.org/private/iuz58naozk6xo92ukqmwsg Regards, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250342,250514#msg-250514 From francis at daoine.org Sat May 31 09:12:33 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 31 May 2014 10:12:33 +0100 Subject: reverse proxy through upstream proxy In-Reply-To: References: Message-ID: <20140531091233.GX16942@daoine.org> On Fri, May 30, 2014 at 11:03:42PM -0400, pba wrote: Hi there, > I'm trying to configure nginx as a reverse proxy where the upstream traffic > has to go through another proxy (squid in this case) without success. I believe that nginx as a client can speak http to a http server, and http-over-ssl to a https server, but does not speak proxied-http to a http proxy server. > Any ideas how to solve this or if there are alternative solutions to what > I'm trying to achieve ? As I see it, you can: (a) encourage someone to patch nginx for your use case; or (b) configure your squid so that it responds to http, not just proxied-http. Option (b) is probably less work. Look for either "transparent" or "reverse proxy" within the squid documentation, and see if it is appropriate for your setup. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat May 31 09:59:37 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 31 May 2014 10:59:37 +0100 Subject: Nginx 1.7.0: location @php In-Reply-To: <008a10b77693ba0453e6be8ec82b4e59.NginxMailingListEnglish@forum.nginx.org> References: <20140525132809.GN16942@daoine.org> <008a10b77693ba0453e6be8ec82b4e59.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140531095937.GY16942@daoine.org> On Sat, May 31, 2014 at 01:59:18AM -0400, TECK wrote: Hi there, > > Answer #1: what does "does not work" mean? > When I process an URI request, it downloads the file instead of executing > the PHP code. Perhaps the request that you made did not match the location blocks that you showed? That's the way I can get the specific unwanted response that you report here. 
> What I try to achieve is very simple, use @php as location to execute PHP > code instead of repeating it over and over in various locations. Here it is > a better example of a functional setup: > http://pastie.org/private/5gkl1pti1co0onzhgb8jpg If it matters to the question, can you include the (minimal) configuration in the mail directly? That way, it will be available for the next person who searches the archives. Thanks. I'm guessing that you may want something like try_files i-dislike-macro-include @php; Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat May 31 10:31:09 2014 From: nginx-forum at nginx.us (TECK) Date: Sat, 31 May 2014 06:31:09 -0400 Subject: Nginx 1.7.0: location @php In-Reply-To: <20140531095937.GY16942@daoine.org> References: <20140531095937.GY16942@daoine.org> Message-ID: <0bd9dc35cd7a2b35d817c84af11adfcc.NginxMailingListEnglish@forum.nginx.org> Francis, > I'm guessing that you may want something like > try_files i-dislike-macro-include @php; What you posted is some deprecated configuration available on Google. > Perhaps the request that you made did not match the location blocks that you showed? If that would be the case, the proper code posted earlier would not work, so this is not the case. I'm very familiar with Nginx and I understand the try_files concept very well. My original question was how to avoid repeating a segment of configuration code endless times, more exactly this: try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass fastcgi; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; include fastcgi.conf; Using logic and the examples provided into Nginx documentation what I posted previously should work. Can you provide a solution to my question? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250342,250517#msg-250517 From francis at daoine.org Sat May 31 11:27:45 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 31 May 2014 12:27:45 +0100 Subject: Nginx 1.7.0: location @php In-Reply-To: <0bd9dc35cd7a2b35d817c84af11adfcc.NginxMailingListEnglish@forum.nginx.org> References: <20140531095937.GY16942@daoine.org> <0bd9dc35cd7a2b35d817c84af11adfcc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140531112745.GZ16942@daoine.org> On Sat, May 31, 2014 at 06:31:09AM -0400, TECK wrote: Hi there, >> Perhaps the request that you made did not match the location blocks that >> you showed? > If that would be the case, the proper code posted earlier would not work, so > this is not the case. That's a reasonable assumption to make, if I can reproduce your reported problem. When I use location ^~ /setup { try_files $uri $uri/ /setup/index.php?$uri&$args; location ~ \.php$ { try_files @php =404; } } and I request /setup/real.php, I get back 404, not the unparsed content of /usr/local/nginx/html/setup/real.php or anything else. When I request /setup/fake.php, I get back 404. When I request /other/real.php, I get back the unparsed content of /usr/local/nginx/html/other/real.php. Because I have shown you the config, the request, and the response, you (and anyone else) can repeat the tests. If you get a different response, then there is something extra involved that should be investigated before the problem can be understood. Perhaps different versions of nginx are involved. Perhaps different modules are compiled in. Perhaps other configuration applies that was not provided in the original example. 
From nginx-forum at nginx.us  Sat May 31 16:40:48 2014
From: nginx-forum at nginx.us (Ventzy)
Date: Sat, 31 May 2014 12:40:48 -0400
Subject: Wildcard proxy_cache_purge doesn't work
Message-ID: <56f33106ab203397b138173eefc135fc.NginxMailingListEnglish@forum.nginx.org>

I have a cache setup like this:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m;

location / {
    ...
    proxy_cache pagecache;
    proxy_cache_key "$scheme://$host$request_uri";
}

And it works as expected.

I want cache purging; it works for a single URL, but not for a wildcard URL. I am looking at this doc: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_purge.

For example, I have the page http://cms.local/gsm, and if I do proxy_cache_purge pagecache "http://cms.local/gsm"; it is removed from the cache, but proxy_cache_purge pagecache "http://cms.local/*"; doesn't have any effect.

My config with purge looks like this:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m;

location / {
    error_page 477 = @purge;
    if ($request_method = PURGE) {
        return 477;
    }
    proxy_cache pagecache;
    proxy_cache_key "$scheme://$host$request_uri";
}

location @purge {
    access_log /var/log/nginx/caching.purge.log;

    proxy_cache_purge pagecache "http://cms.local/*";
    #proxy_cache_purge pagecache "$scheme://$host$request_uri";
}

Am I missing something? I am using nginx 1.6 on Ubuntu 12.04.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250519,250519#msg-250519

From kurt at x64architecture.com  Sat May 31 21:33:49 2014
From: kurt at x64architecture.com (Kurt Cancemi)
Date: Sat, 31 May 2014 17:33:49 -0400
Subject: Wildcard proxy_cache_purge doesn't work
In-Reply-To: <56f33106ab203397b138173eefc135fc.NginxMailingListEnglish@forum.nginx.org>
References: <56f33106ab203397b138173eefc135fc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello,

I think wildcard purging is only supported in the commercial edition of nginx. You are using the ngx_cache_purge module, which doesn't support wildcard URLs.

---
Kurt Cancemi
http://www.getwnmp.org
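For illustration (not part of the original thread): the wildcard behaviour Kurt refers to is the proxy_cache_purge directive described on the documentation page quoted above, available in the commercial subscription. A sketch along the lines of that documentation, reusing the poster's zone and cache key; the "backend" upstream name, listen port and server_name are placeholders, and the cache purger details are omitted here.

map $request_method $purge_method {
    PURGE   1;
    default 0;
}

server {
    listen      80;
    server_name cms.local;

    location / {
        proxy_pass      http://backend;
        proxy_cache     pagecache;
        proxy_cache_key "$scheme://$host$request_uri";

        # Treat PURGE requests as purge operations; per the documentation,
        # a purge key ending in "*" (for example a request for /gsm*)
        # removes all matching cache entries.
        proxy_cache_purge $purge_method;
    }
}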
On Sat, May 31, 2014 at 12:40 PM, Ventzy wrote:

> I have a cache setup like this:
>
> proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m;
> location / {
>     ...
>     proxy_cache pagecache;
>     proxy_cache_key "$scheme://$host$request_uri";
> }
>
> And it works as expected.
>
> I want cache purging; it works for a single URL, but not for a wildcard
> URL. I am looking at this doc:
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_purge
>
> For example, I have the page http://cms.local/gsm, and if I do
> proxy_cache_purge pagecache "http://cms.local/gsm"; it is removed from
> the cache, but proxy_cache_purge pagecache "http://cms.local/*"; doesn't
> have any effect.
>
> My config with purge looks like this:
>
> proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m;
> location / {
>     error_page 477 = @purge;
>     if ($request_method = PURGE) {
>         return 477;
>     }
>     proxy_cache pagecache;
>     proxy_cache_key "$scheme://$host$request_uri";
> }
>
> location @purge {
>     access_log /var/log/nginx/caching.purge.log;
>
>     proxy_cache_purge pagecache "http://cms.local/*";
>     #proxy_cache_purge pagecache "$scheme://$host$request_uri";
> }
>
> Am I missing something?
> I am using nginx 1.6 on Ubuntu 12.04.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,250519,250519#msg-250519
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
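For illustration (not part of the original thread): with the third-party ngx_cache_purge module the poster appears to be using, entries can still be purged one key at a time. A sketch matching the poster's cache key; the /purge prefix and the allow/deny addresses are arbitrary choices, and query strings are ignored here.

location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny  all;

    # $1 is the request URI with the /purge prefix stripped, so the key
    # below lines up with proxy_cache_key "$scheme://$host$request_uri"
    # for requests without a query string.
    proxy_cache_purge pagecache "$scheme://$host$1";
}

A request for /purge/gsm from an allowed address (for example with curl from localhost) would then remove the entry cached for http://cms.local/gsm.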