From weiyue at taobao.com Thu Aug 1 02:20:53 2013 From: weiyue at taobao.com (Wei Yue) Date: Thu, 1 Aug 2013 10:20:53 +0800 Subject: Re: nginx-1.5.3 In-Reply-To: References: <20130730134105.GI2130@mdounin.ru> <03b801ce8d98$87083310$95189930$@com> Message-ID: <045201ce8e5d$bf87e4e0$3e97aea0$@com> Thanks for your reply. -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Yichun Zhang (agentzh) Sent: July 31, 2013 13:00 To: nginx at nginx.org Subject: Re: nginx-1.5.3 Hello! On Tue, Jul 30, 2013 at 7:49 PM, Wei Yue wrote: >> *) Change: now after receiving an incomplete response from a backend >> server nginx tries to send an available part of the response to a >> client, and then closes client connection. > > It is obviously different from previous nginx, but I wonder why nginx made > this change. > I originally proposed this change here: http://mailman.nginx.org/pipermail/nginx-devel/2012-September/002693.html It was just wrong that Nginx assumed truncated upstream responses to be well formed and complete. 
Regards, -agentzh _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Aug 1 06:54:00 2013 From: nginx-forum at nginx.us (drook) Date: Thu, 01 Aug 2013 02:54:00 -0400 Subject: nginx, solaris, eventport In-Reply-To: <20130704065807.GQ15373@lo0.su> References: <20130704065807.GQ15373@lo0.su> Message-ID: <7263778a8d2600bfd9afeb941478dfba.NginxMailingListEnglish@forum.nginx.org> Hi. I finally managed to catch this situation. I have a log from nginx running the following config: worker_processes 8; worker_rlimit_nofile 16384; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; use eventport; } It can be obtained from: http://music.enaza.ru/error.log.bad.gz Around 13:36 on August 1st (log time), nginx stopped handling web requests (I got a timeout in the browser). The situation was cleared by an nginx restart. nginx version and modules information: nginx version: nginx/1.4.1 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --with-http_ssl_module --with-debug --with-http_realip_module --with-http_xslt_module --with-pcre=../pcre-8.33 --with-cc-opt=-m64 --with-ld-opt=-m64 --with-http_secure_link_module --with-pcre-opt=-m64 --with-http_stub_status_module --with-poll_module --with-select_module Thanks. Eugene. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240568,241434#msg-241434 From nginx-forum at nginx.us Thu Aug 1 07:10:30 2013 From: nginx-forum at nginx.us (microwish) Date: Thu, 01 Aug 2013 03:10:30 -0400 Subject: still 400 response code, but so weird this time Message-ID: In access_log file, huge numbers of log entries like this: 115.85.238.34 1764839163 - 0.242 [01/Aug/2013:11:02:01 +0800] "foo.bar.com" "-" 400 0 "-" "-" "-" log_format defined in http conf block: '$remote_addr $connection $remote_user $request_time [$time_local] "$hostname" "$request" $status $body_bytes_sent "$http_referer" "$http_cookie" "$http_user_agent"' Points I realized: 1) Cannot catch $request, which is the full original request line according to Nginx documentation. So can it tell at which phase the connection was dropped? 2) $body_bytes_sent is zero. So no HTTP response body was generated. 3) $http_referer, $http_cookie and $http_user_agent cannot be caught. So does this indicate any issue? p.s. this might be caused by HTTPS/SSL connections from mobile clients, but I'm not sure. Could anyone offer any advice? Thank you in advance! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241435,241435#msg-241435 From tseveendorj at gmail.com Thu Aug 1 07:11:05 2013 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Thu, 1 Aug 2013 03:11:05 -0400 Subject: rewrite difficulty In-Reply-To: References: <517F2E46.3040000@gmail.com> Message-ID: Hello, My understanding of the Apache rules is as follows, and I converted them to nginx: if anyone requests anything except the URIs /file/, /install/, /design/, /plugins/, /phpmyadmin/, then nginx will forward it to /index.php?do=anything RewriteCond %{REQUEST_URI} !^/file/.* RewriteCond %{REQUEST_URI} !^/install/.* RewriteCond %{REQUEST_URI} !^/design/.* RewriteCond %{REQUEST_URI} !^/plugins/.* RewriteCond %{REQUEST_URI} !^/phpmyadmin/.* RewriteRule ^(/.*)$ /index.php?do=$1 [L] nginx location / { if ( $request_uri !~ ^(/file/|/install/|/design/|/plugins/|/phpmyadmin/)) { rewrite ^(/.*)$ /index.php?do=$1; } } Is this right? It works on my server, but I'm confused about where I need to put this rewrite. Am I right? On Tue, Apr 30, 2013 at 10:01 AM, Jonathan Matthews wrote: > On 30 April 2013 03:36, tseveendorj wrote: > > Hello, > > > > I have difficulty converting an Apache-like rewrite to nginx. This is my > config > > file of a virtualhost on nginx. http://pastebin.com/HTtKXnFy > > OMFG. You win today's prize for "Nginx config I am least likely even > to /try/ and change". Congrats! ;-) > > > My installed php script should have the following rewrite > > http://pastebin.com/M2h3uAt3 > > > > Currently any requested php code displays its source in the browser. How > could > > I migrate ? > > You need to start small. Learn how Nginx does its thing in one small > area and when you've understood that, move on to the next. > > At the moment, you have literally picked up your apache config and > dumped it into Nginx's config syntax. You are unlikely to succeed if > you don't learn how to work *with* Nginx, instead of trying just to > make it behave like Apache. 
> > This may not be the "here's your config; I fixed it" reply you were > looking for, but it's the best I can give you. Your Nginx config is > /horrible/, and I'm not going to spend my time deciphering it! :-) > > Have a *really* good read of > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite and > http://wiki.nginx.org/HttpRewriteModule. They'd be good places to > start ... > > J > -- > Jonathan Matthews // Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 1 07:20:28 2013 From: nginx-forum at nginx.us (Rakshith) Date: Thu, 01 Aug 2013 03:20:28 -0400 Subject: Max file size which can be transferred Message-ID: <5d6e1d449e9731ee1ef7768dd05b2e6a.NginxMailingListEnglish@forum.nginx.org> Hi, I wanted to know what's the maximum file size which I can transfer using a simple CURL PUT/GET command. I ask this because when I try to send a file which is >64KB, I get an HTTP/1.1 100 Continue message: File I am trying to do a PUT: [rakshith~]$ ls -l nginx.tar -rw-r--r-- 1 rakshith engr 675840 Jul 29 14:38 nginx.tar [rakshith~]$ curl -X PUT -d @nginx.tar -o /dev/null -qvk http://x.x.x.x:80/Enginex.tar * About to connect() to x.x.x.x port 80 * Trying x.x.x.x... 
connected * Connected to x.x.x.x (x.x.x.x) port 80 > PUT /Enginex.tar HTTP/1.1 > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Host: x.x.x.x > Accept: */* > Content-Length: 64982 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > < HTTP/1.1 100 Continue < Server: nginx/1.5.3 < Date: Thu, 01 Aug 2013 06:05:15 GMT < Content-Length: 0 < Location: http://x.x.x.x/Enginex.tar < Connection: keep-alive 100 64982 0 0 100 64982 0 5133k --:--:-- --:--:-- --:--:-- 9065k* Connection #0 to host x.x.x.x left intact Now the actual content which made it through is close to 64KB in size, not 647KB: bash-3.2# ls -l Enginex.tar -rw------- 1 nobody nobody 64982 Aug 1 06:05 Enginex.tar Any reply/help on this would be really appreciated! Thanks, Rakshith Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241437,241437#msg-241437 From nginx-forum at nginx.us Thu Aug 1 07:28:56 2013 From: nginx-forum at nginx.us (microwish) Date: Thu, 01 Aug 2013 03:28:56 -0400 Subject: still 400 response code, but so weird this time In-Reply-To: References: Message-ID: additions: no corresponding logs in error_log. 
Nginx version: 1.2.4 OpenSSL version: OpenSSL 1.0.1e Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241435,241439#msg-241439 From nginx-forum at nginx.us Thu Aug 1 07:37:12 2013 From: nginx-forum at nginx.us (microwish) Date: Thu, 01 Aug 2013 03:37:12 -0400 Subject: still 400 response code, but so weird this time In-Reply-To: References: Message-ID: <07863484f6ba400abae19a247aa297f6.NginxMailingListEnglish@forum.nginx.org> more additions: some SSL-related config in the Nginx config file: ssl_protocols SSLv3 TLSv1; ssl_ciphers RC4:AES128-SHA:3DES:!EXP:!aNULL:!kEDH:!ECDH; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241435,241441#msg-241441 From nginx-forum at nginx.us Thu Aug 1 07:53:31 2013 From: nginx-forum at nginx.us (willyaranda) Date: Thu, 01 Aug 2013 03:53:31 -0400 Subject: Max connections for WebSocket SSL termination Message-ID: <91bc32cceaaf5fd572498dec79d09bb5.NginxMailingListEnglish@forum.nginx.org> Hey guys, I have been using nginx in my personal life for a couple of years, recently dropping Apache entirely, and I must admit that this software rocks. Congrats to all the devs for the fantastic job in the new event-driven world. Secondly, at work, we are trying to use nginx as an SSL endpoint for our websocket connections. We need to handle hundreds of thousands of permanent connections, and given the problem that Node.js (our backend) has with SSL (it leaks memory in combination with websockets), we decided to give nginx a try. The problem? We can only handle 64k connections due to the limit of actual ports on a unix machine. I think this cannot be worked around with nginx configuration, right? We can switch to listening on a unix socket (so Node.js is not serving on an HTTP port, but on a unix socket), which *I think* could resolve the problem. Thanks for your help. 
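For reference, the ~64k limit applies per local-address/remote-ip:port tuple on the nginx-to-backend side, so proxying to a unix domain socket does sidestep it: each client-facing connection no longer consumes an ephemeral TCP port toward the backend. A minimal sketch of WebSocket SSL termination over a unix socket (socket path, hostname, and certificate paths are hypothetical; WebSocket proxying requires nginx 1.3.13+):

```nginx
# Sketch only: terminate SSL for WebSocket clients and proxy to a
# Node.js backend listening on a unix socket instead of a TCP port.
upstream nodejs_ws {
    server unix:/tmp/node_ws.sock;
}

server {
    listen 443 ssl;
    server_name ws.example.com;

    ssl_certificate     /etc/nginx/ssl/ws.crt;
    ssl_certificate_key /etc/nginx/ssl/ws.key;

    location / {
        proxy_pass http://nodejs_ws;
        # Needed for the WebSocket Upgrade handshake.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Long-lived connections: don't time out idle WebSockets too early.
        proxy_read_timeout 1h;
    }
}
```

On the Node.js side the server would call listen() on /tmp/node_ws.sock rather than a port; worker_connections and worker_rlimit_nofile still bound the total connection count in nginx itself.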
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241442,241442#msg-241442 From nginx-forum at nginx.us Thu Aug 1 07:59:45 2013 From: nginx-forum at nginx.us (hippo) Date: Thu, 01 Aug 2013 03:59:45 -0400 Subject: caching: Expires takes precedence over max-age Message-ID: <3c57fda8d4850256acfeab488b1bd26a.NginxMailingListEnglish@forum.nginx.org> Hello, I have trouble with nginx caching pages it shouldn't cache. I have uwsgi_cache enabled: uwsgi_cache_path /tmp/cache levels=1:2 keys_zone=django:1m; location /test { uwsgi_pass unix:/tmp/uwsgi.sock; uwsgi_cache django; } nginx caches responses that have an Expires header set in the future, even if Cache-Control says otherwise: Expires: Fri, 02 Aug 2013 19:47:42 GMT Cache-Control: no-cache, must-revalidate, max-age=0 And RFC 2616 says: Note: if a response includes a Cache-Control field with the max-age directive (see section 14.9.3), that directive overrides the Expires field. Happens on nginx 1.2.1 and 1.4.1. If I add "uwsgi_ignore_headers Expires;" to the nginx conf, the pages don't get cached. Is there something wrong with my nginx or uwsgi response headers? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241443,241443#msg-241443 From mdounin at mdounin.ru Thu Aug 1 09:37:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Aug 2013 13:37:24 +0400 Subject: still 400 response code, but so weird this time In-Reply-To: References: Message-ID: <20130801093723.GB2130@mdounin.ru> Hello! 
On Thu, Aug 01, 2013 at 03:10:30AM -0400, microwish wrote: > In access_log file, huge numbers of log entries like this: > > 115.85.238.34 1764839163 - 0.242 [01/Aug/2013:11:02:01 +0800] "foo.bar.com" > "-" 400 0 "-" "-" "-" > > > log_format defined in http conf block: > > '$remote_addr $connection $remote_user $request_time [$time_local] > "$hostname" "$request" $status $body_bytes_sent "$http_referer" > "$http_cookie" "$http_user_agent"' > > > Points I realized: > 1) Cannot catch $request, which is full original request line according to > Nginx documentation. So can it tell at which phrase the connection was > dropped? > 2) $body_bytes_sent is zero. So no HTTP response body was generated. > 3) $http_refer, $http_cookie and $http_user_agent cannot be caught. So does > this indicate any issue? > > > p.s. this might be caused by HTTPS/SSL connections from mobile client, but > I'm not sure. Such lines in access log are caused by opening and closing a connection without sending any data in it. Usually this happens due to browser optimizations (e.g., Chrome opens an additional connection "just in case"), but might appear due to various other reasons as well (e.g. if browser rejects your SSL cert). As of nginx 1.3.15+ such connections are no longer logged to access log, see http://nginx.org/en/CHANGES. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Aug 1 09:57:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Aug 2013 13:57:41 +0400 Subject: Max file size which can be transferred In-Reply-To: <5d6e1d449e9731ee1ef7768dd05b2e6a.NginxMailingListEnglish@forum.nginx.org> References: <5d6e1d449e9731ee1ef7768dd05b2e6a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130801095741.GC2130@mdounin.ru> Hello! On Thu, Aug 01, 2013 at 03:20:28AM -0400, Rakshith wrote: > Hi, > > I wanted to know whats the maximum file size which i can transfer using a > simple CURL PUT/GET command. 
I ask this because when i try to send a file > which is >64KB, i get a HTTP/1.1 100 Continue message: > > File i am trying to do a PUT: > > [rakshith~]$ ls -l nginx.tar > -rw-r--r-- 1 rakshith engr 675840 Jul 29 14:38 nginx.tar > > [rakshith~]$ curl -X PUT -d @nginx.tar -o /dev/null -qvk > http://x.x.x.x:80/Enginex.tar > * About to connect() to x.x.x.x port 80 > * Trying x.x.x.x... connected > * Connected to x.x.x.x (x.x.x.x) port 80 > > PUT /Enginex.tar HTTP/1.1 > > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 > OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > > Host: x.x.x.x > > Accept: */* > > Content-Length: 64982 Please note: the length of the document curl tries to PUT is just 64982. > > Content-Type: application/x-www-form-urlencoded > > Expect: 100-continue > > > < HTTP/1.1 100 Continue > < Server: nginx/1.5.3 Just a side note: the "HTTP/1.1 201 Created" line is missing here. It's likely a problem with verbose output in the old version of curl you are using. Recent versions of curl correctly show it like this: < HTTP/1.1 100 Continue } [data not shown] < HTTP/1.1 201 Created < Server: nginx/1.5.4 ... > < Date: Thu, 01 Aug 2013 06:05:15 GMT > < Content-Length: 0 > < Location: http://x.x.x.x/Enginex.tar > < Connection: keep-alive > 100 64982 0 0 100 64982 0 5133k --:--:-- --:--:-- --:--:-- > 9065k* Connection #0 to host x.x.x.x left intact > > Now the actual contents which made it through is of size close to 64KB and > not 647KB > > bash-3.2# ls -l Enginex.tar > -rw------- 1 nobody nobody 64982 Aug 1 06:05 Enginex.tar ... and the file created is exactly as PUT by curl. That is, there is no problem in nginx. Using "curl --data-binary @..." instead of "curl -d ..." should help. 
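To illustrate the difference (host and filenames follow the thread above and are placeholders): `-d`/`--data` reads the file as ASCII form data and strips carriage returns and newlines, so a binary archive arrives smaller than the on-disk file, while `--data-binary` posts it byte-for-byte.

```shell
# -d / --data treats @file as form data: CR/LF are stripped, so a binary
# tar file arrives truncated relative to its on-disk size.
curl -X PUT -d @nginx.tar http://x.x.x.x/Enginex.tar

# --data-binary sends the file exactly as-is, preserving its length.
curl -X PUT --data-binary @nginx.tar http://x.x.x.x/Enginex.tar

# For large uploads, -T/--upload-file implies PUT and streams the body
# from disk instead of buffering it in memory first.
curl -T nginx.tar http://x.x.x.x/Enginex.tar
```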
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Aug 1 10:03:41 2013 From: nginx-forum at nginx.us (Rakshith) Date: Thu, 01 Aug 2013 06:03:41 -0400 Subject: Max file size which can be transferred In-Reply-To: <20130801095741.GC2130@mdounin.ru> References: <20130801095741.GC2130@mdounin.ru> Message-ID: <2eec29deab9e4c194333c8f4c99578aa.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for that reply! So it looks like curl is not able to pick up the whole file for transfer. And does Nginx have a limit on how big a file can be PUT/GET? -Rakshith Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241437,241454#msg-241454 From nginx-forum at nginx.us Thu Aug 1 10:11:44 2013 From: nginx-forum at nginx.us (Rakshith) Date: Thu, 01 Aug 2013 06:11:44 -0400 Subject: Max file size which can be transferred In-Reply-To: <2eec29deab9e4c194333c8f4c99578aa.NginxMailingListEnglish@forum.nginx.org> References: <20130801095741.GC2130@mdounin.ru> <2eec29deab9e4c194333c8f4c99578aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <882420980e41e3d6e7c150bfe61f8664.NginxMailingListEnglish@forum.nginx.org> I ask that question because when I tried to transfer a 4GB file, I got an error logged which says: 2013/08/01 10:02:57 [error] 50935#0: *27 client intended to send too large body: 4582367864 bytes, client: y.y.y.y, server: sx1, request: "PUT /core_8GB.nz HTTP/1.1", host: "x.x.x.x".. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241437,241455#msg-241455 From nginx-forum at nginx.us Thu Aug 1 10:13:00 2013 From: nginx-forum at nginx.us (microwish) Date: Thu, 01 Aug 2013 06:13:00 -0400 Subject: still 400 response code, but so weird this time In-Reply-To: <20130801093723.GB2130@mdounin.ru> References: <20130801093723.GB2130@mdounin.ru> Message-ID: <8d07297851aa5e91d849eb3920be09fa.NginxMailingListEnglish@forum.nginx.org> Thanks, Maxim. 
By "Such lines in access log are caused by opening and closing a connection without sending any data in it", do you mean that a client opens a connection and then closes it actively without sending any data, or that an Nginx worker process accepts a connection and then closes it actively without sending any data to the client? In any case, is the TCP handshake completed? I guess that SSL handshakes are already in progress, because much CPU is being consumed. Just as you said, if the browser rejected my SSL cert, what could I do to solve this issue? Thanks again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241435,241456#msg-241456 From nginx-forum at nginx.us Thu Aug 1 10:17:14 2013 From: nginx-forum at nginx.us (marc.cortinas) Date: Thu, 01 Aug 2013 06:17:14 -0400 Subject: 502 in Nginx as a reverse proxy without cache Message-ID: <276a7d70c6c5e0742b083629c1b0ba81.NginxMailingListEnglish@forum.nginx.org> Hi, Currently, we use Nginx as a reverse proxy without cache. Last Monday we rolled out a new version of the PHP application, after which several 502 errors appeared. First of all, I applied the workaround explained in thread http://forum.nginx.org/read.php?2,188352 . The number of 502s has fallen, but we can still see some 502s, and now we have no error log entries in nginx. 
These are the Nginx parameters applied: {code} ## Size Limits client_body_buffer_size 128K; client_header_buffer_size 1M; client_max_body_size 1M; large_client_header_buffers 8 8k; ## Timeouts client_body_timeout 600; client_header_timeout 600; expires 24h; keepalive_timeout 60 60; send_timeout 600; ## TCP options tcp_nodelay on; tcp_nopush on; ## Proxy caching options proxy_buffering on; proxy_buffers 16 16k; proxy_buffer_size 32k; fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; proxy_next_upstream error timeout http_500 http_502 http_503 http_504; proxy_connect_timeout 60s; proxy_read_timeout 600s; proxy_send_timeout 600s; {code} These are the trace lines in the access logs of Nginx and Apache: NGINX log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; {code} 2.139.8.108 - - [31/Jul/2013:23:19:31 +0200] "GET /es/descuentos-malaga/oferta-pelsandbody-masaje-tailandes-aromaterapia.html?mktc=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&utm_campaign=AAAAA&utm_content=AAAAAAA&utm_medium=email&utm_source=ExactTarget&email=emailbos at provider.com&date=AAAAAAA&AL_Hash=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&utm_referrer= HTTP/1.1" 502 118 "http://es.example.com/descuentos-malaga/oferta-pelsandbody-masaje-tailandes-aromaterapia.html?mktc=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&utm_campaign=AAAAA&utm_content=AAAAAAA&utm_medium=email&utm_source=ExactTarget&email=emailbos at provider.com&date=AAAAAAA&AL_Hash=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" "Mozilla/5.0 (Linux; U; Android 2.3.3; es-es; GT-I9100 Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "-" {code} LogFormat "\"%{True-Client-IP}i\" %h %t %T \"%r\" %>s %b %{outstream}n/%{instream}n (%{ratio}n%%) \"%{Referer}i\" \"%{Expires}o\" \"%{Cache-Control}o\" \"%{User-Agent}i\" \"%{Host}i\" \"%{X-Forwarded-For}i\" %{mod_php_memory_usage}n %P" itsysprod APACHE {code} "-" 
10.253.1.61 [31/Jul/2013:23:19:31 +0200] 0 "GET /es/descuentos-malaga/oferta-pelsandbody-masaje-tailandes-aromaterapia.html?mktc=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&utm_campaign=AAAAA&utm_content=AAAAAAA&utm_medium=email&utm_source=ExactTarget&email=emailbos at provider.com&date=AAAAAAA&AL_Hash=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA HTTP/1.0" 200 303 285/475 (60%) "http://es.example.com/descuentos-malaga/oferta-pelsandbody-masaje-tailandes-aromaterapia.html?mktc=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&utm_campaign=AAAAA&utm_content=AAAAAAA&utm_medium=email&utm_source=ExactTarget&email=emailbos at provider.com&date=AAAAAAA&AL_Hash=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" "Thu, 19 Nov 1981 08:52:00 GMT" "no-store, no-cache, must-revalidate, post-check=0, pre-check=0" "Mozilla/5.0 (Linux; U; Android 2.3.3; es-es; GT-I9100 Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "m.example.com" "2.139.8.108, 10.253.1.60" 786432 30962 {code} Platform description: # cat /etc/redhat-release CentOS release 6.3 (Final) # uname -a Linux balance01-prod 2.6.32-279.5.2.el6.x86_64 #1 SMP Fri Aug 24 01:07:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux # nginx -v nginx version: nginx/1.2.4 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241447,241447#msg-241447 From nginx-forum at nginx.us Thu Aug 1 10:53:20 2013 From: nginx-forum at nginx.us (marc.cortinas) Date: Thu, 01 Aug 2013 06:53:20 -0400 Subject: 502 in Nginx as a reverse proxy without cache In-Reply-To: <276a7d70c6c5e0742b083629c1b0ba81.NginxMailingListEnglish@forum.nginx.org> References: <276a7d70c6c5e0742b083629c1b0ba81.NginxMailingListEnglish@forum.nginx.org> Message-ID: I've upgraded nginx from version 1.2.4 to 1.4.2, but this behavior still happens. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241447,241459#msg-241459 From mdounin at mdounin.ru Thu Aug 1 11:52:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Aug 2013 15:52:49 +0400 Subject: Max file size which can be transferred In-Reply-To: <882420980e41e3d6e7c150bfe61f8664.NginxMailingListEnglish@forum.nginx.org> References: <20130801095741.GC2130@mdounin.ru> <2eec29deab9e4c194333c8f4c99578aa.NginxMailingListEnglish@forum.nginx.org> <882420980e41e3d6e7c150bfe61f8664.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130801115249.GE2130@mdounin.ru> Hello! On Thu, Aug 01, 2013 at 06:11:44AM -0400, Rakshith wrote: > I ask that question because when i tried to transfer 4GB file, i get an > error logged which says: > > 2013/08/01 10:02:57 [error] 50935#0: *27 client intended to send too large > body: > 4582367864 bytes, client: y.y.y.y, server: sx1, request: "PUT /core_8GB.nz > HTTP/1.1", host: "x.x.x.x".. http://nginx.org/r/client_max_body_size -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Aug 1 12:02:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Aug 2013 16:02:11 +0400 Subject: still 400 response code, but so weird this time In-Reply-To: <8d07297851aa5e91d849eb3920be09fa.NginxMailingListEnglish@forum.nginx.org> References: <20130801093723.GB2130@mdounin.ru> <8d07297851aa5e91d849eb3920be09fa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130801120211.GF2130@mdounin.ru> Hello! On Thu, Aug 01, 2013 at 06:13:00AM -0400, microwish wrote: > Thanks, Maxim. > > By "Such lines in access log are caused by opening and closing a connection > without sending any data in it", you are meaning that a client opens a > connection and then closes the connection actively without sending any data, > or that a Nginx worker process accepts a connection and then closes it > actively without sending any data to the client? A client opens a connection, and then closes the connection. 
> In any case, is the TCP handshake completed? Yes. > I guess that SSL handshakes are already in process, because CPU resource is > consumed much. > > Just as you said, if the browser rejected my SSL cert, what could I do to > solve this issue? First of all, you should check whether that is the case. If it is, you should investigate further why the browser rejects the cert - there are plenty of possible reasons. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Aug 1 14:47:40 2013 From: nginx-forum at nginx.us (drook) Date: Thu, 01 Aug 2013 10:47:40 -0400 Subject: nginx, solaris, eventport In-Reply-To: <7263778a8d2600bfd9afeb941478dfba.NginxMailingListEnglish@forum.nginx.org> References: <20130704065807.GQ15373@lo0.su> <7263778a8d2600bfd9afeb941478dfba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6c7db6ecd3638be582a65e832e401f34.NginxMailingListEnglish@forum.nginx.org> I also have a cacti graph, representing the state of TCP connections for the period of the incident: http://unix.zhegan.in/files/tcp_connections.png It looks like the handshakes were fine, but the exchange was stalled for each stream. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240568,241466#msg-241466 From mdounin at mdounin.ru Fri Aug 2 12:00:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 2 Aug 2013 16:00:45 +0400 Subject: caching: Expires takes precedence over max-age In-Reply-To: <3c57fda8d4850256acfeab488b1bd26a.NginxMailingListEnglish@forum.nginx.org> References: <3c57fda8d4850256acfeab488b1bd26a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130802120045.GY2130@mdounin.ru> Hello! On Thu, Aug 01, 2013 at 03:59:45AM -0400, hippo wrote: > Hello, I have a trouble with nginx caching pages it shouldn't cache. I have > uwsgi_cache enabled: > > uwsgi_cache_path /tmp/cache levels=1:2 keys_zone=django:1m; > location /test { > uwsgi_pass unix:/tmp/uwsgi.sock; > uwsgi_cache django; > } > > nginx caches responses that have Expires header set in the future, even if > Cache-Control says otherwise: > > Expires: Fri, 02 Aug 2013 19:47:42 GMT > Cache-Control: no-cache, must-revalidate, max-age=0 > > And rfc2616 says: > > Note: if a response includes a Cache-Control field with the max- > age directive (see section 14.9.3), that directive overrides the > Expires field. > > Happens on nginx 1.2.1 and 1.4.1. If I add "uwsgi_ignore_headers Expires;" > to the nginx conf, the pages don't get cached. Is there something wrong with > my nginx or uwsgi response headers? As of now nginx treats both Expires and "Cache-Control: max-age" with equal precedence, and uses whichever comes first. X-Accel-Expires takes precedence over both of them. 
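Given that precedence behavior, one workable approach is the workaround the original poster already found: tell nginx to ignore the backend's Expires header so only Cache-Control (and X-Accel-Expires, which overrides both) drives caching. A minimal sketch, reusing the paths and zone name quoted in this thread:

```nginx
uwsgi_cache_path /tmp/cache levels=1:2 keys_zone=django:1m;

server {
    location /test {
        uwsgi_pass unix:/tmp/uwsgi.sock;
        uwsgi_cache django;
        # Ignore the backend's Expires header so a far-future Expires
        # can't cause caching when Cache-Control says no-cache/max-age=0.
        uwsgi_ignore_headers Expires;
    }
}
```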
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Aug 2 12:25:31 2013 From: nginx-forum at nginx.us (marc.cortinas) Date: Fri, 02 Aug 2013 08:25:31 -0400 Subject: 502 in Nginx as a reverse proxy without cache In-Reply-To: <276a7d70c6c5e0742b083629c1b0ba81.NginxMailingListEnglish@forum.nginx.org> References: <276a7d70c6c5e0742b083629c1b0ba81.NginxMailingListEnglish@forum.nginx.org> Message-ID: I've found the error in the backend server, not in Nginx. We can close this thread; apologies! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241447,241477#msg-241477 From nginx-forum at nginx.us Fri Aug 2 12:30:35 2013 From: nginx-forum at nginx.us (Payne Chu) Date: Fri, 02 Aug 2013 08:30:35 -0400 Subject: why will nginx's map directive eat all ram? Message-ID: <9855cbd0c544eff2a740cc4cbba78735.NginxMailingListEnglish@forum.nginx.org> Recently I tried to use the map directive to make my nginx.conf DRY, like below: `map $pid $public_root { default public; }` and in one of the server directives I put: `root $public_root;` I ran an `ab` test retrieving a static HTML file, and nginx ate all the RAM. If I set it back to `root public;`, RAM usage stays at a low level. So should we avoid using the map directive? My Env: MACOSX 10.8.4, Nginx 1.4.1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241478,241478#msg-241478 From nginx-forum at nginx.us Fri Aug 2 12:38:28 2013 From: nginx-forum at nginx.us (lennart) Date: Fri, 02 Aug 2013 08:38:28 -0400 Subject: Set default nginx-conf in sites-enabled? Message-ID: <230c4ae5cd1bb168ee208c44a812ca3e.NginxMailingListEnglish@forum.nginx.org> I've several config files in sites-enabled, all working fine. However, if a domain (foobar2.com) is not mentioned in a config, NGINX takes the conf file of foobar1.com. How can I override this and make some sort of catch-all? 
With the line in the default config I get an error 500: server { listen 80 default_server; server_name _; # This is just an invalid value which will never trigger on a real hostname. server_name_in_redirect off; root /home/user/www/park; location / { # This is cool because no php is touched for static content try_files $uri $uri/ /index.html; } } Help .. :-? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241479,241479#msg-241479 From francis at daoine.org Fri Aug 2 12:54:54 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 2 Aug 2013 13:54:54 +0100 Subject: Set default nginx-conf in sites-enabled? In-Reply-To: <230c4ae5cd1bb168ee208c44a812ca3e.NginxMailingListEnglish@forum.nginx.org> References: <230c4ae5cd1bb168ee208c44a812ca3e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130802125454.GM27161@craic.sysops.org> On Fri, Aug 02, 2013 at 08:38:28AM -0400, lennart wrote: Hi there, > I've several config-files in sites-enabled, all working fine. However, if a > domain (foobar2.com) not is mentioned in a config, NGINX takes the conf-file > of foobar1.com. How can i overrule this and make some sort of catch-all? http://nginx.org/en/docs/http/request_processing.html If a server name is not mentioned in any relevant server block, nginx chooses the default server for this ip:port. Make your "catch-all" be the default for this "listen" ip:port combination. > With the line in the default-config i get an error 500: What does the error log say? f -- Francis Daly francis at daoine.org From vbart at nginx.com Fri Aug 2 15:05:45 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 2 Aug 2013 19:05:45 +0400 Subject: why will nginx's map directive eat all ram? 
In-Reply-To: <9855cbd0c544eff2a740cc4cbba78735.NginxMailingListEnglish@forum.nginx.org> References: <9855cbd0c544eff2a740cc4cbba78735.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201308021905.45373.vbart@nginx.com> On Friday 02 August 2013 16:30:35 Payne Chu wrote: > recently I try to use map directive to make my nginx.conf DRY. like below > > `map $pid $public_root { default public; }` > > and in one of server directive I put below > > `root $public_root;` > > I try to `ab` test with retrieve a static html. the nginx will eat all ram. > if I reset back to `root public;` it will maintain the ram in low evel. > > so we should avoid to use map directive ? > > My Env: MACOSX 10.8.4, Nginx 1.4.1 > What does "nginx -V" show? wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Aug 2 19:50:43 2013 From: nginx-forum at nginx.us (atul1985) Date: Fri, 02 Aug 2013 15:50:43 -0400 Subject: Except Homepage no page opening Message-ID: <59246ab5c0c0650eeb1c155e2154c2fa.NginxMailingListEnglish@forum.nginx.org> Hi, Can you please take a look at my site www.techofweb.com? I just migrated from another server. No page except the homepage is opening. Please, if somebody can reply urgently. Thanks, Atul Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241490,241490#msg-241490 From farseas at gmail.com Fri Aug 2 23:27:41 2013 From: farseas at gmail.com (Bob S.) Date: Fri, 2 Aug 2013 19:27:41 -0400 Subject: Except Homepage no page opening In-Reply-To: <59246ab5c0c0650eeb1c155e2154c2fa.NginxMailingListEnglish@forum.nginx.org> References: <59246ab5c0c0650eeb1c155e2154c2fa.NginxMailingListEnglish@forum.nginx.org> Message-ID: Maybe a directory permission problem? 
On Fri, Aug 2, 2013 at 3:50 PM, atul1985 wrote: > Hi > > Can you please see my site www.techofweb.com? > I just migrated from another server. > > No page except the homepage is opening. > > Please, if somebody can, reply urgently. > > Thanks, > Atul > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,241490,241490#msg-241490 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Sat Aug 3 01:17:03 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 2 Aug 2013 21:17:03 -0400 Subject: error building nginx 1.5.3 on Cygwin Message-ID: Hello, I'm trying to build nginx 1.5.3 on Cygwin using Windows 7. I see the following output during "make": cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -D FD_SETSIZE=2048 -I src/core -I src/event -I src/event/modules -I src/os/unix -I ~/openssl-0.9.8l/.openssl/include -I objs \ -o objs/src/core/ngx_output_chain.o \ src/core/ngx_output_chain.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -D FD_SETSIZE=2048 -I src/core -I src/event -I src/event/modules -I src/os/unix -I ~/openssl-0.9.8l/.openssl/include -I objs \ -o objs/src/core/ngx_string.o \ src/core/ngx_string.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -D FD_SETSIZE=2048 -I src/core -I src/event -I src/event/modules -I src/os/unix -I ~/openssl-0.9.8l/.openssl/include -I objs \ -o objs/src/core/ngx_parse.o \ src/core/ngx_parse.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -D FD_SETSIZE=2048 -I src/core -I src/event -I src/event/modules -I src/os/unix -I ~/openssl-0.9.8l/.openssl/include -I objs \ -o objs/src/core/ngx_inet.o \ src/core/ngx_inet.c cc1: warnings being treated as errors src/core/ngx_inet.c: In function `ngx_sock_ntop':
src/core/ngx_inet.c:236: error: comparison between signed and unsigned make[1]: *** [objs/src/core/ngx_inet.o] Error 1 make[1]: Leaving directory `/home/kworthington/nginx-1.5.3' make: *** [build] Error 2 I would appreciate any help to fix this. Thank you! Best regards, Kevin -- Kevin Worthington http://kevinworthington.com/ http://twitter.com/kworthington -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Aug 3 02:08:43 2013 From: nginx-forum at nginx.us (Payne Chu) Date: Fri, 02 Aug 2013 22:08:43 -0400 Subject: why will nginx's map directive eat all ram? In-Reply-To: <201308021905.45373.vbart@nginx.com> References: <201308021905.45373.vbart@nginx.com> Message-ID: here it is nginx version: ngx_openresty/1.4.1.1 TLS SNI support enabled configure arguments: --prefix=/usr/local/openresty/nginx --with-cc-opt='-I/usr/local/Cellar/pcre/8.33/include -I/usr/local/Cellar/luajit/2.02/luajit2.0/include' --add-module=../ngx_devel_kit-0.2.18 --add-module=../echo-nginx-module-0.45 --add-module=../xss-nginx-module-0.03rc9 --add-module=../ngx_coolkit-0.2rc1 --add-module=../set-misc-nginx-module-0.22rc8 --add-module=../form-input-nginx-module-0.07 --add-module=../encrypted-session-nginx-module-0.03 --add-module=../srcache-nginx-module-0.21 --add-module=../ngx_lua-0.8.5 --add-module=../headers-more-nginx-module-0.21 --add-module=../array-var-nginx-module-0.03rc1 --add-module=../memc-nginx-module-0.13rc3 --add-module=../redis2-nginx-module-0.10 --add-module=../redis-nginx-module-0.3.6 --add-module=../auth-request-nginx-module-0.2 --add-module=../rds-json-nginx-module-0.12rc10 --add-module=../rds-csv-nginx-module-0.05rc2 --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/usr/local/Cellar/pcre/8.33/lib -L/usr/local/Cellar/luajit/2.02/luajit2.0/lib' --http-client-body-temp-path=/var/tmp/nginx/client_body --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi 
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-http_ssl_module Actually I'm using OpenResty, but OpenResty says it's just standard nginx, and in the config I haven't even activated any lua directive yet, so I thought this should be an nginx problem. But when I install nginx-1.4.1 through `Homebrew`, I cannot find the same issue in the brew version. Maybe one of the modules OpenResty uses triggers this. Below is the config that triggers this issue. And let me know if I need to forward this issue to OpenResty. Thanks :) user paynechu staff; worker_processes 1; events { worker_connections 1024; } http { map $pid $public_root { default public; } server { listen 443; index index.html; ssl on; ssl_certificate ssl/sandbox.crt; ssl_certificate_key ssl/sandbox.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; root $public_root; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241478,241499#msg-241499 From nginx-forum at nginx.us Sat Aug 3 08:42:03 2013 From: nginx-forum at nginx.us (Payne Chu) Date: Sat, 03 Aug 2013 04:42:03 -0400 Subject: why will nginx's map directive eat all ram? In-Reply-To: <201308021905.45373.vbart@nginx.com> References: <201308021905.45373.vbart@nginx.com> Message-ID: https://groups.google.com/forum/#!topic/openresty-en/ArNhoL7Ol2U Here is the reply from OpenResty's agentzh. It seems the leak comes from nginx 1.4.1's core. Maybe Homebrew has some patch that fixes this issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241478,241502#msg-241502 From nginx-forum at nginx.us Sat Aug 3 09:57:52 2013 From: nginx-forum at nginx.us (Payne Chu) Date: Sat, 03 Aug 2013 05:57:52 -0400 Subject: why will nginx's map directive eat all ram?
In-Reply-To: <201308021905.45373.vbart@nginx.com> References: <201308021905.45373.vbart@nginx.com> Message-ID: <7f95c46f26e77a4c5467572f0f988db9.NginxMailingListEnglish@forum.nginx.org> Finally I can reproduce it even in the Homebrew version. It only leaks when I use a relative path, and all versions can reproduce it. Before, I tested the Homebrew version with a full path, not a relative path; that's the difference... XD~ map $pid $public_root { default public; } <-- this one (relative path): the leak happens map $pid $public_root { default /usr/local/Cellar/nginx/1.4.1/public; } <- this one (full path): no leak. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241478,241503#msg-241503 From vbart at nginx.com Sat Aug 3 10:08:50 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 3 Aug 2013 14:08:50 +0400 Subject: why will nginx's map directive eat all ram? In-Reply-To: References: <201308021905.45373.vbart@nginx.com> Message-ID: <201308031408.50668.vbart@nginx.com> On Saturday 03 August 2013 06:08:43 Payne Chu wrote: > here it is > > nginx version: ngx_openresty/1.4.1.1 > TLS SNI support enabled > configure arguments: --prefix=/usr/local/openresty/nginx > --with-cc-opt='-I/usr/local/Cellar/pcre/8.33/include > -I/usr/local/Cellar/luajit/2.02/luajit2.0/include' > --add-module=../ngx_devel_kit-0.2.18 --add-module=../echo-nginx-module-0.45 > --add-module=../xss-nginx-module-0.03rc9 --add-module=../ngx_coolkit-0.2rc1 > --add-module=../set-misc-nginx-module-0.22rc8 > --add-module=../form-input-nginx-module-0.07 > --add-module=../encrypted-session-nginx-module-0.03 > --add-module=../srcache-nginx-module-0.21 --add-module=../ngx_lua-0.8.5 > --add-module=../headers-more-nginx-module-0.21 > --add-module=../array-var-nginx-module-0.03rc1 > --add-module=../memc-nginx-module-0.13rc3 > --add-module=../redis2-nginx-module-0.10 > --add-module=../redis-nginx-module-0.3.6 > --add-module=../auth-request-nginx-module-0.2 > --add-module=../rds-json-nginx-module-0.12rc10 >
--add-module=../rds-csv-nginx-module-0.05rc2 > --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib > -L/usr/local/Cellar/pcre/8.33/lib > -L/usr/local/Cellar/luajit/2.02/luajit2.0/lib' > --http-client-body-temp-path=/var/tmp/nginx/client_body > --http-proxy-temp-path=/var/tmp/nginx/proxy > --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi > --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi > --http-scgi-temp-path=/var/tmp/nginx/scgi --with-http_ssl_module > > Actually Im using OpenResty. But since OpenResty said it's just standard > nginx. No, any nginx with 3rd-party modules ("--add-module" arguments in your "nginx -V" output) isn't standard. Even if such modules are not used in the config, they can still affect behavior in strange ways. > and in config I not even active any lua directive yet. so I thought > this should the nginx probelm. But when I install nginx-1.4.1 through > `Homebrew`. I cannot find same issue in the brew version. Maybe just one of > OpenResty used module trigger this. below is the config trigger this issue. > And let me know if I need to forward this issue to OpenResty. Thanks :) > [...] Yes, you should forward this issue to OpenResty, since you can't reproduce it without 3rd-party code, and there are no known problems with the map directive's memory consumption. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From vbart at nginx.com Sat Aug 3 10:15:40 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 3 Aug 2013 14:15:40 +0400 Subject: why will nginx's map directive eat all ram? In-Reply-To: <201308031408.50668.vbart@nginx.com> References: <201308021905.45373.vbart@nginx.com> <201308031408.50668.vbart@nginx.com> Message-ID: <201308031415.40895.vbart@nginx.com> On Saturday 03 August 2013 14:08:50 Valentin V. Bartenev wrote: [...] > > and in config I not even active any lua directive yet... > > this should the nginx probelm. But when I install nginx-1.4.1 through > > `Homebrew`.
I cannot find same issue in the brew version. Maybe just one > > of OpenResty used module trigger this. below is the config trigger this > > issue. And let me know if I need to forward this issue to OpenResty. > > Thanks :) > > [...] > > Yes, you should forward this issue to OpenResty, since you can't reproduce > it without 3-rd party code, and there are no known problems with the map > directive memory consumption. > Please disregard this statement, I've just received your last messages. I'm going to look at this issue, thank you for the report. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Aug 3 10:47:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 3 Aug 2013 14:47:34 +0400 Subject: error building nginx 1.5.3 on Cygwin In-Reply-To: References: Message-ID: <20130803104734.GN2130@mdounin.ru> Hello! On Fri, Aug 02, 2013 at 09:17:03PM -0400, Kevin Worthington wrote: > I'm trying to build nginx 1.5.3 on Cygwin using Windows 7. I see the > following output during "make": [...] > cc1: warnings being treated as errors > src/core/ngx_inet.c: In function `ngx_sock_ntop': > src/core/ngx_inet.c:236: error: comparison between signed and unsigned > make[1]: *** [objs/src/core/ngx_inet.o] Error 1 > make[1]: Leaving directory `/home/kworthington/nginx-1.5.3' > make: *** [build] Error 2 > > I would appreciate any help to fix this. Thank you! Looks like socklen_t is signed in your environment, which results in a warning. Try the following patch: --- a/src/core/ngx_inet.c +++ b/src/core/ngx_inet.c @@ -233,7 +233,7 @@ ngx_sock_ntop(struct sockaddr *sa, /* on Linux sockaddr might not include sun_path at all */ - if (socklen <= offsetof(struct sockaddr_un, sun_path)) { + if (socklen <= (socklen_t) offsetof(struct sockaddr_un, sun_path)) { p = ngx_snprintf(text, len, "unix:%Z"); } else { Alternatively, you may just ignore the warning, it's harmless. 
-- Maxim Dounin http://nginx.org/en/donation.html From kworthington at gmail.com Sat Aug 3 16:31:54 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Sat, 3 Aug 2013 12:31:54 -0400 Subject: error building nginx 1.5.3 on Cygwin In-Reply-To: <20130803104734.GN2130@mdounin.ru> References: <20130803104734.GN2130@mdounin.ru> Message-ID: Hi Maxim, Thanks so much. Your patch worked great. The build was failing without that change. Is there any way that patch can be incorporated into the main source, so that it doesn't happen again in 1.5.4? Thanks again, I really appreciate it. Best regards, Kevin -- Kevin Worthington http://kevinworthington.com/ http://twitter.com/kworthington On Sat, Aug 3, 2013 at 6:47 AM, Maxim Dounin wrote: > Hello! > > On Fri, Aug 02, 2013 at 09:17:03PM -0400, Kevin Worthington wrote: > > > I'm trying to build nginx 1.5.3 on Cygwin using Windows 7. I see the > > following output during "make": > > [...] > > > cc1: warnings being treated as errors > > src/core/ngx_inet.c: In function `ngx_sock_ntop': > > src/core/ngx_inet.c:236: error: comparison between signed and unsigned > > make[1]: *** [objs/src/core/ngx_inet.o] Error 1 > > make[1]: Leaving directory `/home/kworthington/nginx-1.5.3' > > make: *** [build] Error 2 > > > > I would appreciate any help to fix this. Thank you! > > Looks like socklen_t is signed in your environment, which results > in a warning. Try the following patch: > > --- a/src/core/ngx_inet.c > +++ b/src/core/ngx_inet.c > @@ -233,7 +233,7 @@ ngx_sock_ntop(struct sockaddr *sa, > > /* on Linux sockaddr might not include sun_path at all */ > > - if (socklen <= offsetof(struct sockaddr_un, sun_path)) { > + if (socklen <= (socklen_t) offsetof(struct sockaddr_un, > sun_path)) { > p = ngx_snprintf(text, len, "unix:%Z"); > > } else { > > Alternatively, you may just ignore the warning, it's harmless. 
> > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Sat Aug 3 16:54:47 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Sat, 3 Aug 2013 12:54:47 -0400 Subject: nginx-1.5.3 In-Reply-To: <20130730134105.GI2130@mdounin.ru> References: <20130730134105.GI2130@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.3 for Windows http://goo.gl/qiz6kq (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Jul 30, 2013 at 9:41 AM, Maxim Dounin wrote: > Changes with nginx 1.5.3 30 Jul > 2013 > > *) Change in internal API: now u->length defaults to -1 if working with > backends in unbuffered mode. > > *) Change: now after receiving an incomplete response from a backend > server nginx tries to send an available part of the response to a > client, and then closes client connection. > > *) Bugfix: a segmentation fault might occur in a worker process if the > ngx_http_spdy_module was used with the "client_body_in_file_only" > directive. > > *) Bugfix: the "so_keepalive" parameter of the "listen" directive might > be handled incorrectly on DragonFlyBSD. > Thanks to Sepherosa Ziehau. > > *) Bugfix: in the ngx_http_xslt_filter_module. > > *) Bugfix: in the ngx_http_sub_filter_module. 
> > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Aug 3 21:48:58 2013 From: nginx-forum at nginx.us (pablo.rodriguez) Date: Sat, 03 Aug 2013 17:48:58 -0400 Subject: PHP Fatal error Message-ID: <9a74be60183b917e6bab2518d3c31d32.NginxMailingListEnglish@forum.nginx.org> Hello! In a PHP form, when submitting one email, I get the following error in error.log: 2013/08/03 21:39:22 [error] 19544#0: *11 FastCGI sent in stderr: "PHP message: PHP Warning: require_once(TEMPLATEPATH/functions/theme-functions.php): failed to open stream: No such file or directory in /web/domain.com/public/wp-content/themes/launcheffect/functions.php on line 151 PHP message: PHP Fatal error: require_once(): Failed opening required 'TEMPLATEPATH/functions/theme-functions.php' (include_path='.:/usr/share/php:/usr/share/pear') in /web/domain.com/public/wp-content/themes/launcheffect/functions.php on line 151" while reading response header from upstream, client: 71.28.74.212, server: www.domain.com, request: "POST /wp-content/themes/launcheffect/post.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com", referrer: "http://www.domain.com/" I've been reading about permissions, about nginx and fastcgi, about PHP-FPM, reviewing my config... without success. Of course I have that file/directory. Can anyone help me to solve this error? Thanks in advance!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241517,241517#msg-241517 From nginx-forum at nginx.us Sat Aug 3 21:58:56 2013 From: nginx-forum at nginx.us (pablo.rodriguez) Date: Sat, 03 Aug 2013 17:58:56 -0400 Subject: PHP Fatal error In-Reply-To: <9a74be60183b917e6bab2518d3c31d32.NginxMailingListEnglish@forum.nginx.org> References: <9a74be60183b917e6bab2518d3c31d32.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53c95fad13d93ce35f99794128c123c7.NginxMailingListEnglish@forum.nginx.org> Sorry, my nginx.conf: http://pastebin.com/5VB1BzHj Best. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241517,241518#msg-241518 From nginx-forum at nginx.us Sat Aug 3 22:07:20 2013 From: nginx-forum at nginx.us (justin) Date: Sat, 03 Aug 2013 18:07:20 -0400 Subject: Hide raw regular expression from $_SERVER['server_name'] Message-ID: I am using a regular expression in a server_name: server_name ~^(?!web2\.)(?.+)\.mydomain\.com$; In PHP, or any language for that matter, if I: echo $_SERVER['server_name']; //~^(?!web2\.)(?.+)\.mydomain\.com$ I get the raw regular expression back. Is it possible to mask this, and instead return the actual matched account? I.e. foo.mydomain.com. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241519,241519#msg-241519 From contact at jpluscplusm.com Sat Aug 3 22:25:14 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 3 Aug 2013 23:25:14 +0100 Subject: Hide raw regular expression from $_SERVER['server_name'] In-Reply-To: References: Message-ID: On 3 Aug 2013 23:07, "justin" wrote: > > I am using a regular expression in a server_name: > > server_name ~^(?!web2\.)(?.+)\.mydomain\.com$; > > In PHP, or any language for that matter, if I: > > echo $_SERVER['server_name']; > //~^(?!web2\.)(?.+)\.mydomain\.com$ > > I get the raw regular expression back. Is it possible to mask this, and > instead return the actual matched account? I.e. > foo.mydomain.com.
Have you tried looking at the "Host" HTTP header? J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Aug 4 00:06:31 2013 From: nginx-forum at nginx.us (justin) Date: Sat, 03 Aug 2013 20:06:31 -0400 Subject: Hide raw regular expression from $_SERVER['server_name'] In-Reply-To: References: Message-ID: <4185f32d14d0cd73c7fb65d3703aa0d2.NginxMailingListEnglish@forum.nginx.org> J, The "Host" HTTP header is correct; I am just wondering if I can modify or prevent the raw regular expression being exposed in $_SERVER['server_name']. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241519,241522#msg-241522 From vbart at nginx.com Sun Aug 4 00:39:38 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 4 Aug 2013 04:39:38 +0400 Subject: why will nginx's map directive eat all ram? In-Reply-To: <7f95c46f26e77a4c5467572f0f988db9.NginxMailingListEnglish@forum.nginx.org> References: <201308021905.45373.vbart@nginx.com> <7f95c46f26e77a4c5467572f0f988db9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201308040439.38709.vbart@nginx.com> On Saturday 03 August 2013 13:57:52 Payne Chu wrote: > Finally I can reproduce even in the Homebrew version. It will only leak > when I use relative path. and all version also can reproduce. b4 I test > Homebrew version with full path not relative path that's the > different....XD~ > > map $pid $public_root { default public; } <-- this one relative path > leak happened > map $pid $public_root { default /usr/local/Cellar/nginx/1.4.1/public; } <- > this one full path without leak.. > Please, try this patch: http://pp.nginx.com/vbart/patches/fix_memleak.txt It fixes the leak. wbr, Valentin V.
Bartenev -- http://nginx.org/en/donation.html From contact at jpluscplusm.com Sun Aug 4 09:14:33 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 4 Aug 2013 10:14:33 +0100 Subject: Hide raw regular expression from $_SERVER['server_name'] In-Reply-To: <4185f32d14d0cd73c7fb65d3703aa0d2.NginxMailingListEnglish@forum.nginx.org> References: <4185f32d14d0cd73c7fb65d3703aa0d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 4 Aug 2013 01:07, "justin" wrote: > > J, > > The "HOST" http-header is correct, I am just wondering if I can modify or > prevent the raw regular expression being exposed in $_SERVER['server_nam']. Why? The only meaningful information it could hold would be the same as that passed via the Host header. You already have that information. Just use the Host header. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Sun Aug 4 09:33:02 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sun, 4 Aug 2013 13:33:02 +0400 Subject: Hide raw regular expression from $_SERVER['server_name'] In-Reply-To: References: Message-ID: <08266303-E77E-408A-B481-AAB6B92BE64A@sysoev.ru> On Aug 4, 2013, at 2:07 , justin wrote: > I am using a regular expression in a server_name: > > server_name ~^(?!web2\.)(?.+)\.mydomain\.com$; > > In PHP, or any language for that matter, if I: > > echo $_SERVER['server_name']; > //~^(?!web2\.)(?.+)\.mydomain\.com$ > > I get the raw regular expression back. Is it possible to mask this, and > instead return the actual regular expression matchedaccount? I.E. > foo.mydomain.com. 
You should change the following string in your configuration fastcgi_param SERVER_NAME $server_name; to fastcgi_param SERVER_NAME $host; -- Igor Sysoev http://nginx.com/services.html From nginx-forum at nginx.us Sun Aug 4 09:39:28 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 04 Aug 2013 05:39:28 -0400 Subject: Hide raw regular expression from $_SERVER['server_name'] In-Reply-To: <08266303-E77E-408A-B481-AAB6B92BE64A@sysoev.ru> References: <08266303-E77E-408A-B481-AAB6B92BE64A@sysoev.ru> Message-ID: <5e855424a7a4ddeec82f769ee498f509.NginxMailingListEnglish@forum.nginx.org> Great fix Igor. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241519,241531#msg-241531 From nginx-forum at nginx.us Sun Aug 4 20:21:38 2013 From: nginx-forum at nginx.us (vasilakisfil) Date: Sun, 04 Aug 2013 16:21:38 -0400 Subject: Problem with subdomain in localhost. Message-ID: <3164a2cca69e2fe8096b0d6f4ce91e56.NginxMailingListEnglish@forum.nginx.org> Hello! I am trying to create subdomains in my localhost. I want for instance 2 domains, fil.localhost and test.localhost, that point to 2 different locations. Although I have managed to accomplish that, there is a problem with my localhost: it points to a location too (actually in fil.localhost) and I can't find out why. Here is the configuration of the fil.localhost http://pastebin.com/TKt37aNz (test.localhost is exactly the same except that instead of fil.localhost in server_name, it has test.localhost) Also, here http://pastebin.com/TUZmLwiw is my /etc/hosts list. I haven't touched nginx.conf or any other file. Any suggestions are really welcome :D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241554,241554#msg-241554 From reallfqq-nginx at yahoo.fr Sun Aug 4 20:36:15 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 4 Aug 2013 16:36:15 -0400 Subject: Problem with subdomain in localhost.
In-Reply-To: <3164a2cca69e2fe8096b0d6f4ce91e56.NginxMailingListEnglish@forum.nginx.org> References: <3164a2cca69e2fe8096b0d6f4ce91e56.NginxMailingListEnglish@forum.nginx.org> Message-ID: The question is: How does Nginx process a request made with an unknown hostname? The answer is: http://nginx.org/en/docs/http/request_processing.html --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nuliknol at gmail.com Sun Aug 4 20:43:16 2013 From: nuliknol at gmail.com (Nulik Nol) Date: Sun, 4 Aug 2013 15:43:16 -0500 Subject: Conditional balancing Message-ID: I am developing a webmail service where the user's inbox and all related session variables will be loaded to one of many application servers which will be behind the balancer (running nginx). So, I need nginx to direct all requests of a particular user, to a particular application server. How can it be done? One way, I think, is to integrate zookeeper into nginx as module, this way nginx will communicate with the application daemon and always know which appserver is holding user's session and data. But this is a lot of work, is there any other solution to do this sort of conditional balancing ? Thanks in advance Nulik From mdounin at mdounin.ru Sun Aug 4 21:47:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Aug 2013 01:47:13 +0400 Subject: Conditional balancing In-Reply-To: References: Message-ID: <20130804214713.GP2130@mdounin.ru> Hello! On Sun, Aug 04, 2013 at 03:43:16PM -0500, Nulik Nol wrote: > I am developing a webmail service where the user's inbox and all > related session variables will be loaded to one of many application > servers which will be behind the balancer (running nginx). So, I need > nginx to direct all requests of a particular user, to a particular > application server. How can it be done? 
> One way, I think, is to integrate zookeeper into nginx as module, this > way nginx will communicate with the application daemon and always know > which appserver is holding user's session and data. But this is a lot > of work, is there any other solution to do this sort of conditional > balancing ? A trivial solution would be to use a cookie to specify the backend needed, and set the cookie during login. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Aug 5 02:13:52 2013 From: nginx-forum at nginx.us (icecola123) Date: Sun, 04 Aug 2013 22:13:52 -0400 Subject: cannot build variables_hash In-Reply-To: <20130731140323.GW2130@mdounin.ru> References: <20130731140323.GW2130@mdounin.ru> Message-ID: <09d3f04e4f18e05ca48773ec25566a97.NginxMailingListEnglish@forum.nginx.org> Want to buy good quality and cheap tablet you, here is a good choice efox Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241210,241561#msg-241561 From nginx-forum at nginx.us Mon Aug 5 02:14:52 2013 From: nginx-forum at nginx.us (icecola123) Date: Sun, 04 Aug 2013 22:14:52 -0400 Subject: cannot build variables_hash In-Reply-To: References: Message-ID: Want to buy good quality and cheap tablet you, here is a good choice efox Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241210,241562#msg-241562 From nginx-forum at nginx.us Mon Aug 5 03:17:27 2013 From: nginx-forum at nginx.us (Payne Chu) Date: Sun, 04 Aug 2013 23:17:27 -0400 Subject: why will nginx's map directive eat all ram?
In-Reply-To: <201308040439.38709.vbart@nginx.com> References: <201308040439.38709.vbart@nginx.com> Message-ID: Thanks Valentin, the issue is fixed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241478,241566#msg-241566 From pchychi at gmail.com Mon Aug 5 05:04:33 2013 From: pchychi at gmail.com (Payam Chychi) Date: Sun, 4 Aug 2013 22:04:33 -0700 Subject: Conditional balancing In-Reply-To: References: Message-ID: Server-side cookies for session management, though it'll be more of a static, pre-determined mapping... aka sticky sessions. -- Payam Chychi Network Engineer / Security Specialist On Sunday, 4 August, 2013 at 1:43 PM, Nulik Nol wrote: > I am developing a webmail service where the user's inbox and all > related session variables will be loaded to one of many application > servers which will be behind the balancer (running nginx). So, I need > nginx to direct all requests of a particular user, to a particular > application server. How can it be done? > One way, I think, is to integrate zookeeper into nginx as module, this > way nginx will communicate with the application daemon and always know > which appserver is holding user's session and data. But this is a lot > of work, is there any other solution to do this sort of conditional > balancing ? > > Thanks in advance > Nulik > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Aug 5 07:42:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Aug 2013 11:42:44 +0400 Subject: error building nginx 1.5.3 on Cygwin In-Reply-To: References: <20130803104734.GN2130@mdounin.ru> Message-ID: <20130805074243.GS2130@mdounin.ru> Hello! On Sat, Aug 03, 2013 at 12:31:54PM -0400, Kevin Worthington wrote: > Hi Maxim, > > Thanks so much. Your patch worked great. > > The build was failing without that change.
> > Is there any way that patch can be incorporated into the main source, so > that it doesn't happen again in 1.5.4? Sure, committed. Thanks for testing. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Aug 5 10:14:09 2013 From: nginx-forum at nginx.us (microwish) Date: Mon, 05 Aug 2013 06:14:09 -0400 Subject: still 400 response code, but so weird this time In-Reply-To: <20130801120211.GF2130@mdounin.ru> References: <20130801120211.GF2130@mdounin.ru> Message-ID: <06daf5bec636dd8e06596a2c1179273b.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Now I'm sure that the 400-related logs in the access log file are caused by bad SSL connections, which either finish SSL handshakes and then send no data, or don't finish the SSL handshake at all. I'll be diving into it for more insights. Thank you. Maxim Dounin Wrote: ------------------------------------------------------- > > Just as you said, if the browser rejected my SSL cert, what could I > do to > > solve this issue? > > First of all, you should check if it's the case. If it is, you > should investigate further why the browser rejects the cert - > there are plenty of possible reasons. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241435,241584#msg-241584 From kworthington at gmail.com Mon Aug 5 12:31:37 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Mon, 5 Aug 2013 08:31:37 -0400 Subject: error building nginx 1.5.3 on Cygwin In-Reply-To: <20130805074243.GS2130@mdounin.ru> References: <20130803104734.GN2130@mdounin.ru> <20130805074243.GS2130@mdounin.ru> Message-ID: Hi Maxim, Thanks for committing that. And again for your help.
Cheers, Kevin -- Kevin Worthington http://kevinworthington.com/ http://twitter.com/kworthington On Mon, Aug 5, 2013 at 3:42 AM, Maxim Dounin wrote: > Hello! > > On Sat, Aug 03, 2013 at 12:31:54PM -0400, Kevin Worthington wrote: > > > Hi Maxim, > > > > Thanks so much. Your patch worked great. > > > > The build was failing without that change. > > > > Is there any way that patch can be incorporated into the main source, so > > that it doesn't happen again in 1.5.4? > > Sure, committed. Thanks for testing. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Raul.Rangel at disney.com Mon Aug 5 15:15:16 2013 From: Raul.Rangel at disney.com (Rangel, Raul) Date: Mon, 5 Aug 2013 08:15:16 -0700 Subject: writev function not implemented Message-ID: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> Hello, I have compiled nginx on ubuntu 12.04 but I'm seeing a really strange error when I try and POST a file through nginx. I get a line in my logs that says: 2013/08/02 17:01:11 [crit] 26#0: *7 writev() "/var/lib/nginx/client_body_temp/0000000001" failed (38: Function not implemented), client: 172.16.42.1, server: , request: "POST /tenants/cwstest/stunts/51fbe5d696bb27002d000001/uploads/51fbe5d796bb27002d000002/encrypt HTTP/1.1", host: "localhost:49172" I then get a 500 error. I have put together a gist with all the relevant information: https://gist.github.com/ismell/6156617 If anyone could provide some assistance that would be great. Thanks, Raul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From coolbsd at hotmail.com Mon Aug 5 16:39:44 2013 From: coolbsd at hotmail.com (Cool) Date: Mon, 5 Aug 2013 09:39:44 -0700 Subject: PHP Fatal error In-Reply-To: <53c95fad13d93ce35f99794128c123c7.NginxMailingListEnglish@forum.nginx.org> References: <9a74be60183b917e6bab2518d3c31d32.NginxMailingListEnglish@forum.nginx.org> <53c95fad13d93ce35f99794128c123c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: This has nothing to do with nginx; you should go to a PHP forum/mailing list to get an answer. But after a quick check ... I think you have a problem with the PHP code, and the error message you posted here says it clearly: there is something wrong in file "/web/domain.com/public/wp-content/themes/launcheffect/functions.php on line 151", because "require_once(TEMPLATEPATH/functions/theme-functions.php)" cannot find the file - maybe you failed to define the constant TEMPLATEPATH? If you still cannot work out what it means, do go to PHP-related sites. On 8/3/2013 2:58 PM, pablo.rodriguez wrote: > Sorry, my nginx.conf: > > http://pastebin.com/5VB1BzHj > > Best. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241517,241518#msg-241518 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > From r at roze.lv Mon Aug 5 19:44:18 2013 From: r at roze.lv (Reinis Rozitis) Date: Mon, 5 Aug 2013 22:44:18 +0300 Subject: writev function not implemented In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: <03D070A9A8E8481092181BA49F62C06D@NeiRoze> > 2013/08/02 17:01:11 [crit] 26#0: *7 writev() > "/var/lib/nginx/client_body_temp/0000000001" failed (38: Function not > implemented) On what filesystem does /var/lib/nginx/client_body_temp reside (like 'cat /proc/mounts')?
rr From Raul.Rangel at disney.com Mon Aug 5 20:13:20 2013 From: Raul.Rangel at disney.com (Rangel, Raul) Date: Mon, 5 Aug 2013 13:13:20 -0700 Subject: writev function not implemented In-Reply-To: <03D070A9A8E8481092181BA49F62C06D@NeiRoze> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <03D070A9A8E8481092181BA49F62C06D@NeiRoze> Message-ID: <2465AAEEC8B8A242B26ED5F44BCA805F263651378C@SM-CALA-VXMB04A.swna.wdpr.disney.com> The filesystem is AUFS. It's mounted inside of a docker container. root at 012b3d2b6aab:/# cat /proc/mounts rootfs / rootfs rw 0 0 none / aufs rw,relatime,si=2418709ef08a7cdd 0 0 proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0 sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0 rpool/ROOT/ubuntu-1 /sbin/init zfs ro,relatime,xattr 0 0 data/docker /etc/resolv.conf zfs ro,relatime,xattr 0 0 devpts /dev/tty1 devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0 devpts /dev/pts devpts rw,relatime,mode=600,ptmxmode=666 0 0 devpts /dev/ptmx devpts rw,relatime,mode=600,ptmxmode=666 0 0 So my assumption is that AUFS does not support writev? So I need to somehow mount a different filesystem? -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Reinis Rozitis Sent: Monday, August 05, 2013 1:44 PM To: nginx at nginx.org Subject: Re: writev function not implemented > 2013/08/02 17:01:11 [crit] 26#0: *7 writev() > "/var/lib/nginx/client_body_temp/0000000001" failed (38: Function not > implemented) On what filesystem does /var/lib/nginx/client_body_temp reside (like 'cat /proc/mounts')? 
rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mikydevel at yahoo.fr Mon Aug 5 21:44:25 2013 From: mikydevel at yahoo.fr (Mik J) Date: Mon, 5 Aug 2013 22:44:25 +0100 (BST) Subject: Avice for my vhost configuration Message-ID: <1375739065.33999.YahooMailNeo@web171802.mail.ir2.yahoo.com> Hello, I plan to configure my nginx server with a couple of vhosts. For each of them I want: - to use php - deny access to anything beginning with a dot - not logging access to favicon So my configuration would look like that: server { ... location ~ \.php$ { root /var/www/htdocs/sites/expertinet; fastcgi_pass unix:/tmp/php.sock; # fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\. { access_log off; log_not_found off; deny all; } location = /favicon.ico { return 204; access_log off; log_not_found off; expires 30d; } } This is in each of my virtual host configurations. This is very redundant. For example if I want to use a tcp socket for fastcgi_pass, I need to edit every single vhost configuration. What is your advice to avoid this? What is the recommended practice? Someone advised me to use include... Could you show me an example? Thank you -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rkearsley at blueyonder.co.uk Mon Aug 5 22:00:39 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 05 Aug 2013 23:00:39 +0100 Subject: writev function not implemented In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F263651378C@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <03D070A9A8E8481092181BA49F62C06D@NeiRoze> <2465AAEEC8B8A242B26ED5F44BCA805F263651378C@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: <52002087.9090108@blueyonder.co.uk> On 05/08/13 21:13, Rangel, Raul wrote: > The filesystem is AUFS. It's mounted inside of a docker container. > > > So my assumption is that AUFS does not support writev? So I need to somehow mount a different filesystem? Hi I can't comment about AUFS, but you can change where those temp files are stored if you wanted to make a small partition dedicated as a temp directory http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_temp_path From lists at ruby-forum.com Mon Aug 5 22:15:45 2013 From: lists at ruby-forum.com (=?UTF-8?B?SsOpcsO0bWU=?= P.) Date: Tue, 06 Aug 2013 00:15:45 +0200 Subject: writev function not implemented In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F263651378C@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <03D070A9A8E8481092181BA49F62C06D@NeiRoze> <2465AAEEC8B8A242B26ED5F44BCA805F263651378C@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: Rangel, Raul wrote in post #1117817: > So my assumption is that AUFS does not support writev? So I need to > somehow mount a different filesystem? I wrote a quick and dirty C program to test writev() on AUFS, and it worked like a charm here (3.8 Debian kernel). https://gist.github.com/jpetazzo/6160048 What's the *underlying* filesystem? ZFS? -- Posted via http://www.ruby-forum.com/. 
From Raul.Rangel at disney.com Mon Aug 5 22:22:57 2013 From: Raul.Rangel at disney.com (Rangel, Raul) Date: Mon, 5 Aug 2013 15:22:57 -0700 Subject: writev function not implemented In-Reply-To: References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <03D070A9A8E8481092181BA49F62C06D@NeiRoze> <2465AAEEC8B8A242B26ED5F44BCA805F263651378C@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: <2465AAEEC8B8A242B26ED5F44BCA805F26365139B3@SM-CALA-VXMB04A.swna.wdpr.disney.com> So I just tried your little script inside my container (AUFS on top of ZFS): root at 47dfdb95e2a6:/# ./a.out writev: Function not implemented root at 47dfdb95e2a6:/# Then I tried my script outside of the container (ZFS): me at slagathor:~/Projects/service/services/upload$ ./a.out 6 Here is my uname: Linux slagathor 3.8.0-27-generic #40~precise3-Ubuntu SMP Fri Jul 19 14:38:30 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux The plot thickens.... -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Jérôme P. Sent: Monday, August 05, 2013 4:16 PM To: nginx at nginx.org Subject: Re: RE: writev function not implemented Rangel, Raul wrote in post #1117817: > So my assumption is that AUFS does not support writev? So I need to > somehow mount a different filesystem? I wrote a quick and dirty C program to test writev() on AUFS, and it worked like a charm here (3.8 Debian kernel). https://gist.github.com/jpetazzo/6160048 What's the *underlying* filesystem? ZFS? -- Posted via http://www.ruby-forum.com/.
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From Kevin.Johns at Level3.com Mon Aug 5 22:28:31 2013 From: Kevin.Johns at Level3.com (Johns, Kevin) Date: Mon, 5 Aug 2013 22:28:31 +0000 Subject: cache based on file size Message-ID: <59566FAA26861246A0E785066534B42A26F81E7F@USIDCWVEMBX07.corp.global.level3.com> Hi, In looking over Nginx configuration for the proxy module, I do not see an easy way to influence what is cached based on object size. I have two use cases of interest: 1. Store a small file in a particular zone (e.g., SSD), and 2. Have a large file bypass the cache (no-store large files) Any insight on how best to accomplish this would be greatly appreciated. Kevin From ben at indietorrent.org Mon Aug 5 22:32:51 2013 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 05 Aug 2013 18:32:51 -0400 Subject: nginx on Windows returns 504 Gateway Timeout when attempting to POST form using cURL via PHP Message-ID: <52002813.8020704@indietorrent.org> Hello, I have a fairly simple PHP script that I have used in the past, under Apache, to "simulate an HTTP form POST". For some reason, when I attempt to do the same under nginx, the browser hangs until some timeout is reached, at which point nginx returns a "504 Gateway Timeout" response to the browser. This could very well be a PHP problem (or configuration issue) and have nothing to do with nginx, in which case I am happy to take this discussion to the appropriate list. But this does work as expected under Apache, running PHP as a module. If I enable verbose cURL output in PHP, all of the output is sent to the instance of cmd.exe (the Windows console) in which PHP's "php-cgi.exe" is running. This enables me to see that nginx is indeed handling the request. Here is the output: * About to connect() to ben-pc port 443 (#0) * Trying fe80::1ddc:6806:70b6:8546... * Connection refused * Trying fe80::61f9:7669:a282:252d... 
* Connection refused * Trying 169.254.37.45... * connected * Connected to ben-pc (169.254.37.45) port 443 (#0) * SSL connection using DHE-RSA-AES256-SHA * Server certificate: * [redacted] * start date: 2013-07-05 18:17:36 GMT * expire date: 2014-07-05 18:17:36 GMT * [redacted] * SSL certificate verify result: self signed certificate (18), continuing anyway. * Server auth using Basic with user 'me' > POST /myproject/trunk/public/jsapi/api-router/ HTTP/1.1 Authorization: Basic [redacted] Host: ben-pc Accept: */* Content-Length: 85 Content-Type: application/x-www-form-urlencoded * upload completely sent off: 85 out of 85 bytes < HTTP/1.1 504 Gateway Time-out < Server: nginx/1.5.2 < Date: Mon, 05 Aug 2013 22:28:06 GMT < Content-Type: text/html < Content-Length: 182 < Connection: keep-alive < * Connection #0 to host ben-pc left intact * Closing connection #0 If I disable verbose cURL output (using curl_setopt($ch, CURLOPT_VERBOSE, 1);) no output is sent to the console, and the same timeout and 504 response occurs. My setup is essentially the same as that described at http://wiki.nginx.org/PHPFastCGIOnWindows . Is there a simple solution to this? I'm surprised by the dearth of search results for "nginx php-cgi curl 504", given that my stack configuration is relatively untouched. I should mention that all other PHP behavior seems normal; the server is definitely "functional" in every other way. I am happy to post details of my nginx installation, PHP configuration, script source code, etc. if any of it would be helpful. 
Thanks for any pointers, -Ben From contact at jpluscplusm.com Mon Aug 5 22:54:49 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 5 Aug 2013 23:54:49 +0100 Subject: writev function not implemented In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F26365139B3@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <03D070A9A8E8481092181BA49F62C06D@NeiRoze> <2465AAEEC8B8A242B26ED5F44BCA805F263651378C@SM-CALA-VXMB04A.swna.wdpr.disney.com> <2465AAEEC8B8A242B26ED5F44BCA805F26365139B3@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: On 5 August 2013 23:22, Rangel, Raul wrote: > So I just tried you little script inside my container (AUFS on top of ZFS): > > root at 47dfdb95e2a6:/# ./a.out > writev: Function not implemented > root at 47dfdb95e2a6:/# > > Then I tried my script outside of the container (ZFS): > me at slagathor:~/Projects/service/services/upload$ ./a.out > 6 > > Here is my uname: > Linux slagathor 3.8.0-27-generic #40~precise3-Ubuntu SMP Fri Jul 19 14:38:30 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux > > The plot thickens.... Not really. Check http://aufs.sourceforge.net/ for the string "writev". It's not implemented. J From lists at ruby-forum.com Tue Aug 6 00:09:47 2013 From: lists at ruby-forum.com (=?UTF-8?B?SsOpcsO0bWU=?= P.) Date: Tue, 06 Aug 2013 02:09:47 +0200 Subject: writev function not implemented In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: <574db24c77382e216a3e422524df9045@ruby-forum.com> I believe it *is* implemented. I re-did my tests on: - 3.10 (Debian) - 3.8 (Debian) - 3.8.0-27-generic (Ubuntu, the same as yours) - 3.2.0-40 (Ubuntu) - 2.6.38.2 (in-house) They all worked. I don't understand exactly how AUFS passes writev to the underlying filesystem, but there might be some weird interaction with ZFS. 
I tried with tmpfs, ext4, and btrfs, they all worked. Then I wondered if it could have been caused by something special in Docker, so I tried within a Docker container (not just in a manual AUFS mount) - and it worked. Would you mind trying with a non-ZFS backend? (I'm asking just because it will be much faster for you to test with a non-ZFS backend, than for me to re-install ZFS on my Linux machine :-)) -- Posted via http://www.ruby-forum.com/. From reallfqq-nginx at yahoo.fr Tue Aug 6 00:40:57 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 5 Aug 2013 20:40:57 -0400 Subject: Avice for my vhost configuration In-Reply-To: <1375739065.33999.YahooMailNeo@web171802.mail.ir2.yahoo.com> References: <1375739065.33999.YahooMailNeo@web171802.mail.ir2.yahoo.com> Message-ID: Yup, include is the way I would do that personally. Documentation: http://nginx.org/en/docs/ngx_core_module.html#include The funny thing is you already are using the 'include' directive: look at your 'include fastcgi_params;' line. There must be a 'fastcgi_params' file in your configuration directory... That probably comes from the part you copied/pasted from the sample doc. The way to go would be to put the redundant configuration part in it then call it wherever necessary in the vhosts conf. The docs tell you that include can be used in any context you wish, you just need to decide on the granularity. Hope I helped, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Tue Aug 6 01:03:47 2013 From: lists at ruby-forum.com (=?UTF-8?B?SsOpcsO0bWU=?= P.) 
Date: Tue, 06 Aug 2013 03:03:47 +0200 Subject: writev function not implemented In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: <1ba9403d57c704f96d59b6df97687706@ruby-forum.com> Actually, I went ahead and rebuilt SPL and ZFS on my machine, and did an AUFS mount over ZFS... And wvtest ran, no problem. -- Posted via http://www.ruby-forum.com/. From dennisml at conversis.de Tue Aug 6 03:02:00 2013 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Tue, 06 Aug 2013 05:02:00 +0200 Subject: Setting the status code Message-ID: <52006728.7060106@conversis.de> Hi, I'm wondering how I can set a status code and still deliver a custom web page? Specifically I want to use a status code of 403 Forbidden but depending on the exact reason I want to display different custom error pages for that case. When I use the "return 403" directive I can no longer deliver content and at most a single custom page can be returned which is defined by the error_page directive. Since I determine the reason for the denied access in lua a way to do it there would also help. I already tried "nginx.status = 403" followed by a "nginx.exec('/reason1')" but while the right page is displayed the status code returned gets reset to 200. Any ideas? Regards, Dennis From nginx-forum at nginx.us Tue Aug 6 05:40:48 2013 From: nginx-forum at nginx.us (daveyfx) Date: Tue, 06 Aug 2013 01:40:48 -0400 Subject: nginx and WordPress in a subdirectory Message-ID: Hello - I've got nginx as a front-end to Apache and am trying to serve a single WordPress site from a location on my site. Right now I would like to test the location, but it will eventually be served as /advertise. I cannot get the WordPress site to serve correctly, however, as I am seeing a 301 redirect infinite loop.
nginx location directive: location ^~ /advertise-wp { include /usr/local/nginx/proxypass.conf; proxy_pass http://nsweb1.nstein.prod:90; } contents of /usr/local/nginx/proxypass.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; Apache configuration: Listen 90 ServerName www.sitename.com ServerAlias nsweb1.nstein.prod DocumentRoot /opt/nstein/advertise ErrorLog /var/log/apache2/www_error.log CustomLog /var/log/apache2/www_access.log combined DirectoryIndex index.php RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] I have tried to make this work properly by setting these in wp-config.php as well, to no avail. define('WP_HOME', 'http://www.sitename.com/advertise-wp'); define('WP_SITEURL', 'http://www.sitename.com/advertise-wp'); Thanks for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241623,241623#msg-241623 From nginx-forum at nginx.us Tue Aug 6 06:28:49 2013 From: nginx-forum at nginx.us (mex) Date: Tue, 06 Aug 2013 02:28:49 -0400 Subject: nginx and WordPress in a subdirectory In-Reply-To: References: Message-ID: Did you try to -- turn it off and on again -- check it without the Rewrite stuff in your Apache config? Where did you get that snippet from?
Your RewriteBase looks fishy. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241623,241625#msg-241625 From nginx-forum at nginx.us Tue Aug 6 06:29:55 2013 From: nginx-forum at nginx.us (mex) Date: Tue, 06 Aug 2013 02:29:55 -0400 Subject: Setting the status code In-Reply-To: <52006728.7060106@conversis.de> References: <52006728.7060106@conversis.de> Message-ID: let your app handle and deliver error-pages Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241621,241626#msg-241626 From nginx-forum at nginx.us Tue Aug 6 07:22:14 2013 From: nginx-forum at nginx.us (mex) Date: Tue, 06 Aug 2013 03:22:14 -0400 Subject: Conditional balancing In-Reply-To: References: Message-ID: essence of the other two answers: http://dgtool.blogspot.de/2013/02/nginx-as-sticky-balancer-for-ha-using.html you might want to google "nginx sticky sessions" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241556,241627#msg-241627 From artemrts at ukr.net Tue Aug 6 08:10:14 2013 From: artemrts at ukr.net (wishmaster) Date: Tue, 06 Aug 2013 11:10:14 +0300 Subject: Avice for my vhost configuration In-Reply-To: <1375739065.33999.YahooMailNeo@web171802.mail.ir2.yahoo.com> References: <1375739065.33999.YahooMailNeo@web171802.mail.ir2.yahoo.com> Message-ID: <1375776386.625651669.vhbqtwap@zebra-x17.ukr.net> --- Original message --- From: "Mik J" Date: 6 August 2013, 00:44:37 > Hello, > > I plan to configure my nginx server with a couple of vhosts. > > For each of them I want: > > - to use php > > - deny access to anything beginning with a dot > > - not logging access to favicon > > So my configuration would look like that > > server { > > ... > > > location ~ \.php$ { > root /var/www/htdocs/sites/expertinet; > fastcgi_pass unix:/tmp/php.sock; > # fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > } > > location ~ /\.
{ > access_log off; > log_not_found off; > deny all; > } > > location = /favicon.ico { > return 204; > access_log off; > log_not_found off; > expires 30d; > } > } > > > > > > > This in each of my virtual host configuration. This is very redundant. > > For example if I want to use tcp socket for fastcgi_pass, I need to edit every single vhost configuration. > > > > > > > What are you advices to avoid this ? What is the recommended practice ? > > Someone adviced my to use include... Could you show me an example ? You must read docs. http://nginx.org/en/docs/ngx_core_module.html#include For you: > location ~ \.php$ { > root /var/www/htdocs/sites/expertinet; <- you should avoid this, read http://wiki.nginx.org/Pitfalls include my_fastcgi_params; > include > fastcgi_params; > } in my_fastcgi_params: fastcgi_pass unix:/tmp/php.sock; # fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; -- Cheers, From nginx-forum at nginx.us Tue Aug 6 10:01:19 2013 From: nginx-forum at nginx.us (Rakshith) Date: Tue, 06 Aug 2013 06:01:19 -0400 Subject: HTTP/1.1 404 Not Found but status says 100% file transferred Message-ID: Hi, I am trying to do a PUT via CURL and below is a glimpse of the request: [root at flex-c1 ~]# curl -o /dev/null -w "Connect:%{time_connect}\nTransfer Start:%{time_starttransfer}\nTotal Time:%{time_total}\n" -X PUT --data-binary @output.dat -qvk http://x.x.x.x:80/nginx/output.dat * Connected to x.x.x.x port 80 > PUT /nginx/output.dat HTTP/1.1 > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Host: x.x.x.x > Accept: */* > Content-Length: 1073741824 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 97 1024M 0 0 97 997M 0 30.0M 0:00:34 0:00:33 0:00:01 29.9M < HTTP/1.1 404 Not Found <<<<< < Date: Tue, 06 Aug 2013 09:33:42 
GMT < Server: Apache < Content-Length: 219 < Content-Type: text/html; charset=iso-8859-1 100 1024M 100 219 100 1024M 6 29.9M 0:00:36 0:00:34 0:00:02 29.6M * Connection #0 to host x.x.x.x left intact Connect:0.028 Transfer Start:2.027 Total Time:34.160 * Closing connection #0 It says it transferred 100% but then the error says 404 not found.. Wanted to know what the problem is here. And the file did not make it through to the destination. I cross checked it. Any help on this would be really great. Thanks, Rakshith Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241631,241631#msg-241631 From dennisml at conversis.de Tue Aug 6 11:02:44 2013 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Tue, 06 Aug 2013 13:02:44 +0200 Subject: Setting the status code In-Reply-To: References: <52006728.7060106@conversis.de> Message-ID: <5200D7D4.5060100@conversis.de> On 06.08.2013 08:29, mex wrote: > let your app handle and deliver error-pages See basically all I want to do is return a single static html file and having to set up php/python/etc. just to serve this file seems like overkill to me. This is pretty much the most simple case for a web server to handle. Regards, Dennis From rkearsley at blueyonder.co.uk Tue Aug 6 11:31:10 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Tue, 06 Aug 2013 12:31:10 +0100 Subject: Setting the status code In-Reply-To: <5200D7D4.5060100@conversis.de> References: <52006728.7060106@conversis.de> <5200D7D4.5060100@conversis.de> Message-ID: <5200DE7E.2080502@blueyonder.co.uk> On 06/08/13 04:02, Dennis Jacobfeuerborn wrote: > Since I determine the reason for the denied access in lua a way to do > it there would also help. I already tried "nginx.status = 403" > followed by a "nginx.exec('/reason1')" but while the right page is > display the status code returned gets reset to 200. Hi You can do it in lua.. 
you need to do it in the header filter stage I'm doing something similar but probably not exactly the same Hopefully this example helps (untested): set $status_code ""; location / { access_by_lua ' -- your lua script here etc... -- if (an error happened) then ngx.var.status_code = "403" ngx.exec("/error/403.html") -- end '; } location /error { root html/error; header_filter_by_lua ' if ngx.var.status_code ~= "" then ngx.status = ngx.var.status_code end '; } From greg at 2lm.fr Tue Aug 6 14:07:48 2013 From: greg at 2lm.fr (Greg) Date: Tue, 06 Aug 2013 16:07:48 +0200 Subject: allow access on a sublocation Message-ID: <52010334.3080700@2lm.fr> Hi, this configuration does not work as expected: server { satisfy any; auth_basic "DING DING SONG"; auth_basic_user_file /etc/apache2/htpasswd; allow from CIDR; allow from CIDR; allow from CIDR; allow from CIDR; location ^~ /allowed/ { allow all; } deny all; } In short, I want to disallow access on my website, only some IPs can access, except for /allowed/ which is open. What's wrong? Greg From contact at jpluscplusm.com Tue Aug 6 14:22:52 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 6 Aug 2013 15:22:52 +0100 Subject: allow access on a sublocation In-Reply-To: <52010334.3080700@2lm.fr> References: <52010334.3080700@2lm.fr> Message-ID: On 6 Aug 2013 15:08, "Greg" wrote: > > Hi, > > this configuration does not work as expected: > > server { > satisfy any; > auth_basic "DING DING SONG"; > auth_basic_user_file /etc/apache2/htpasswd; > allow from CIDR; > allow from CIDR; > allow from CIDR; > allow from CIDR; > > location ^~ /allowed/ { > allow all; > } > > deny all; > } > > In short, I want to disallow access on my website, only some IPs can > access, except for /allowed/ which is open. Just checking you're aware that this only matches "/allowed/" by itself and nothing below it. Is that what you meant? Is that what you're testing? J -------------- next part -------------- An HTML attachment was scrubbed...
URL: From greg at 2lm.fr Tue Aug 6 14:34:54 2013 From: greg at 2lm.fr (Greg) Date: Tue, 06 Aug 2013 16:34:54 +0200 Subject: allow access on a sublocation In-Reply-To: References: <52010334.3080700@2lm.fr> Message-ID: <5201098E.2030503@2lm.fr> Le 06/08/2013 16:22, Jonathan Matthews a écrit : > Just checking you're aware that this only matches "/allowed/" by > itself and nothing below it. > > Is that what you meant? Is that what you're testing? > > It matches everything that _starts_ with /allowed/, right? -- Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Tue Aug 6 14:49:20 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 6 Aug 2013 15:49:20 +0100 Subject: allow access on a sublocation In-Reply-To: <5201098E.2030503@2lm.fr> References: <52010334.3080700@2lm.fr> <5201098E.2030503@2lm.fr> Message-ID: On 6 Aug 2013 15:35, "Greg" wrote: > > It matches everything that _starts_ with /allowed/, right? Yes it does; I had a brain-fart. Personally I omit the ^~ unless I have a situation that definitely requires it, as it's clearer without it IMHO. YMMV. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Aug 6 14:50:36 2013 From: r at roze.lv (Reinis Rozitis) Date: Tue, 6 Aug 2013 17:50:36 +0300 Subject: allow access on a sublocation In-Reply-To: <52010334.3080700@2lm.fr> References: <52010334.3080700@2lm.fr> Message-ID: > this configuration does not work as expected : > server { > satisfy any; If that is all your configuration (no extra location blocks) then just include the rules inside location / {} like: server { location / { satisfy any; auth_basic "DING DING SONG"; ... deny all; } location /allowed/ { } } p.s.
http://nginx.org/en/docs/http/ngx_http_core_module.html#location rr From greg at 2lm.fr Tue Aug 6 15:01:10 2013 From: greg at 2lm.fr (Greg) Date: Tue, 06 Aug 2013 17:01:10 +0200 Subject: allow access on a sublocation In-Reply-To: References: <52010334.3080700@2lm.fr> Message-ID: <52010FB6.90608@2lm.fr> Le 06/08/2013 16:50, Reinis Rozitis a écrit : >> this configuration does not work as expected : >> server { >> satisfy any; > > If that is all your configuration (no extra location blocks) then just > include the rules inside location / {} like: > > True, but I can't do that as "location / {}" is in a common config included by many other vhosts. -- Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From Raul.Rangel at disney.com Tue Aug 6 15:17:48 2013 From: Raul.Rangel at disney.com (Rangel, Raul) Date: Tue, 6 Aug 2013 08:17:48 -0700 Subject: writev function not implemented In-Reply-To: <1ba9403d57c704f96d59b6df97687706@ruby-forum.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <1ba9403d57c704f96d59b6df97687706@ruby-forum.com> Message-ID: <2465AAEEC8B8A242B26ED5F44BCA805F2636513D9C@SM-CALA-VXMB04A.swna.wdpr.disney.com> So I tried two different things. The first one was I used -v /var/lib/nginx to create a volume which bind mounted a zfs directory inside my container. This worked correctly. The second was I created an ext4 partition and used docker -g to set the graph path. When I tried my test again it worked. So it does seem to be a strange interaction between AUFS and zfs. I'm wondering why your setup is working but mine isn't. -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Jérôme P. Sent: Monday, August 05, 2013 7:04 PM To: nginx at nginx.org Subject: Re: writev function not implemented Actually, I went ahead and rebuilt SPL and ZFS on my machine, and did an AUFS mount over ZFS...
And wvtest ran, no problem. -- Posted via http://www.ruby-forum.com/. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Tue Aug 6 15:22:55 2013 From: r at roze.lv (Reinis Rozitis) Date: Tue, 6 Aug 2013 18:22:55 +0300 Subject: allow access on a sublocation In-Reply-To: <52010FB6.90608@2lm.fr> References: <52010334.3080700@2lm.fr> <52010FB6.90608@2lm.fr> Message-ID: <5E7DB5DC4D4645659DE2EEC388D50C1C@MasterPC> > True, but I can't do that as "location / {}" is in a common config included by many other vhosts. Then to clarify - you want to deny the access to all the "other vhosts" or just one? If one - per http://nginx.org/en/docs/http/server_names.html you can leave the current config for all the "other vhosts" but define the one specific host you want to deny the access with an exact server_name, or if you use a regular expression in the server_name place it first in the main config. If it's all vhosts then just modify the included common config. But in general it is hard to give you configuration suggestions without knowing your existing setup. Typically vhosts (at least for me) each have their own server {} block so each one can have its own location definitions, but the common parts (like *.php) can be included. rr From lists at ruby-forum.com Tue Aug 6 17:30:12 2013 From: lists at ruby-forum.com (=?UTF-8?B?SsOpcsO0bWU=?= P.)
Date: Tue, 06 Aug 2013 19:30:12 +0200 Subject: writev function not implemented In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F2636513D9C@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <1ba9403d57c704f96d59b6df97687706@ruby-forum.com> <2465AAEEC8B8A242B26ED5F44BCA805F2636513D9C@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: <65808e8275606ba2ae5dc00509309708@ruby-forum.com> Rangel, Raul wrote in post #1117896: > The first one was I used -v /var/lib/nginx to create a volume which bind > mounted a zfs directory inside my container. This worked correctly. I was about to suggest that as a workaround. I'm glad that it worked! > The second was I created an ext4 partition and used docker -g to set the > graph path. When I tried my test again it worked. So it does seem to be > a strange interaction between AUFS and zfs. > > I'm wondering why your setup is working but mine isn't. Indeed. I'm using ZFS on Linux with DKMS, as packaged by Debian: ii spl-dkms 0.6.1-2 ii zfs-dkms 0.6.1-1~wheezy Which flavor and version of ZFS are you using? -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Tue Aug 6 19:24:54 2013 From: nginx-forum at nginx.us (nginxCoder) Date: Tue, 06 Aug 2013 15:24:54 -0400 Subject: Changing Nginx keep-alive behavior based on error response of proxied server Message-ID: I was wondering if there is a way in Nginx to force a client to close the connection (or modify the keepalive parameters) when a proxied server returns a particular error response. To elaborate a bit, if I have Nginx as a proxy in front of a backend server, can Nginx be made to change its keep alive behavior based on the error response of the back end server? 
For example, if I have keepalive_requests as, say, 30 in my Nginx config, but the proxied server returns some 4xx or 5xx error, I'd like to send a connection close to the client, or perhaps make keepalive_requests 0 for that connection, forcing the client to open up a new connection. One approach I tried was to intercept the error (proxy_intercept_errors on) and use the error_page directive to refer to a location wherein I set keepalive_requests to 0. This seems to make the client close the current connection, but there doesn't seem to be a way to return the actual response from the backend server when using proxy_intercept_errors. It would be nice to be able to return the actual error response from the backend server rather than just return some static content. Please let me know if anyone has any suggestions or ideas. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241653,241653#msg-241653 From mikydevel at yahoo.fr Tue Aug 6 19:40:47 2013 From: mikydevel at yahoo.fr (Mik J) Date: Tue, 6 Aug 2013 20:40:47 +0100 (BST) Subject: Avice for my vhost configuration In-Reply-To: <1375776386.625651669.vhbqtwap@zebra-x17.ukr.net> References: <1375739065.33999.YahooMailNeo@web171802.mail.ir2.yahoo.com> <1375776386.625651669.vhbqtwap@zebra-x17.ukr.net> Message-ID: <1375818047.50123.YahooMailNeo@web171801.mail.ir2.yahoo.com> Hello, Thank you both for your answers. I did read the page http://nginx.org/en/docs/ngx_core_module.html#include but I sometimes get confused about how to put things in order exactly. I removed the root stanza in the location block. As for fastcgi_params, I already have the line fastcgi_param SCRIPT_NAME $fastcgi_script_name; which looks like the one you wrote, fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; in your my_fastcgi_params. What is the difference between those two ?
I didn't see SCRIPT_NAME in the HttpFastcgiModule documentation, whereas SCRIPT_FILENAME is defined by "Parameter SCRIPT_FILENAME is used by PHP for determining the name of script to execute". Also, where should I put my include files ? It seems that /etc/nginx is the default location. Or do you put them in another directory ? Cheers >________________________________ > From: wishmaster >To: nginx at nginx.org >Cc: "nginx at nginx.org" >Sent: Tuesday, 6 August 2013, 10:10 >Subject: Re: Avice for my vhost configuration > > >--- Original message --- >From: "Mik J" >Date: 6 August 2013, 00:44:37 > >> Hello, >> >> I plan to configure my nginx server with a couple of vhosts. >> >> For each of them I want: >> >> - to use php >> >> - to deny access to anything beginning with a dot >> >> - to not log access to the favicon >> >> So my configuration would look like that: >> >> server { >> ... >> location ~ \.php$ { >> root /var/www/htdocs/sites/expertinet; >> fastcgi_pass unix:/tmp/php.sock; >> # fastcgi_pass 127.0.0.1:9000; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> include >> fastcgi_params; >> } >> >> location ~ /\. { >> access_log off; >> log_not_found off; >> deny all; >> } >> >> location = /favicon.ico { >> return 204; >> access_log off; >> log_not_found off; >> expires 30d; >> } >> >> } >> This is in each of my virtual host configurations. It is very redundant. >> >> For example, if I want to use a tcp socket for fastcgi_pass, I need to edit every single vhost configuration. >> >> What is your advice to avoid this ? What is the recommended practice ? >> >> Someone advised me to use include... Could you show me an example ? > > You must read the docs. http://nginx.org/en/docs/ngx_core_module.html#include > >For you: > >> location ~ \.php$ { >> root /var/www/htdocs/sites/expertinet; <- you should avoid this, >read http://wiki.nginx.org/Pitfalls > >include my_fastcgi_params; >> include
>> fastcgi_params; >> } > >in my_fastcgi_params: > >fastcgi_pass unix:/tmp/php.sock; ># fastcgi_pass 127.0.0.1:9000; >fastcgi_index index.php; >fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > >-- >Cheers, > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Aug 6 21:30:35 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 6 Aug 2013 22:30:35 +0100 Subject: allow access on a sublocation In-Reply-To: <52010334.3080700@2lm.fr> References: <52010334.3080700@2lm.fr> Message-ID: <20130806213035.GP27161@craic.sysops.org> On Tue, Aug 06, 2013 at 04:07:48PM +0200, Greg wrote: Hi there, > this configuration does not work as expected : In what way does it fail for you? When I "allow 127.0.0.3/32", I am challenged with http 401 for "curl -i http://127.0.0.1/normal/ok", but get the file content from both "curl -i http://127.0.0.1/allowed/ok" and "curl -i http://127.0.0.3/normal/ok" > In short, I want to disallow access to my website, only some IPs can > access it, except for /allowed/ which is open. > > What's wrong ? It seems to work for me. nginx -v? nginx -V? output of specific curl commands I can use to replicate the problem? Thanks, f -- Francis Daly francis at daoine.org From nginx at 2xlp.com Tue Aug 6 21:48:46 2013 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 6 Aug 2013 17:48:46 -0400 Subject: Recommendations for safeguarding against BREACH ? In-Reply-To: <65808e8275606ba2ae5dc00509309708@ruby-forum.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <1ba9403d57c704f96d59b6df97687706@ruby-forum.com> <2465AAEEC8B8A242B26ED5F44BCA805F2636513D9C@SM-CALA-VXMB04A.swna.wdpr.disney.com> <65808e8275606ba2ae5dc00509309708@ruby-forum.com> Message-ID: are there any official recommendations from nginx to safeguard against the BREACH exploit ?
http://breachattack.com/ http://arstechnica.com/security/2013/08/gone-in-30-seconds-new-attack-plucks-secrets-from-https-protected-pages/ From aflexzor at gmail.com Tue Aug 6 21:54:47 2013 From: aflexzor at gmail.com (Alex Flex) Date: Tue, 06 Aug 2013 15:54:47 -0600 Subject: Obtaining req/s or connections/sec sent to a backend-server? In-Reply-To: <52016CE8.6000201@gmail.com> References: <52016CE8.6000201@gmail.com> Message-ID: <520170A7.1060101@gmail.com> Hello Nginx I understand that nginx, when used as a reverse proxy, does not allow me to poll for stats regarding the amount of connections/requests sent to backend servers. I'd like to know if there is a creative way I can do this without parsing the logs ? I want to do this almost as a live feed, and parsing the logs would mean a very CPU-intensive job. I am sure many of us have been faced with the same dilemma... Perhaps a way to query the kernel network stack efficiently and directly with the backend IPs as keys? Thanks Alex From aflexzor at gmail.com Tue Aug 6 21:55:26 2013 From: aflexzor at gmail.com (Alex Flex) Date: Tue, 06 Aug 2013 15:55:26 -0600 Subject: Fwd: Adding a header to the status page output In-Reply-To: <52016EC2.4030602@gmail.com> References: <52016EC2.4030602@gmail.com> Message-ID: <520170CE.6080902@gmail.com> Hello ! I am wondering if there is any way to add a custom header/footer to the output of the STATUS page? location /status { stub_status on; } I tried a couple of things but for some reason apparently it got ignored.
Alex From dennisml at conversis.de Tue Aug 6 22:19:29 2013 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Wed, 07 Aug 2013 00:19:29 +0200 Subject: Setting the status code In-Reply-To: <5200DE7E.2080502@blueyonder.co.uk> References: <52006728.7060106@conversis.de> <5200D7D4.5060100@conversis.de> <5200DE7E.2080502@blueyonder.co.uk> Message-ID: <52017671.7080906@conversis.de> On 06.08.2013 13:31, Richard Kearsley wrote: > On 06/08/13 04:02, Dennis Jacobfeuerborn wrote: > >> Since I determine the reason for the denied access in lua a way to do >> it there would also help. I already tried "nginx.status = 403" >> followed by a "nginx.exec('/reason1')" but while the right page is >> displayed the status code returned gets reset to 200. > > > Hi > You can do it in lua.. you need to do it in the header filter stage > I'm doing something similar but probably not exactly the same > Hopefully the example helps (untested): > > set $status_code ""; > location / > { > access_by_lua ' > -- your lua script here etc... > -- if (an error happened) then > ngx.var.status_code = "403" > ngx.exec("/error/403.html") > -- end > '; > } > > location /error > { > root html/error; > header_filter_by_lua ' > if ngx.var.status_code ~= "" then > ngx.status = ngx.var.status_code > end > '; > } That did the trick, thanks! What I basically wound up doing is: location /error { root /var/www/html; header_filter_by_lua ' ngx.status = 503 '; } Kind of awkward to be forced to use Lua just for this. There should be a "status_code" directive to make this possible without requiring the Lua module.
Regards, Dennis From francis at daoine.org Tue Aug 6 22:31:39 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 6 Aug 2013 23:31:39 +0100 Subject: Fwd: Adding a header to the status page output In-Reply-To: <520170CE.6080902@gmail.com> References: <52016EC2.4030602@gmail.com> <520170CE.6080902@gmail.com> Message-ID: <20130806223139.GQ27161@craic.sysops.org> On Tue, Aug 06, 2013 at 03:55:26PM -0600, Alex Flex wrote: Hi there, > Iam wondering if there is any way to add a custom header/footer to the > output of the STATUS page? > > location /status { stub_status on; } Can whatever will read this extra information, read it from a http header? add_header would seem to be simplest. You can always patch your source, if you can't find another way to do what you want. f -- Francis Daly francis at daoine.org From agentzh at gmail.com Wed Aug 7 01:05:33 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 6 Aug 2013 18:05:33 -0700 Subject: Obtaining req/s or connections/sec sent to a backend-server? In-Reply-To: <520170A7.1060101@gmail.com> References: <52016CE8.6000201@gmail.com> <520170A7.1060101@gmail.com> Message-ID: Hello! On Tue, Aug 6, 2013 at 2:54 PM, Alex Flex wrote: > to poll for stats regarding the amount of connections/requests sent to > backend servers. Id like to know if there is creative way I can do this > without parsing the logs ? This is a trivial task if you write a simple tool based on systemtap or dtrace :) Take a look at my Nginx Systemtap Toolkit for some examples: https://github.com/agentzh/nginx-systemtap-toolkit Best regards, -agentzh From igor at sysoev.ru Wed Aug 7 03:43:15 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 7 Aug 2013 07:43:15 +0400 Subject: Recommendations for safeguarding against BREACH ? 
In-Reply-To: References: <2465AAEEC8B8A242B26ED5F44BCA805F263638E740@SM-CALA-VXMB04A.swna.wdpr.disney.com> <1ba9403d57c704f96d59b6df97687706@ruby-forum.com> <2465AAEEC8B8A242B26ED5F44BCA805F2636513D9C@SM-CALA-VXMB04A.swna.wdpr.disney.com> <65808e8275606ba2ae5dc00509309708@ruby-forum.com> Message-ID: <301EB14A-56C5-4CA4-B198-E190394C17C9@sysoev.ru> On Aug 7, 2013, at 1:48 , Jonathan Vanasco wrote: > are there any official recommendations from nginx to safeguard against the BREACH exploit ? > > http://breachattack.com/ > > http://arstechnica.com/security/2013/08/gone-in-30-seconds-new-attack-plucks-secrets-from-https-protected-pages/ "gzip off" for SSL-enabled sites. -- Igor Sysoev http://nginx.com/services.html From agentzh at gmail.com Wed Aug 7 03:44:13 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 6 Aug 2013 20:44:13 -0700 Subject: [ANN] ngx_openresty devel version 1.4.1.3 released Message-ID: Hello folks! I am happy to announce that the new development version of ngx_openresty, 1.4.1.3, is now released: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this release happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.4.1.1: * upgraded LuaNginxModule to 0.8.6. * feature: added new method get_stale to shared dict objects, which returns the value (if not freed yet) even if the key has already expired. thanks Matthieu Tourne for the patch. * bugfix: segfaults would happen in ngx.req.set_header() and ngx.req.clear_header() for HTTP 0.9 requests. thanks Bin Wang for the report. * bugfix: segfault might happen when reading or writing to a response header via the ngx.header.HEADER API in the case that the nginx core initiated a 301 (auto) redirect. this issue was caused by an optimization in the Nginx core where "ngx_http_core_find_config_phase", for example, does not fully initialize the "Location" response header after creating the header.
thanks Vladimir Protasov for the report. * bugfix: memory leak would happen when using the ngx.ctx API before another Nginx module (other than LuaNginxModule) initiates an internal redirect. * bugfix: use of the ngx.ctx table in the context of ngx.timer callbacks would leak memory. * bugfix: the "connect() failed" error message was still logged even when lua_socket_log_errors was off. thanks Dong Fang Fan for the report. * bugfix: we incorrectly returned the 500 error code in our output header filter, body filter, and log-phase handlers upon Lua code loading errors. * bugfix: Lua stack overflow might happen when we failed to load Lua code from the code cache. * bugfix: the error message was misleading when the *_by_lua_file config directives failed to load the Lua file specified. * bugfix: give the argument of 'void' to function definitions which has no arguments. thanks Tatsuhiko Kubo for the patch. * bugfix: when our "at-panic" handler for Lua VM gets called, the Lua VM is not recoverable for future use. so now we try to quit the current Nginx worker gracefully so that the Nginx master can spawn a new one. * upgraded HeadersMoreNginxModule to 0.22. * bugfix: segfaults would happen in more_set_input_headers and more_clear_input_headers when processing HTTP 0.9 requests. thanks Bin Wang for the patch. * bugfix: segfault might happen when using more_set_headers or more_clear_headers in the case that the Nginx core initiated a 301 (auto) redirect. this issue was caused by an optimization in the Nginx core where "ngx_http_core_find_config_phase", for example, does not fully initialize the "Location" response header after creating the header. thanks Brian Akins for the report. * upgraded SrcacheNginxModule to 0.22. * bugfix: we did not always read the client request body before initiating srcache_fetch subrequests at the "access phase", which could lead to bad consequences. * upgraded EchoNginxModule to 0.46. 
* bugfix: the request body was not discarded properly in the content handler when the request body was not read yet. thanks Peter Sabaini for the report. * bugfix: we did not ensure that the main request body is always read before subrequests are initiated, which could lead to bad consequences. * bugfix: $echo_client_request_headers may evaluate to an empty value when the default header buffer ("c->buffer") can hold the request line but not the whole header. thanks KDr2 for reporting this issue. * docs: fixed a typo in Synopsis reported by saighost. * docs: use https for github links. thanks Olivier Mengué for the patch. * upgraded PostgresNginxModule to 1.0rc3. * bugfix: compilation error happened with nginx 1.5.3+ because the Nginx core changes the "ngx_sock_ntop" API. thanks an0ma1ia for the report. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004001 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From lists at ruby-forum.com Wed Aug 7 19:09:11 2013 From: lists at ruby-forum.com (Roger Pack) Date: Wed, 07 Aug 2013 21:09:11 +0200 Subject: How to log virtual server name In-Reply-To: References: Message-ID: <8f2d42e122990aaef6ec2ee4a49c772f@ruby-forum.com> It seems to me that any of these work for showing the virtual host (for followers).
$http_host $host $server_name I just had to uncomment this line for it to 'take', oddly: #access_log logs/access.log main; @nginx dev guys: it would be nice if the default config either mentioned that both the log_format and access_log (commented out) lines need to be uncommented to take effect, or for them both to start uncommented...to avoid this confusion for others in the future... -- Posted via http://www.ruby-forum.com/. From reallfqq-nginx at yahoo.fr Wed Aug 7 19:44:13 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 7 Aug 2013 15:44:13 -0400 Subject: How to log virtual server name In-Reply-To: <8f2d42e122990aaef6ec2ee4a49c772f@ruby-forum.com> References: <8f2d42e122990aaef6ec2ee4a49c772f@ruby-forum.com> Message-ID: Hello, On Wed, Aug 7, 2013 at 3:09 PM, Roger Pack wrote: > seems for me that any of these work for showing the virtual host (for > followers). > > $http_host $host $server_name > > The Nginx docs specify different content for those 3 variables. The $http_host is a subset of the possible values that $host can take. $server_name might not reflect what you wish: it will send literally the line which is in the server configuration. If it contains a regular expression, for instance, you'll get it as you typed it in... > I just had to uncomment this line for it to 'take', oddly: > > > #access_log logs/access.log main; > > @nginx dev guys: it would be nice if the default config either > mentioned that both log_format and access_log (commented out) lines need > to be uncommented to take, or for them both to start uncommented > out...to avoid this confusion for others in the future... > > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Wed Aug 7 21:28:42 2013 From: nginx-forum at nginx.us (zsero) Date: Wed, 07 Aug 2013 17:28:42 -0400 Subject: How to not log Pingdom bots Message-ID: <028e1874ae3eec2ac34ea0013c05c5bd.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to do the following: I would like to stop logging requests from Pingdom bots, as a check every 5 seconds pollutes the access log and doesn't have any meaningful value. My problem is that I cannot integrate it into any kind of location block. I encountered the following errors: 1. map cannot be used here 2. if cannot be used here 3. if needs to return something, while I don't want to modify what is returned; I simply want to stop logging. Here is how I tried: A sample server conf: server { listen 80; server_name www.z-e-r-o.in; rewrite ^/(.*) http://z-e-r-o.in/$1 permanent; } server { listen 80; server_name z-e-r-o.in zsoltero.com; root /home/zsero/http/hosts/z-e-r-o.in; error_log /home/zsero/http/logs/z-e-r-o.in.error.log; access_log /home/zsero/http/logs/z-e-r-o.in.access.log; index index.html index.php; location / { try_files $uri $uri/ /index.php?$args; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/php.d/zsero.conf; client_max_body_size 200M; } And in /etc/nginx/conf.d/*.conf there are 5 standard configurations, one of them being nolog.conf: location = /robots.txt { access_log off; log_not_found off; } location = /favicon.ico { access_log off; log_not_found off; } location ^~ /apple-touch-icon { access_log off; log_not_found off; } I tried methods from here: http://www.kutukupret.com/2011/06/01/nginx-blocking-spoofed-google-bot/ http://serverfault.com/questions/414027/how-to-block-nginx-access-log-statements-from-specific-user-agents http://fralef.me/nginx-hardening-some-good-security-practices.html But none of them work. One reason is that they are all about returning something, which I do _not_ want to modify, since then Pingdom would stop working.
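[For reference, a later nginx release (1.7.0) added an `if=` parameter to the `access_log` directive that covers exactly this case: the response itself is left untouched and only the logging decision changes. A hedged sketch follows; the `map` block goes at `http` level (outside any `server` block), and the Pingdom user-agent pattern is an assumption to verify against real log entries:]

```nginx
# http-level: flag requests whose User-Agent looks like a Pingdom bot
map $http_user_agent $loggable {
    default    1;
    ~*pingdom  0;   # assumed UA substring; check your access log for the real value
}

server {
    listen 80;
    # Requires nginx >= 1.7.0 for the if= parameter; when $loggable is "0"
    # or empty, the request is simply not written to the log.
    access_log /home/zsero/http/logs/z-e-r-o.in.access.log combined if=$loggable;
}
```

[On versions without `if=`, the usual workaround remains a location-level `access_log off;`, which only works when the bot's requests can be isolated by URI.]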
The other reason is that I don't know where to put the map block, since a "map cannot be used here" error is shown. Can you tell me how to integrate non-logging of the Pingdom bot into my conf? Also, does the conf look OK? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241690,241690#msg-241690 From goelvivek2011 at gmail.com Thu Aug 8 06:29:26 2013 From: goelvivek2011 at gmail.com (Vivek Goel) Date: Thu, 8 Aug 2013 06:29:26 +0000 Subject: Worker process is not getting killed , when master is killed using -9 Message-ID: Hi, I am facing one problem with nginx. I am waiting for nginx to stop within 30 seconds. If it is not getting stopped, I am firing a command to kill the master process using pkill -9 But killing the master is not killing the worker processes. Due to that, the port is not getting released. Is it a known problem or desired behavior ? regards Vivek Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From cubicdaiya at gmail.com Thu Aug 8 06:52:54 2013 From: cubicdaiya at gmail.com (cubicdaiya) Date: Thu, 8 Aug 2013 15:52:54 +0900 Subject: Worker process is not getting killed , when master is killed using -9 Message-ID: Hello. SIGKILL(9) cannot be trapped by any process. How about using SIGTERM or SIGQUIT? Or you might want to see the following about controlling nginx with signals. http://nginx.org/en/docs/control.html 2013/8/8 Vivek Goel > Hi, > I am facing one problem with nginx. I am waiting for nginx to stop > withing 30 seconds. If it is not getting stopped, I am firing command to > kill master process using > pkill -9 > > But, Killing master is not killing the worker process. Due to that reason. > Port is not getting released. > > Is it known problem or desired behavior ?
> > regards > Vivek Goel > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Tatsuhiko Kubo E-Mail : cubicdaiya at gmail.com HP : http://cccis.jp/index_en.html Twitter : http://twitter.com/cubicdaiya -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkearsley at blueyonder.co.uk Thu Aug 8 09:45:14 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Thu, 08 Aug 2013 10:45:14 +0100 Subject: upstream max_fails disable Message-ID: <520368AA.4060205@blueyonder.co.uk> Hi I'm using the upstream module - with sole purpose to enable keepalives to my backend I don't want to use any of the other features, I only have 1 server in the upstream {} Does that mean max_fails is still being used? (defaults to 1?) and fail_timeout etc..? they both have default values What happens if they are "all" marked as down? If the 10.100.0.11 is down, I would like it to just keep using it and just return 502 if it's down upstream test { server 10.100.0.11; keepalive 100; } Thanks From nginx+phil at spodhuis.org Thu Aug 8 18:50:28 2013 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Thu, 8 Aug 2013 14:50:28 -0400 Subject: Worker process is not getting killed , when master is killed using -9 In-Reply-To: References: Message-ID: <20130808185028.GA29674@redoubt.spodhuis.org> On 2013-08-08 at 06:29 +0000, Vivek Goel wrote: > I am facing one problem with nginx. I am waiting for nginx to stop withing > 30 seconds. If it is not getting stopped, I am firing command to kill > master process using > pkill -9 > > But, Killing master is not killing the worker process. Due to that reason. > Port is not getting released. > > Is it known problem or desired behavior ? Fundamental Unix: the process never gets a chance to kill the children when you use -9. Instead, you want to use `pkill ... -g `. 
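[A hedged sketch of that process-group approach; the pid-file path is an assumption, so adjust it to match the `pid` directive in your nginx.conf:]

```shell
#!/bin/sh
# Assumed pid-file location; verify against the "pid" directive in nginx.conf.
PIDFILE=/var/run/nginx.pid

# Look up the process group id of the nginx master process.
PGID=$(ps -o pgid= -p "$(cat "$PIDFILE")" | tr -d ' ')

# Graceful first: SIGTERM lets the master and workers clean up...
pkill -TERM -g "$PGID"
sleep 30

# ...then, only if anything in the group survives, the untrappable SIGKILL.
pgrep -g "$PGID" >/dev/null && pkill -KILL -g "$PGID"
```

[Because the final SIGKILL goes to the whole group, the workers die along with the master instead of being orphaned while still holding the listen socket.]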
The -g says to use the "process group"; you can send a signal to all processes in a group [killpg(2)], and the nginx worker processes are part of the same process group as the nginx master process, which is the process group leader (so its pid is the pgid of all the processes you care about). Do please try to use -TERM once with the process-group kill, to give the workers a chance to clean up safely, before you hit all of the processes at once with the nuclear option. Regards, -Phil From nginx-forum at nginx.us Fri Aug 9 03:35:45 2013 From: nginx-forum at nginx.us (nginxCoder) Date: Thu, 08 Aug 2013 23:35:45 -0400 Subject: Changing Nginx keep-alive behavior based on error response of proxied server In-Reply-To: References: Message-ID: Any ideas/suggestions related to forcing a client connection close on certain errors from the proxied server? Please refer to my previous post for details. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241653,241721#msg-241721 From xofyarg+list at gmail.com Fri Aug 9 05:41:49 2013 From: xofyarg+list at gmail.com (Anb) Date: Fri, 9 Aug 2013 13:41:49 +0800 Subject: Poor performance when loading huge number of server section Message-ID: Hi there, I got a problem when using nginx as a reverse proxy. The configuration uses a per-server policy to set the upstream host. Nginx spends significant time loading config files as the number of virtual servers increases to a large value. Here are some rough statistics: | server sections | load time(sec) | |-----------------+----------------| | 50000 | 242 | | 80000 | 910 | | 100000 | 1764 | I know this is an unusual usage of nginx, but there is such a demand. So I want to know: 1. Is nginx not designed to use such a large config? 2. Does someone have some experience with this? Could you please give me a clue? Thanks.
-- anb 2013.8.9 From ru at nginx.com Fri Aug 9 08:21:46 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 9 Aug 2013 12:21:46 +0400 Subject: Poor performance when loading huge number of server section In-Reply-To: References: Message-ID: <20130809082146.GU15216@lo0.su> On Fri, Aug 09, 2013 at 01:41:49PM +0800, Anb wrote: > Hi there, > I got a problem when using nginx as a reverse proxy. Configurations > using per server policy to set upstream host. Nginx spends significant > time loading config files as while as virtual server inscreased to a > large number. Here's a rough statistics: > > | server sections | load time(sec) | > |-----------------+----------------| > | 50000 | 242 | > | 80000 | 910 | > | 100000 | 1764 | > > I know this is an unusual usage of nginx, but there is such demand. > > So I want to know: > 1. Is nginx not designed to use such large number of config? > 2. Does someone has some experience on this? Could you please show me a clue? > > Thanks. How about mapping $server_name to upstream instead? map $server_name $upstream { default ...; [100k entries] } server { ... proxy_pass http://$upstream; ... } From nginx-forum at nginx.us Fri Aug 9 10:20:21 2013 From: nginx-forum at nginx.us (gray) Date: Fri, 09 Aug 2013 06:20:21 -0400 Subject: proxy_cache seems not working with X-Accel-Redirect Message-ID: <5f1ba60ffe6076a97efff91792e8fe32.NginxMailingListEnglish@forum.nginx.org> My config: location ~ /cached/ { proxy_pass http://apache; proxy_cache cache; proxy_cache_valid 2h; proxy_cache_key "$host|$request_uri"; } location /htdocs_internal/ { internal; alias $htdocs_path; } Requests whose replies carry the X-Accel-Redirect header are not cached; every time, the request is sent to apache. When I add these directives proxy_pass_header X-Accel-Redirect; proxy_ignore_headers X-Accel-Redirect; the cache works fine (but is useless :) ), so it isn't a problem with "no cache" headers from apache.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241734,241734#msg-241734 From roinacio at gmail.com Fri Aug 9 18:00:08 2013 From: roinacio at gmail.com (Rodrigo Serra Inacio) Date: Fri, 9 Aug 2013 15:00:08 -0300 Subject: Proxy_pass and rewrite Message-ID: Hi, I'm trying to do a proxy_pass and rewrite rules like this: location /directory/ { proxy_pass http://some_url_to_proxy_pass/; expires +10d; } It's working, but I need to repeat the /directory/ in the new URL for it to work, like this: http://testurl.com/directory/directory/foo/bar/ And I want just one /directory/foo/bar/... so I tried a rewrite rule, like this: location /directory/ { proxy_pass http://some_url_to_proxy_pass/; rewrite /directory/(.*) $1 break; expires +10d; } But it doesn't work. Some help, please. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Aug 10 10:31:03 2013 From: nginx-forum at nginx.us (goelviek2011@gmail.com) Date: Sat, 10 Aug 2013 06:31:03 -0400 Subject: Worker process is not getting killed , when master is killed using -9 In-Reply-To: References: Message-ID: <3557c0f89d1682b41436b73bb7515a53.NginxMailingListEnglish@forum.nginx.org> Thanks. I will try both suggestions. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241693,241754#msg-241754 From nginx-forum at nginx.us Sat Aug 10 23:09:26 2013 From: nginx-forum at nginx.us (DivisionX) Date: Sat, 10 Aug 2013 19:09:26 -0400 Subject: 404 on Prestashop 1.5 under nginx In-Reply-To: References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8e2815f8643f4e8d18847b06d2c8b809.NginxMailingListEnglish@forum.nginx.org> Hello! I have the same problem with Prestashop 1.5.4.1; please tell me how you resolved it. Waiting for a reply. Thanks!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,241757#msg-241757 From nginx-forum at nginx.us Sun Aug 11 13:15:04 2013 From: nginx-forum at nginx.us (ruslan_osmanov) Date: Sun, 11 Aug 2013 09:15:04 -0400 Subject: My filter module is not called Message-ID: <36d2beb104580f4d79f8f44f00f5f3a6.NginxMailingListEnglish@forum.nginx.org> Hi, I'm writing a filter module which will output static files according to information returned by an upstream handler like FastCGI, or Apache. There is some testing code in the header/body filters. I'm launching the server merely to see whether my code is invoked. All configuration and cleanup stuff seems to be working. But the header/body filters are not called at all (both call ngx_log_error). Vhost configuration: http://bpaste.net/show/121898/ The module: http://bpaste.net/show/121900/ Please help me figure out what's wrong with it. Regards. -- Ruslan Osmanov Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241761,241761#msg-241761 From xofyarg+list at gmail.com Sun Aug 11 16:11:59 2013 From: xofyarg+list at gmail.com (Anb) Date: Mon, 12 Aug 2013 00:11:59 +0800 Subject: Poor performance when loading huge number of server section Message-ID: Thank you, Ruslan. > How about mapping $server_name to upstream instead? > > map $server_name $upstream { > default ...; > [100k entries] > } > > server { > ... > proxy_pass http://$upstream; > ... > } It works if I use a simple upstream like url:80, but it is still very slow when using the upstream module, since I need to specify an `upstream' context for each server_name.
-- Anb 2013.8.12 From nginx-forum at nginx.us Sun Aug 11 18:44:28 2013 From: nginx-forum at nginx.us (cubicdaiya) Date: Sun, 11 Aug 2013 14:44:28 -0400 Subject: My filter module is not called In-Reply-To: <36d2beb104580f4d79f8f44f00f5f3a6.NginxMailingListEnglish@forum.nginx.org> References: <36d2beb104580f4d79f8f44f00f5f3a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1de875f2b0a50467cb59760f2834324e.NginxMailingListEnglish@forum.nginx.org> Hi. 2013/8/11 ruslan_osmanov > There is some testing code in header/body filters. I'm launching the server > merely to see whether my code is invoked. All configuration and cleanup > stuff > seems to be working. But the header/body filters are not called at all(both > call > ngx_log_error). > > Vhost configuration: http://bpaste.net/show/121898/ > The module: http://bpaste.net/show/121900/ > > Please help to figure out what's wrong with it. https://gist.github.com/cubicdaiya/6206206 After I built your module with the above config, the header/body filter functions are called in my environment. Maybe the cause is in your `config` file, for example using HTTP_MODULES instead of HTTP_FILTER_MODULES. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241761,241763#msg-241763 From nginx-forum at nginx.us Sun Aug 11 18:48:38 2013 From: nginx-forum at nginx.us (ruslan_osmanov) Date: Sun, 11 Aug 2013 14:48:38 -0400 Subject: My filter module is not called In-Reply-To: <1de875f2b0a50467cb59760f2834324e.NginxMailingListEnglish@forum.nginx.org> References: <36d2beb104580f4d79f8f44f00f5f3a6.NginxMailingListEnglish@forum.nginx.org> <1de875f2b0a50467cb59760f2834324e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thank you for the reply. Hmm, maybe I built it a different way.
I have the following files to configure, make and install it: conf.sh: cd ~/src/nginx ./auto/configure --prefix=/home/ruslan \ --with-debug \ --conf-path=/home/ruslan/etc/nginx/nginx.conf \ --user=ruslan \ --group=www \ --pid-path=/home/ruslan/var/run/nginx.pid \ --lock-path=/home/ruslan/var/run/nginx.lock \ --error-log-path=/home/ruslan/var/log/nginx/error.log \ --add-module=/home/ruslan/projects/nginx/modules/file_chunks_filter make.sh cd ~/src/nginx make -j7 install.sh cd ~/src/nginx make install So I simply run ./conf.sh ; ./make.sh ; install.sh. Isn't it right? Or, should I modify the nginx configuration? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241761,241764#msg-241764 From nginx-forum at nginx.us Sun Aug 11 18:51:09 2013 From: nginx-forum at nginx.us (ruslan_osmanov) Date: Sun, 11 Aug 2013 14:51:09 -0400 Subject: My filter module is not called In-Reply-To: References: <36d2beb104580f4d79f8f44f00f5f3a6.NginxMailingListEnglish@forum.nginx.org> <1de875f2b0a50467cb59760f2834324e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <561b62a71066f3643136e049c06eed2a.NginxMailingListEnglish@forum.nginx.org> Oh, sorry, the `config` file contains: ngx_addon_name=ngx_http_file_chunks_filter_module HTTP_MODULES="$HTTP_MODULES ngx_http_file_chunks_filter_module" HTTP_INCS="$HTTP_INCS /usr/include/libxml2 " CORE_LIBS="$CORE_LIBS -lxml2" NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_file_chunks_filter_module.c" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241761,241765#msg-241765 From nginx-forum at nginx.us Sun Aug 11 18:57:53 2013 From: nginx-forum at nginx.us (ruslan_osmanov) Date: Sun, 11 Aug 2013 14:57:53 -0400 Subject: My filter module is not called In-Reply-To: <1de875f2b0a50467cb59760f2834324e.NginxMailingListEnglish@forum.nginx.org> References: <36d2beb104580f4d79f8f44f00f5f3a6.NginxMailingListEnglish@forum.nginx.org> <1de875f2b0a50467cb59760f2834324e.NginxMailingListEnglish@forum.nginx.org> Message-ID: 
<284a177079a15f8cde3fdf32815b02f8.NginxMailingListEnglish@forum.nginx.org> Yes, indeed, I had to put it into HTTP_FILTER_MODULES. Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241761,241767#msg-241767 From agentzh at gmail.com Mon Aug 12 04:30:44 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 11 Aug 2013 21:30:44 -0700 Subject: [ANN] ngx_openresty devel version 1.4.2.1 released Message-ID: Hi guys! I am glad to announce that the new development version of ngx_openresty, 1.4.2.1, is now released: http://openresty.org/#Download Below is the complete change log for this release, as compared to the last (devel) release, 1.4.1.3: * upgraded the Nginx core to 1.4.2. * see for changes. * upgraded LuaRestyDNSLibrary to 0.10. * feature: now we return all the answer records even when the DNS server returns a non-zero error code, in which case the error code and error string are now set as the "errcode" and "errstr" fields in the Lua table returned. thanks Matthieu Tourne for requesting this. The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004002 We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! 
-agentzh From lilb.edwin at gmail.com Mon Aug 12 05:27:11 2013 From: lilb.edwin at gmail.com (Liangbin Li) Date: Mon, 12 Aug 2013 13:27:11 +0800 Subject: fix bug in http_referer_module that using incorrect input string length in the regex matching process when header Referer starts with https:// Message-ID: --- ngx_http_referer_module.c +++ ngx_http_referer_module.c @@ -147,10 +147,12 @@ if (ngx_strncasecmp(ref, (u_char *) "http://", 7) == 0) { ref += 7; + len -= 7; goto valid_scheme; } else if (ngx_strncasecmp(ref, (u_char *) "https://", 8) == 0) { ref += 8; + len -= 8; goto valid_scheme; } } @@ -191,7 +193,7 @@ ngx_int_t rc; ngx_str_t referer; - referer.len = len - 7; + referer.len = len; referer.data = ref; rc = ngx_regex_exec_array(rlcf->regex, &referer, r->connection->log); -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaoweibin at gmail.com Mon Aug 12 06:05:53 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Mon, 12 Aug 2013 14:05:53 +0800 Subject: fix bug in http_referer_module that using incorrect input string length in the regex matching process when header Referer starts with https:// In-Reply-To: References: Message-ID: Hi, In the referer module, the scheme length used for regex referer matching is always assumed to be that of 'http://'. This is incorrect for https requests, and regex referer rules become invalid for them. This patch fixes the bug.
2013/8/12 Liangbin Li : > --- ngx_http_referer_module.c > +++ ngx_http_referer_module.c > @@ -147,10 +147,12 @@ > > if (ngx_strncasecmp(ref, (u_char *) "http://", 7) == 0) { > ref += 7; > + len -= 7; > goto valid_scheme; > > } else if (ngx_strncasecmp(ref, (u_char *) "https://", 8) == 0) { > ref += 8; > + len -= 8; > goto valid_scheme; > } > } > @@ -191,7 +193,7 @@ > ngx_int_t rc; > ngx_str_t referer; > > - referer.len = len - 7; > + referer.len = len; > referer.data = ref; > > rc = ngx_regex_exec_array(rlcf->regex, &referer, > r->connection->log); > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao From nginx-forum at nginx.us Mon Aug 12 07:34:35 2013 From: nginx-forum at nginx.us (Rakshith) Date: Mon, 12 Aug 2013 03:34:35 -0400 Subject: NGINX serving data via NFS mount Message-ID: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> Hi, Can anybody tell me what nginx needs in order to serve requests from an NFS mount point? Are any changes to the config file required? The config file looks as shown below: http { ..... ......... server { listen *:80 default accept_filter=httpready; server_name vs0; root /var/home/diag; autoindex on; } The mount path is as shown above against the root entry. This config results in an error when I try to send a request using curl, as shown below: [rakshith at cyclnb15 ~]$ curl -X GET -qvk http://10.238.62.234:80/vol1_mnt_point/output.dat < HTTP/1.1 404 Not Found But the file actually exists: bash-3.2# pwd /var/home/diag/vol1_mnt_point bash-3.2# ls .snapshot nginx.tar output.dat Any help on this is greatly appreciated!!!
Thanks, Rakshith Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241773,241773#msg-241773 From contact at jpluscplusm.com Mon Aug 12 08:23:27 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 12 Aug 2013 09:23:27 +0100 Subject: NGINX serving data via NFS mount In-Reply-To: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> References: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 12 Aug 2013 08:34, "Rakshith" wrote: > > Hi, > > Can anybody tell me what are the things needed by nginx to forward the > request via the NFS mount point?? Changes to the config file as such?? Nothing special is needed in my experience. That's the point of NFS: it exposes a "normal" file system to user-space applications. Some things you may wish to check: your curl invocation's host doesn't match the config, so are you sure you're hitting that nginx server{}? Try it with vs0 instead of the IP (you may need a hosts file entry, of course). Check the nginx error log. Check that the permissions and the directory and file ownership are all correct and allow the nginx daemon access - not just on the file you're accessing, but on all the directories in the FS hierarchy leading to it; ownership mismatches are a common operational NFS problem. Lastly, I would *never* point nginx at the root of a filer's FS. Even in testing it's a bad idea. Create a directory to hold your content. Cheers, Jonathan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steve at greengecko.co.nz Mon Aug 12 08:24:25 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 12 Aug 2013 20:24:25 +1200 Subject: NGINX serving data via NFS mount In-Reply-To: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> References: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52089BB9.50306@greengecko.co.nz> It makes no difference what file system the file is on. You just need to ensure that the files are accessible, so take care with uid/gid used to mount, as well as file ownership. Standard entries in /etc/exports work from what I remember. You will have a performance hit to contend with. I usually use lsync with a backup of rsync and keep the files local. hth, Steve On 12/08/13 19:34, Rakshith wrote: > Hi, > > Can anybody tell me what are the things needed by nginx to forward the > request via the NFS mount point?? Changes to the config file as such?? > > The config file looks like as shown below: > > http { > ..... > ......... > server { > listen *:80 default accept_filter=httpready; > server_name vs0; > root /var/home/diag; > autoindex on; > > } > > > The mount path is as shown above against the root entry. > > This config is resulting in an error when i try to send request using Curl > as shown below: > > [rakshith at cyclnb15 ~]$ curl -X GET -qvk > http://10.238.62.234:80/vol1_mnt_point/output.dat > > < HTTP/1.1 404 Not Found > > But the file actually exists: > > bash-3.2# pwd > /var/home/diag/vol1_mnt_point > > bash-3.2# ls > .snapshot nginx.tar output.dat > > > Any help on this is greatly appreciated!!! 
> > Thanks, > Rakshith > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241773,241773#msg-241773 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Aug 12 08:41:06 2013 From: nginx-forum at nginx.us (Rakshith) Date: Mon, 12 Aug 2013 04:41:06 -0400 Subject: NGINX serving data via NFS mount In-Reply-To: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> References: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> Message-ID: <21f3078c97ef602038de1cb0943d9a0d.NginxMailingListEnglish@forum.nginx.org> So here is what the export policy looks like: Policy Rule Access Client RO Vserver Name Index Protocol Match Rule ------------ --------------- ------ -------- --------------------- --------- vs0 default 1 any 0.0.0.0/0 any So I would like my nginx server to work as follows: receive a GET/PUT request from a client; forward the request to the NFS client via the NFS mount point; the NFS client, which has mounted the file system, then uses NFS to fetch the file. So to summarize, the nginx server just acts like a proxy here. FYI: I did try doing a GET and PUT via the VFS and it worked...
The config file looks something like below: http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; dav_methods PUT DELETE MKCOL COPY MOVE; create_full_put_path on; client_max_body_size 10G; server { listen *:80 default accept_filter=httpready; server_name vs0; root /clus/vs0; autoindex on; location = /favicon.ico { access_log off; log_not_found off; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241773,241777#msg-241777 From contact at jpluscplusm.com Mon Aug 12 11:49:12 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 12 Aug 2013 12:49:12 +0100 Subject: NGINX serving data via NFS mount In-Reply-To: <21f3078c97ef602038de1cb0943d9a0d.NginxMailingListEnglish@forum.nginx.org> References: <2e77004dd323b3615e45fad7ec16e705.NginxMailingListEnglish@forum.nginx.org> <21f3078c97ef602038de1cb0943d9a0d.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 12 Aug 2013 09:41, "Rakshith" wrote: > > So here is what the export policy looks like: > > Policy Rule Access Client RO > Vserver Name Index Protocol Match Rule > ------------ --------------- ------ -------- --------------------- > --------- > vs0 default 1 any 0.0.0.0/0 any That means nothing to me (in this nginx context). You need to check *file* permissions/ownership at the Unix FS level. > So i would like my nginx server as below: > > Receive GET/PUT request from a client. > Forward the request to the NFS client via the NFS mount point. > The NFS client which has mounted the file system would then use NFS to fetch > the file. You need to explain this better. Nginx won't give a damn that the file is on NFS, but what you're explaining has nothing to do with nginx! Nginx doesn't talk "NFS" in any way. > So to summarize, nginx server just acts like a proxy here.. Given what you've explained, this is wrong. 
I /think/ you want Nginx to serve filesystem-accessible files (admittedly stored on a filer) and have got the concept of a proxy here in your head wrongly. > FYI: I did try doing a GET and PUT via the VFS and it worked. Demonstrate this test please. > The config > file looks something like below: So if that works, what's the problem? Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dekispi at gmail.com Mon Aug 12 12:26:25 2013 From: dekispi at gmail.com (Spirovski Dejan) Date: Mon, 12 Aug 2013 14:26:25 +0200 Subject: Character references Message-ID: If I use a character reference in an HTML file to represent a character, and the web server sends the file on a browser request, how will the browser decode the character reference? My nginx web server is configured not to send the character encoding in the header; I have set the character encoding to UTF-8 in a meta tag at page level. -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Aug 12 13:06:03 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 12 Aug 2013 15:06:03 +0200 Subject: Character references In-Reply-To: References: Message-ID: Hi Dejan, > If I use character reference in html file to represent a character and > web server sends the file on browser request, how the browser will > decode the character reference? > My Nginx web server is configured to not send character encoding in the > header I have set character encoding in the meta tag on page level to > utf8. This is off-topic, as it is about browser behavior, not webserver- or nginx-specific behavior. My experience is that when both the html meta tag and the HTTP header set the charset, the one in the HTTP header takes precedence. When the HTTP header doesn't specify the charset, browsers usually refer to the html meta tag. Different browser vendors and releases may behave differently. YMMV.
I strongly suggest you set the correct charset in the HTTP header. Regards, Lukas From dekispi at gmail.com Mon Aug 12 13:27:09 2013 From: dekispi at gmail.com (Spirovski Dejan) Date: Mon, 12 Aug 2013 15:27:09 +0200 Subject: Character references In-Reply-To: References: Message-ID: Lukas, yeah, thanks, I will. I guess I agree with you, but I was just asking. On Mon, Aug 12, 2013 at 3:06 PM, Lukas Tribus wrote: > Hi Dejan, > > > > If I use character reference in html file to represent a character and > > web server sends the file on browser request, how the browser will > > decode the character reference? > > My Nginx web server is configured to not send character encoding in the > > header I have set character encoding in the meta tag on page level to > > utf8. > > This is off-topic, as this is about browser behavior, not webserver or > nginx specific behavior. > > My experience is that, when both html meta tag and HTTP header are setting > the charset, the one in the HTTP header takes precedence. When the HTTP > header doesn't specify the charset, browser usually refer to the html meta > tag. Different browser vendor and releases may have a different behavior. > > YMMV. > > > I strongly suggest you set the correct charset in the HTTP header. > > > > Regards, > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roinacio at gmail.com Mon Aug 12 14:11:38 2013 From: roinacio at gmail.com (Rodrigo Serra Inacio) Date: Mon, 12 Aug 2013 11:11:38 -0300 Subject: How to rewrite with cookie Message-ID: Hi, is it possible to rewrite a mobile URL using cookies? For example, when you access a URL with a mobile device (Android), nginx should read the cookie and redirect the device according to the Android model ...
Something like this >From Samsung Galaxy GTI900 redirect to http://mysite.com/gti900 >From Samsung Tables I200 redirect to http://mysite.com/i200 It's possible ? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nurahmadie at gmail.com Mon Aug 12 15:09:57 2013 From: nurahmadie at gmail.com (Adie Nurahmadie) Date: Mon, 12 Aug 2013 22:09:57 +0700 Subject: How to rewrite with cookie In-Reply-To: References: Message-ID: yes, it's possible. The simplest way is to use if and check either $cookie_XXX or $http_user_agent variable. You may want to explore nginx's wiki page here http://wiki.nginx.org/HttpCoreModule#.24cookie_COOKIE On Mon, Aug 12, 2013 at 9:11 PM, Rodrigo Serra Inacio wrote: > Hi, it's possible to rewrite a mobile URL using cookies? > For example, when you acess a URL with a mobile device (android) , nginx > shoul read the cookie and redirect this device according to the android > model ... > > Something like this > > From Samsung Galaxy GTI900 redirect to http://mysite.com/gti900 > From Samsung Tables I200 redirect to http://mysite.com/i200 > > It's possible ? > Thanks. > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 12 17:32:57 2013 From: nginx-forum at nginx.us (offmind) Date: Mon, 12 Aug 2013 13:32:57 -0400 Subject: Recommendations for safeguarding against BREACH ? In-Reply-To: <301EB14A-56C5-4CA4-B198-E190394C17C9@sysoev.ru> References: <301EB14A-56C5-4CA4-B198-E190394C17C9@sysoev.ru> Message-ID: And what if we are using gzip_static? As far as I understand, we have to block gzipping page code. But what about .js .css with no secure content? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241591,241794#msg-241794 From roinacio at gmail.com Mon Aug 12 18:51:51 2013 From: roinacio at gmail.com (Rodrigo Serra Inacio) Date: Mon, 12 Aug 2013 15:51:51 -0300 Subject: How to rewrite with cookie In-Reply-To: References: Message-ID: Hi, what do you think is more efficient: cookies, or a redirect by the user agent? Thank you 2013/8/12 Adie Nurahmadie > yes, it's possible. > > The simplest way is to use if and check either $cookie_XXX or > $http_user_agent variable. > You may want to explore nginx's wiki page here > http://wiki.nginx.org/HttpCoreModule#.24cookie_COOKIE > > > On Mon, Aug 12, 2013 at 9:11 PM, Rodrigo Serra Inacio wrote: > >> Hi, it's possible to rewrite a mobile URL using cookies? >> For example, when you acess a URL with a mobile device (android) , nginx >> shoul read the cookie and redirect this device according to the android >> model ... >> >> Something like this >> >> From Samsung Galaxy GTI900 redirect to http://mysite.com/gti900 >> From Samsung Tables I200 redirect to http://mysite.com/i200 >> >> It's possible ? >> Thanks. >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > regards, > Nurahmadie > -- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 12 19:30:49 2013 From: nginx-forum at nginx.us (ruslan_osmanov) Date: Mon, 12 Aug 2013 15:30:49 -0400 Subject: Internals: how do I send large file to the client? Message-ID: <794284d6d0f72d16ea91cd56d36b070a.NginxMailingListEnglish@forum.nginx.org> Hi, I'm writing a filter module which expects the backend to send XML with information about files that have to be concatenated and sent to the client.
One way to send a file is to `ngx_read_file` into a buffer allocated in the heap (pool) and push it onto the chain. However, I obviously can't allocate ~10G in the heap. I have to send it chunk by chunk. How do I perform this kind of I/O? Regards. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241796,241796#msg-241796 From contact at jpluscplusm.com Mon Aug 12 19:47:10 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 12 Aug 2013 20:47:10 +0100 Subject: How to rewrite with cookie In-Reply-To: References: Message-ID: On 12 Aug 2013 19:52, "Rodrigo Serra Inacio" wrote: > > Hi, > What do you think is more efficient...cookies or redirect by the user agent ? If you do it based on UA *at*the*network*border* you'll block mobile users from switching to your desktop site if they really want to. I /hate/ sites that do that ... J -------------- next part -------------- An HTML attachment was scrubbed... URL: From farseas at gmail.com Mon Aug 12 20:03:37 2013 From: farseas at gmail.com (Bob S.) Date: Mon, 12 Aug 2013 16:03:37 -0400 Subject: How to rewrite with cookie In-Reply-To: References: Message-ID: I agree. That is not the right way to design a website. What about portable website development, anyway? Lean website design works for me, and the websites that we design look great on virtually any interface. They are not gaudy, though, and do not feature a bunch of flashy details. Lean and clean. On Mon, Aug 12, 2013 at 3:47 PM, Jonathan Matthews wrote: > On 12 Aug 2013 19:52, "Rodrigo Serra Inacio" wrote: > > > > Hi, > > What do you think is more efficient...cookies or redirect by the user > agent ? > > If you do it based on UA *at*the*network*border* you'll block mobile users > from switching to your desktop site if they really want to. I /hate/ sites > that do that ...
> J > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Tue Aug 13 12:31:01 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 13 Aug 2013 16:31:01 +0400 Subject: upstream max_fails disable In-Reply-To: <520368AA.4060205@blueyonder.co.uk> References: <520368AA.4060205@blueyonder.co.uk> Message-ID: <20130813123101.GC52681@lo0.su> On Thu, Aug 08, 2013 at 10:45:14AM +0100, Richard Kearsley wrote: > Hi > I'm using the upstream module - with sole purpose to enable keepalives > to my backend > I don't want to use any of the other features, I only have 1 server in > the upstream {} > Does that mean max_fails is still being used? (defaults to 1?) and > fail_timeout etc..? they both have default values > What happens if they are "all" marked as down? If there's a single server, the max_fails and fail_timeout parameters are ignored, and such a server will never become temporarily down. Please bear in mind that defining a server using a domain name that resolves to several IP addresses essentially defines several servers: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server > If the 10.100.0.11 is down, I would like it to just keep using it and > just return 502 if it's down > > upstream test > { > server 10.100.0.11; > keepalive 100; > } Your expectations match the current nginx behavior. From j.vanarragon at lukkien.com Tue Aug 13 13:12:11 2013 From: j.vanarragon at lukkien.com (Jaap van Arragon) Date: Tue, 13 Aug 2013 15:12:11 +0200 Subject: Limit connection to specific location In-Reply-To: Message-ID: Hello, I'm looking for a way to limit the number of connections in one hour to a location named /api/. I've looked at the ngx_http_limit_conn_module module, but I don't understand how to limit the number of connections from a specific IP address per hour.
For example: the IP address 33.33.33.33 can only make 20 connections in one hour to the URL /api/. We use nginx as a loadbalancer/proxy. Does somebody have an example of this? Thanks. Regards. Jaap -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Tue Aug 13 15:25:13 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 13 Aug 2013 19:25:13 +0400 Subject: fix bug in http_referer_module that using incorrect input string length in the regex matching process when header Referer starts with https:// In-Reply-To: References: Message-ID: <57902FCA-ED78-4A16-90CC-5EFF059A3093@nginx.com> On Aug 12, 2013, at 9:27 AM, Liangbin Li wrote: > --- ngx_http_referer_module.c > +++ ngx_http_referer_module.c > @@ -147,10 +147,12 @@ > > if (ngx_strncasecmp(ref, (u_char *) "http://", 7) == 0) { > ref += 7; > + len -= 7; > goto valid_scheme; > > } else if (ngx_strncasecmp(ref, (u_char *) "https://", 8) == 0) { > ref += 8; > + len -= 8; > goto valid_scheme; > } > } > @@ -191,7 +193,7 @@ > ngx_int_t rc; > ngx_str_t referer; > > - referer.len = len - 7; > + referer.len = len; > referer.data = ref; > > rc = ngx_regex_exec_array(rlcf->regex, &referer, r->connection->log); Committed, thanks! -- Sergey Kandaurov pluknet at nginx.com From pablo at libo.com.ar Tue Aug 13 15:53:27 2013 From: pablo at libo.com.ar (Pablo J. Villarruel) Date: Tue, 13 Aug 2013 12:53:27 -0300 Subject: Limit connection to specific location In-Reply-To: References: Message-ID: Good question! On Tue, Aug 13, 2013 at 10:12 AM, Jaap van Arragon wrote: > Hello, > > I'am looking for a way to limit the number of connection in one hour to a > location named /api/ > > I've looked at the ngx_http_limit_conn_module module but I don't > understand how to limit the amount of connection from a specific > ip address per hour. > > For example: ip address 33.33.33.33 can only make 20 connections in one > hour to the url /api/ > > We use nginx as a loadbalancer/proxy.
> > Does somebody has a example for this? > > Thanks. > > Regards. > Jaap > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ------------------- Pablo J. Villarruel / pablo at libo.com.ar -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Aug 13 16:36:25 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 13 Aug 2013 12:36:25 -0400 Subject: upstream max_fails disable In-Reply-To: <20130813123101.GC52681@lo0.su> References: <520368AA.4060205@blueyonder.co.uk> <20130813123101.GC52681@lo0.su> Message-ID: Hello, On Tue, Aug 13, 2013 at 8:31 AM, Ruslan Ermilov wrote: > > If there's a single server, max_fails and fail_timeout parameters > are ignored, and such a server will never become temporarily down. > That would be worth mentioning in the Nginx documentation... http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From farseas at gmail.com Tue Aug 13 17:23:15 2013 From: farseas at gmail.com (Bob S.) Date: Tue, 13 Aug 2013 13:23:15 -0400 Subject: Limit connection to specific location In-Reply-To: References: Message-ID: One dirty way to do it would be to use a program to monitor the connections that access that location and then, when 20 connections in an hour have occurred, have the config file swapped out and replaced with another that does not have that location block. There is a way to get Nginx to reread its config file without shutting it down. Have cron restart the whole sequence again every hour. You are using Linux/Unix, I hope. Like I said, it's a dirty but relatively easy solution. On Tue, Aug 13, 2013 at 11:53 AM, Pablo J. Villarruel wrote: > Good question!
> > On Tue, Aug 13, 2013 at 10:12 AM, Jaap van Arragon < > j.vanarragon at lukkien.com> wrote: > >> Hello, >> >> I'am looking for a way to limit the number of connection in one hour to a >> location named /api/ >> >> I've looked at the ngx_http_limit_conn_module module but I don't >> understand how to limit the amount of connection from a specific >> ip address per hour. >> >> For example: ip address 33.33.33.33 can only make 20 connections in one >> hour to the url /api/ >> >> We use nginx as a loadbalancer/proxy. >> >> Does somebody has a example for this? >> >> Thanks. >> >> Regards. >> Jaap >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > ------------------- > Pablo J. Villarruel / pablo at libo.com.ar > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Aug 13 17:39:25 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 13 Aug 2013 13:39:25 -0400 Subject: Limit connection to specific location In-Reply-To: References: Message-ID: Hello, On Tue, Aug 13, 2013 at 9:12 AM, Jaap van Arragon wrote: > Hello, > > I'am looking for a way to limit the number of connection in one hour to a > location named /api/ > > I've looked at the ngx_http_limit_conn_module module but I don't > understand how to limit the amount of connection from a specific > ip address per hour. > > For example: ip address 33.33.33.33 can only make 20 connections in one > hour to the url /api/ > > Limiting connection rates typically sounds like a job that must be done by some firewall to me... Have you looked at iptables and its 'recent' module? We use nginx as a loadbalancer/proxy. > > Does somebody has a example for this? > > Thanks. > > Regards. 
> Jaap > > --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From farseas at gmail.com Tue Aug 13 17:59:38 2013 From: farseas at gmail.com (Bob S.) Date: Tue, 13 Aug 2013 13:59:38 -0400 Subject: Limit connection to specific location In-Reply-To: References: Message-ID: Maybe you should tell us what you are trying to do in more detail. If all you are trying to do is rate limit, there are easier ways to do it. On Tue, Aug 13, 2013 at 1:39 PM, B.R. wrote: > Hello, > > On Tue, Aug 13, 2013 at 9:12 AM, Jaap van Arragon < > j.vanarragon at lukkien.com> wrote: > >> Hello, >> >> I'am looking for a way to limit the number of connection in one hour to a >> location named /api/ >> >> I've looked at the ngx_http_limit_conn_module module but I don't >> understand how to limit the amount of connection from a specific >> ip address per hour. >> >> For example: ip address 33.33.33.33 can only make 20 connections in one >> hour to the url /api/ >> >> Limiting connections rate sounds typically like a job that must be done > by some firewall to me... > Have you tried to look after iptables and its 'recent' module? > > We use nginx as a loadbalancer/proxy. >> >> Does somebody has a example for this? >> >> Thanks. >> >> Regards. >> Jaap >> >> --- > *B. R.* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Aug 13 19:09:43 2013 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 13 Aug 2013 23:09:43 +0400 Subject: Limit connection to specific location In-Reply-To: References: Message-ID: <201308132309.43479.vbart@nginx.com> On Tuesday 13 August 2013 17:12:11 Jaap van Arragon wrote: > Hello, > > I'm looking for a way to limit the number of connections in one hour to a > location named /api/ > > I've looked at the ngx_http_limit_conn_module module but I don't understand > how to limit the number of connections from a specific ip address per hour. > > For example: ip address 33.33.33.33 can only make 20 connections in one > hour to the url /api/ > > We use nginx as a loadbalancer/proxy. > > Does somebody have an example for this? > You can try to use the limit_req module: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html But the minimum limit you can currently set is 1 request per minute. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Aug 14 00:39:04 2013 From: nginx-forum at nginx.us (nmarques) Date: Tue, 13 Aug 2013 20:39:04 -0400 Subject: nginx-extras (1.4.1 Ubuntu precise) cache loader/manager issue Message-ID: Dear people, I used the nginx-extras 1.4.1 for Ubuntu 12.04 LTS (precise) for a while; I used this package since it supported 'more_clear_headers', which was useful to hide some headers (LifeRay headers). As you have guessed, I'm using nginx for reverse proxying and, for a while now, for caching (as an alternative to varnish). I've run into a strange problem; if I use nginx-full I have no problems whatsoever and the cache manager/loader run properly and it works like a charm (but I can't hide the headers! 
any alternative method is most welcome); if I use the nginx-extras the cache manager/loader is utterly broken and I get this output: 2013/08/14 01:14:18 [info] 30478#0: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:52 2013/08/14 01:14:19 [alert] 30500#0: epoll_ctl(1, 0) failed (1: Operation not permitted) 2013/08/14 01:14:19 [alert] 30500#0: failed to register channel handler while initializing push module worker (1: Operation not permitted) 2013/08/14 01:14:19 [alert] 30499#0: epoll_ctl(1, 0) failed (1: Operation not permitted) 2013/08/14 01:14:19 [alert] 30499#0: failed to register channel handler while initializing push module worker (1: Operation not permitted) 2013/08/14 01:14:19 [alert] 30490#0: cache manager process 30499 exited with fatal code 2 and cannot be respawned Now, either this build has some addon which breaks nginx (?) or the build somehow is broken in some weird way... Can someone please test this and check out if everything is OK with those binaries? Any feedback is most welcome. NM PS: By the way, awesome cache performance (over a ramdrive)! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241818,241818#msg-241818 From nginx-forum at nginx.us Wed Aug 14 01:47:29 2013 From: nginx-forum at nginx.us (sufw) Date: Tue, 13 Aug 2013 21:47:29 -0400 Subject: How to define multiple resolvers? In-Reply-To: <4e5f4e5e.6731440a.12df.fffff98f@mx.google.com> References: <4e5f4e5e.6731440a.12df.fffff98f@mx.google.com> Message-ID: <1c2de746f472f67b4ac602eea72c4212.NginxMailingListEnglish@forum.nginx.org> According to the documentation (http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver), it's been possible to define multiple resolvers since 1.1.7. However the documentation does not provide the syntax for doing so. 
Separating the IP addresses with spaces seems to work for me (in 1.4.2): resolver 10.0.0.254 10.1.0.254 8.8.8.8; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,214613,241819#msg-241819 From vbart at nginx.com Wed Aug 14 07:41:29 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 14 Aug 2013 11:41:29 +0400 Subject: nginx-extras (1.4.1 Ubuntu precise) cache loader/manager issue In-Reply-To: References: Message-ID: <201308141141.29964.vbart@nginx.com> On Wednesday 14 August 2013 04:39:04 nmarques wrote: > Dear people, > > I used the nginx-extras 1.4.1 for Ubuntu 12.04 LTS (precise) for a while; I > used this package since it supported 'more_clear_headers', which was useful > to hide some headers (LifeRay headers). As you have guessed, I'm using > nginx for reverse proxying and, for a while now, for caching (as an alternative > to varnish). > > I've run into a strange problem; if I use nginx-full I have no problems > whatsoever and the cache manager/loader run properly and it works like > a charm (but I can't hide the headers! any alternative method is most > welcome); if I use the nginx-extras the cache manager/loader is utterly > broken and I get this output: [...] Why don't you use "proxy_hide_header" (or "fastcgi_hide_header" in case you use fastcgi)? http://nginx.org/r/proxy_hide_header http://nginx.org/r/fastcgi_hide_header [...] > Now, either this build has some addon which breaks nginx (?) or the build > somehow is broken in some weird way... The "nginx-extras" package from the debian/ubuntu community repository has many third-party modules that can break nginx. We recommend official nginx packages: http://nginx.org/en/linux_packages.html wbr, Valentin V. 
Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Aug 14 09:09:08 2013 From: nginx-forum at nginx.us (tcbarrett) Date: Wed, 14 Aug 2013 05:09:08 -0400 Subject: Proxying with/without listen in server block Message-ID: Does having a listen directive in a server block override blocks without one? I have a slightly complex setup, proxying traffic depending on URL to various other machines on the network. Something a bit like this: http://pastebin.com/MSAFJKLV The middle block hogs all the traffic, and all requests are sent to mysssl.example.com. What am I missing? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241824,241824#msg-241824 From nginx-forum at nginx.us Wed Aug 14 09:48:38 2013 From: nginx-forum at nginx.us (tcbarrett) Date: Wed, 14 Aug 2013 05:48:38 -0400 Subject: Proxying with/without listen in server block In-Reply-To: References: Message-ID: <5cd94684178c5e470211e57d98b9337b.NginxMailingListEnglish@forum.nginx.org> Am I missing this: "If a server is the only server for a listen port, then nginx will not test server names at all (and will not build the hash tables for the listen port). However, there is one exception. If a server name is a regular expression with captures, then nginx has to execute the expression to get the captures." 
http://nginx.org/en/docs/http/server_names.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241824,241826#msg-241826 From francis at daoine.org Wed Aug 14 10:50:59 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 14 Aug 2013 11:50:59 +0100 Subject: Proxying with/without listen in server block In-Reply-To: <5cd94684178c5e470211e57d98b9337b.NginxMailingListEnglish@forum.nginx.org> References: <5cd94684178c5e470211e57d98b9337b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130814105059.GX27161@craic.sysops.org> On Wed, Aug 14, 2013 at 05:48:38AM -0400, tcbarrett wrote: Hi there, > Am I missing this: I think you're missing this: http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers coupled with the default value for "listen", as in "what is meant by not having a listen in a server block", which is at http://nginx.org/r/listen The answer to your original question: > Does having a listen directive in a server block override blocks without one? is "yes, if the listen directive is different from the default". f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Aug 14 10:56:32 2013 From: nginx-forum at nginx.us (MKl) Date: Wed, 14 Aug 2013 06:56:32 -0400 Subject: ssl_cipher for mail not working Message-ID: <1c57a7ca7627379cb969524c90db5f49.NginxMailingListEnglish@forum.nginx.org> Hello, to increase the security of SSL I added some elliptic-curve ciphers to the chain. For HTTPS it's working fine, but for the mail proxy it does not work; I always get only RC4-SHA instead of the ECDH ciphers. See the configuration at the end of this message. I'm testing it with: openssl s_client -cipher 'ECDH:DH' -connect domain.de:443 openssl s_client -cipher 'ECDH:DH' -connect imap.domain.de:993 The first command gives me a successful connection with ECDHE-RSA-RC4-SHA, so for HTTPS the cipher list is used. 
The second command fails with an error: "sslv3 alert handshake failure", the IMAPS server does not provide ECDH support. I used exactly the same ssl_ciphers line for HTTPS and the mail proxy. When using the following command without forcing any ciphers on the client, I can see that RC4-SHA is the "best" cipher that is supported and used: openssl s_client -connect imap.domain.de:993 Does anybody have an idea where the problem is? Thanks in advance Michael ================ mail { auth_http 127.0.0.1/mailauth.php; proxy on; starttls on; ## enable STARTTLS for all mail servers ssl_prefer_server_ciphers on; ssl_protocols TLSv1.1 TLSv1.2 TLSv1 SSLv3; ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH:!CAMELLIA; ssl_session_cache shared:TLSSL:16m; ssl_session_timeout 10m; ssl_certificate star_domain_de.crt; ssl_certificate_key star_domain_de.key; ## default, STARTTLS is appended because of starttls directive above imap_capabilities "IMAP4rev1" "LITERAL+" "SASL-IR" "LOGIN-REFERRALS" "ID" "ENABLE" "IDLE" "NAMESPACE" "AUTH=LOGIN" "AUTH=DIGEST-MD5" "AUTH=CRAM-MD5"; pop3_capabilities "TOP" "USER"; server { ssl on; listen [::]:993; protocol imap; server_name imap.domain.de; proxy_pass_error_message on; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241834,241834#msg-241834 From nginx-forum at nginx.us Wed Aug 14 11:08:57 2013 From: nginx-forum at nginx.us (nmarques) Date: Wed, 14 Aug 2013 07:08:57 -0400 Subject: nginx-extras (1.4.1 Ubuntu precise) cache loader/manager issue In-Reply-To: <201308141141.29964.vbart@nginx.com> References: <201308141141.29964.vbart@nginx.com> Message-ID: > Why don't you use "proxy_hide_header" (or "fastcgi_hide_header" in > case you > use fastcgi)? > > http://nginx.org/r/proxy_hide_header > http://nginx.org/r/fastcgi_hide_header Worked perfectly for me; thanks for pointing this out. 
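For reference, a minimal sketch of what this can look like in a proxy location (the upstream name and the header names below are only placeholders; substitute the headers your backend actually sends):

```nginx
location / {
    proxy_pass http://backend_app;

    # Drop these upstream response headers before they reach the client.
    # One directive per header name; these names are examples only.
    proxy_hide_header X-Powered-By;
    proxy_hide_header Liferay-Portal;
}
```

The same pattern applies with fastcgi_hide_header when the backend is reached via fastcgi_pass.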
> The "nginx-extras" package from debian/ubuntu community repository has > many > 3-rd party modules that can break nginx. > > We recommend official nginx packages: > http://nginx.org/en/linux_packages.html Worked fine for me; Going to to update for Production soon. All the funcionality I require is available on your packages. Thank you very much. NM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241818,241838#msg-241838 From nhadie at gmail.com Wed Aug 14 11:19:53 2013 From: nhadie at gmail.com (ron ramos) Date: Wed, 14 Aug 2013 19:19:53 +0800 Subject: flush temp directory Message-ID: Hi All, I am trying to test accelerated upload on nginx/php-fpm/php-cgi setup and comparing different scenarios e.g one where /temp is a tmpfs, one where it is a disk partition and you will also notice where in i test using php-cgi. as i need to understand which can handle file uploads faster. location ~ \.php$ { include fastcgi_params; client_body_temp_path /temp; fastcgi_pass_request_body off; client_body_in_file_only on; fastcgi_param REQUEST_BODY_FILE $request_body_file; #use php-cgi #fastcgi_pass 127.0.0.1:10005; #use php-fpm fastcgi_pass 127.0.0.1:9000; fastcgi_index $dir_index; fastcgi_param DOCUMENT_ROOT $doc_root; fastcgi_split_path_info ^(.+?\.php)(/.*)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } i encountered one issue where in the /temp is a tmpfs (size is just 1GB), after uploading a couple of files i encountered this: *58 pwrite() "/temp/0000000053" failed (28: No space left on device) shouldn't it be flushed once the upload is done? or do i need to add some config to flush it automatic? 
I'm using a PHP script that uses curl to do the uploading. Here's the script that manages the upload (curlupload.php): $uploadfile = "upload/" . basename($_FILES['file_contents']['name']); if (move_uploaded_file($_FILES['file_contents']['tmp_name'], $uploadfile)) { echo "File is valid, and was successfully uploaded.\n"; unlink($uploadfile); } else { echo "Possible file upload attack!\n"; } print_r($_FILES); print_r($_POST); ?> I removed the file after it is uploaded so I can run this script in a loop. The script that uploads the file (testupload.php): $target_url = 'http://10.254.12.160/curlupload.php'; $file_name_with_full_path = realpath('./test.exe'); $post = array('extra_info' => '123456','file_contents'=>'@'.$file_name_with_full_path); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL,$target_url); curl_setopt($ch, CURLOPT_POST,1); curl_setopt($ch, CURLOPT_POSTFIELDS, $post); $result=curl_exec ($ch); curl_close ($ch); echo $result; ?> Another thing I noticed is that when not using tmpfs, I/O is high on the server when this is set: client_body_temp_path /temp; fastcgi_pass_request_body off; client_body_in_file_only on; fastcgi_param REQUEST_BODY_FILE $request_body_file; Thank you in advance; any help or ideas would be appreciated. Regards, Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.vanarragon at lukkien.com Wed Aug 14 11:54:50 2013 From: j.vanarragon at lukkien.com (Jaap van Arragon) Date: Wed, 14 Aug 2013 13:54:50 +0200 Subject: Limit connection to specific location In-Reply-To: <201308132309.43479.vbart@nginx.com> Message-ID: I've tried the limit_req but the problem is that it limits the simultaneous requests, and I want to limit the total requests per hour from one IP (not necessarily simultaneous). We've fixed it in the application now; there seemed to be a Django view module for it. Thanks for the options. Regards Jaap On 8/13/13 9:09 PM, "Valentin V. Bartenev" wrote: >On Tuesday 13 August 2013 17:12:11 Jaap van Arragon wrote: >> Hello, >> >> I'm looking for a way to limit the number of connections in one hour to >>a >> location named /api/ >> >> I've looked at the ngx_http_limit_conn_module module but I don't >>understand >> how to limit the number of connections from a specific ip address per >>hour. >> >> For example: ip address 33.33.33.33 can only make 20 connections in one >> hour to the url /api/ >> >> We use nginx as a loadbalancer/proxy. 
>> >> Does somebody have an example for this? >> > >You can try to use the limit_req module: >http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > >But the minimum limit you can currently set is 1 request per minute. > > wbr, Valentin V. Bartenev > >-- >http://nginx.org/en/donation.html > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From coolbhushans at gmail.com Wed Aug 14 12:09:30 2013 From: coolbhushans at gmail.com (Bhushan Sonawane) Date: Wed, 14 Aug 2013 17:39:30 +0530 Subject: Internals: how do I send large file to the client? In-Reply-To: <794284d6d0f72d16ea91cd56d36b070a.NginxMailingListEnglish@forum.nginx.org> References: <794284d6d0f72d16ea91cd56d36b070a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Do you know the split command in Linux? You can use it to split the file and send the parts, then use the join command afterwards to put the files back together. On Tue, Aug 13, 2013 at 1:00 AM, ruslan_osmanov wrote: > Hi, > > I'm writing a filter module which expects the backend to send XML with > information about files that have to be concatenated and sent to the > client. > > One way to send a file is to `ngx_read_file` into a buffer allocated in the > heap(pool) and push it onto the chain. However, I obviously can't allocate > ~10G > in the heap. I have to send it chunk-by-chunk. How do I perform this kind > of > I/O? > > Regards. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,241796,241796#msg-241796 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Aug 14 14:15:21 2013 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Wed, 14 Aug 2013 18:15:21 +0400 Subject: Limit connection to specific location In-Reply-To: References: Message-ID: <201308141815.21195.vbart@nginx.com> On Wednesday 14 August 2013 15:54:50 Jaap van Arragon wrote: > I've tried the limit_req but the problem is that it limits the > simultaneous requests and I want to limit the total requests per hour from > one IP (not necessarily simultaneously) The number of simultaneous requests is limited by the *limit_conn* module: http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html But I mentioned the *limit_req* module: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Aug 14 14:32:12 2013 From: nginx-forum at nginx.us (ruslan_osmanov) Date: Wed, 14 Aug 2013 10:32:12 -0400 Subject: Internals: how do I send large file to the client? In-Reply-To: References: Message-ID: <7a4e449d0f2ef4d1b76e1c592bc1abe1.NginxMailingListEnglish@forum.nginx.org> I know those commands. But the question was about Nginx's internals. I thought somebody would suggest a pseudo-code snippet similar to the following: ngx_buf_t *b; size_t length = 0; loop (files as file) { ... ngx_str_t *filename = &file->name; if (ngx_open_cached_file(ccf->open_file_cache, filename, &of, r->pool) != NGX_OK) return NGX_ERROR; length += of.size; b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); if (b == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t)); if (b->file == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } b->file_pos = 0; b->file_last = of.size; b->in_file = b->file_last ? 1 : 0; b->file->fd = of.fd; b->file->name = *filename; b->file->log = r->connection->log; b->file->directio = of.is_directio; cl = ngx_alloc_chain_link(r->pool); if (cl == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } cl->buf = b; *last_out = cl; last_out = &cl->next; cl->next = NULL; ... 
} I've found ngx_open_cached_file and ngx_alloc_chain_link just recently. I see, there should be a way to chain open files without actually performing I/O myself. Still have no clear understanding how it works and how one should use the cached files' API. coolbhushans at gmail.com Wrote: ------------------------------------------------------- > have u know the split command in linux . you can use that to split > file > then send it after you can use join command to join files > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241796,241846#msg-241846 From nginx-forum at nginx.us Wed Aug 14 17:20:51 2013 From: nginx-forum at nginx.us (spacecwoboy) Date: Wed, 14 Aug 2013 13:20:51 -0400 Subject: Cookie/Session Expired - OWA SSL Reverse Proxy Message-ID: <9ffef8a79693b32bb791460133585d71.NginxMailingListEnglish@forum.nginx.org> Hi. Trying to configure a reverse proxy to allow external access to an outlook web access server. I am able to route traffic through the NGINX to the OWA server, present the web page, and place the username & pw into the form. OWA rejects valid username/pwd's with a: "Your session has timed out...." error. Looking through my custom log files, somehow the session ID and the expired values are munged in the GET & POST process through the proxy. There may be a simple fix that I'm not able to find. Any suggestions will be appreciated! 
=======Logs====== $request |[set_cookie - "$sent_http_set_cookie" ]|' ==========Logs========= POST /owa/auth.owa HTTP/1.1 |[ set_cookie - "sessionid=9a0d1af8-9406-4c3d-b225-cf28e56a8bb6; path=/" ]| GET /owa/ HTTP/1.1 |[ set_cookie - "sessionid=; path=/; expires=Thu, 01-Jan-1970 00:00:00 GMT" ]| GET /owa/auth/logon.aspx?url=https://email.internal.local/owa/&reason=3 HTTP/1.1 |[ set_cookie - "-" ]| GET /owa/auth/logon.aspx?replaceCurrent=1&reason=3&url=https%3a%2f%2femail.internal.local%2fowa%2f HTTP/1.1 |[ set_cookie - "-" ]| POST /owa/auth.owa HTTP/1.1 |[ set_cookie - "sessionid=50bfb645-4ed1-4bd8-8d69-7fa0e79d748d; path=/" ]| GET /owa/ HTTP/1.1 |[ set_cookie - "sessionid=; path=/; expires=Thu, 01-Jan-1970 00:00:00 GMT" ]| =======OWA======= server { listen 80; server_name email; rewrite ^(,*) https://email$1 permanent; } server { listen 443; server_name email; rewrite ^/$ https://email/owa permanent; ssl on; ssl_certificate /etc/ssl/certs/myssl.crt; ssl_certificate_key /etc/ssl/private/myssl.key; ssl_session_timeout 5m; proxy_read_timeout 360; location /owa { proxy_pass https://email.internal.local/owa; proxy_pass_header Set-Cookie; proxy_pass_header P3P; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241856,241856#msg-241856 From contact at jpluscplusm.com Wed Aug 14 17:35:02 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 14 Aug 2013 18:35:02 +0100 Subject: Cookie/Session Expired - OWA SSL Reverse Proxy In-Reply-To: <9ffef8a79693b32bb791460133585d71.NginxMailingListEnglish@forum.nginx.org> References: <9ffef8a79693b32bb791460133585d71.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 14 August 2013 18:20, spacecwoboy wrote: > Hi. > > Trying to configure a reverse proxy to allow external access to an outlook > web access server. I am able to route traffic through the NGINX to the OWA > server, present the web page, and place the username & pw into the form. 
> OWA rejects valid username/pwd's with a: "Your session has timed out...." > error. > > Looking through my custom log files, somehow the session ID and the expired > values are munged in the GET & POST process through the proxy. There may be > a simple fix that I'm not able to find. Any suggestions will be > appreciated! I have a vague recollection that OWA uses a nasty form of authentication which *requires* that each client's end-to-end connection to the backend be long-lived, and only used by that one client (as the auth is done in the first few packets and not repeated). I don't know how you'd configure that in nginx. I may be wrong about it, however. I've never tried Nginx in front of OWA myself. This question comes up on the HAProxy list sometimes, and it seems solvable by HAP users. Jonathan From nhadie at gmail.com Thu Aug 15 04:12:23 2013 From: nhadie at gmail.com (ron ramos) Date: Thu, 15 Aug 2013 12:12:23 +0800 Subject: flush temp directory In-Reply-To: References: Message-ID: Oops, sorry, I found the cause: "client_body_in_file_only on". I changed it to "clean" and it's doing its job. But am I missing something on the PHP side or config side? It seems I am getting the same response time using php-fpm without the accelerated support settings and also using php-cgi; response time is the same for all scenarios. with accelerated support using php-fpm: 10.254.12.84 - - [15/Aug/2013:10:26:05 +0800] "POST /curlupload.php HTTP/1.1" 200 359 "-" "-" "-" 33.404 1.342 . 10.254.12.84 - - [15/Aug/2013:10:26:51 +0800] "POST /curlupload.php HTTP/1.1" 200 359 "-" "-" "-" 32.168 1.216 . without accelerated support using php-fpm: 10.254.12.84 - - [15/Aug/2013:11:02:58 +0800] "POST /curlupload.php HTTP/1.1" 200 359 "-" "-" "-" 32.182 1.229 . 10.254.12.84 - - [15/Aug/2013:11:03:32 +0800] "POST /curlupload.php HTTP/1.1" 200 359 "-" "-" "-" 33.218 1.208 . using php-cgi 10.254.12.84 - - [15/Aug/2013:11:48:57 +0800] "POST /curlupload.php HTTP/1.1" 200 359 "-" "-" "-" 32.371 1.418 . 
10.254.12.84 - - [15/Aug/2013:11:49:30 +0800] "POST /curlupload.php HTTP/1.1" 200 359 "-" "-" "-" 33.093 1.308 . TIA Regards, Ron On Wed, Aug 14, 2013 at 7:19 PM, ron ramos wrote: > Hi All, > > I am trying to test accelerated upload on nginx/php-fpm/php-cgi setup and > comparing different scenarios > e.g one where /temp is a tmpfs, one where it is a disk partition and you > will also notice where in i test using php-cgi. as i need to understand > which can handle file uploads faster. > > > location ~ \.php$ { > include fastcgi_params; > > client_body_temp_path /temp; > fastcgi_pass_request_body off; > client_body_in_file_only on; > fastcgi_param REQUEST_BODY_FILE $request_body_file; > > #use php-cgi > #fastcgi_pass 127.0.0.1:10005; > > #use php-fpm > fastcgi_pass 127.0.0.1:9000; > > fastcgi_index $dir_index; > fastcgi_param DOCUMENT_ROOT $doc_root; > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > } > > i encountered one issue where in the /temp is a tmpfs (size is just 1GB), > after uploading a couple of files i encountered this: *58 pwrite() > "/temp/0000000053" failed (28: No space left on device) > > shouldn't it be flushed once the upload is done? or do i need to add some > config to flush it automatic? > > > Im using a php script that uses curl to do the uploading, here's the > script that manages the upload (curlupload.php) > > $uploadfile = "upload/" . 
basename($_FILES['file_contents']['name']); > if (move_uploaded_file($_FILES['file_contents']['tmp_name'], > $uploadfile)) { > echo "File is valid, and was successfully uploaded.\n"; > unlink($uploadfile); > } else { > echo "Possible file upload attack!\n"; > } > print_r($_FILES); > print_r($_POST); > ?> > > i removed the file after it is uploaded so i can run this script in loop, > the script that uploads the file: (testupload.php) > > $target_url = 'http://10.254.12.160/curlupload.php'; > > $file_name_with_full_path = realpath('./test.exe'); > > $post = array('extra_info' => > '123456','file_contents'=>'@'.$file_name_with_full_path); > > $ch = curl_init(); > curl_setopt($ch, CURLOPT_URL,$target_url); > curl_setopt($ch, CURLOPT_POST,1); > curl_setopt($ch, CURLOPT_POSTFIELDS, $post); > $result=curl_exec ($ch); > curl_close ($ch); > echo $result; > ?> > > Another thing i noticed is that when not using tmpfs, I/O is high the > server when this is set: > > client_body_temp_path /temp; > fastcgi_pass_request_body off; > client_body_in_file_only on; > fastcgi_param REQUEST_BODY_FILE $request_body_file; > > Thank You in advance, any help or idea would be appreciated. > > Regards, > Ron > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: 
Through Atmosphere The nike jordan footwear, increasingly more youngers offers transformed their own thoughts in order to Nike NBA footwear. Using the finish associated with The nike jordan period, Nike offers discovered it's brand new spokesprison -- Kobe in order to their distinctive footwear -- Nike Kobe Sixth is v sequence. Kobe is the greatest golf ball gamers throughout NBA period. These days the actual assistance in order to Kobe is actually top than ever before, it's excellent second to select Kobe being it's tone of voice. Additionally the look associated with Nike kobe footwear is actually sophisticated. Utilizing Nike Flywire technologies offers optimum assistance along with minimum quantity of supplies, whilst tensile materials supply assistance whilst reducing pounds. The full-length Phylon midsole along with Move Atmosphere device within back heel coupled with Lunar Froth within front foot soft cushions towards courtroom surprise. Additionally strong rubberized outsole along with herringbone traction force design with regard to greatest overall performance. It may prefect your own overall performance throughout the procedure for actively playing golf ball. Through satistics these days, increasingly more Adidas along with other manufacturer customers converted into Nike embrace, it's a very good news which Nike is actually typically the most popular 1 on the planet. Nike Air Max 1 Mens Online Nevertheless, using the growing product sales on the market, buyying Nike footwear -- such as Nike athletic shoes, Nike coaches footwear additionally turn out to be warm purchase. Expecially the actual arriving fall provide a chance to select correct, breathable, fashionable, high-technology Nike footwear with regard to performing sports activities. The company chance as well as revenue desire business person in order to sall Nike footwear through worldwide. 
From igor.sverkos at googlemail.com Thu Aug 15 09:45:22 2013 From: igor.sverkos at googlemail.com (Igor Sverkos) Date: Thu, 15 Aug 2013 11:45:22 +0200 Subject: flush temp directory In-Reply-To: References: Message-ID: Hi, I would really wonder if you would see a real difference between using a tmpfs or not for the webserver's tmp body location. A tmpfs is faster, but as long as your storage has enough free IO resources and is fast enough to actually write the data, you shouldn't notice. And keep in mind: you only use the tmpfs for the request body, but you still need to write it to disk. If your disk is limited to 120 MB/s and a normal upload is about 5 MB, you are only able to handle ~24 concurrent uploads per second. Well, you could buffer millions of requests per second in your super fast RAM (if you have enough RAM :P), but your PHP worker, which will move the upload from RAM to the persistent storage, will become the bottleneck. I have a problem with the way it seems you test your setup: every system should be able to handle that kind of load.
After some runs, everything should be in some kind of cache. The IOs from the uploaded files are not enough (disks also have write caches, and the OS may buffer writes, too...). These IOs can be handled by every disk; also, the IOs come in sequence, not in parallel. => Add more load. Run tests in parallel/concurrently. Increase the file size to fill up any write caches, which will trigger real writes, which will block the storage in ways you will notice. -- Regards, Igor -------------- next part -------------- An HTML attachment was scrubbed... URL: From jens.rantil at telavox.se Thu Aug 15 11:51:33 2013 From: jens.rantil at telavox.se (Jens Rantil) Date: Thu, 15 Aug 2013 11:51:33 +0000 Subject: IPv6 range specification for allow/deny Message-ID: <5D4CF2D9655E524292A91397605B11FA0ACFB669@AM2PRD0710MB350.eurprd07.prod.outlook.com> Hi, I'd like to limit a range of IPv6 space to a "server" context using "allow" and "deny". I haven't been able to find any information on how to do this in the documentation, nor on the web. The only example I've found[1] is for fixed IPv6 addresses, like so: allow 2620:100:e000::8001; [1] http://wiki.nginx.org/HttpAccessModule So far I have tried allow 2d00:1201::/32 allow [2d00:1201::]/32 but nginx configuration validation complains. Are IPv6 ranges possible to allow/deny? What is the correct format? Any input would be appreciated. Thanks, Jens From ru at nginx.com Thu Aug 15 11:59:34 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 15 Aug 2013 15:59:34 +0400 Subject: IPv6 range specification for allow/deny In-Reply-To: <5D4CF2D9655E524292A91397605B11FA0ACFB669@AM2PRD0710MB350.eurprd07.prod.outlook.com> References: <5D4CF2D9655E524292A91397605B11FA0ACFB669@AM2PRD0710MB350.eurprd07.prod.outlook.com> Message-ID: <20130815115934.GE64735@lo0.su> On Thu, Aug 15, 2013 at 11:51:33AM +0000, Jens Rantil wrote: > Hi, > > I'd like to limit a range of IPv6 space to a "server" context using "allow" and "deny".
I haven't been able to find any information on how to do this in the documentation, nor on the web. The only example I've found[1] is for fixed IPv6 addresses, like so: > > allow 2620:100:e000::8001; > > [1] http://wiki.nginx.org/HttpAccessModule > > So far I have tried > > allow 2d00:1201::/32 > allow [2d00:1201::]/32 > > but nginx configuration validation complains. Are IPv6 ranges possible to allow/deny? What is the correct format? Any input would be appreciated. See the official docs: http://nginx.org/en/docs/http/ngx_http_access_module.html From vl at nginx.com Thu Aug 15 12:10:05 2013 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 15 Aug 2013 16:10:05 +0400 Subject: upstream max_fails disable In-Reply-To: References: <520368AA.4060205@blueyonder.co.uk> <20130813123101.GC52681@lo0.su> Message-ID: <20130815121004.GA22933@vlpc.i.nginx.com> On Tue, Aug 13, 2013 at 12:36:25PM -0400, B.R. wrote: > Hello, > > On Tue, Aug 13, 2013 at 8:31 AM, Ruslan Ermilov wrote: > > > > > If there's a single server, max_fails and fail_timeout parameters > > are ignored, and such a server will never become temporarily down. > > > > That would be worth mentioning in the Nginx documentation... > We've added a note regarding this into the description of the "server" directive: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server From jens.rantil at telavox.se Thu Aug 15 12:13:52 2013 From: jens.rantil at telavox.se (Jens Rantil) Date: Thu, 15 Aug 2013 12:13:52 +0000 Subject: SV: IPv6 range specification for allow/deny In-Reply-To: <20130815115934.GE64735@lo0.su> References: <5D4CF2D9655E524292A91397605B11FA0ACFB669@AM2PRD0710MB350.eurprd07.prod.outlook.com> <20130815115934.GE64735@lo0.su> Message-ID: <5D4CF2D9655E524292A91397605B11FA0ACFB6CC@AM2PRD0710MB350.eurprd07.prod.outlook.com> Hi, Thanks for your link. Do you know which version first had IPv6 ranges supported in allow/deny? I can't seem to get it to work.
Cheers, Jens -----Original message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On behalf of Ruslan Ermilov Sent: 15 August 2013 14:00 To: nginx at nginx.org Subject: Re: IPv6 range specification for allow/deny On Thu, Aug 15, 2013 at 11:51:33AM +0000, Jens Rantil wrote: > Hi, > > I'd like to limit a range of IPv6 space to a "server" context using "allow" and "deny". I haven't been able to find any information on how to do this in the documentation, nor on the web. The only example I've found[1] is for fixed IPv6 addresses, like so: > > allow 2620:100:e000::8001; > > [1] http://wiki.nginx.org/HttpAccessModule > > So far I have tried > > allow 2d00:1201::/32 > allow [2d00:1201::]/32 > > but nginx configuration validation complains. Are IPv6 ranges possible to allow/deny? What is the correct format? Any input would be appreciated. See the official docs: http://nginx.org/en/docs/http/ngx_http_access_module.html _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From maxim at nginx.com Thu Aug 15 12:20:46 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 15 Aug 2013 16:20:46 +0400 Subject: SV: IPv6 range specification for allow/deny In-Reply-To: <5D4CF2D9655E524292A91397605B11FA0ACFB6CC@AM2PRD0710MB350.eurprd07.prod.outlook.com> References: <5D4CF2D9655E524292A91397605B11FA0ACFB669@AM2PRD0710MB350.eurprd07.prod.outlook.com> <20130815115934.GE64735@lo0.su> <5D4CF2D9655E524292A91397605B11FA0ACFB6CC@AM2PRD0710MB350.eurprd07.prod.outlook.com> Message-ID: <520CC79E.20403@nginx.com> On 8/15/13 4:13 PM, Jens Rantil wrote: > Hi, > > Thanks for your link. Do you know which version first had IPv6 ranges supported in allow/deny? I can't seem to get it to work. > Just curious: what version do you use?
> Cheers, > Jens > > -----Original message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On behalf of Ruslan Ermilov > Sent: 15 August 2013 14:00 > To: nginx at nginx.org > Subject: Re: IPv6 range specification for allow/deny > > On Thu, Aug 15, 2013 at 11:51:33AM +0000, Jens Rantil wrote: >> Hi, >> >> I'd like to limit a range of IPv6 space to a "server" context using "allow" and "deny". I haven't been able to find any information on how to do this in the documentation, nor on the web. The only example I've found[1] is for fixed IPv6 addresses, like so: >> >> allow 2620:100:e000::8001; >> >> [1] http://wiki.nginx.org/HttpAccessModule >> >> So far I have tried >> >> allow 2d00:1201::/32 >> allow [2d00:1201::]/32 >> >> but nginx configuration validation complains. Are IPv6 ranges possible to allow/deny? What is the correct format? Any input would be appreciated. > > See the official docs: > http://nginx.org/en/docs/http/ngx_http_access_module.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov http://nginx.com From nginx-forum at nginx.us Thu Aug 15 13:04:37 2013 From: nginx-forum at nginx.us (AlexT) Date: Thu, 15 Aug 2013 09:04:37 -0400 Subject: Win32 Binary - bug in OpenSSL Message-ID: <910a9af25efd3de784806e9c5ec16cdb.NginxMailingListEnglish@forum.nginx.org> Howdy folks, Whilst I'm a militant Unix guy I'm having to use the Win32 version of nginx for a specific project which requires SSL MiTM proxying as part of a virtualised app suite. I spent a few hours battling with an SSL error whereby I would see the Client Hello rapidly followed by a TCP FIN from the remote server and couldn't figure out what was causing it.
I then built from source on OSX and Linux, and an identical config worked without issue. It turns out, from a little reading, that there's a bug in OpenSSL v1.1 which is responsible for this, and as OSX and my Linux servers are on v0.9.x they aren't subject to this bug. I'm sure everyone is very busy, but the next time you get round to reviewing the build deps for Windows it would be great if you could keep this in mind. At present either the backend conversation fails and nginx serves a 502, or the .exe segfaults and dies completely (depending on what protocol/cipher combinations you specify). Thanks, Alex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241891,241891#msg-241891 From jens.rantil at telavox.se Thu Aug 15 13:37:43 2013 From: jens.rantil at telavox.se (Jens Rantil) Date: Thu, 15 Aug 2013 13:37:43 +0000 Subject: SV: SV: IPv6 range specification for allow/deny In-Reply-To: <520CC79E.20403@nginx.com> References: <5D4CF2D9655E524292A91397605B11FA0ACFB669@AM2PRD0710MB350.eurprd07.prod.outlook.com> <20130815115934.GE64735@lo0.su> <5D4CF2D9655E524292A91397605B11FA0ACFB6CC@AM2PRD0710MB350.eurprd07.prod.outlook.com> <520CC79E.20403@nginx.com> Message-ID: <5D4CF2D9655E524292A91397605B11FA0ACFB7A2@AM2PRD0710MB350.eurprd07.prod.outlook.com> I'm running 0.7.67. Cheers, Jens -----Original message----- From: Maxim Konovalov [mailto:maxim at nginx.com] Sent: 15 August 2013 14:21 To: nginx at nginx.org Cc: Jens Rantil Subject: Re: SV: IPv6 range specification for allow/deny On 8/15/13 4:13 PM, Jens Rantil wrote: > Hi, > > Thanks for your link. Do you know which version first had IPv6 ranges supported in allow/deny? I can't seem to get it to work. > Just curious: what version do you use?
> Cheers, > Jens > > -----Original message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On behalf of Ruslan Ermilov > Sent: 15 August 2013 14:00 > To: nginx at nginx.org > Subject: Re: IPv6 range specification for allow/deny > > On Thu, Aug 15, 2013 at 11:51:33AM +0000, Jens Rantil wrote: >> Hi, >> >> I'd like to limit a range of IPv6 space to a "server" context using "allow" and "deny". I haven't been able to find any information on how to do this in the documentation, nor on the web. The only example I've found[1] is for fixed IPv6 addresses, like so: >> >> allow 2620:100:e000::8001; >> >> [1] http://wiki.nginx.org/HttpAccessModule >> >> So far I have tried >> >> allow 2d00:1201::/32 >> allow [2d00:1201::]/32 >> >> but nginx configuration validation complains. Are IPv6 ranges possible to allow/deny? What is the correct format? Any input would be appreciated. > > See the official docs: > http://nginx.org/en/docs/http/ngx_http_access_module.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov http://nginx.com From jan.algermissen at nordsc.com Thu Aug 15 13:40:59 2013 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Thu, 15 Aug 2013 15:40:59 +0200 Subject: Module development question - Variables Message-ID: <030F0701-05C7-4D1A-943D-349E59DEE966@nordsc.com> Hi, I've been trying to understand variables for a couple of hours - but I just don't get it. Can anyone explain, 1) How and when the variable setter function is called? 2) Whether I should / can call it myself to set the variable.
Use Case: I write an access phase filter that extracts a bunch of information from the Authorization header (think OAuth-like: clientId, user, but maybe also debug info about cryptography performance, token expiry, access rights - you get the idea). I would like to store these per-request values in a variable to use them in the access log module to log them. E.g.: log_format gzip '$remote_addr - $remote_user $my_module_client, $my_module_infoxy ...' I think I understand what I have to do to create the variable (create it in the preconfiguration handler, provide a setter function). But how is the setter called, and how do I access the value to store in the variable? Should I make the value a static bucket in the module data that is written per request and then copied to the variable in the variable setter??? Existing modules only help a little, as they mostly set variables to values that are part of the request struct anyway - which my values aren't. Jan From maxim at nginx.com Thu Aug 15 14:32:18 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 15 Aug 2013 18:32:18 +0400 Subject: SV: SV: IPv6 range specification for allow/deny In-Reply-To: <5D4CF2D9655E524292A91397605B11FA0ACFB7A2@AM2PRD0710MB350.eurprd07.prod.outlook.com> References: <5D4CF2D9655E524292A91397605B11FA0ACFB669@AM2PRD0710MB350.eurprd07.prod.outlook.com> <20130815115934.GE64735@lo0.su> <5D4CF2D9655E524292A91397605B11FA0ACFB6CC@AM2PRD0710MB350.eurprd07.prod.outlook.com> <520CC79E.20403@nginx.com> <5D4CF2D9655E524292A91397605B11FA0ACFB7A2@AM2PRD0710MB350.eurprd07.prod.outlook.com> Message-ID: <520CE672.9000806@nginx.com> On 8/15/13 5:37 PM, Jens Rantil wrote: > I'm running 0.7.67.
IPv6 support for the "allow" and "deny" directives appeared in 0.8.22, almost four years ago: http://nginx.org/en/CHANGES-0.8 Nowadays we ship the 1.5.3 and 1.4.2 releases: http://nginx.org/en/download.html > Cheers, > Jens > > -----Original message----- > From: Maxim Konovalov [mailto:maxim at nginx.com] > Sent: 15 August 2013 14:21 > To: nginx at nginx.org > Cc: Jens Rantil > Subject: Re: SV: IPv6 range specification for allow/deny > > On 8/15/13 4:13 PM, Jens Rantil wrote: >> Hi, >> >> Thanks for your link. Do you know which version first had IPv6 ranges supported in allow/deny? I can't seem to get it to work. >> > Just curious: what version do you use? > >> Cheers, >> Jens >> >> -----Original message----- >> From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On behalf of Ruslan Ermilov >> Sent: 15 August 2013 14:00 >> To: nginx at nginx.org >> Subject: Re: IPv6 range specification for allow/deny >> >> On Thu, Aug 15, 2013 at 11:51:33AM +0000, Jens Rantil wrote: >>> Hi, >>> >>> I'd like to limit a range of IPv6 space to a "server" context using "allow" and "deny". I haven't been able to find any information on how to do this in the documentation, nor on the web. The only example I've found[1] is for fixed IPv6 addresses, like so: >>> >>> allow 2620:100:e000::8001; >>> >>> [1] http://wiki.nginx.org/HttpAccessModule >>> >>> So far I have tried >>> >>> allow 2d00:1201::/32 >>> allow [2d00:1201::]/32 >>> >>> but nginx configuration validation complains. Are IPv6 ranges possible to allow/deny? What is the correct format? Any input would be appreciated.
>> See the official docs: >> http://nginx.org/en/docs/http/ngx_http_access_module.html >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -- Maxim Konovalov http://nginx.com From zjay1987 at gmail.com Thu Aug 15 15:07:30 2013 From: zjay1987 at gmail.com (li zJay) Date: Thu, 15 Aug 2013 23:07:30 +0800 Subject: Nginx reload problem Message-ID: Hello: I found that some nginx config options don't take effect after modification with a reload; the following is a simple test case: nginx version: nginx/1.2.7 nginx.conf: ============================
worker_processes 1;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    limit_req_zone $arg_a zone=testzone:64m rate=1r/s;

    server {
        listen 80;

        location / {
            limit_req zone=testzone burst=2;
            alias /;
        }
    }
}
============================ I change $arg_a to $arg_b in the line 'limit_req_zone $arg_a zone=testzone:64m rate=1r/s;' and then reload nginx, but the change doesn't take effect unless I stop nginx manually and start it again. Is this expected behavior? Or are there any other nginx config options that are not compatible with the reload operation? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From aflexzor at gmail.com Thu Aug 15 16:16:10 2013 From: aflexzor at gmail.com (Alex Flex) Date: Thu, 15 Aug 2013 10:16:10 -0600 Subject: Set outgoing ip for reverse proxy? Message-ID: <520CFECA.5080603@gmail.com> Hello nginx, I am wondering if there is any way to bind a specific IP as the outbound IP used to contact the backend server for an nginx instance? I serve many instances on a single machine, and having all of them use the default IP of the server is messy.
Thanks Alex From B22173 at freescale.com Thu Aug 15 16:30:27 2013 From: B22173 at freescale.com (Myla John-B22173) Date: Thu, 15 Aug 2013 16:30:27 +0000 Subject: SAML2.0 support in NGINX In-Reply-To: References: Message-ID: Hi, Is there any SAML2.0 module available for NGINX? Regards, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Thu Aug 15 17:42:49 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 15 Aug 2013 22:42:49 +0500 Subject: Set outgoing ip for reverse proxy? In-Reply-To: <520CFECA.5080603@gmail.com> References: <520CFECA.5080603@gmail.com> Message-ID: <520D1319.3080907@nginx.com> On 8/15/13 9:16 PM, Alex Flex wrote: > Hello nginx, > > I am wondering if there is any way to bind a specific IP as the > outbound IP used to contact the backend server for an nginx instance? I > serve many instances on a single machine, and having all of them use the > default IP of the server is messy. > http://nginx.org/r/proxy_bind -- Maxim Konovalov http://nginx.com From nginx-forum at nginx.us Fri Aug 16 10:27:16 2013 From: nginx-forum at nginx.us (Matt520) Date: Fri, 16 Aug 2013 06:27:16 -0400 Subject: geoip filtering not working In-Reply-To: <65602c4bf5c586b2d6b4827b2e3ea10d.NginxMailingListEnglish@forum.nginx.org> References: <65602c4bf5c586b2d6b4827b2e3ea10d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, I was looking for the zip code for a given IP address a few months back, and I've now got a solution from IP2Location. You can try the IP2Location module to see if it helps with your issue. Good luck.
(http://ip2location.com/developers/nginx) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240802,241917#msg-241917 From nginx-forum at nginx.us Fri Aug 16 10:32:57 2013 From: nginx-forum at nginx.us (wojonstech) Date: Fri, 16 Aug 2013 06:32:57 -0400 Subject: Creating One-way connections or Dont wait for upstream Message-ID: Hello, I am working on an application where http (websocket, or any type of connection) connections will be one-directional, for inserting data into a database and queuing data. The client side of the application does not care about the response from nginx. It would be acceptable to send a blank response, or no data at all, and simply close the connection. After or as the connection is closed, I would like the request to proceed internally within nginx as normal, selecting an upstream and proxying to it, and if the upstream times out or has an error, still be able to use proxy_next_upstream. What would a configuration like this look like? Thank you in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241919,241919#msg-241919 From r at roze.lv Fri Aug 16 12:17:03 2013 From: r at roze.lv (Reinis Rozitis) Date: Fri, 16 Aug 2013 15:17:03 +0300 Subject: Creating One-way connections or Dont wait for upstream In-Reply-To: References: Message-ID: <13BF79AC360D45FC9C3E134815890103@MasterPC> > The client side of the application does not care about the response from > nginx. It would be acceptable to send a blank response, or no data at all, and simply close the connection. After or as the connection is closed, I would like the request to proceed internally within nginx as normal, selecting an upstream and proxying to it, and if the upstream times out or has an error, still be able to use proxy_next_upstream. > What would a configuration like this look like?
If you can (force) close the connection from the client side then you can try the proxy_ignore_client_abort setting ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort ) - which will make nginx complete the request to the upstream. The other approach (with more options to "program" nginx) could be to use something like the Echo module ( http://wiki.nginx.org/HttpEchoModule ) from http://openresty.org/ rr From aweber at comcast.net Fri Aug 16 13:14:32 2013 From: aweber at comcast.net (AJ Weber) Date: Fri, 16 Aug 2013 09:14:32 -0400 Subject: geoip filtering not working In-Reply-To: References: <65602c4bf5c586b2d6b4827b2e3ea10d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <520E25B8.5020008@comcast.net> I have this working pretty well (ok, I think _very_well_ ) with GeoIP. I used a map in the main nginx.conf like this:

map $geoip_country_code $allowed_country {
    default 0;
    US 1;
    GB 1;
    CA 1;
    EU 1;
}

Then, in my default.conf, the first statement in the relevant "location" blocks is: if ($allowed_country = 0) { return 418; } Again, this works for me, and you can add "allowed countries" in just one place: the map. -AJ From nginx-forum at nginx.us Fri Aug 16 13:43:18 2013 From: nginx-forum at nginx.us (spacecwoboy) Date: Fri, 16 Aug 2013 09:43:18 -0400 Subject: Cookie/Session Expired - OWA SSL Reverse Proxy In-Reply-To: References: Message-ID: Jonathan Matthews Wrote: ------------------------------------------------------- > On 14 August 2013 18:20, spacecwoboy wrote: > > Hi. > > > > Trying to configure a reverse proxy to allow external access to an > outlook > > web access server. I am able to route traffic through the NGINX to > the OWA > > server, present the web page, and place the username & pw into the > form. > > OWA rejects valid username/pwd's with a: "Your session has timed > out...." > > error.
> > > Looking through my custom log files, somehow the session ID and the > expired > > values are munged in the GET & POST process through the proxy. > There may be > > a simple fix that I'm not able to find. Any suggestions will be > > appreciated! > > I have a vague recollection that OWA uses a nasty form of > authentication which *requires* that each client's end-to-end > connection to the backend be long-lived, and only used by that one > client (as the auth is done in the first few packets and not > repeated). I don't know how you'd configure that in nginx. > > I may be wrong about it, however. I've never tried Nginx in front of > OWA myself. This question comes up on the HAProxy list sometimes, and > it seems solvable by HAP users. > > Jonathan Much appreciated Jonathan - it prompted me to take some different testing steps. I pointed nginx to a 'test' OWA back-end, which is a mirror of the prod environment, less the rigid SSL certs. Authentication passed right on through, everything was jive. I'll likely take a different route of trunking SSL to nginx, remove the OWA cert, then ipsec'ing the nginx server to the OWA server host-to-host. Seems that's a fairly common approach? ( This thread helped btw: http://forum.nginx.org/read.php?2,234641,234654#msg-234654 ) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241856,241939#msg-241939 From mdounin at mdounin.ru Sat Aug 17 00:54:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 04:54:19 +0400 Subject: cache based on file size In-Reply-To: <59566FAA26861246A0E785066534B42A26F81E7F@USIDCWVEMBX07.corp.global.level3.com> References: <59566FAA26861246A0E785066534B42A26F81E7F@USIDCWVEMBX07.corp.global.level3.com> Message-ID: <20130817005419.GV2130@mdounin.ru> Hello! On Mon, Aug 05, 2013 at 10:28:31PM +0000, Johns, Kevin wrote: > Hi, > > In looking over Nginx configuration for the proxy module, I do not see an easy way to influence what is cached based on object size.
I have two use cases of interest: > 1. Store a small file in a particular zone (e.g., SSD), and > > 2. Have a large file bypass the cache (no-store large files) > > Any insight on how best to accomplish this would be greatly appreciated. The proxy_no_cache directive with appropriate variables (e.g., map'ed or produced with embedded perl from $upstream_http_content_length) might be usable. E.g. the following should disable caching of responses larger than 999 bytes or with the content length not known:

map $upstream_http_content_length $toolarge {
    default 1;
    ~^\d\d\d$ 0;
}

proxy_no_cache $toolarge;

(Untested.) See http://nginx.org/r/proxy_no_cache for details. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Aug 17 02:09:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 06:09:29 +0400 Subject: Nginx reload problem In-Reply-To: References: Message-ID: <20130817020928.GY2130@mdounin.ru> Hello! On Thu, Aug 15, 2013 at 11:07:30PM +0800, li zJay wrote: > Hello: > > I found that some nginx config option doesn't take effect after > modification with reload, the following is a simple test case: > > nginx version: nginx/1.2.7 > nginx.conf: > ============================ > worker_processes 1; > error_log logs/error.log info; > > events { > worker_connections 1024; > } > > http { > limit_req_zone $arg_a zone=testzone:64m rate=1r/s; > > server { > listen 80; > > location / { > limit_req zone=testzone burst=2; > alias /; > } > } > } > ============================ > > I change $arg_a to $arg_b in the line 'limit_req_zone $arg_a > zone=testzone:64m rate=1r/s;' then reload nginx, but the change doesn't > take effect, unless I stop nginx manually and start it again. > > Is this an expected behavior ? or are there any other nginx config options > that not compatible with reload operation? It's expected behaviour. On such a reload attempt, nginx will write something like: ... [emerg] ...
limit_req "testzone" uses the "arg_b" variable while previously it used the "arg_a" variable into error log, explaining why it failed to load new configuration. -- Maxim Dounin http://nginx.org/en/donation.html From reallfqq-nginx at yahoo.fr Sat Aug 17 02:16:16 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 16 Aug 2013 22:16:16 -0400 Subject: Nginx reload problem In-Reply-To: <20130817020928.GY2130@mdounin.ru> References: <20130817020928.GY2130@mdounin.ru> Message-ID: I guess it would be nice if the doc warned about directives that need a server restart to be reloaded. Everyone supposes (as it seems obvious) that reloading Nginx is enough to apply configuration changes. An interesting part of the question was the inquiry about the potential existence other directives requiring a server restart rather than its reload. Do you have intel on this Maxim? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Aug 17 02:33:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 06:33:28 +0400 Subject: proxy_cache seems not working with X-Accel-Redirect In-Reply-To: <5f1ba60ffe6076a97efff91792e8fe32.NginxMailingListEnglish@forum.nginx.org> References: <5f1ba60ffe6076a97efff91792e8fe32.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130817023328.GZ2130@mdounin.ru> Hello! On Fri, Aug 09, 2013 at 06:20:21AM -0400, gray wrote: > My config > > location ~ /cached/ { > proxy_pass http://apache; > proxy_cache cache; > proxy_cache_valid 2h; > proxy_cache_key "$host|$request_uri"; > > } > > location /htdocs_internal/ { > internal; > > alias $htdocs_path; > } > > Requests with header in reply X-Accel-Redirect not cached, every time > request is sent to apache. When i add these directives > proxy_pass_header X-Accel-Redirect; > proxy_ignore_headers X-Accel-Redirect; > cache works fine (but is useless :) ), so it isn't problem with "no cache" > headers from apache. 
Yes, proxy_cache can't cache responses with X-Accel-Redirect. As a workaround, you may use an additional proxy layer with proxy_cache and proxy_ignore_headers X-Accel-Redirect + proxy_pass_header X-Accel-Redirect. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Aug 17 03:16:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 07:16:16 +0400 Subject: Nginx reload problem In-Reply-To: References: <20130817020928.GY2130@mdounin.ru> Message-ID: <20130817031616.GB2130@mdounin.ru> Hello! On Fri, Aug 16, 2013 at 10:16:16PM -0400, B.R. wrote: > I guess it would be nice if the doc warned about directives that need a > server restart to be reloaded. > > Everyone supposes (as it seems obvious) that reloading Nginx is enough to > apply configuration changes. Reloading is enough. What is very wrong is to assume that sending a HUP signal to nginx is enough for a reload. For various reasons, ranging from configuration syntax errors to out of memory problems, configuration reload might fail. Quoting documentation: ... If this fails, it rolls back changes and continues to work with old configuration. ... http://nginx.org/en/docs/control.html#reconfiguration > An interesting part of the question was the inquiry about the potential > existence of other directives requiring a server restart rather than its > reload. Do you have intel on this Maxim? There are no directives which require a server restart. But some changes are not possible on the fly - e.g. you can't change a shared memory zone size. If you want to change it - you have to create another shared memory zone, or use a binary upgrade which doesn't inherit shared memory zones and their contents (or use a restart, which will obviously work as well).
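To illustrate the shared-memory-zone point above with li zJay's earlier limit_req example: instead of editing the existing zone in place (which a plain reload cannot apply), a new zone can be declared under a different name and referenced, so a reload picks it up and the old zone is dropped. A hypothetical sketch; the new zone name is illustrative:

```nginx
http {
    # New zone under a new name; the key and size can differ freely,
    # since this is a fresh zone rather than a change to "testzone".
    limit_req_zone $arg_b zone=testzone2:64m rate=1r/s;

    server {
        listen 80;

        location / {
            # Point limit_req at the new zone; a plain reload now works.
            limit_req zone=testzone2 burst=2;
            alias /;
        }
    }
}
```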
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Aug 17 03:31:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 07:31:40 +0400 Subject: Module development question - Variables In-Reply-To: <030F0701-05C7-4D1A-943D-349E59DEE966@nordsc.com> References: <030F0701-05C7-4D1A-943D-349E59DEE966@nordsc.com> Message-ID: <20130817033140.GC2130@mdounin.ru> Hello! On Thu, Aug 15, 2013 at 03:40:59PM +0200, Jan Algermissen wrote: > Hi, > > been trying to understand variables for a couple of hours - but I just don't get it. > > Can anyone explain, > > > 1) How and when the variable setter function is called? Something like this in a configuration: set $variable "foo"; will result in v->set_handler() being called during the "set" directive evaluation. > 2) Whether I should / can call it myself to set the variable. Usually no. > Use Case: > > I write an access phase filter that extracts a bunch of > information from the Authorization header (think OAuth-like: > clientId, user, but maybe also debug info about cryptography > performance, token expiry, access rights - you get the idea). > > I would like to store these per-request values in a variable to > use them in the access log module to log them. > > E.g.: > > log_format gzip '$remote_addr - $remote_user $my_module_client, > $my_module_infoxy ...' > > I think I understand what I have to do to create the variable > (create in preconfiguration handler, provide setter function) > > But how is the setter called and how do I access the value to > store in the variable? > > Should I make the value a static bucket in the module data that > is written per request and then copied to the variable in the > variable setter??? > > Existing modules only help a little, as they mostly set > variables to values that are part of the request struct anyway - > which my values aren't.
In most cases, it's enough to store data in a module context and provide a get_handler for a variable to access data via the module context. -- Maxim Dounin http://nginx.org/en/donation.html From reallfqq-nginx at yahoo.fr Sat Aug 17 04:07:22 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 17 Aug 2013 00:07:22 -0400 Subject: Nginx reload problem In-Reply-To: <20130817031616.GB2130@mdounin.ru> References: <20130817020928.GY2130@mdounin.ru> <20130817031616.GB2130@mdounin.ru> Message-ID: Hello Maxim! :o) On Fri, Aug 16, 2013 at 11:16 PM, Maxim Dounin wrote: > Hello! > > On Fri, Aug 16, 2013 at 10:16:16PM -0400, B.R. wrote: > > > I guess it would be nice if the doc warned about directives that need a > > server restart to be reloaded. > > > > Everyone supposes (as it seems obvious) that reloading Nginx is enough to > > apply configuration changes. > > Reloading is enough. What is very wrong is to assume that sending > a HUP signal to nginx is enough for a reload. For various > reasons, ranging from configuration syntax errors to out of memory > problems, configuration reload might fail. > > Quoting documentation: > > ... If this fails, it rolls back changes and continues to work > with old configuration. ... > > http://nginx.org/en/docs/control.html#reconfiguration > Yup, I knew that. I thought Nginx was able to re-arrange its memory allocation for the new variable. I didn't know it was keeping the same fixed memory, only replacing existing values. I saw in some init script on CentOS (probably with the packaged version of that OS) that the configuration check was invoked automatically when the service reload was called. That would be a nice improvement to the init script shipped with the official Nginx package (from nginx.org) to avoid a manual 'nginx -t' call before the reload. > > An interesting part of the question was the inquiry about the potential > > existence of other directives requiring a server restart rather than its > > reload.
Do you have intel on this Maxim? > > There are no directives which require a server restart. But some > changes are not possible on the fly - e.g. you can't change a > shared memory zone size. If you want to change it - you have to > create another shared memory zone, or use a binary upgrade which > doesn't inherit shared memory zones and their contents (or use a > restart, which will obviously work as well). > That reminds me that the init script shipped with Nginx doesn't take advantage of the 'hot binary switch' to provide a 'soft restart' without downtime. It clearly would be a nice alternative to the normal restart, which is basically a stop/start. Thanks for your answer and your involvement on the mailing list, btw. Your team's job interfacing with us makes Nginx a product with an added reassuring sensation of quality & comfort. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From howachen at gmail.com Sat Aug 17 04:59:30 2013 From: howachen at gmail.com (howard chen) Date: Sat, 17 Aug 2013 12:59:30 +0800 Subject: How to turn off gzip compression for SSL traffic Message-ID: Hi, As you know, due to the BREACH attack (http://breachattack.com), HTTP compression is no longer safe (I assume nginx doesn't use SSL compression by default?), so we should disable it. Now, we are using config like the following: gzip on; .. server { listen 127.0.0.1:80 default_server; listen 127.0.0.1:443 default_server ssl; Without needing to split into two server sections, is it possible to turn off gzip when we are using SSL? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Aug 17 11:37:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 15:37:56 +0400 Subject: Nginx reload problem In-Reply-To: References: <20130817020928.GY2130@mdounin.ru> <20130817031616.GB2130@mdounin.ru> Message-ID: <20130817113756.GA76786@mdounin.ru> Hello!
On Sat, Aug 17, 2013 at 12:07:22AM -0400, B.R. wrote: > Hello Maxim! :o) > > > On Fri, Aug 16, 2013 at 11:16 PM, Maxim Dounin wrote: > > > Hello! > > > > On Fri, Aug 16, 2013 at 10:16:16PM -0400, B.R. wrote: > > > > > I guess it would be nice if the doc warned about directives that need a > > > server restart to be reloaded. > > > > > > Everyone supposes (as it seems obvious) that reloading Nginx is enough to > > > apply configuration changes. > > > > Reloading is enough. What is very wrong is to assume that sending > > a HUP signal to nginx is enough for a reload. For various > > reasons, ranging from configuration syntax errors to out of memory > > problems, configuration reload might fail. > > > > Quoting documentation: > > > > ... If this fails, it rolls back changes and continues to work > > with old configuration. ... > > > > http://nginx.org/en/docs/control.html#reconfiguration > > > > ??Yup I knew that. I thought Nginx was able to re-arrange its memory > allocation for the new variable. > I didn't know it was keeping the same fixed memory?, only replacing > existing values. > > I saw on some init script on CentOS (probably with the packaged version of > that OS) that the configuration check was invoked automatically when the > service reload was called. > That would be a nice improvement to the init script shipped with the > official Nginx package (from nginx.org) to avoid a manual 'nginx -t' call > before the reload. I don't think that calling "nginx -t" as a mandatory step before configuration reload is a good idea: nginx binary running and nginx binary on disk might be different, and "nginx -t" result might be incorrect because of this, in some cases rejecting valid configurations. Additionally, it does duplicate work by parsing/loading a configuration which will be again parsed by a master process during configuration reload. While in most cases it's not significant, I've seen configurations taking more than 1m to load due to big geo module bases used. 
> > > An interesting part of the question was the inquiry about the potential > > > existence of other directives requiring a server restart rather than its > > > reload. Do you have intel on this Maxim? > > > > There are no directives which require a server restart. But some > > changes are not possible on the fly - e.g. you can't change a > > shared memory zone size. If you want to change it - you have to > > create another shared memory zone, or use a binary upgrade which > > doesn't inherit shared memory zones and their contents (or use a > > restart, which will obviously work as well). > > > > That reminds me that the init script shipped with Nginx doesn't take > advantage of the 'hot binary switch' > > to provide a 'soft restart' without downtime. > It clearly would be a nice alternative to the normal restart which is > basically a stop/start. There is the "upgrade" command in the init script shipped with nginx.org linux packages. -- Maxim Dounin http://nginx.org/en/donation.html From igor at sysoev.ru Sat Aug 17 12:43:51 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 17 Aug 2013 16:43:51 +0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: On Aug 17, 2013, at 8:59, howard chen wrote: > Hi, > > As you know, due to the BREACH attack (http://breachattack.com), HTTP compression is no longer safe (I assume nginx doesn't use SSL compression by default?), so we should disable it. Yes, modern nginx versions do not use SSL compression. > Now, we are using config like the following: > > gzip on; > .. > > server { > listen 127.0.0.1:80 default_server; > listen 127.0.0.1:443 default_server ssl; > > > > Without needing to split into two server sections, is it possible to turn off gzip when we are using SSL? You have to split the dual-mode server section into two server sections and set "gzip off" in the SSL-enabled one.
There is no way to disable gzip in a dual-mode server section, but if you really worry about security, in general the server sections should be different. -- Igor Sysoev http://nginx.com/services.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Aug 17 16:36:38 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 17 Aug 2013 12:36:38 -0400 Subject: Nginx reload problem In-Reply-To: <20130817113756.GA76786@mdounin.ru> References: <20130817020928.GY2130@mdounin.ru> <20130817031616.GB2130@mdounin.ru> <20130817113756.GA76786@mdounin.ru> Message-ID: Hello, On Sat, Aug 17, 2013 at 7:37 AM, Maxim Dounin wrote: > Hello! > > I don't think that calling "nginx -t" as a mandatory step before > configuration reload is a good idea: nginx binary running and > nginx binary on disk might be different, and "nginx -t" result > might be incorrect because of this, in some cases rejecting valid > configurations. > > Additionally, it does duplicate work by parsing/loading a > configuration which will be again parsed by a master process > during configuration reload. While in most cases it's not > significant, I've seen configurations taking more than 1m to load > due to big geo module bases used. > In that case, the server admin has a problem, since he has no way to test the configuration other than calling 'reload' on the running instance and checking the logs for errors, hoping they are not already crawling under production-related log messages... One way or another, you test the configuration against an existing binary because you want to start or reload this binary with the conf. There is no point in having a running instance having already deleted its disk binary file: if you are in transition between 2 versions of Nginx, you shouldn't also make big changes to the conf... That's a 2-step procedure, I'd say: one thing at a time. Testing conf is of course a duplicate of work, but that's a safe operation.
The command output will determine if your new configuration will work without having to carefully watch logs with anxiety. > There is the "upgrade" command in the init script shipped with > nginx.org linux packages. > Ok, so could Li have used the 'upgrade' command instead of 'reload' to reload the configuration and change the allocated memory? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nhadie at gmail.com Sun Aug 18 00:53:55 2013 From: nhadie at gmail.com (ron ramos) Date: Sun, 18 Aug 2013 08:53:55 +0800 Subject: flush temp directory In-Reply-To: References: Message-ID: Thank you Igor. I was just basically looking into this: http://php-fpm.org/wiki/Features#Accelerated_upload_support so I'm not quite sure if I am missing something out, as it has the same results enabled or disabled. I will start testing it with multiple clients and see if there is any difference. Thanks again. On Thu, Aug 15, 2013 at 5:45 PM, Igor Sverkos wrote: > Hi, > > I would really wonder if you would see a real difference between using a > tmpfs or not for the webserver's tmp body location. A tmpfs is only faster, > but as long as your storage has enough free IO resources and is fast enough > to actually write the data, you shouldn't notice. > And keep in mind: You only use the tmpfs for the request body. But you > still need to write it to disk. If your disk is limited to 120MB/s and a > normal upload is about 5 MB, you are only able to handle ~23 concurrent > uploads. Well, you could buffer millions of requests per second in your > super fast RAM (if you have enough RAM :P), but your PHP worker, which will > move the upload from RAM to the persistent storage, will become the > bottleneck. > > I have a problem with the way it seems you test your setup: > Every system should be able to handle that kind of load. After some runs, > everything should be in some kind of cache.
The IOs from the uploaded files > are not enough (disks also have write caches, the OS may buffer writes, > too...). These IOs can be handled by every disk, also, the IOs comes in > sequence, not parallel. > > => Add more load. Run tests parallel/concurrent. Increase file size to > fill up any write caches, which will trigger real writes, which will block > the storage in some ways you will notice. > > -- > Regards, > Igor > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Aug 18 07:46:43 2013 From: nginx-forum at nginx.us (ovidiu) Date: Sun, 18 Aug 2013 03:46:43 -0400 Subject: trouble building nginx from dotdeb Message-ID: <7c81b1aa71e0e88d73e8cfef04e5b324.NginxMailingListEnglish@forum.nginx.org> I'm trying to follow this tutorial: http://www.howtoforge.com/using-ngx_pagespeed-with-nginx-on-debian-wheezy to build nginx with ngx_pagespeed on a Debian Wheezy machine. Unfortunately so far I have been using nginx from dotdeb so I'm trying to use their sources. The error occurs when building: debuild -us -uc . . . make: *** [config.status.full] Error 1 dpkg-buildpackage: error: debian/rules build gave error exit status 2 debuild: fatal error at line 1357: dpkg-buildpackage -rfakeroot -D -us -uc failed the rules file has only 319 lines though and ends with: .PHONY: build clean binary-indep binary-arch binary install so I'm not sure what the exact problem is. Any hints how to further diagnose this error? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241972,241972#msg-241972 From steve at greengecko.co.nz Sun Aug 18 07:51:59 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sun, 18 Aug 2013 19:51:59 +1200 Subject: trouble building nginx from dotdeb In-Reply-To: <7c81b1aa71e0e88d73e8cfef04e5b324.NginxMailingListEnglish@forum.nginx.org> References: <7c81b1aa71e0e88d73e8cfef04e5b324.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52107D1F.1080203@greengecko.co.nz> Use the official instructions from https://github.com/pagespeed/ngx_pagespeed and you'll have no problems. Well, I haven't upgraded from 1.4.1 yet, but that works fine. Steve On 18/08/13 19:46, ovidiu wrote: > I'm trying to follow this tutorial: > http://www.howtoforge.com/using-ngx_pagespeed-with-nginx-on-debian-wheezy to > build nginx with ngx_pagespeed on a Debian Wheezy machine. Unfortunately so > far I have been using nginx from dotdeb so I'm trying to use their sources. > > The error occurs when building: > > > debuild -us -uc > . > . > . > make: *** [config.status.full] Error 1 > dpkg-buildpackage: error: debian/rules build gave error exit status 2 > debuild: fatal error at line 1357: > dpkg-buildpackage -rfakeroot -D -us -uc failed > > the rules file has only 319 lines though and ends with: > > .PHONY: build clean binary-indep binary-arch binary install > > so I'm not sure what the exact problem is. Any hints how to further diagnose > this error? 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241972,241972#msg-241972 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Aug 18 07:59:20 2013 From: nginx-forum at nginx.us (ovidiu) Date: Sun, 18 Aug 2013 03:59:20 -0400 Subject: trouble building nginx from dotdeb In-Reply-To: <52107D1F.1080203@greengecko.co.nz> References: <52107D1F.1080203@greengecko.co.nz> Message-ID: Thanks, I knew about those instructions but I was trying to "build it the Debian way" :-( Found this page with some more instructions/hints: http://wiki.debian.org/IntroDebianPackaging but no luck. So I guess if nobody can help me do it this way, in a few days I'll give it a try with the instructions you linked to. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241972,241974#msg-241974 From edwinlee at proxyy.biz Sun Aug 18 08:08:18 2013 From: edwinlee at proxyy.biz (Edwin Lee) Date: Sun, 18 Aug 2013 16:08:18 +0800 (SGT) Subject: multiple nginx In-Reply-To: <23421484.72.1376811950433.JavaMail.root@mx1.proxyy.biz> Message-ID: <9442195.76.1376813298002.JavaMail.root@mx1.proxyy.biz> Hi, Is it alright to have two installations of nginx on the same machine? I have a running instance of nginx with PHP installed from the distribution package manager. Instead of writing another config, I would like to compile and install nginx from source code and run it as a second instance. The second instance is to optimize for load balancing, reverse proxy, cache and modsecurity. My concern is: would this break the system on Debian Squeeze? Thanks for answering.
Edwin Lee From shangtefa at gmail.com Sun Aug 18 10:10:58 2013 From: shangtefa at gmail.com (MCoder) Date: Sun, 18 Aug 2013 18:10:58 +0800 Subject: multiple nginx In-Reply-To: <9442195.76.1376813298002.JavaMail.root@mx1.proxyy.biz> References: <23421484.72.1376811950433.JavaMail.root@mx1.proxyy.biz> <9442195.76.1376813298002.JavaMail.root@mx1.proxyy.biz> Message-ID: You can specify the configuration file with the -c option, or even the prefix with -p, and you can compile another nginx instance with the --prefix configure option. 2013/8/18 Edwin Lee > Hi, > > Is is alright to have two installations of nginx on the same machine? > I have a running instance of nginx with php installed from distribution > package manager. > Instead of writing another config, I would like to compile and install > nginx from source code and run as second instance. > The second instance is to optimize for load balancing, reverse proxy, > cache and modsecurity. > > My concerns is would this break the system on debian squeeze? > > Thanks for answering. > > Edwin Lee > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From howachen at gmail.com Sun Aug 18 10:27:48 2013 From: howachen at gmail.com (howard chen) Date: Sun, 18 Aug 2013 18:27:48 +0800 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: Hi, Thanks for the insight. Finally I solved it by: if ($scheme = https) { gzip off; } Separating into two server sections requires duplicating rules like rewrite, which is cumbersome. Thanks anyway On Sat, Aug 17, 2013 at 8:43 PM, Igor Sysoev wrote: > On Aug 17, 2013, at 8:59 , howard chen wrote: > > Hi, > > As you know, due the breach attack (http://breachattack.com), HTTP > compression is no longer safe (I assume nginx don't use SSL compression by > default?), so we should disable it.
> > > Yes, modern nginx versions do not use SSL compression. > > Now, We are using config like the following: > > gzip on; > .. > > server { > listen 127.0.0.1:80 default_server; > listen 127.0.0.1:443 default_server ssl; > > > > With the need to split into two servers section, is it possible to turn > off gzip when we are using SSL? > > > You have to split the dual mode server section into two server server > sections and set "gzip off" > SSL-enabled on. There is no way to disable gzip in dual mode server > section, but if you really > worry about security in general the server sections should be different. > > > -- > Igor Sysoev > http://nginx.com/services.html > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From farseas at gmail.com Sun Aug 18 13:01:27 2013 From: farseas at gmail.com (Bob S.) Date: Sun, 18 Aug 2013 09:01:27 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: I thought that "if" statements slowed nginx down? On Sun, Aug 18, 2013 at 6:27 AM, howard chen wrote: > Hi, > > Thanks for the insight. > > Finally I solved by: > > if ($scheme = https) { > gzip off; > } > > Separating into two servers require to duplicate the rules like rewrite, > which is cumbersome. > > Thanks anyway > > > > > On Sat, Aug 17, 2013 at 8:43 PM, Igor Sysoev wrote: > >> On Aug 17, 2013, at 8:59 , howard chen wrote: >> >> Hi, >> >> As you know, due the breach attack (http://breachattack.com), HTTP >> compression is no longer safe (I assume nginx don't use SSL compression by >> default?), so we should disable it. >> >> >> Yes, modern nginx versions do not use SSL compression. >> >> Now, We are using config like the following: >> >> gzip on; >> .. 
>> >> server { >> listen 127.0.0.1:80 default_server; >> listen 127.0.0.1:443 default_server ssl; >> >> >> >> With the need to split into two servers section, is it possible to turn >> off gzip when we are using SSL? >> >> >> You have to split the dual mode server section into two server server >> sections and set "gzip off" >> SSL-enabled on. There is no way to disable gzip in dual mode server >> section, but if you really >> worry about security in general the server sections should be different. >> >> >> -- >> Igor Sysoev >> http://nginx.com/services.html >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From valery+nginxen at grid.net.ru Sun Aug 18 14:31:23 2013 From: valery+nginxen at grid.net.ru (Valery Kholodkov) Date: Sun, 18 Aug 2013 16:31:23 +0200 Subject: Nginx Web Server Q3 survey Message-ID: <5210DABB.5000404@grid.net.ru> Hi everyone! I would like to ask for 5 minutes of your time and participate in a survey that is intended to monitor current trends in Nginx community and suggest improvements to Nginx. To participate just visit this URL and use Facebook, Google accounts or your Email to login: http://survey.nginxguts.com Note that this survey is completely anonymous: your Email or social network ID will be used only to query if you already participated or not. In no way individual answers will be matched against your Personal Data. Should the survey accumulate data from enough participants, results will be published in Nginx Guts blog: http://www.nginxguts.com So stay tuned and thank you for your time! 
-- Best regards, Valery Kholodkov From nginx-forum at nginx.us Sun Aug 18 17:09:45 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 18 Aug 2013 13:09:45 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: Igor Sysoev Wrote: ------------------------------------------------------- > Yes, modern nginx versions do not use SSL compression. [...] > You have to split the dual mode server section into two server server > sections and set "gzip off" > SSL-enabled on. There is no way to disable gzip in dual mode server > section, but if you really > worry about security in general the server sections should be > different. If modern versions do not use ssl compression why split a dual mode server? If gzip is on in the http section, what happens then to the ssl section of a dual mode server? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241953,241984#msg-241984 From contact at jpluscplusm.com Sun Aug 18 17:15:22 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 18 Aug 2013 18:15:22 +0100 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: On 18 August 2013 18:09, itpp2012 wrote: > Igor Sysoev Wrote: > ------------------------------------------------------- >> Yes, modern nginx versions do not use SSL compression. > [...] >> You have to split the dual mode server section into two server server >> sections and set "gzip off" >> SSL-enabled on. There is no way to disable gzip in dual mode server >> section, but if you really >> worry about security in general the server sections should be >> different. > > If modern versions do not use ssl compression why split a dual mode server? > If gzip is on in the http section, what happens then to the ssl section of a > dual mode server? 
+1 From nurahmadie at gmail.com Sun Aug 18 17:46:47 2013 From: nurahmadie at gmail.com (Adie Nurahmadie) Date: Mon, 19 Aug 2013 00:46:47 +0700 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: I think you mistake ssl/tls level compression with gzip http compression, both are different. If you put gzip in http section, all server sections under this http will inherits this gzip config. This is why Igor recommends you to split the server config for SSL and non-SSL, and put 'gzip on' only at the non-SSL one. On Mon, Aug 19, 2013 at 12:15 AM, Jonathan Matthews wrote: > On 18 August 2013 18:09, itpp2012 wrote: > > Igor Sysoev Wrote: > > ------------------------------------------------------- > >> Yes, modern nginx versions do not use SSL compression. > > [...] > >> You have to split the dual mode server section into two server server > >> sections and set "gzip off" > >> SSL-enabled on. There is no way to disable gzip in dual mode server > >> section, but if you really > >> worry about security in general the server sections should be > >> different. > > > > If modern versions do not use ssl compression why split a dual mode > server? > > If gzip is on in the http section, what happens then to the ssl section > of a > > dual mode server? > > +1 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sun Aug 18 18:55:26 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 18 Aug 2013 14:55:26 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: This discussion started regarding concerns about the BREACH, which (if you documented about it) attacks SSL-encrypted HTTP-level-compressed data, thus implying the discussion around gzip. --- *B. 
R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Aug 18 19:14:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 18 Aug 2013 23:14:56 +0400 Subject: Nginx reload problem In-Reply-To: References: <20130817020928.GY2130@mdounin.ru> <20130817031616.GB2130@mdounin.ru> <20130817113756.GA76786@mdounin.ru> Message-ID: <20130818191456.GC76786@mdounin.ru> Hello! On Sat, Aug 17, 2013 at 12:36:38PM -0400, B.R. wrote: > Hello, > > > On Sat, Aug 17, 2013 at 7:37 AM, Maxim Dounin wrote: > > > Hello! > > > > I don't think that calling "nginx -t" as a mandatory step before > > configuration reload is a good idea: nginx binary running and > > nginx binary on disk might be different, and "nginx -t" result > > might be incorrect because of this, in some cases rejecting valid > > configurations. > > > > Additionally, it does duplicate work by parsing/loading a > > configuration which will be again parsed by a master process > > during configuration reload. While in most cases it's not > > significant, I've seen configurations taking more than 1m to load > > due to big geo module bases used. > > > > ??In that case, the server admin has a problem, since he has no way to test > the configuration other the calling 'reload' on the running instance and > check the logs for errors, hoping they are not already crawling under > production-related log messages... > One way or another, you test the configuration against an existing binary > because you want to start or reload this binary with the conf. There is no > point in having a running instance having already deleted its disk binary > file: If you are in transition between 2? > > ?versions of Nginx, you shouldn't also make big changes to the conf... > That's a 2-steps procedures I'd say?: One thing at a time. 
Whether you make changes to the configuration isn't what's significant: even without any changes at all, a new binary on disk might not consider an old configuration valid, e.g. due to some module not being compiled in. And a reload might be required for various external reasons. I don't say it's a normal situation, but it's possible, and the proposed change to the init script would prevent the init script from working in such a situation. > Testing the conf is of course a duplication of work, but it's a safe operation. > The command output will determine if your new configuration will work > without having to carefully watch logs with anxiety. As I already tried to explain, watching the logs is required anyway. > > There is the "upgrade" command in the init script shipped with > > nginx.org linux packages. > > > > Ok, so could Li have used the 'upgrade' command instead of 'reload' to > reload the configuration and change the allocated memory? Yes. -- Maxim Dounin http://nginx.org/en/donation.html From paulnpace at gmail.com Sun Aug 18 19:31:43 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Sun, 18 Aug 2013 12:31:43 -0700 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: Igor said: >You have to split the dual mode server section into two server sections and set "gzip off" in the >SSL-enabled one. There is no way to disable gzip in a dual mode server section, but if you really >worry about security in general the server sections should be different. Adie said: >This is why Igor recommends you to split the server config for SSL and non-SSL, and put 'gzip >on' only at the non-SSL one. So I can be clear: I have 'gzip_vary on' in my http block, and in subsequent HTTPS blocks (I separate HTTP from HTTPS) I have 'gzip_vary' off. Am I doing it right? From paulnpace at gmail.com Sun Aug 18 19:37:22 2013 From: paulnpace at gmail.com (Paul N.
Pace) Date: Sun, 18 Aug 2013 12:37:22 -0700 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: On Sun, Aug 18, 2013 at 12:31 PM, Paul N. Pace wrote: > Igor said: >>You have to split the dual mode server section into two server sections and set "gzip off" in the >>SSL-enabled one. There is no way to disable gzip in a dual mode server section, but if you really >>worry about security in general the server sections should be different. > > Adie said: >>This is why Igor recommends you to split the server config for SSL and non-SSL, and put 'gzip >>on' only at the non-SSL one. > > So I can be clear: I have 'gzip_vary on' in my http block, and in > subsequent HTTPS blocks (I separate HTTP from HTTPS) I have > 'gzip_vary' off. Am I doing it right? 'gzip_vary' was supposed to be 'gzip' From steve at greengecko.co.nz Sun Aug 18 20:28:14 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 19 Aug 2013 08:28:14 +1200 Subject: multiple nginx In-Reply-To: <9442195.76.1376813298002.JavaMail.root@mx1.proxyy.biz> References: <9442195.76.1376813298002.JavaMail.root@mx1.proxyy.biz> Message-ID: <1376857694.21922.158.camel@steve-new> On Sun, 2013-08-18 at 16:08 +0800, Edwin Lee wrote: > Hi, > > Is it alright to have two installations of nginx on the same machine? > I have a running instance of nginx with PHP, installed from the distribution package manager. > Instead of writing another config, I would like to compile and install nginx from source code and run it as a second instance. > The second instance is to be optimized for load balancing, reverse proxying, caching and ModSecurity. > > My concern is: would this break the system on Debian Squeeze? > > Thanks for answering. > > Edwin Lee > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Yes, it's perfectly ok to do so.
Make sure you're installing into a separate location (eg /usr/local), and you'll need a separate startup script. You cannot share port/IP address pairs though. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Sun Aug 18 20:48:48 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 18 Aug 2013 16:48:48 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: I think we could all benefit from an nginx recommendation on using gzip with single and dual mode server sections regarding a hardening approach against BREACH. Maxim? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241953,241993#msg-241993 From reallfqq-nginx at yahoo.fr Sun Aug 18 21:12:26 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 18 Aug 2013 17:12:26 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: Hello, On Sun, Aug 18, 2013 at 4:48 PM, itpp2012 wrote: > I think we could all benefit from an nginx recommendation on using gzip with > single and dual mode server sections regarding a hardening approach against > BREACH. Maxim? > As Igor advised, two different server blocks to serve HTTP & HTTPS requests are preferred: server { listen 80; server_name inter.net; include inter.net_shared_http_https_content.conf; # Conf specific to HTTP content delivery here } server { listen 443 ssl; server_name inter.net; include inter.net_shared_http_https_content.conf; # Conf specific to HTTPS content delivery here } If you read the docs for the gzip directive, you'd notice that its default value is 'off', so if you don't mention 'gzip on' anywhere in your conf tree for the servers in question, there'll be no HTTP compression. Thus, if you kept your server configuration minimal and didn't explicitly activate gzip compression somewhere, you are safe by default.
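A minimal, fully explicit version of the split described above (a sketch only; the hostname and certificate paths are hypothetical, not from the thread): gzip is enabled only in the plain-HTTP server block, so responses served over TLS are never HTTP-compressed:

```nginx
# Sketch: split a dual-mode config into two server blocks so that
# "gzip on" applies exclusively to the non-TLS side (BREACH hardening).
http {
    server {
        listen 80;
        server_name example.com;                # hypothetical
        gzip on;                                # compression is safe without TLS
    }

    server {
        listen 443 ssl;
        server_name example.com;                # hypothetical
        ssl_certificate     /path/to/cert.pem;  # placeholder paths
        ssl_certificate_key /path/to/key.pem;
        # no "gzip on" here: gzip defaults to off, so TLS responses
        # are sent uncompressed
    }
}
```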
You couldn't be safer, as the only way you could be exposed would be through a lack of control/understanding of the directives *you explicitly put* into your server(s) configuration. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sun Aug 18 21:29:11 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 18 Aug 2013 17:29:11 -0400 Subject: Nginx reload problem In-Reply-To: <20130818191456.GC76786@mdounin.ru> References: <20130817020928.GY2130@mdounin.ru> <20130817031616.GB2130@mdounin.ru> <20130817113756.GA76786@mdounin.ru> <20130818191456.GC76786@mdounin.ru> Message-ID: Hello, On Sun, Aug 18, 2013 at 3:14 PM, Maxim Dounin wrote: > > Whether you make changes to the configuration isn't what's > significant: even without any changes at all, a new binary on disk might > not consider an old configuration valid, e.g. due to some > module not being compiled in. And a reload might be required for > various external reasons. > > I don't say it's a normal situation, but it's possible, and > the proposed change to the init script would prevent the init script from > working in such a situation. > OK, I think I got it. 'reload' deals with a running instance while 'upgrade' starts a new one from the binary on disk, so it makes sense to check the configuration against the binary when upgrading but not when reloading, in case the binary on disk changed in between. The latter is a pretty weird scenario (since you change the binary when you want to upgrade something, which won't result in a 'reload' call), though possible... Your decision makes sense and is the safest. Thanks for enlightening me on that. > > > Testing the conf is of course a duplication of work, but it's a safe > operation. > > The command output will determine if your new configuration will work > > without having to carefully watch logs with anxiety. > > As I already tried to explain, watching the logs is required anyway. > ...
if you had changes between the binary on disk and the one being run. Which is highly unlikely to happen, as calling 'reload' on the current process would mean applying the configuration made for the new binary to the old running one (which needs to be replaced ASAP since it can't survive a server restart...). But yeah, in that weird case, you'll watch the logs. > > > There is the "upgrade" command in the init script shipped with > > > nginx.org linux packages. > > > > > > > Ok, so could Li have used the 'upgrade' command instead of 'reload' to > > reload the configuration and change the allocated memory? > > Yes. > Thanks. Your input has been much appreciated, as always... --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Aug 18 22:49:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Aug 2013 02:49:03 +0400 Subject: Nginx reload problem In-Reply-To: References: <20130817020928.GY2130@mdounin.ru> <20130817031616.GB2130@mdounin.ru> <20130817113756.GA76786@mdounin.ru> <20130818191456.GC76786@mdounin.ru> Message-ID: <20130818224903.GG76786@mdounin.ru> Hello! On Sun, Aug 18, 2013 at 05:29:11PM -0400, B.R. wrote: [...] > > > Testing the conf is of course a duplication of work, but it's a safe > > operation. > > > The command output will determine if your new configuration will work > > > without having to carefully watch logs with anxiety. > > > > As I already tried to explain, watching the logs is required anyway. > > > > ... if you had changes between the binary on disk and the one being run. > Which is highly unlikely to happen, as calling 'reload' on the current > process would mean applying the configuration made for the new binary to > the old running one (which needs to be replaced ASAP since it can't survive > a server restart...). But yeah, in that weird case, you'll watch the > logs.
Quote from my first messages in this thread: : What is very wrong is to assume that sending : a HUP signal to nginx is enough for a reload. For various : reasons, ranging from configuration syntax errors to out of memory : problems, configuration reload might fail. Even if the binary on disk matches the one in memory, and the configuration syntax is ok - it's always possible that the system will run out of memory or file descriptors (or will reach per-process limits), or newly configured listening sockets will conflict with some other services running (or with previously configured sockets in the case of Linux, which doesn't allow wildcard and non-wildcard sockets to coexist), or something else will happen (including cases when a reload is not possible due to the requested configuration changes, see the original question). Questions like "why can't nginx reload the configuration" appear in mailing lists on a regular basis. In addition to this thread, this week it was seen at least once in nginx-ru@, see [1]. The correct answer is usually the same - "Try looking into the error log". [1] http://mailman.nginx.org/pipermail/nginx-ru/2013-August/051677.html -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sun Aug 18 23:21:58 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Aug 2013 03:21:58 +0400 Subject: ssl_cipher for mail not working In-Reply-To: <1c57a7ca7627379cb969524c90db5f49.NginxMailingListEnglish@forum.nginx.org> References: <1c57a7ca7627379cb969524c90db5f49.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130818232158.GH76786@mdounin.ru> Hello! On Wed, Aug 14, 2013 at 06:56:32AM -0400, MKl wrote: > Hello, > > to increase the security of SSL I added some elliptic-curve ciphers to the > chain. For HTTPS it's working fine, but for the mail proxy it does not work; > I always get RC4-SHA instead of the ECDH ciphers. > See configuration at the end of this message.
> > I'm testing it with: > openssl s_client -cipher 'ECDH:DH' -connect domain.de:443 > openssl s_client -cipher 'ECDH:DH' -connect imap.domain.de:993 > > The first command gives me a successful connection with ECDHE-RSA-RC4-SHA, > so for HTTPS the cipher list is used. The second command fails with an error: > "sslv3 alert handshake failure", the IMAPS server does not provide ECDH > support. I used exactly the same ssl_cipher line for HTTPS and the mail > proxy. > > When using the following command without forcing any ciphers on the client I > can see that RC4-SHA is the "best" cipher that is supported and used: > openssl s_client -connect imap.domain.de:993 > > Does anybody have an idea where the problem is? Looks like the problem is fixed by this changeset: http://trac.nginx.org/nginx/changeset/32fe021911c9/nginx Should work fine in nginx 1.5.1+. [...] -- Maxim Dounin http://nginx.org/en/donation.html From igor at sysoev.ru Mon Aug 19 04:41:41 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 19 Aug 2013 08:41:41 +0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: <5AA2AEF3-D0E1-420C-B483-1DEE6F4F096D@sysoev.ru> On Aug 18, 2013, at 21:09 , itpp2012 wrote: > Igor Sysoev Wrote: > ------------------------------------------------------- >> Yes, modern nginx versions do not use SSL compression. > [...] >> You have to split the dual mode server section into two server >> sections and set "gzip off" in the >> SSL-enabled one. There is no way to disable gzip in a dual mode server >> section, but if you really >> worry about security in general the server sections should be >> different. > > If modern versions do not use ssl compression why split a dual mode server? > If gzip is on in the http section, what happens then to the ssl section of a > dual mode server? These are different vulnerabilities: SSL compression is subject to the CRIME vulnerability while HTTP/SSL compression is subject to the BREACH vulnerability.
-- Igor Sysoev http://nginx.com/services.html From igor at sysoev.ru Mon Aug 19 04:43:02 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 19 Aug 2013 08:43:02 +0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: On Aug 18, 2013, at 14:27 , howard chen wrote: > Hi, > > Thanks for the insight. > > Finally I solved it by: > > if ($scheme = https) { > gzip off; > } This does not work at the server level. And at the location level it may work in the wrong way. > Separating into two servers requires duplicating rules like rewrite, which is cumbersome. I believe that a dual mode server block may be subject to vulnerabilities due to the site map, so BREACH is the least of them. -- Igor Sysoev http://nginx.com/services.html > On Sat, Aug 17, 2013 at 8:43 PM, Igor Sysoev wrote: > On Aug 17, 2013, at 8:59 , howard chen wrote: > >> Hi, >> >> As you know, due to the BREACH attack (http://breachattack.com), HTTP compression is no longer safe (I assume nginx doesn't use SSL compression by default?), so we should disable it. > > Yes, modern nginx versions do not use SSL compression. > >> Now, we are using a config like the following: >> >> gzip on; >> .. >> >> server { >> listen 127.0.0.1:80 default_server; >> listen 127.0.0.1:443 default_server ssl; >> >> >> >> Without the need to split into two server sections, is it possible to turn off gzip when we are using SSL? > > > You have to split the dual mode server section into two server sections and set "gzip off" in the > SSL-enabled one. There is no way to disable gzip in a dual mode server section, but if you really > worry about security in general the server sections should be different. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Aug 19 05:56:18 2013 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 19 Aug 2013 01:56:18 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: <5AA2AEF3-D0E1-420C-B483-1DEE6F4F096D@sysoev.ru> References: <5AA2AEF3-D0E1-420C-B483-1DEE6F4F096D@sysoev.ru> Message-ID: On Mon, Aug 19, 2013 at 12:41 AM, Igor Sysoev wrote: > > These are different vulnerabilities: SSL compression is subject to > the CRIME vulnerability while HTTP/SSL compression is subject to the BREACH > vulnerability. > Incorrect. CRIME attacks a vulnerability in the implementation of SSLv3 and TLS 1.0 using a CBC flaw: the IV was guessable. The other vulnerability was a facilitator to automatically inject arbitrary content (so attackers could inject whatever they wish for their trial-and-error attack). CRIME conclusion is: use TLS v1.1 or later (not greater than v1.2 for now). BREACH attacks the fact that compressed HTTP content encrypted with SSL makes it easy to guess a known existing header field from the request that is repeated in the (encrypted) answer by looking at the size of the body. BREACH conclusion is: don't use HTTP compression underneath SSL encryption. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Mon Aug 19 06:04:23 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 19 Aug 2013 10:04:23 +0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: <5AA2AEF3-D0E1-420C-B483-1DEE6F4F096D@sysoev.ru> Message-ID: <664E0335-2A3C-4167-96C2-022F877B5072@sysoev.ru> On Aug 19, 2013, at 9:56 , B.R. wrote: > On Mon, Aug 19, 2013 at 12:41 AM, Igor Sysoev wrote: > > These are different vulnerabilities: SSL compression is subject to > the CRIME vulnerability while HTTP/SSL compression is subject to the BREACH > vulnerability. > > Incorrect. > > CRIME attacks a vulnerability in the implementation of SSLv3 and TLS 1.0 using a CBC flaw: the IV was guessable.
The other vulnerability was a facilitator to automatically inject arbitrary content (so attackers could inject whatever they wish for their trial-and-error attack). > CRIME conclusion is: use TLS v1.1 or later (not greater than v1.2 for now). You probably mixed it up with BEAST. > BREACH attacks the fact that compressed HTTP content encrypted with SSL makes it easy to guess a known existing header field from the request that is repeated in the (encrypted) answer by looking at the size of the body. > BREACH conclusion is: don't use HTTP compression underneath SSL encryption. -- Igor Sysoev http://nginx.com/services.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 19 06:05:00 2013 From: nginx-forum at nginx.us (ronin) Date: Mon, 19 Aug 2013 02:05:00 -0400 Subject: Sub-domain filtering Message-ID: <896ab758371eeda4910ecbbbf84a7b92.NginxMailingListEnglish@forum.nginx.org> The statement I am using is: if ($host != www.mj.com|ci.mj.com) { rewrite ^/(.*)$ http://www.mj.com/$1 permanent; } This causes a redirect loop ("this page contains a redirect loop") and the site cannot be accessed. How can I handle this statement so that it is compatible with multiple subdomains? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242007,242007#msg-242007 From igor at sysoev.ru Mon Aug 19 06:07:24 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 19 Aug 2013 10:07:24 +0400 Subject: Recommendations for safeguarding against BREACH ? In-Reply-To: References: <301EB14A-56C5-4CA4-B198-E190394C17C9@sysoev.ru> Message-ID: <4DFB4BC1-F501-44E5-BA18-EDF2C144334D@sysoev.ru> On Aug 12, 2013, at 21:32 , offmind wrote: > And what if we are using gzip_static? > As far as I understand, we have to block gzipping of page code. But what about > .js and .css with no secure content? Statically gzipped files do not depend on user input so they are not subject to BREACH.
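Igor's point about gzip_static can be illustrated with a small sketch (the hostname and paths are hypothetical): pre-compressed files are built once, so their compressed size never varies with attacker-influenced request data:

```nginx
# Sketch: serve pre-built "$uri.gz" files instead of compressing on the fly.
# Static assets do not echo request data, so BREACH does not apply to them.
server {
    listen 443 ssl;
    server_name example.com;    # hypothetical

    location /static/ {
        gzip_static on;   # try foo.js.gz before foo.js for gzip-capable clients
        gzip off;         # keep dynamic compression disabled over TLS
    }
}
```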
-- Igor Sysoev http://nginx.com/services.html From contact at jpluscplusm.com Mon Aug 19 07:13:09 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 19 Aug 2013 08:13:09 +0100 Subject: Sub-domain filtering In-Reply-To: <896ab758371eeda4910ecbbbf84a7b92.NginxMailingListEnglish@forum.nginx.org> References: <896ab758371eeda4910ecbbbf84a7b92.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 19 Aug 2013 07:05, "ronin" wrote: > > The statement I am using is: > if ($host != www.mj.com|ci.mj.com) { rewrite ^/(.*)$ http://www.mj.com/$1 > permanent; } > This causes a redirect loop ("this page contains a redirect loop") and the site > cannot be accessed. How can I handle this statement so that it is compatible with > multiple subdomains? Thank you. Use a separate server{} for the redirection-only vhosts. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 19 10:04:49 2013 From: nginx-forum at nginx.us (MKl) Date: Mon, 19 Aug 2013 06:04:49 -0400 Subject: ssl_cipher for mail not working In-Reply-To: <20130818232158.GH76786@mdounin.ru> References: <20130818232158.GH76786@mdounin.ru> Message-ID: <9371e3aae0150272c563bbf5f50b398f.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Looks like the problem is fixed by this changeset: > > http://trac.nginx.org/nginx/changeset/32fe021911c9/nginx > > Should work fine in nginx 1.5.1+. Hi Maxim, thanks for your answer! I will try this later. Will this also be merged into the 1.4 or even the 1.2 branch, since it's a bugfix and not a new feature? Currently we have the problem that the upload-module and upload-progress-module are not working with >=1.4, so we are still on the 1.2 branch. Thank you again for your help and work on nginx!
Michael Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241834,242014#msg-242014 From mdounin at mdounin.ru Mon Aug 19 10:56:42 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Aug 2013 14:56:42 +0400 Subject: ssl_cipher for mail not working In-Reply-To: <9371e3aae0150272c563bbf5f50b398f.NginxMailingListEnglish@forum.nginx.org> References: <20130818232158.GH76786@mdounin.ru> <9371e3aae0150272c563bbf5f50b398f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130819105642.GC705@mdounin.ru> Hello! On Mon, Aug 19, 2013 at 06:04:49AM -0400, MKl wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Looks like the problem is fixed by this changeset: > > > > http://trac.nginx.org/nginx/changeset/32fe021911c9/nginx > > > > Should work fine in nginx 1.5.1+. > > Hi Maxim, > > thanks for your answer! I will try this later. > Will this also be merged into the 1.4 or even the 1.2 branch, since it's a bugfix > and not a new feature? Certainly not into 1.2.x, it's obsolete. Most likely not into 1.4.x, as it never worked in previous versions. -- Maxim Dounin http://nginx.org/en/donation.html From phofstetter at sensational.ch Mon Aug 19 12:07:09 2013 From: phofstetter at sensational.ch (Philip Hofstetter) Date: Mon, 19 Aug 2013 14:07:09 +0200 Subject: nginx 1.4.1 - slow transfers / connection resets Message-ID: Hi, I have an nginx (stock Ubuntu config) as a reverse proxy in front of a haproxy in front of 5 more nginx machines which use FastCGI to talk to php-fpm. My issue is with the frontend proxy and long-running, veeeeery slowwwww requests. The clients are very underpowered mobile barcode scanners using 2G GSM connections. When they try to download 2.1 MB of data dynamically generated by PHP on the backend, the frontend will close the connection after ~1MB has been downloaded (at ~2 KBytes/s).
I can reproduce the same behavior using curl (with --limit-rate 2K): % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 1196k 0 1196k 0 343 2047 0 --:--:-- 0:09:58 --:--:-- 1888 curl: (56) Recv failure: Connection reset by peer The access log on the frontend server lists a 200 status code but too few transmitted bytes. The error log (on info) shows 2013/08/19 14:03:36 [info] 32469#0: *1166 client timed out (110: Connection timed out) while sending to client, client: xxx.xxx.xxx.xxx Which is not true - it's showing that while curl (--limit-rate 2K) is still running. Can you give me any pointers on how to debug/fix this? Philip -- Sensational AG Giesshübelstrasse 62c, Postfach 1966, 8021 Zürich Tel. +41 43 544 09 60, Mobile +41 79 341 01 99 info at sensational.ch, http://www.sensational.ch From mdounin at mdounin.ru Mon Aug 19 12:17:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Aug 2013 16:17:21 +0400 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: Message-ID: <20130819121721.GJ705@mdounin.ru> Hello! On Mon, Aug 19, 2013 at 02:07:09PM +0200, Philip Hofstetter wrote: > Hi, > > I have an nginx (stock Ubuntu config) as a reverse proxy in front of a > haproxy in front of 5 more nginx machines which use FastCGI to talk to > php-fpm. > > My issue is with the frontend proxy and long-running, veeeeery > slowwwww requests. > > The clients are very underpowered mobile barcode scanners using 2G GSM > connections. When they try to download 2.1 MB of data dynamically > generated by PHP on the backend, the frontend will close the > connection after ~1MB has been downloaded (at ~2 KBytes/s).
> > I can reproduce the same behavior using curl (with --limit-rate 2K): > > % Total % Received % Xferd Average Speed Time Time Time Current > Dload Upload Total Spent Left Speed > 100 1196k 0 1196k 0 343 2047 0 --:--:-- 0:09:58 --:--:-- 1888 > curl: (56) Recv failure: Connection reset by peer > > The access log on the frontend server lists a 200 status code but too > few transmitted bytes. > > The error log (on info) shows > > 2013/08/19 14:03:36 [info] 32469#0: *1166 client timed out (110: > Connection timed out) while sending to client, client: xxx.xxx.xxx.xxx > > Which is not true - it's showing that while curl (--limit-rate 2K) is > still running. > > Can you give me any pointers on how to debug/fix this? Debug log should be helpful, see http://nginx.org/en/docs/debugging_log.html. -- Maxim Dounin http://nginx.org/en/donation.html From reallfqq-nginx at yahoo.fr Mon Aug 19 15:46:04 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 19 Aug 2013 11:46:04 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: <664E0335-2A3C-4167-96C2-022F877B5072@sysoev.ru> References: <5AA2AEF3-D0E1-420C-B483-1DEE6F4F096D@sysoev.ru> <664E0335-2A3C-4167-96C2-022F877B5072@sysoev.ru> Message-ID: On Mon, Aug 19, 2013 at 2:04 AM, Igor Sysoev wrote: > Incorrect. > > CRIME attacks a vulnerability in the implementation of SSLv3 and TLS 1.0 > using a CBC flaw: the IV was guessable. The other vulnerability was a > facilitator to automatically inject arbitrary content (so attackers could > inject whatever they wish for their trial-and-error attack). > CRIME conclusion is: use TLS v1.1 or later (not greater than v1.2 for now). > > > You probably mixed it up with BEAST. > You're right. I mixed up things... --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From luky-37 at hotmail.com Mon Aug 19 17:05:10 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 19 Aug 2013 19:05:10 +0200 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: Message-ID: Hi, > % Total % Received % Xferd Average Speed Time Time Time Current > Dload Upload Total Spent Left Speed > 100 1196k 0 1196k 0 343 2047 0 --:--:-- 0:09:58 --:--:-- 1888 > curl: (56) Recv failure: Connection reset by peer Looks like there is some timeout at 600 seconds (Time Spent: 0:09:58)? Any match in the haproxy or nginx configurations? > I have an nginx (stock Ubuntu config) as a reverse proxy in front of a > haproxy in front of 5 more nginx machines which use FastCGI to talk to > php-fpm. Since you can reproduce it with curl, why not track the issue down to a specific part of your infrastructure (try on the nginx backends first, then on the haproxy box, and then on the frontend nginx box). Lukas From nginx-forum at nginx.us Mon Aug 19 19:56:27 2013 From: nginx-forum at nginx.us (justin) Date: Mon, 19 Aug 2013 15:56:27 -0400 Subject: Proxying requests based on $http_authorization (API Key) Message-ID: Hello. We are looking to proxy requests to different backends using upstream, based on HTTP basic auth, i.e. the API key of the request. I am thinking I need to first get the API key from the raw HTTP request ($http_authorization), then do a lookup in Redis for the backend to forward to. We are already using OpenResty. Can we do this natively, or even better yet, are there 3rd party modules we can utilize to help with this? Is using Lua required, or can we utilize standard nginx directives? Thanks for the help and insights. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242036,242036#msg-242036 From paulnpace at gmail.com Mon Aug 19 21:53:36 2013 From: paulnpace at gmail.com (Paul N.
Pace) Date: Mon, 19 Aug 2013 14:53:36 -0700 Subject: Piwik conf file Message-ID: I have recently discovered this wonderful include directive and I am using it to clean up my server blocks. Doing this has forced me to evaluate some of my configurations. I am trying to set up a conf file for Piwik installations and I'm hoping a second set of eyes can help: location /piwik/ { location /js/ { allow all; } location ~ /js/.*\.php$ { include /etc/nginx/global-configs/php.conf; } location ~ /piwik.php$ { include /etc/nginx/global-configs/php.conf; } return 301 https://server_name$request_uri?; } Piwik seems trickier than other applications because certain components must be available through HTTP sessions or else browsers give scary warnings or don't load the tracking code, but I want to force the Piwik dashboard to open in HTTPS. Any comments appreciated. Thanks! Paul From phofstetter at sensational.ch Tue Aug 20 07:14:18 2013 From: phofstetter at sensational.ch (Philip Hofstetter) Date: Tue, 20 Aug 2013 09:14:18 +0200 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: Message-ID: Hi, On Mon, Aug 19, 2013 at 7:05 PM, Lukas Tribus wrote: > Looks like there is some timeout at 600 seconds (Time Spent: 0:09:58)? Any match > in the haproxy or nginx configurations? That's consistent with what nginx is logging to the error log. But it doesn't make sense, as there is data being transmitted. >> I have an nginx (stock Ubuntu config) as a reverse proxy in front of a >> haproxy in front of 5 more nginx machines which use FastCGI to talk to >> php-fpm. > > Since you can reproduce it with curl, why not track the issue down to a > specific part of your infrastructure (try on the nginx backends first, > then on the haproxy box, and then on the frontend nginx box). That's what I've done before coming here. No issues on either haproxy or the backend.
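The debug log that Maxim suggested can be scoped to a single location so the rest of the site keeps a quiet log; a minimal sketch (the hostname, paths and upstream are hypothetical, and the nginx binary must be built with --with-debug):

```nginx
# Sketch: raise log verbosity to "debug" only for the traffic being
# investigated; other requests keep the quieter "info" log.
# Requires an nginx binary built with --with-debug.
server {
    listen 80;
    server_name example.com;                   # hypothetical

    error_log /var/log/nginx/error.log info;   # normal verbosity elsewhere

    location /downloads/ {                     # hypothetical slow path
        error_log /var/log/nginx/debug.log debug;
        proxy_pass http://127.0.0.1:8080;      # hypothetical haproxy upstream
    }
}
```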
Here's the debug log of just the failing request (thanks, nginx, for making error_log a directive that can be used in a location block): http://www.gnegg.ch/debug.log Philip -- Sensational AG Giesshübelstrasse 62c, Postfach 1966, 8021 Zürich Tel. +41 43 544 09 60, Mobile +41 79 341 01 99 info at sensational.ch, http://www.sensational.ch From phofstetter at sensational.ch Tue Aug 20 07:23:57 2013 From: phofstetter at sensational.ch (Philip Hofstetter) Date: Tue, 20 Aug 2013 09:23:57 +0200 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: Message-ID: The last debug log I sent is not showing the full picture. In this case, I was aborting the curl command once nginx had logged an incomplete response (status=200 but too short a length) to the access log, but while it was still transferring data (how's that even possible?). Hence the "connection reset by peer" in the log. I'm now making a second log, this time waiting it out. I will also produce a third log transferring a static file from the backend nginx in order to rule out fastcgi issues. Philip On Tue, Aug 20, 2013 at 9:14 AM, Philip Hofstetter wrote: > Hi, > > On Mon, Aug 19, 2013 at 7:05 PM, Lukas Tribus wrote: > >> Looks like there is some timeout at 600 seconds (Time Spent: 0:09:58)? Any match >> in the haproxy or nginx configurations? > > That's consistent with what nginx is logging to the error log. But it > doesn't make sense, as there is data being transmitted. > >>> I have an nginx (stock Ubuntu config) as a reverse proxy in front of a >>> haproxy in front of 5 more nginx machines which use FastCGI to talk to >>> php-fpm. >> >> Since you can reproduce it with curl, why not track the issue down to a >> specific part of your infrastructure (try on the nginx backends first, >> then on the haproxy box, and then on the frontend nginx box). > > That's what I've done before coming here. No issues on either haproxy > or the backend.
> > Here's the debug log of just the failing request (thanks, nginx, for > making error_log a directive that can be used in a location block): > > http://www.gnegg.ch/debug.log > > Philip > > -- > Sensational AG > Giesshübelstrasse 62c, Postfach 1966, 8021 Zürich > Tel. +41 43 544 09 60, Mobile +41 79 341 01 99 > info at sensational.ch, http://www.sensational.ch -- Sensational AG Giesshübelstrasse 62c, Postfach 1966, 8021 Zürich Tel. +41 43 544 09 60, Mobile +41 79 341 01 99 info at sensational.ch, http://www.sensational.ch From phofstetter at sensational.ch Tue Aug 20 07:49:32 2013 From: phofstetter at sensational.ch (Philip Hofstetter) Date: Tue, 20 Aug 2013 09:49:32 +0200 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: Message-ID: Ok. I have three debug logs now: http://www.gnegg.ch/debug-cancel.log is the first log I created where I quit curl once nginx has logged a 200 status with a truncated length to the access log (how can it log success while it's still transmitting data?) http://www.gnegg.ch/debug-full.log is the same request, but this time waiting for curl to complain about the connection reset. Again, nginx logs a 200 with truncated length (way before curl bails out) http://www.gnegg.ch/debug-staticfile.log is me downloading a static file from one of the backend servers. This shows the same behavior as the dynamically generated response and helps rule out fastcgi issues. To add a further note: The machine which shows this issue is under considerable load. When I run the tests against an identical machine which is not under load, the download runs correctly (until I do put it under load at which point it fails the same way). The fact that nginx logs the request as successful (but truncated) while it's still ongoing does kinda point to a kernel issue, but I'm really just guessing at this point. Philip On Tue, Aug 20, 2013 at 9:23 AM, Philip Hofstetter wrote: > The last debug log I sent is not showing the full picture. 
In this > case, I was aborting the curl command once nginx has logged an > incomplete response (status=200 but too short length) to access.log, > but while it was still transferring data (how's that even possible)? > > Hence the "connection reset by peer" in the log. > > I'm now making a second log, this time waiting it out. I will also > produce a third log transferring a static file from the backend nginx > in order to rule out fastcgi issues. > > Philip > > On Tue, Aug 20, 2013 at 9:14 AM, Philip Hofstetter > wrote: >> Hi, >> >> >> On Mon, Aug 19, 2013 at 7:05 PM, Lukas Tribus wrote: >> >>> Looks like there is some timeout at 600 seconds (Time Spent: 0:09:58)? Any match >>> in the haproxy or nginx configurations? >> >> That's consistent with what nginx is logging to the error log. But it >> doesn't make sense as there is data being transmitted. >> >>>> I have a nginx (stock ubuntu config) as a reverse proxy in front of a >>>> haproxy in front of 5 more nginx machines which use fastcgi to talk to >>>> php-fpm. >>> >>> Since you can reproduce it with curl, why not track the issue down to a >>> specific part of your infrastructure (try on the nginx backends first, >>> then on the haproxy box, and then on the frontent nginx box). >> >> That's what I've done before coming here. No issues on either haproxy >> or the backend. >> >> Here's the debug log of just the failing request (thanks, nginx, for >> making error_log a directive that can be used in a location block): >> >> http://www.gnegg.ch/debug.log >> >> Philip >> >> -- >> Sensational AG >> Giessh?belstrasse 62c, Postfach 1966, 8021 Z?rich >> Tel. +41 43 544 09 60, Mobile +41 79 341 01 99 >> info at sensational.ch, http://www.sensational.ch > > > > -- > Sensational AG > Giessh?belstrasse 62c, Postfach 1966, 8021 Z?rich > Tel. +41 43 544 09 60, Mobile +41 79 341 01 99 > info at sensational.ch, http://www.sensational.ch -- Sensational AG Giessh?belstrasse 62c, Postfach 1966, 8021 Z?rich Tel. 
+41 43 544 09 60, Mobile +41 79 341 01 99 info at sensational.ch, http://www.sensational.ch From mdounin at mdounin.ru Tue Aug 20 11:26:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Aug 2013 15:26:47 +0400 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: Message-ID: <20130820112647.GD19334@mdounin.ru> Hello! On Tue, Aug 20, 2013 at 09:49:32AM +0200, Philip Hofstetter wrote: > Ok. I have three debug logs now: > > http://www.gnegg.ch/debug-cancel.log > is the first log I created where I quit curl once nginx has logged a > 200 status with a truncated length to the access log (how can it log > success while it's still transmitting data?) A http status code nginx logs to access log corresponds to the code sent to a client. As the code was already sent at the time the problem was detected - it's 200. > http://www.gnegg.ch/debug-full.log > is the same request, but this time waiting for curl to complain about > the connection reset. Again, nginx logs a 200 with truncated length > (way before curl bails out) > > http://www.gnegg.ch/debug-staticfile.log > Is me downloading a static file from one of the backend servers. This > shows the same behavior as the dynamically generated response and > helps ruling out fastcgi issues. > > To add a further note: The machine which shows this issue is under > considerable load. When I run the tests against and identical machine > which is not under load, the download runs correctly (until I do put > it under load at which point it fails the same way). > > The fact that nginx logs the request as successful (but truncated) > while it's still ongoing does kinda point to a kernel issue, but I'm > really just guessing at this point. Both full logs show that nothing happens in 60 seconds (while there are unsent data pending): 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http output filter "/index.php/winclient/gnegg?" 
2013/08/20 09:33:31 [debug] 1692#0: *1101651 http copy filter: "/index.php/winclient/gnegg?" 2013/08/20 09:33:31 [debug] 1692#0: *1101651 image filter 2013/08/20 09:33:31 [debug] 1692#0: *1101651 xslt filter body 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http postpone filter "/index.php/winclient/gnegg?" 00000000022A7218 2013/08/20 09:33:31 [debug] 1692#0: *1101651 write new buf t:0 f:0 0000000000000000, pos 000000000231CAF0, size: 4096 file: 0, size: 0 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http write filter: l:0 f:1 s:4096 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http write filter limit 0 2013/08/20 09:33:31 [debug] 1692#0: *1101651 writev: 1953 Note: only 1953 of 4096 bytes were sent. 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http write filter 00000000022A7228 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http copy filter: -2 "/index.php/winclient/gnegg?" 2013/08/20 09:33:31 [debug] 1692#0: *1101651 event timer del: 141: 1376984038781 2013/08/20 09:33:31 [debug] 1692#0: *1101651 event timer add: 141: 60000:1376984071388 Note: timer was set to timeout after 60 seconds. 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream request: "/index.php/winclient/gnegg?" 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream process non buffered upstream 2013/08/20 09:33:31 [debug] 1692#0: *1101651 event timer: 141, old: 1376984071388, new: 1376984071390 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream request: "/index.php/winclient/gnegg?" 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream dummy handler 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream request: "/index.php/winclient/gnegg?" 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream process non buffered upstream 2013/08/20 09:33:31 [debug] 1692#0: *1101651 event timer: 141, old: 1376984071388, new: 1376984071645 2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream request: "/index.php/winclient/gnegg?" 
2013/08/20 09:33:31 [debug] 1692#0: *1101651 http upstream dummy handler 2013/08/20 09:34:31 [debug] 1692#0: *1101651 event timer del: 141: 1376984071388 2013/08/20 09:34:31 [debug] 1692#0: *1101651 http run request: "/index.php/winclient/gnegg?" 2013/08/20 09:34:31 [debug] 1692#0: *1101651 http upstream process non buffered downstream 2013/08/20 09:34:31 [info] 1692#0: *1101651 client timed out (110: Connection timed out) while sending to client, client: 80.219.149.116, server: , request: "POST /index.php/winclient/gnegg HTTP/1.0", upstream: "http://127.0.0.1:8081/index.php/winclient/gnegg", host: "REDACTED.popscan.ch" 2013/08/20 09:34:31 [debug] 1692#0: *1101651 finalize http upstream request: 408 After a 60 seconds timer was fired and client connection was closed as timed out. That is, from nginx point of view everything looks like a real timeout. Unfortunately, with location-level debug logs it's not possible to see event handling details (and that's why it's generally recommended to activate debug log at global level, BTW). But I would suppose everything is fine there as well, and the problem is actually a result of kernel's behaviour. -- Maxim Dounin http://nginx.org/en/donation.html From phofstetter at sensational.ch Tue Aug 20 13:14:20 2013 From: phofstetter at sensational.ch (Philip Hofstetter) Date: Tue, 20 Aug 2013 15:14:20 +0200 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: <20130820112647.GD19334@mdounin.ru> References: <20130820112647.GD19334@mdounin.ru> Message-ID: Hello! 
On Tue, Aug 20, 2013 at 1:26 PM, Maxim Dounin wrote: > 2013/08/20 09:34:31 [debug] 1692#0: *1101651 http upstream process non buffered downstream > 2013/08/20 09:34:31 [info] 1692#0: *1101651 client timed out (110: Connection timed out) while sending to client, client: 80.219.149.116, server: , request: "POST /index.php/winclient/gnegg HTTP/1.0", upstream: "http://127.0.0.1:8081/index.php/winclient/gnegg", host: "REDACTED.popscan.ch" > 2013/08/20 09:34:31 [debug] 1692#0: *1101651 finalize http upstream request: 408 > > After a 60 seconds timer was fired and client connection was > closed as timed out. Yeah. That's what I feared. But the connection was definitely still open and data was being transferred. > Unfortunately, with location-level debug logs it's not possible to > see event handling details (and that's why it's generally > recommended to activate debug log at global level, BTW). any idea how to do this on a system that's under load (60 requests per second)? As I said before: When I do the same request on a system that's not under load, the problem doesn't appear. > But I would suppose everything is fine there as well, and the problem is > actually a result of kernel's behaviour. I started suspecting as much. Any pointers how I could work around/fix the issue on the kernel level? Philip -- Sensational AG Giessh?belstrasse 62c, Postfach 1966, 8021 Z?rich Tel. +41 43 544 09 60, Mobile +41 79 341 01 99 info at sensational.ch, http://www.sensational.ch From mdounin at mdounin.ru Tue Aug 20 13:50:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Aug 2013 17:50:57 +0400 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: <20130820112647.GD19334@mdounin.ru> Message-ID: <20130820135057.GE19334@mdounin.ru> Hello! On Tue, Aug 20, 2013 at 03:14:20PM +0200, Philip Hofstetter wrote: > Hello! 
> > On Tue, Aug 20, 2013 at 1:26 PM, Maxim Dounin wrote: > > > 2013/08/20 09:34:31 [debug] 1692#0: *1101651 http upstream process non buffered downstream > > 2013/08/20 09:34:31 [info] 1692#0: *1101651 client timed out (110: Connection timed out) while sending to client, client: 80.219.149.116, server: , request: "POST /index.php/winclient/gnegg HTTP/1.0", upstream: "http://127.0.0.1:8081/index.php/winclient/gnegg", host: "REDACTED.popscan.ch" > > 2013/08/20 09:34:31 [debug] 1692#0: *1101651 finalize http upstream request: 408 > > > > After a 60 seconds timer was fired and client connection was > > closed as timed out. > > Yeah. That's what I feared. But the connection was definitely still > open and data was being transferred. > > > > Unfortunately, with location-level debug logs it's not possible to > > see event handling details (and that's why it's generally > > recommended to activate debug log at global level, BTW). > > any idea how to do this on a system that's under load (60 requests per > second)? As I said before: When I do the same request on a system > that's not under load, the problem doesn't appear. 60 requests per second is low enough, just switching on debug log should work. > > But I would suppose everything is fine there as well, and the problem is > > actually a result of kernel's behaviour. > > I started suspecting as much. Any pointers how I could work around/fix > the issue on the kernel level? No exact recommendation, but likely it's related to buffering at some point. First of all I would recommend to look at what actually happens on the wire with tcpdump/wireshark. If there is indeed transfer stall for 60+ seconds - you should look at the client's side of the TCP connection, i.e. either client's kernel or curl. If there is continous flow of packets - it's likely something to do with sending part (just in case, aren't you using send_lowat? if set too high it may cause such symptoms). 
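[Editor's note: a minimal sketch of where the `send_lowat` directive Maxim mentions would sit. The server block is hypothetical, not Philip's real config; `send_lowat` defaults to 0 (disabled), and `send_timeout` defaults to the 60 seconds seen firing in the debug log above.]

```nginx
# Hypothetical proxy block illustrating the send_lowat remark.
# A large send_lowat value asks the kernel to delay write-readiness
# events until that much send-buffer space is free; under load this
# can stall a transfer long enough to trip send_timeout.
server {
    listen 80;
    server_name app.example.com;          # placeholder

    location / {
        proxy_pass http://127.0.0.1:8081; # backend seen in the log above
        send_lowat   0;                   # default: feature disabled
        send_timeout 60s;                 # default: the timer firing in the log
    }
}
```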
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Aug 20 15:43:20 2013 From: nginx-forum at nginx.us (jreich) Date: Tue, 20 Aug 2013 11:43:20 -0400 Subject: Sub-requests to an upstream server Message-ID: Hello, I am writing an nginx plugin and I am trying to send sub-requests to an upstream server. I am aware of the ngx_http_subrequest function but I didn't succeed in using it in my case and I am not sure it is what I need. In my use case, I have a structure in shared memory. This structure must live as long as some conditions are fulfilled based on several sequential sub-requests to an upstream server. My first problem is that ngx_http_subrequest must have a server request to work. I would prefer to send a client request based on an internal event (in my case a timer event). If this is not possible, I can use a server request to do that but in that case I don't want to add the sub-requests' response content to the response body. Is this possible? Do you have any advice? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242054,242054#msg-242054 From igor.sverkos at googlemail.com Tue Aug 20 19:24:41 2013 From: igor.sverkos at googlemail.com (Igor Sverkos) Date: Tue, 20 Aug 2013 21:24:41 +0200 Subject: nginx 1.4.1 - slow transfers / connection resets In-Reply-To: References: <20130820112647.GD19334@mdounin.ru> Message-ID: Hi, > > After a 60 seconds timer was fired and client connection was > > closed as timed out. > > Yeah. That's what I feared. But the connection was definitely still > open and data was being transferred. You are still testing through your 2G GSM connection, right? How can you be sure that this connection isn't lagging? Can you create a network capture on the server side? Can you bypass the dynamic backend, e.g. are the clients able to fetch a 2.1MB static file just for testing through your nginx setup? -- Regards, Igor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Aug 20 21:12:50 2013 From: nginx-forum at nginx.us (rmalayter) Date: Tue, 20 Aug 2013 17:12:50 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: B.R. Wrote: > BREACH attacks the fact that compressed HTTP content encrypted with > SSL > makes it easy to guess a known existing header field from the request > that > is repeated in the (encrypted) answer looking at the size of the body. > BEAST conclusion is: don't use HTTP compression underneath SSL > encryption. No, the conclusion is: don't echo back values supplied by the requester as trusted in your *application* code. This is the most basic of anti-injection protections. BREACH is the result of an application-layer problem, and needs to be solved there. Why would you *ever* echo arbitrary header or form input back to the requester alongside sensitive data? A huge number of established security best practices prevent the BREACH attack at the application layer; a man-in-the-middle as well as an exploitable XSS/CSRF vulnerability is needed to even get the attack started. Fix those issues first. Also, you should likely be rate-limiting responses by session at your back-end to prevent DoS attacks. For the extra paranoid, randomly HTML-entity-encode characters of any user data supplied before echoing it back in a response, and add random padding of random length to the HEAD of all responses. At the nginx layer, some sensible rate limits might also be an appropriate mitigation: thousands-to-millions of requests are needed to extract secret data with BREACH. I haven't seen Google or any other large web site turn off gzip compression of HTTPS responses yet because of BREACH. If *you* can actually afford to do so, your traffic level is simply trivial. We would see approximately an 8x increase in bandwidth costs (and corresponding 8x increase in end-user response time) if we disabled GZIP for HTTPS connections. 
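[Editor's note: the nginx-layer rate limiting mentioned above can be sketched like this. Zone name, rate, and burst are illustrative guesses, not tuned recommendations.]

```nginx
# Illustrative BREACH mitigation at the nginx layer: keep gzip on for
# HTTPS, but rate-limit per client IP so the thousands-to-millions of
# probing requests the attack needs become impractically slow.
http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 443 ssl;
        gzip on;   # compression stays enabled, per the argument above

        location / {
            limit_req zone=perip burst=20 nodelay;
            # ... proxy_pass / fastcgi_pass as usual ...
        }
    }
}
```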
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241953,242060#msg-242060 From francis at daoine.org Tue Aug 20 21:44:26 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Aug 2013 22:44:26 +0100 Subject: Piwik conf file In-Reply-To: References: Message-ID: <20130820214426.GD27161@craic.sysops.org> On Mon, Aug 19, 2013 at 02:53:36PM -0700, Paul N. Pace wrote: Hi there, > I am trying to set up a conf file for Piwik installations and I'm > hoping a second set of of eyes can help: In nginx one request is handled in one location. The rules for selecting the location are at http://nginx.org/r/location Given that information, the following output... > location /piwik/ { > > location /js/ { > allow all; > } > > location ~ /js/.*\.php$ { > include /etc/nginx/global-configs/php.conf; > } > > location ~ /piwik.php$ { > include /etc/nginx/global-configs/php.conf; > } > > return 301 https://server_name$request_uri?; > } $ sbin/nginx -t nginx: [emerg] location "/js/" is outside location "/piwik/" in /usr/local/nginx/conf/nginx.conf:14 nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed should not be a surprise. Can you list some of the requests that you want to have handled, and how you want them to be handled? That might help someone who knows nginx but not piwik to understand what the intention is. Doing a web search for "site:nginx.org piwik" does seem to point at a config file, which seems very different from yours. Searching for "nginx" on the piwik.org web site also refers to an install document. Do those documents offer any help to what you are doing? > Piwik seems trickier than other applications because certain > components must be available through HTTP sessions or else browsers > give scary warnings or don't load the tracking code, but I want to > force the Piwik dashboard to open in HTTPS. These words don't obviously directly translate to your config file snippet above. What request is the Piwik dashboard? 
What request is certain components? f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Tue Aug 20 22:24:56 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 20 Aug 2013 18:24:56 -0400 Subject: How to turn off gzip compression for SSL traffic In-Reply-To: References: Message-ID: On Tue, Aug 20, 2013 at 5:12 PM, rmalayter wrote: > No, the conclusion is: don't echo back values supplied by the requester as > trusted in your *application* code. This is the most basic of > anti-injection > protections. BREACH is the result of an application-layer problem, and > needs > to be solved there. Why would you *ever* echo arbitrary header or form > input > back to the requester alongside sensitive data? > > A huge number of established security best practices prevent the BREACH > attack at the application layer; a man-in-the-middle as well as an > exploitable XSS/CSRF vulnerability is needed to even get the attack > started. > Fix those issues first. Also, you should likely be rate-limiting responses > by session at your back-end to prevent DoS attacks. For the extra paranoid, > randomly HTML-entity-encode characters of any user data supplied before > echoing it back in a response, and add random padding of random length to > the HEAD of all responses. > > At the nginx layer, some sensible rate limits might also be an appropriate > mitigation: thousands-to-millions of requests are needed to extract secret > data with BREACH. > > I haven't seen Google or any other large web site turn of gzip compression > of HTTPS responses yet because of BREACH. If *you* can actually afford to > do > so, your traffic level is simply trivial. We would see approximately an 8x > increase in bandwidth costs (and corresponding 8x increase in end-user > response time) if we disabled GZIP for HTTPS connections. > ?I took a shortcut. You're right: deactivating gzip compression is usable only for relatively small websites. 
Anyway, I wonder which real-world scenario needs to send back user requests in its answers... maybe some application needs this? I can't imagine a serious use-case however. For a quick cheat-sheet on possible mitigations, starting with the most radical ones, some advice has already been provided here: http://breachattack.com/#mitigations. I maintain the 'turn the gzip compression off' piece of advice here, as I suspect people managing HA or populated websites already understand the problem deeper and don't need to ask on a specific webserver's mailing list what 'recommendation' they provide... Thus I guess what I wrote fits the audience here. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From aflexzor at gmail.com Tue Aug 20 22:43:40 2013 From: aflexzor at gmail.com (Alex Flex) Date: Tue, 20 Aug 2013 16:43:40 -0600 Subject: How to distinguish if nginx generated a 504 error or upstream returned it? Message-ID: <5213F11C.8010406@gmail.com> Hello! I run nginx as a reverse proxy and send requests to an upstream server; the problem is, according to my logs, sometimes I start seeing this: [499] [-] [0] [11602] [xx.126.55.81] [GET /weblogin/ HTTP/1.1] or [504] [-] [0] [11602] [xx.126.55.81] [GET /weblogin/ HTTP/1.1] The first field is the $status, the second is the $upstream_cache_status. So I know for a fact these two requests did go to the upstream server; however, what I don't know is who returned the 504 and 499 codes. My server or the upstream? I know I can implement $request_time to try to "guess" using my timeouts and assume if they were generated below them that it may be the upstream that for whatever reason served the request with that code. How can I be sure? Alex From paulnpace at gmail.com Tue Aug 20 23:05:09 2013 From: paulnpace at gmail.com (Paul N. 
Pace) Date: Tue, 20 Aug 2013 16:05:09 -0700 Subject: Piwik conf file In-Reply-To: <20130820214426.GD27161@craic.sysops.org> References: <20130820214426.GD27161@craic.sysops.org> Message-ID: Thank you for your responses! On Tue, Aug 20, 2013 at 2:44 PM, Francis Daly wrote: > On Mon, Aug 19, 2013 at 02:53:36PM -0700, Paul N. Pace wrote: > > Hi there, > >> I am trying to set up a conf file for Piwik installations and I'm >> hoping a second set of of eyes can help: > > In nginx one request is handled in one location. The rules for selecting > the location are at http://nginx.org/r/location > > Given that information, the following output... > >> location /piwik/ { >> >> location /js/ { >> allow all; >> } >> >> location ~ /js/.*\.php$ { >> include /etc/nginx/global-configs/php.conf; >> } >> >> location ~ /piwik.php$ { >> include /etc/nginx/global-configs/php.conf; >> } >> >> return 301 https://server_name$request_uri?; >> } > > $ sbin/nginx -t > nginx: [emerg] location "/js/" is outside location "/piwik/" in /usr/local/nginx/conf/nginx.conf:14 > nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed > > should not be a surprise. Yes, I fixed that by changing to /piwik/js/ - is this the right way to enter it? Here is what the file would read now: location /piwik/ { location /piwik/js/ { allow all; } location ~ /piwik/js/.*\.php$ { include /etc/nginx/global-configs/php.conf; } location ~ /piwik/piwik.php$ { include /etc/nginx/global-configs/php.conf; } return 301 https://www.unpm.org$request_uri?; } > > Can you list some of the requests that you want to have handled, and > how you want them to be handled? That might help someone who knows nginx > but not piwik to understand what the intention is. > > Doing a web search for "site:nginx.org piwik" does seem to point at a > config file, which seems very different from yours. Yes, to be honest, that config is beyond my current understanding of nginx. 
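[Editor's note: for reference, a sketch of the nested-prefix layout described above. Untested; the include path is taken from the snippet, the exact match and escaped dot are small tightenings, and $host stands in for the hard-coded redirect target.]

```nginx
# Untested sketch of the intent described above:
#   /piwik/js/*          static tracker assets, reachable over plain HTTP
#   /piwik/js/*.php      tracker endpoint, handed to PHP
#   /piwik/piwik.php     tracker endpoint, handed to PHP
#   everything else      forced onto HTTPS (the dashboard)
location /piwik/ {
    location /piwik/js/ {
        allow all;
    }
    location ~ ^/piwik/js/.*\.php$ {
        include /etc/nginx/global-configs/php.conf;
    }
    location = /piwik/piwik.php {
        include /etc/nginx/global-configs/php.conf;
    }
    return 301 https://$host$request_uri;
}
```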
I reviewed the GitHub entry on the configuration, and it included instructions to "Move the old /etc/nginx directory to /etc/nginx.old" which seems a bit extreme to me and more work to reconfigure for the other settings on my server, not to mention that their /etc/nginx.conf file, among others, hasn't been updated in 2 years. I have the Mastering Nginx book, but I still struggle to decode many example configurations. I especially struggle with regular expressions. > Searching for "nginx" on the piwik.org web site also refers to an > install document. The nginx FAQ points to the above GitHub page. >> Piwik seems trickier than other applications because certain >> components must be available through HTTP sessions or else browsers >> give scary warnings or don't load the tracking code, but I want to >> force the Piwik dashboard to open in HTTPS. > > These words don't obviously directly translate to your config file > snippet above. What request is the Piwik dashboard? What request is > certain components? The Piwik dashboard is located in /piwik/index.php, and that is what always needs to be served securely. The tracking code for Piwik is loaded with either /piwik/js/index.php, /piwik/piwik.php, or the /piwik/js/ directory, depending on various client or server configurations. Thank you for your help! From mdounin at mdounin.ru Wed Aug 21 10:43:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 14:43:52 +0400 Subject: How to distinguish if nginx generated a 504 error or upstream returned it? In-Reply-To: <5213F11C.8010406@gmail.com> References: <5213F11C.8010406@gmail.com> Message-ID: <20130821104352.GJ19334@mdounin.ru> Hello! On Tue, Aug 20, 2013 at 04:43:40PM -0600, Alex Flex wrote: > Hello! 
> > I run nginx as a reverse proxy and send requests to an upstream > server, the problem is according to my logs sometimes i start seeing > this: > > [499] [-] [0] [11602] [xx.126.55.81] [GET /weblogin/ HTTP/1.1] > > or > > [504] [-] [0] [11602] [xx.126.55.81] [GET /weblogin/ HTTP/1.1] > > The first field is the $status, the second is the > $upstream_cache_status. So I know for a fact these two requests did > go to the upstream server however what i dont know is who returned > the 504 and 499 codes. My server or the upstream ? > > I know I can implement $request_time to try to "guess" using my > timeouts and assume if they where generated below them that it may > be the upstream that for whatever reason served the request with > that code. > > How can I be sure? $upstream_status? http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables -- Maxim Dounin http://nginx.org/en/donation.html From lists at ruby-forum.com Wed Aug 21 11:05:21 2013 From: lists at ruby-forum.com (sajan tharayil) Date: Wed, 21 Aug 2013 13:05:21 +0200 Subject: Nginx as Reverse Proxy for Tomcat + SSL In-Reply-To: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> References: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Dounin, > 3) Can I have an SSL from Client to Nginx and another between Nginx and Tomcat . Yes. How do we do this. I am trying to find a way to do this, either with Haproxy or Nginx Kind Regards Sajan -- Posted via http://www.ruby-forum.com/. From jens.rantil at telavox.se Wed Aug 21 11:49:27 2013 From: jens.rantil at telavox.se (Jens Rantil) Date: Wed, 21 Aug 2013 11:49:27 +0000 Subject: SV: Nginx as Reverse Proxy for Tomcat + SSL In-Reply-To: References: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <473be8c0b90d43faa5901bf922846d13@AMSPR07MB132.eurprd07.prod.outlook.com> Hi Sajan, Which of the two subproblems is that you are having issues with? 
Kind Regards, Jens -----Ursprungligt meddelande----- Fr?n: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] F?r sajan tharayil Skickat: den 21 augusti 2013 13:05 Till: nginx at nginx.org ?mne: Re: Nginx as Reverse Proxy for Tomcat + SSL Hi Dounin, > 3) Can I have an SSL from Client to Nginx and another between Nginx and Tomcat . Yes. How do we do this. I am trying to find a way to do this, either with Haproxy or Nginx Kind Regards Sajan -- Posted via http://www.ruby-forum.com/. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Aug 21 17:19:01 2013 From: nginx-forum at nginx.us (stephan13360) Date: Wed, 21 Aug 2013 13:19:01 -0400 Subject: TLS 1.2 ciphers Message-ID: <2f0fab58019b9c880aedd47169ed1051.NginxMailingListEnglish@forum.nginx.org> Chrome 29 came out recently and now supports TLS 1.2. So i decided to add some of the new TLS 1.2 ciphers to my webserver, which are specified here: https://www.openssl.org/docs/apps/ciphers.html#TLS_v1_2_cipher_suites. My current setup is: Ubuntu 10.04, Nginx 1.5.3 ,OpenSSL 1.0.1e (build myself) Config file: server { listen 80; server_name sherbers.de; return 301 https://$server_name$request_uri; } server { listen 443 ssl spdy default_server; server_name sherbers.de; ssl_certificate /etc/ssl/private/hosteurope/www.sherbers.de.pem; ssl_certificate_key /etc/ssl/private/hosteurope/www.sherbers.de.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; As you can see i only use ciphers with perfect forward secrecy, because why not. 
When I connect to my webserver chrome shows it is using TLS 1.2 but as a cipher it is using ECDHE-RSA, which it was using before too when I only offered TLS 1.1, without any of the ECDHE-ECDSA ciphers. Any idea why nginx doesn't offer the new ciphers? Additional information: - An SSL check at https://sslcheck.globalsign.com doesn't list any of the ECDHE-ECDSA ciphers - "openssl ciphers -v | grep ECDHE-ECDSA" outputs the following: ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=3DES(168) Mac=SHA1 ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-ECDSA-RC4-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=RC4(128) Mac=SHA1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242096,242096#msg-242096 From mdounin at mdounin.ru Wed Aug 21 18:00:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 22:00:12 +0400 Subject: TLS 1.2 ciphers In-Reply-To: <2f0fab58019b9c880aedd47169ed1051.NginxMailingListEnglish@forum.nginx.org> References: <2f0fab58019b9c880aedd47169ed1051.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130821180012.GT19334@mdounin.ru> Hello! On Wed, Aug 21, 2013 at 01:19:01PM -0400, stephan13360 wrote: > Chrome 29 came out recently and now supports TLS 1.2. So i decided to add > some of the new TLS 1.2 ciphers to my webserver, which are specified here: > https://www.openssl.org/docs/apps/ciphers.html#TLS_v1_2_cipher_suites. 
> > My current setup is: Ubuntu 10.04, Nginx 1.5.3 ,OpenSSL 1.0.1e (build > myself) > Config file: > > server { > listen 80; > server_name sherbers.de; > return 301 https://$server_name$request_uri; > } > server { > listen 443 ssl spdy default_server; > server_name sherbers.de; > > ssl_certificate /etc/ssl/private/hosteurope/www.sherbers.de.pem; > ssl_certificate_key /etc/ssl/private/hosteurope/www.sherbers.de.key; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers > ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA; > ssl_prefer_server_ciphers on; > ssl_session_cache shared:SSL:10m; > > As you can see i only use ciphers with perfect forward secrecy, because why > not. When i connect to my webserver chrome shows it is using TLS 1.2 but as > a cipher it using ECDHE-RSA, which it was using before too when i only > offered TLS 1.1, without any of the ECDHE-ECDSA ciphers. > > Any idea why nginx doesn't offers the new cipers? ECDSA ciphers need an ECDSA certificate to work. As your cert is RSA, only RSA ciphers are used. -- Maxim Dounin http://nginx.org/en/donation.html From lists at ruby-forum.com Wed Aug 21 18:01:59 2013 From: lists at ruby-forum.com (sajan tharayil) Date: Wed, 21 Aug 2013 20:01:59 +0200 Subject: Nginx as Reverse Proxy for Tomcat + SSL In-Reply-To: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> References: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <47124a341526b5fb35093cda737fd32e@ruby-forum.com> Hi Jens, I will explain my need. I need end-to-end encryption for my client-server communication. Client ->nginx/haproxy - https nginx/haproxy -> tomcat - https So one way to do this is layer 4 load balancing at the nginx/haproxy layer. But what I am trying to do is layer 7 encryption itself. So the first ssl offloading will happen at the nginx/haproxy level.
Then it will be encrypted again and sent to the underlying Tomcat. Then Tomcat will offload SSL again. The reason for this is that I am creating my stack in Amazon and we do not want any kind of plain communication happening in the Amazon network. So I am not really sure about the configuration which I can do on nginx which will do the following: 1. Offload the SSL for the requests coming from the client (users) - this configuration is simple enough 2. Encrypt the communication again and send it to the underlying Tomcats Can I have an SSL from Client to Nginx and another between Nginx and Tomcat? so it will be like Kind Regards Sajan -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Aug 21 18:07:31 2013 From: nginx-forum at nginx.us (stephan13360) Date: Wed, 21 Aug 2013 14:07:31 -0400 Subject: TLS 1.2 ciphers In-Reply-To: <20130821180012.GT19334@mdounin.ru> References: <20130821180012.GT19334@mdounin.ru> Message-ID: <673e9505ee1f352ac3490a16da5c1876.NginxMailingListEnglish@forum.nginx.org> Thanks. I never even considered that the certificate could be the problem. Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Aug 21, 2013 at 01:19:01PM -0400, stephan13360 wrote: > > > Chrome 29 came out recently and now supports TLS 1.2. So i decided > to add > > some of the new TLS 1.2 ciphers to my webserver, which are specified > here: > > > https://www.openssl.org/docs/apps/ciphers.html#TLS_v1_2_cipher_suites.
> > > > My current setup is: Ubuntu 10.04, Nginx 1.5.3 ,OpenSSL 1.0.1e > (build > > myself) > > Config file: > > > > server { > > listen 80; > > server_name sherbers.de; > > return 301 https://$server_name$request_uri; > > } > > server { > > listen 443 ssl spdy default_server; > > server_name sherbers.de; > > > > ssl_certificate /etc/ssl/private/hosteurope/www.sherbers.de.pem; > > ssl_certificate_key > /etc/ssl/private/hosteurope/www.sherbers.de.key; > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > ssl_ciphers > > > ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AE > S256-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-S > HA; > > ssl_prefer_server_ciphers on; > > ssl_session_cache shared:SSL:10m; > > > > As you can see i only use ciphers with perfect forward secrecy, > because why > > not. When i connect to my webserver chrome shows it is using TLS 1.2 > but as > > a cipher it using ECDHE-RSA, which it was using before too when i > only > > offered TLS 1.1, without any of the ECDHE-ECDSA ciphers. > > > > Any idea why nginx doesn't offers the new cipers? > > ECDSA ciphers need an ECDSA certificate to work. As your cert is > RSA, it RSA ciphers are used. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242096,242099#msg-242099 From francis at daoine.org Wed Aug 21 18:22:06 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Aug 2013 19:22:06 +0100 Subject: Piwik conf file In-Reply-To: References: <20130820214426.GD27161@craic.sysops.org> Message-ID: <20130821182206.GG27161@craic.sysops.org> On Tue, Aug 20, 2013 at 04:05:09PM -0700, Paul N. Pace wrote: > On Tue, Aug 20, 2013 at 2:44 PM, Francis Daly wrote: > > On Mon, Aug 19, 2013 at 02:53:36PM -0700, Paul N. 
Pace wrote: Hi there, > >> I am trying to set up a conf file for Piwik installations and I'm > >> hoping a second set of of eyes can help: > > > > In nginx one request is handled in one location. The rules for selecting > > the location are at http://nginx.org/r/location > Yes, I fixed that by changing to /piwik/js/ - is this the right way to > enter it? It really depends on what the actual urls that are requested are, and how you want them to be handled. In this case, I don't see what the /piwik/js/ location does -- because it sets a directive to its default value, and you haven't shown it set to a non-default value anywhere. > Here is what the file would read now: > > location /piwik/ { I would probably make that one be "location ^~ /piwik/" -- it may not make a difference, depending on what else is in your config file. > location ~ /piwik/js/.*\.php$ { > include /etc/nginx/global-configs/php.conf; > } > > location ~ /piwik/piwik.php$ { That one will match the requests /piwik/piwikXphp and /piwik/Y/piwikXphp, for any single character X and for any multi-character Y. It may be that, for the requests you care about, that is exactly the same as location = /piwik/piwik.php > > Can you list some of the requests that you want to have handled, and > > how you want them to be handled? That might help someone who knows nginx > > but not piwik to understand what the intention is. > >> Piwik seems trickier than other applications because certain > >> components must be available through HTTP sessions or else browsers > >> give scary warnings or don't load the tracking code, but I want to > >> force the Piwik dashboard to open in HTTPS. > > > > These words don't obviously directly translate to your config file > > snippet above. What request is the Piwik dashboard? What request is > > certain components? > > The Piwik dashboard is located in /piwiki/index.php, and that is what > always needs to be served securely. 
So, in the "http-only" server: location = /piwiki/index.php { return 301 https://www.unpm.org$request_uri?; } and in the "https" server: location = /piwiki/index.php { # whatever it should be, probably "include php.conf" } > The tracking code for Piwik is loaded with either /piwik/js/index.php, > /piwik/piwik.php, or the /piwik/js/ directory, depending on various > client or server configurations. /piwik/piwik.php is handled above; everything ending in ".php" in /piwik/js/ is handled above. Everything else in /piwik/js/ comes from the filesystem. Are there specific urls that do not respond the way you expect? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Aug 21 18:34:16 2013 From: nginx-forum at nginx.us (xfce4) Date: Wed, 21 Aug 2013 14:34:16 -0400 Subject: Nginx stuck on startup Message-ID: <89c488deb8b5a84a620c10308963fe3d.NginxMailingListEnglish@forum.nginx.org> Hi everyone, Recently I've needed to restart nginx on my vps (which is debian wheezy) and it got stuck. Other services are fine, but nginx is not responding if there is a file operation involved. For example nginx -h prints out fine, but nginx -t is just waiting. How can I examine the problem? dmesg and messages are clean. Disk and memory usage is not even 50%. This is probably not an nginx-specific error but it's disturbing. I've tried downgrading nginx a bit but it didn't work. Thanks for any suggestions. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242101,242101#msg-242101 From nginx-forum at nginx.us Wed Aug 21 19:40:48 2013 From: nginx-forum at nginx.us (xfce4) Date: Wed, 21 Aug 2013 15:40:48 -0400 Subject: Nginx stuck on startup In-Reply-To: <89c488deb8b5a84a620c10308963fe3d.NginxMailingListEnglish@forum.nginx.org> References: <89c488deb8b5a84a620c10308963fe3d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6228415364dc0ffbc853540cdf0dff99.NginxMailingListEnglish@forum.nginx.org> Never mind.
It was a problem with my vhosts config. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242101,242102#msg-242102 From nginx-forum at nginx.us Thu Aug 22 01:42:38 2013 From: nginx-forum at nginx.us (cachito) Date: Wed, 21 Aug 2013 21:42:38 -0400 Subject: Proxying and caching a fragile server to ensure availability Message-ID: <9e61558b855bb25e8ed221fea9336379.NginxMailingListEnglish@forum.nginx.org> Hello colleagues. I'm trying to save a website hosted on a (VERY) low-powered server by placing a strong nginx server in front of it as a caching proxy. I don't control the website code and the rest of its configuration, so I'm not free to move it somewhere stronger. I'd like to have the caching proxy serve content as fresh as possible, regardless of the response given by the upstream. If the upstream doesn't respond, grab whatever we have in cache and serve it. Below is the configuration for this particular site. With this configuration (and I tried taking away many of the cache-busting parts while testing, same results), I'm getting lots of "504 Gateway Timeout" errors, even after successfully loading a page and just reloading it in the browser. Is there something in this setup that is inherently bad? The server is working perfectly for other sites being proxied; it operates as an "origin pull CDN" to unload static files from various Wordpress blogs. Whatever you could find to help me will be very welcome. Thanks in advance. server { listen 80; server_name .thissite.com; access_log /var/log/nginx/thissite.access.log; error_log /var/log/nginx/thissite.errors.log; # this log only lists "upstream timeout" errors, no further details. proxy_ignore_headers X-Accel-Expires Expires Cache-Control; # Set proxy headers for the passthrough #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; # Let the Set-Cookie and Cache-Control headers through.
proxy_pass_header Set-Cookie; #proxy_pass_header Cache-Control; #proxy_pass_header Expires; proxy_pass_header Host; # Fallback to stale cache on certain errors. # 503 is deliberately missing, if we're down for maintenance # we want the page to display. proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_504 http_404; # Set the proxy cache key set $cache_key $scheme$host$uri$is_args$args; #proxy_cache start set $no_cache 0; # POST requests and urls with a query string should always go to PHP if ($request_method = POST) { set $no_cache 1; } if ($query_string != "") { set $no_cache 1; } if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") { set $no_cache 1; } #The different cache zones reside in different filesystems. the "pages" zone is saved in RAM for fast delivery. location / { proxy_pass http://upstream; proxy_cache pages; proxy_cache_key $cache_key; proxy_cache_valid 60m; # I was hoping to have the proxy query the upstream once every hour for updated content. # 2 rules to dedicate the no caching rule for logged in users. proxy_cache_bypass $no_cache; # Do not cache the response. proxy_no_cache $no_cache; # Do not serve response from cache. add_header X-Cache $upstream_cache_status; expires 60m; } location ~* \.(png|jpg|jpeg|gif|ico|swf|flv|mov|mpg|mp3)$ { expires max; log_not_found off; proxy_pass http://upstream; proxy_cache images; proxy_cache_key $cache_key; proxy_cache_valid 365d; } location ~* \.(css|js|html|htm)$ { expires 7d; log_not_found off; proxy_pass http://upstream; proxy_cache scripts; proxy_cache_key $cache_key; proxy_cache_valid 7d; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242106,242106#msg-242106 From kovacs at gmail.com Thu Aug 22 01:48:09 2013 From: kovacs at gmail.com (Michael Kovacs) Date: Wed, 21 Aug 2013 18:48:09 -0700 Subject: gzip compression won't disable Message-ID: Hey all, I'm running nginx 1.2.6 from a packaged install on Ubuntu 13.04. 
I'm at a total loss as to how this is happening but I simply cannot disable gzip compression for my server no matter what I try. Setting gzip off; in nginx.conf in the http context doesn't work. I modified the default config setting that was already there. I even moved it to the bottom of the section to see if maybe there was something else that was toggling it on after that config entry. Is there something else I can look for that would be enabling gzip compression on my server? I saw there's a static gzip compression module that's optionally compiled in but that doesn't seem relevant to my situation as this is a REST call. Here's my response header from curl which does not appear to gzip compress: HTTP/1.1 200 OK Server: nginx/1.2.6 (Ubuntu) Date: Thu, 22 Aug 2013 00:08:26 GMT Content-Type: application/json Content-Length: 0 Connection: keep-alive P3P: CP="CAO PSA OUR" Set-Cookie: jcid=5215567ae4b0861a8dd5c1dc;Path=/;Domain=.foo.com;Expires=Fri, 22-Aug-2014 00:08:26 GMT;Max-Age=31536000 Expires: Thu, 01 Jan 1970 00:00:00 GMT ETag: "5215567ae4b0861a8dd5c1dc" However that same URL's response headers in any browser (chrome, FF, safari) are as follows: Connection:keep-alive Content-Encoding:gzip Content-Type:application/json Date:Thu, 22 Aug 2013 01:39:24 GMT ETag:"521462bfe4b00dcc1c3b7c52-gzip" Expires:Thu, 01 Jan 1970 00:00:00 GMT P3P:CP="CAO PSA OUR" Server:nginx/1.2.6 (Ubuntu) Set-Cookie:jcid=521462bfe4b00dcc1c3b7c52;Path=/;Domain=.foo.com;Expires=Fri, 22-Aug-2014 01:39:24 GMT;Max-Age=31536000 Transfer-Encoding:chunked Vary:Accept-Encoding, User-Agent I'm certain that I'm doing something wrong but I'm out of ideas at the moment on what to check. Thanks for any insight or pointers anyone may be able to provide. Cheers! -- -Michael http://twitter.com/mk -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Aug 22 01:59:16 2013 From: nginx-forum at nginx.us (cachito) Date: Wed, 21 Aug 2013 21:59:16 -0400 Subject: gzip compression won't disable In-Reply-To: References: Message-ID: My guess: If the compressed files are pregenerated and sitting on the filesystem (e.g. you have blah.json and blah.json.gz), nginx will serve them to any browser that sends the correct Accept-Encoding headers. Good luck. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242107,242108#msg-242108 From reallfqq-nginx at yahoo.fr Thu Aug 22 02:23:56 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 21 Aug 2013 22:23:56 -0400 Subject: Proxying and caching a fragile server to ensure availability In-Reply-To: <9e61558b855bb25e8ed221fea9336379.NginxMailingListEnglish@forum.nginx.org> References: <9e61558b855bb25e8ed221fea9336379.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, On Wed, Aug 21, 2013 at 9:42 PM, cachito wrote: > if ($http_cookie ~* > "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") { > set $no_cache 1; > } > If the user sends requests using cookies from Wordpress, the cache won't be used... Thus, you seem not to cache requests from Wordpress at all. That CMS needs resources such as power and generates huge traffic (that needs to be processed as well). Are you sure not all the Wordpress requests contain cookie(s)? > #The different cache zones reside in different filesystems. the "pages" > zone > is saved in RAM for fast delivery. > location / { > proxy_pass http://upstream; > proxy_cache pages; > proxy_cache_key $cache_key; > proxy_cache_valid 60m; # I was hoping to have the proxy query the > upstream once every hour for updated content. > Read the correct syntax and use of proxy_cache_valid. If no HTTP code is specified as an optional parameter, it says 'then only 200, 301 and 302 responses are cached'. > # 2 rules to dedicate the no caching rule for logged in users.
> proxy_cache_bypass $no_cache; # Do not cache the response. > proxy_no_cache $no_cache; # Do not serve response from cache. > Actually, those 2 comments are inverted: proxy_no_cache controls whether or not items should be written in cache and proxy_cache_bypass controls whether or not content should be read from it. > add_header X-Cache $upstream_cache_status; > expires 60m; > } > --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 22 02:35:55 2013 From: nginx-forum at nginx.us (cachito) Date: Wed, 21 Aug 2013 22:35:55 -0400 Subject: Proxying and caching a fragile server to ensure availability In-Reply-To: References: Message-ID: <87ccb041ce799cbf9fbc4b250bb57b97.NginxMailingListEnglish@forum.nginx.org> B.R. Wrote: ------------------------------------------------------- > Hello, > > On Wed, Aug 21, 2013 at 9:42 PM, cachito wrote: > > > if ($http_cookie ~* > > > "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") > { > > set $no_cache 1; > > } > > > > If the user sends requests using cookies from Wordpress, the cache > won't > be used... > Thus, you seem not to cache requests from Wordpress at all. That CMS > need > resources such as power and generates huge traffic (that needs to be > processed aswell). > Are you sure not all the Wordpress requests contain cookie(s)? > I'm manually deleting cookies to force caching and it isn't happening. Even if I set $no_cache to 0 in this particular section (or at the end of the if list) nothing happens. > > > > #The different cache zones reside in different filesystems. the > "pages" > > zone > > is saved in RAM for fast delivery. > > location / { > > proxy_pass http://upstream; > > proxy_cache pages; > > proxy_cache_key $cache_key; > > proxy_cache_valid 60m; # I was hoping to have the proxy query > the > > upstream once every hour for updated content. > > > > Read the correct syntax and use of proxy_cache_valid.
> If no HTTP code is specified as optional parameter, it says 'then only > 200, > 301 and 302 responses are cached'. This is the intended behavior. If the upstream responds something useful, cache it and serve it. Errors shouldn't be cached, and the stale cache should be served for that URL. > > > > # 2 rules to dedicate the no caching rule for logged in users. > > proxy_cache_bypass $no_cache; # Do not cache the response. > > proxy_no_cache $no_cache; # Do not serve response from cache. > > > > Actually, those 2 comments are inverted: proxy_no_cache controls > whether > or not items should be written in cache and proxy_cache_bypass > controls > whether or not content should be read from it. Comments aside, when $no_cache is set to 0 it mandates that the upstream response should be cached and the client should be served content from cache, right? Thanks B. R. > > > add_header X-Cache $upstream_cache_status; > > expires 60m; > > } > > > --- > *B. R.* > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242106,242110#msg-242110 From reallfqq-nginx at yahoo.fr Thu Aug 22 02:38:15 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 21 Aug 2013 22:38:15 -0400 Subject: gzip compression won't disable In-Reply-To: References: Message-ID: Hello, On Wed, Aug 21, 2013 at 9:48 PM, Michael Kovacs wrote: > Hey all, > > I'm running nginx 1.2.6 from a packaged install on Ubuntu 13.04. I'm at a > total loss as to how this is happening but I simply cannot disable gzip > compression for my server no matter what I try. Setting gzip off; in > nginx.conf in the http context doesn't work. I modified the default config > setting that was already there. I even moved it to the bottom of the > section to see if maybe there was something else that was toggling it on > after that config entry. > I'm not
sure moving the directive around at the same level has any impact if you don't have another 'gzip' directive there. Have you checked that there is no gzip at lower levels (server or location blocks)? Even better: the gzip directive is internally set to 'off' by default. Check that there is no 'gzip' usage anywhere in any included file of your configuration (a simple grep) and you'll have the conf you wish. 2nd part: check your conf is *really* applied on reload: first, check the syntax with nginx -t, then monitor the logs when reloading to see any errors popping up. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Aug 22 02:53:00 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 21 Aug 2013 22:53:00 -0400 Subject: Proxying and caching a fragile server to ensure availability In-Reply-To: <87ccb041ce799cbf9fbc4b250bb57b97.NginxMailingListEnglish@forum.nginx.org> References: <87ccb041ce799cbf9fbc4b250bb57b97.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, On Wed, Aug 21, 2013 at 10:35 PM, cachito wrote: > B.R. Wrote: > ------------------------------------------------------- > > Hello, > > > > On Wed, Aug 21, 2013 at 9:42 PM, cachito wrote: > > > > > if ($http_cookie ~* > > > > > "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") > > { > > > set $no_cache 1; > > > } > > > > > > > If the user sends requests using cookies from Wordpress, the cache > > won't > > be used... > > Thus, you seem not to cache requests from Wordpress at all. That CMS > > need > > resources such as power and generates huge traffic (that needs to be > > processed aswell). > > Are you sure not all the Wordpress requests contain cookie(s)? > > I'm manually deleting cookies to force caching and it isn't happening. Even > if I set $no_cache to 0 in this particular section (or at the end of the if > list) nothing happens. > Is your conf really applied? Check logs on reload to be sure.
> > > > > > > > #The different cache zones reside in different filesystems. the > > "pages" > > > zone > > > is saved in RAM for fast delivery. > > > location / { > > > proxy_pass http://upstream; > > > proxy_cache pages; > > > proxy_cache_key $cache_key; > > > proxy_cache_valid 60m; # I was hoping to have the proxy query > > the > > > upstream once every hour for updated content. > > > > > > > Read the correct syntax and use of proxy_cache_valid. > > If no HTTP code is specified as optional parameter, it says 'then only > > 200, > > 301 and 302 responses are cached'. > > This is the intended behavior. If the upstream responds something useful, > cache it and serve it. Errors shouldn't be cached, and the stale cache > should be served for that URL. > Ok. > > > > > > > > # 2 rules to dedicate the no caching rule for logged in users. > > > proxy_cache_bypass $no_cache; # Do not cache the response. > > > proxy_no_cache $no_cache; # Do not serve response from cache. > > > > > > > Actually, those 2 comments are inverted: proxy_no_cache controls > > whether > > or not items should be written in cache and proxy_cache_bypass > > controls > > whether or not content should be read from it. > > Comments aside, when $no_cache is set to 0 it mandates that the upstream > response should be cached and the client should be served content from > cache, right? > I understood the double negation the same way as yours. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirilk at cloudxcel.com Thu Aug 22 11:26:03 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Thu, 22 Aug 2013 14:26:03 +0300 Subject: Nginx mod_security leaks file descriptors Message-ID: <732B2C92-0F05-4440-90ED-321BA5D89D54@cloudxcel.com> Hi, I have a problem with nginx and the mod_security module. After reloading nginx configuration (kill -HUP ) all files opened by mod_security are opened once again without closing the old ones.
That means at some point we hit the limit of open file descriptors; in my real-life scenario I leak over 300 files on each reload. Here are my sample configs just to illustrate the problem: ============================================================ nginx.conf user www-data www-data; worker_processes 6; worker_rlimit_nofile 200000; error_log /var/log/nginx/error.log debug; events { worker_connections 16384; multi_accept on; use epoll; } http { server { listen 80; location / { ModSecurityEnabled on; ModSecurityConfig modsecurity.conf; return 555; } } } ============================================================ modsecurity.conf: # Debug log SecDebugLog /var/log/waf/events.log ============================================================ In this situation after each configuration reload I am leaking open files: www-data at dev03 ~ # lsof | grep nginx | wc -l; kill -HUP `ps aux | grep 'nginx: master process' | grep -v grep | awk '{print $2}'`; sleep 2; lsof | grep nginx | wc -l 361 368 I am using Ubuntu 12.04 LTS and ngx_openresty 1.4.2.1 (DEPLOY)www-data at dev03:~# nginx -V nginx version: ngx_openresty/1.4.2.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled Does someone else have the same problem? I will be happy to provide other information if necessary. Regards, Kiril -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From jens.rantil at telavox.se Thu Aug 22 11:52:07 2013 From: jens.rantil at telavox.se (Jens Rantil) Date: Thu, 22 Aug 2013 11:52:07 +0000 Subject: SV: Nginx as Reverse Proxy for Tomcat + SSL In-Reply-To: <47124a341526b5fb35093cda737fd32e@ruby-forum.com> References: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> <47124a341526b5fb35093cda737fd32e@ruby-forum.com> Message-ID: Hi Sajan, I see. nginx supports serving https content.
Documentation is here: http://nginx.org/en/docs/http/ngx_http_ssl_module.html nginx also supports proxying to upstream servers that are using SSL/https: http://stackoverflow.com/questions/15394904/nginx-load-balance-with-upstream-ssl What you'd like to do is possible. Good luck, Jens -----Original message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On behalf of sajan tharayil Sent: 21 August 2013 20:02 To: nginx at nginx.org Subject: Re: Nginx as Reverse Proxy for Tomcat + SSL Hi Jens, I will explain you my need. I need an end to en encryption for my client server communication. Client ->nginx/haproxy - https nginx/haprody -> tomcat - https So one way to do this is a layer 4 load balancing at nginx/haproxy layer. But What I am trying to do is to do a layer 7 encryption itself. So the first ssl offloading will happen at the nginx/haproxy level. Then it will be again encrypted and send to the underlaying tomcat. Then tomcat will offload ssl again. The reason for this is, I am creating my stack in amazon and we do not want any kind of plane communication happening in amazon network. So I am not really sure about the configuration which I can do on nginx which will do the following: 1. Off load the ssl for the requests coming from client (users) - This configuration is simple enough 2. encrypt the communication again and send to underlaying tomcats Can I have an SSL from Client to Nginx and another between Nginx and Tomcat? so it will be like Kind Regards Sajan -- Posted via http://www.ruby-forum.com/.
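[Editor's note: the terminate-then-re-encrypt setup Sajan describes can be sketched in nginx configuration roughly as below. This is a minimal sketch, not taken from the thread; the upstream name, addresses, ports, and certificate paths are hypothetical placeholders.]

```nginx
upstream tomcat_tls {
    # hypothetical Tomcat HTTPS connector
    server 10.0.0.5:8443;
}

server {
    listen 443 ssl;
    server_name example.com;

    # terminate the client's TLS connection here (placeholder paths)
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # re-encrypt on the way to Tomcat: an https:// scheme in
        # proxy_pass makes nginx open a TLS connection to the upstream
        proxy_pass https://tomcat_tls;
        proxy_set_header Host $host;
    }
}
```

Tomcat then terminates the second TLS session on its own connector, so no plain-text hop exists on the internal network.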
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From alan.silva at gmail.com Thu Aug 22 15:16:53 2013 From: alan.silva at gmail.com (Alan Silva) Date: Thu, 22 Aug 2013 12:16:53 -0300 Subject: Nginx mod_security leaks file descriptors In-Reply-To: <732B2C92-0F05-4440-90ED-321BA5D89D54@cloudxcel.com> References: <732B2C92-0F05-4440-90ED-321BA5D89D54@cloudxcel.com> Message-ID: <6BC36ECB-A9D4-4867-9FA7-9A1F4903B235@gmail.com> Hi Kiril, I think the better place to ask this question is the modsecurity users list, because apparently it's a problem in the modsecurity module and not in NGINX. Regards, Alan On Aug 22, 2013, at 8:26 AM, Kiril Kalchev wrote: > Hi, > > I have a problem with nginx and mod_security module. After reloading nginx configuration (kill -HUP ) all files opened by mod_security are opened once again without closing the old ones. That means at some point we hit the limit of open file descriptors, in my real life scenario I leak over 300 files on each reload.
> > Here are my sample configs just to illustrate the problem: > ============================================================ > nginx.conf > user www-data www-data; > worker_processes 6; > worker_rlimit_nofile 200000; > > error_log /var/log/nginx/error.log debug; > > events { > worker_connections 16384; > multi_accept on; > use epoll; > } > > http { > server { > listen 80; > location / { > ModSecurityEnabled on; > ModSecurityConfig modsecurity.conf; > return 555; > } > } > } > > ============================================================ > modsecurity.conf: > > # Debug log > SecDebugLog /var/log/waf/events.log > ============================================================ > > In this situation after each configuration reload I am leaking open files: > > www-data at dev03 ~ # lsof | grep nginx | wc -l; kill -HUP `ps aux | grep 'nginx: master process' | grep -v grep | awk '{print $2}'`; sleep 2; lsof | grep nginx | wc -l > 361 > 368 > > I am using Ubuntu 12.04 LTS and nginx _openresty 1.4.2.1 > > (DEPLOY)www-data at dev03:~# nginx -V > nginx version: ngx_openresty/1.4.2.1 > built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) > TLS SNI support enabled > > Does someone else have the same problem? > > I will be happy to provide other information if necessary. > > Regards, > Kiril > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From kirilk at cloudxcel.com Thu Aug 22 15:20:52 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Thu, 22 Aug 2013 18:20:52 +0300 Subject: Nginx mod_security leaks file descriptors In-Reply-To: <6BC36ECB-A9D4-4867-9FA7-9A1F4903B235@gmail.com> References: <732B2C92-0F05-4440-90ED-321BA5D89D54@cloudxcel.com> <6BC36ECB-A9D4-4867-9FA7-9A1F4903B235@gmail.com> Message-ID: <440E5C0D-AFF9-4EC9-A9BF-B5559943C160@cloudxcel.com> Thank you for the quick reply. I did it and they are looking at it.
I am adding a link to the github issue about this one just for reference, if someone needs it in the future. https://github.com/SpiderLabs/ModSecurity/issues/137 Regards, Kiril On Aug 22, 2013, at 6:16 PM, Alan Silva wrote: > Hi Kiril, > > I think the better place to make this question its on modsecurity users list, because apparently its a problem in modsecurity module and don't in NGINX. > > Regards, > > Alan > > > On Aug 22, 2013, at 8:26 AM, Kiril Kalchev wrote: > >> Hi, >> >> I have a problem with nginx and mod_security module. After reloading nginx configuration (kill -HUP ) all files opened by mod_security are opened once again without closing the old ones. That means at some point we hit the limit of open file descriptors, in my real life scenario I leak over 300 files on each reload. >> >> Here are my sample configs just to illustrate the problem: >> ============================================================ >> nginx.conf >> user www-data www-data; >> worker_processes 6; >> worker_rlimit_nofile 200000; >> >> error_log /var/log/nginx/error.log debug; >> >> events { >> worker_connections 16384; >> multi_accept on; >> use epoll; >> } >> >> http { >> server { >> listen 80; >> location / { >> ModSecurityEnabled on; >> ModSecurityConfig modsecurity.conf; >> return 555; >> } >> } >> } >> >> ============================================================ >> modsecurity.conf: >> >> # Debug log >> SecDebugLog /var/log/waf/events.log >> ============================================================ >> >> In this situation after each configuration reload I am leaking open files: >> >> www-data at dev03 ~ # lsof | grep nginx | wc -l; kill -HUP `ps aux | grep 'nginx: master process' | grep -v grep | awk '{print $2}'`; sleep 2; lsof | grep nginx | wc -l >> 361 >> 368 >> >> I am using Ubuntu 12.04 LTS and nginx _openresty 1.4.2.1 >> >> (DEPLOY)www-data at dev03:~# nginx -V >> nginx version: ngx_openresty/1.4.2.1 >> built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) >> TLS SNI support
enabled >> >> Does someone else have the same problem? >> >> I will be happy to provide other information if necessary. >> >> Regards, >> Kiril >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3565 bytes Desc: not available URL: From sajan at noppix.com Thu Aug 22 18:03:01 2013 From: sajan at noppix.com (Sajan Parikh) Date: Thu, 22 Aug 2013 13:03:01 -0500 Subject: keepalive_timeout not working? Message-ID: <52165255.7080903@noppix.com> Keepalives seem to be working, but the timeout limit I'm setting isn't being honored, it seems. Someone let me know if I've done something wrong or am missing something. I have nginx installed on Ubuntu from the stable repository at nginx.org. Nothing has been added, nothing removed.
================================== nginx version: nginx/1.4.2 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --with-ipv6 ================================= In my http block, I have the following. keepalive_timeout 3s; That is the only place the string 'keepalive' exists. ================================= root at web:/etc# grep -r "keepali" . grep: ./blkid.tab: No such file or directory Binary file ./alternatives/rsh matches Binary file ./alternatives/rlogin matches ./nginx/nginx.conf: keepalive_timeout 3s; ================================= Yet, when I take a look at netstat -tc, I have waay too many http connections in a TIME_WAIT state and they seemingly stay there forever. Obviously it's not forever, but it's not closing in 3 seconds. Thanks. 
-- Sajan Parikh Owner, Noppix LLC o: (563) 726-0371 c: (563) 508-3184 From mdounin at mdounin.ru Thu Aug 22 18:34:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 22 Aug 2013 22:34:33 +0400 Subject: keepalive_timeout not working? In-Reply-To: <52165255.7080903@noppix.com> References: <52165255.7080903@noppix.com> Message-ID: <20130822183433.GF19334@mdounin.ru> Hello! On Thu, Aug 22, 2013 at 01:03:01PM -0500, Sajan Parikh wrote: > keepalives seem to be working, but the timeout limit I'm setting > isn't honored it seems.. Someone let me know if I've done something > wrong or am missing something. [...] > Yet, when I take a look at netstat -tc, I have waay too many http > connections in a TIME_WAIT state and they seemingly stay there > forever. Obviously it's not forever, but it's not closing in 3 > seconds. TIME_WAIT is a TCP state, and any socket is expected to be in this state on the side which does the active close, for 2 * MSL seconds after the connection is closed. Reducing keepalive_timeout isn't expected to reduce the number of sockets in the TIME_WAIT state. On the contrary, it may cause more connections to be established and then closed, resulting in more sockets in the TIME_WAIT state. -- Maxim Dounin http://nginx.org/en/donation.html From lists at ruby-forum.com Fri Aug 23 04:03:15 2013 From: lists at ruby-forum.com (sajan tharayil) Date: Fri, 23 Aug 2013 06:03:15 +0200 Subject: Nginx as Reverse Proxy for Tomcat + SSL In-Reply-To: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> References: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9990866750059cf1a347fdff000a80b3@ruby-forum.com> Hi Jens, Thanks much for your explanation. I was sure about the first part and was overcomplicating the second part, I mean the https upstream. But your simple solution to this is awesome. Kind Regards Sajan -- Posted via http://www.ruby-forum.com/.
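Maxim's point above can be checked directly: the TIME_WAIT tally comes from the kernel's TCP state machine, not from keepalive_timeout. A minimal sketch of counting sockets per state follows; it runs against canned `netstat -ant` output so it is self-contained, but on a live box you would pipe the real command instead.

```shell
# Simulated `netstat -ant` output; on a real server, replace the
# variable with the live command output. TIME_WAIT entries persist
# for 2*MSL (typically 60-120s) regardless of keepalive_timeout.
netstat_out='Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address       Foreign Address     State
tcp        0      0 0.0.0.0:80          0.0.0.0:*           LISTEN
tcp        0      0 10.0.0.1:80         10.0.0.2:51000      TIME_WAIT
tcp        0      0 10.0.0.1:80         10.0.0.3:51001      TIME_WAIT
tcp        0      0 10.0.0.1:80         10.0.0.4:51002      ESTABLISHED'

# Skip the two header lines, take the state column, count each state.
printf '%s\n' "$netstat_out" | awk 'NR > 2 { print $6 }' | sort | uniq -c
```

On a live system, `netstat -ant | awk 'NR > 2 { print $6 }' | sort | uniq -c` gives the same per-state tally, which makes it easy to see whether a tuning change actually moves the TIME_WAIT count.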
From nginx-forum at nginx.us Fri Aug 23 16:46:34 2013 From: nginx-forum at nginx.us (fhding618) Date: Fri, 23 Aug 2013 12:46:34 -0400 Subject: Nginx return empty Message-ID: I have a website on a remote server and set the nginx server_name to "xiangyingdu.com abc.com" server{ listen 80; server_name xiangyingdu.com abc.com; ... ... ... ... } I set hosts entries on my local PC: 115.12.**.** xiangyingdu.com 115.12.**.** abc.com Then visiting abc.com is OK, but xiangyingdu.com always returns code "200 0": 125.**.**.130 - - [24/Aug/2013:00:39:08 +0800] "GET / HTTP/1.1" 200 0 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; QQDownload 718; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" Now I don't know what is wrong. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242153,242153#msg-242153 From ben at indietorrent.org Fri Aug 23 16:46:44 2013 From: ben at indietorrent.org (Ben Johnson) Date: Fri, 23 Aug 2013 12:46:44 -0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off Message-ID: <521791F4.4040306@indietorrent.org> Hello, I'm seeing a strange problem with nginx (1.5.2 on Windows) and PHP (5.4.8 on Windows). Whenever I make a cURL request with PHP's curl_exec() function to a secure URL (https protocol), and I disable peer verification, like this curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); nginx responds with a "504 Gateway Time-out". If I set CURLOPT_SSL_VERIFYPEER to TRUE, nginx responds with a "200 OK", although curl_exec() returns false, which is expected due to the verification failure (I'm using a self-signed certificate in nginx). I have tried executing the same script under Apache and it functions as expected with peer verification disabled.
Thank you for any help, -Ben From mdounin at mdounin.ru Fri Aug 23 18:05:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Aug 2013 22:05:54 +0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: <521791F4.4040306@indietorrent.org> References: <521791F4.4040306@indietorrent.org> Message-ID: <20130823180553.GN19334@mdounin.ru> Hello! On Fri, Aug 23, 2013 at 12:46:44PM -0400, Ben Johnson wrote: > Hello, > > I'm seeing a strange problem with nginx (1.5.2 on Windows) and PHP > (5.4.8 on Windows). > > Whenever I make a cURL request with PHP's curl_exec() function to a > secure URL (https protocol), and I disable peer verification, like this > > curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); > > nginx responds with a "504 Gateway Time-out". > > If I set CURLOPT_SSL_VERIFYPEER to TRUE, nginx responds with a "200 OK", > although curl_exec() returns false, which is expected due to the > verification failure (I'm using a self-signed certificate in nginx). > > I have tried executing the same script under Apache and it functions as > expected with peer verification disabled. What URL is requested by your script? Symptoms described suggest you are requesting some php script from the same server, and 504 is likely due to only one php backend process. -- Maxim Dounin http://nginx.org/en/donation.html From ben at indietorrent.org Fri Aug 23 18:41:43 2013 From: ben at indietorrent.org (Ben Johnson) Date: Fri, 23 Aug 2013 14:41:43 -0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: <20130823180553.GN19334@mdounin.ru> References: <521791F4.4040306@indietorrent.org> <20130823180553.GN19334@mdounin.ru> Message-ID: <5217ACE7.5060103@indietorrent.org> On 8/23/2013 2:05 PM, Maxim Dounin wrote: > Hello! 
> > On Fri, Aug 23, 2013 at 12:46:44PM -0400, Ben Johnson wrote: > >> Hello, >> >> I'm seeing a strange problem with nginx (1.5.2 on Windows) and PHP >> (5.4.8 on Windows). >> >> Whenever I make a cURL request with PHP's curl_exec() function to a >> secure URL (https protocol), and I disable peer verification, like this >> >> curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); >> >> nginx responds with a "504 Gateway Time-out". >> >> If I set CURLOPT_SSL_VERIFYPEER to TRUE, nginx responds with a "200 OK", >> although curl_exec() returns false, which is expected due to the >> verification failure (I'm using a self-signed certificate in nginx). >> >> I have tried executing the same script under Apache and it functions as >> expected with peer verification disabled. > > What URL is requested by your script? Symptoms described suggest > you are requesting some php script from the same server, and 504 > is likely due to only one php backend process. > Thank you for the quick reply, Maxim! I appreciate it. You are exactly right; my script requests another URL on the same server (which happens to be localhost in this case). Just so I understand the problem, are you saying that the script that contains the cURL call (via PHP's curl_exec() function) essentially ties-up the only available PHP backend process, which causes curl_exec() to time-out when it requests another URL on the same server? Is there a solution to this problem? My setup is essentially the same as what is described at http://wiki.nginx.org/PHPFastCGIOnWindows . 
Thanks again, -Ben From mdounin at mdounin.ru Fri Aug 23 19:23:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Aug 2013 23:23:00 +0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: <5217ACE7.5060103@indietorrent.org> References: <521791F4.4040306@indietorrent.org> <20130823180553.GN19334@mdounin.ru> <5217ACE7.5060103@indietorrent.org> Message-ID: <20130823192259.GP19334@mdounin.ru> Hello! On Fri, Aug 23, 2013 at 02:41:43PM -0400, Ben Johnson wrote: > > > On 8/23/2013 2:05 PM, Maxim Dounin wrote: > > Hello! > > > > On Fri, Aug 23, 2013 at 12:46:44PM -0400, Ben Johnson wrote: > > > >> Hello, > >> > >> I'm seeing a strange problem with nginx (1.5.2 on Windows) and PHP > >> (5.4.8 on Windows). > >> > >> Whenever I make a cURL request with PHP's curl_exec() function to a > >> secure URL (https protocol), and I disable peer verification, like this > >> > >> curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); > >> > >> nginx responds with a "504 Gateway Time-out". > >> > >> If I set CURLOPT_SSL_VERIFYPEER to TRUE, nginx responds with a "200 OK", > >> although curl_exec() returns false, which is expected due to the > >> verification failure (I'm using a self-signed certificate in nginx). > >> > >> I have tried executing the same script under Apache and it functions as > >> expected with peer verification disabled. > > > > What URL is requested by your script? Symptoms described suggest > > you are requesting some php script from the same server, and 504 > > is likely due to only one php backend process. > > > > Thank you for the quick reply, Maxim! I appreciate it. > > You are exactly right; my script requests another URL on the same server > (which happens to be localhost in this case). 
> > Just so I understand the problem, are you saying that the script that > contains the cURL call (via PHP's curl_exec() function) essentially > ties-up the only available PHP backend process, which causes curl_exec() > to time-out when it requests another URL on the same server? > > Is there a solution to this problem? > > My setup is essentially the same as what is described at > http://wiki.nginx.org/PHPFastCGIOnWindows . If you are using php-cgi, configuring PHP_FCGI_CHILDREN environment variable before starting php-cgi should help. -- Maxim Dounin http://nginx.org/en/donation.html From ben at indietorrent.org Fri Aug 23 20:28:24 2013 From: ben at indietorrent.org (Ben Johnson) Date: Fri, 23 Aug 2013 16:28:24 -0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: <20130823192259.GP19334@mdounin.ru> References: <521791F4.4040306@indietorrent.org> <20130823180553.GN19334@mdounin.ru> <5217ACE7.5060103@indietorrent.org> <20130823192259.GP19334@mdounin.ru> Message-ID: <5217C5E8.2030100@indietorrent.org> On 8/23/2013 3:23 PM, Maxim Dounin wrote: > Hello! > > On Fri, Aug 23, 2013 at 02:41:43PM -0400, Ben Johnson wrote: > >> >> >> On 8/23/2013 2:05 PM, Maxim Dounin wrote: >>> Hello! >>> >>> On Fri, Aug 23, 2013 at 12:46:44PM -0400, Ben Johnson wrote: >>> >>>> Hello, >>>> >>>> I'm seeing a strange problem with nginx (1.5.2 on Windows) and PHP >>>> (5.4.8 on Windows). >>>> >>>> Whenever I make a cURL request with PHP's curl_exec() function to a >>>> secure URL (https protocol), and I disable peer verification, like this >>>> >>>> curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); >>>> >>>> nginx responds with a "504 Gateway Time-out". >>>> >>>> If I set CURLOPT_SSL_VERIFYPEER to TRUE, nginx responds with a "200 OK", >>>> although curl_exec() returns false, which is expected due to the >>>> verification failure (I'm using a self-signed certificate in nginx). 
>>>> >>>> I have tried executing the same script under Apache and it functions as >>>> expected with peer verification disabled. >>> >>> What URL is requested by your script? Symptoms described suggest >>> you are requesting some php script from the same server, and 504 >>> is likely due to only one php backend process. >>> >> >> Thank you for the quick reply, Maxim! I appreciate it. >> >> You are exactly right; my script requests another URL on the same server >> (which happens to be localhost in this case). >> >> Just so I understand the problem, are you saying that the script that >> contains the cURL call (via PHP's curl_exec() function) essentially >> ties-up the only available PHP backend process, which causes curl_exec() >> to time-out when it requests another URL on the same server? >> >> Is there a solution to this problem? >> >> My setup is essentially the same as what is described at >> http://wiki.nginx.org/PHPFastCGIOnWindows . > > If you are using php-cgi, configuring PHP_FCGI_CHILDREN > environment variable before starting php-cgi should help. > Thank you for the suggestion, Maxim. I set PHP_FCGI_CHILDREN to 4, 20, etc., but it doesn't seem to make any difference. Here is the Windows batch script that I'm using to start php-cgi.exe (please excuse the wrapping): @ECHO OFF ECHO Starting PHP FastCGI... SET PHP_FCGI_MAX_REQUESTS=0 SET PHP_FCGI_CHILDREN=4 SET PATH="C:\Program Files\php;%PATH%" "C:\Program Files\php\php-cgi.exe" -b 127.0.0.1:9000 -c "C:\Program Files\php\php.ini" Any other ideas? 
Thanks again, -Ben From coolbsd at hotmail.com Fri Aug 23 23:01:08 2013 From: coolbsd at hotmail.com (Cool) Date: Fri, 23 Aug 2013 16:01:08 -0700 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: <5217C5E8.2030100@indietorrent.org> References: <521791F4.4040306@indietorrent.org> <20130823180553.GN19334@mdounin.ru> <5217ACE7.5060103@indietorrent.org> <20130823192259.GP19334@mdounin.ru> <5217C5E8.2030100@indietorrent.org> Message-ID: It was said PHP_FCGI_CHILDREN doesn't work under Windows, though I'm not sure current status of this issue: https://bugs.php.net/bug.php?id=49859 -C.B. On 8/23/2013 1:28 PM, Ben Johnson wrote: > > On 8/23/2013 3:23 PM, Maxim Dounin wrote: >> Hello! >> >> On Fri, Aug 23, 2013 at 02:41:43PM -0400, Ben Johnson wrote: >> >>> >>> On 8/23/2013 2:05 PM, Maxim Dounin wrote: >>>> Hello! >>>> >>>> On Fri, Aug 23, 2013 at 12:46:44PM -0400, Ben Johnson wrote: >>>> >>>>> Hello, >>>>> >>>>> I'm seeing a strange problem with nginx (1.5.2 on Windows) and PHP >>>>> (5.4.8 on Windows). >>>>> >>>>> Whenever I make a cURL request with PHP's curl_exec() function to a >>>>> secure URL (https protocol), and I disable peer verification, like this >>>>> >>>>> curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); >>>>> >>>>> nginx responds with a "504 Gateway Time-out". >>>>> >>>>> If I set CURLOPT_SSL_VERIFYPEER to TRUE, nginx responds with a "200 OK", >>>>> although curl_exec() returns false, which is expected due to the >>>>> verification failure (I'm using a self-signed certificate in nginx). >>>>> >>>>> I have tried executing the same script under Apache and it functions as >>>>> expected with peer verification disabled. >>>> What URL is requested by your script? Symptoms described suggest >>>> you are requesting some php script from the same server, and 504 >>>> is likely due to only one php backend process. >>>> >>> Thank you for the quick reply, Maxim! I appreciate it. 
>>> >>> You are exactly right; my script requests another URL on the same server >>> (which happens to be localhost in this case). >>> >>> Just so I understand the problem, are you saying that the script that >>> contains the cURL call (via PHP's curl_exec() function) essentially >>> ties-up the only available PHP backend process, which causes curl_exec() >>> to time-out when it requests another URL on the same server? >>> >>> Is there a solution to this problem? >>> >>> My setup is essentially the same as what is described at >>> http://wiki.nginx.org/PHPFastCGIOnWindows . >> If you are using php-cgi, configuring PHP_FCGI_CHILDREN >> environment variable before starting php-cgi should help. >> > Thank you for the suggestion, Maxim. > > I set PHP_FCGI_CHILDREN to 4, 20, etc., but it doesn't seem to make any > difference. Here is the Windows batch script that I'm using to start > php-cgi.exe (please excuse the wrapping): > > @ECHO OFF > ECHO Starting PHP FastCGI... > SET PHP_FCGI_MAX_REQUESTS=0 > SET PHP_FCGI_CHILDREN=4 > SET PATH="C:\Program Files\php;%PATH%" > "C:\Program Files\php\php-cgi.exe" -b 127.0.0.1:9000 -c "C:\Program > Files\php\php.ini" > > Any other ideas? 
> > Thanks again, > > -Ben > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Sat Aug 24 09:29:17 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 24 Aug 2013 05:29:17 -0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: <5217C5E8.2030100@indietorrent.org> References: <5217C5E8.2030100@indietorrent.org> Message-ID: set PHP_FCGI_CHILDREN=0 set PHP_FCGI_MAX_REQUESTS=10000 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242154,242181#msg-242181 From nginx-forum at nginx.us Sat Aug 24 19:24:58 2013 From: nginx-forum at nginx.us (webmastersitesi) Date: Sat, 24 Aug 2013 15:24:58 -0400 Subject: Upload timeouts after about 30 seconds Message-ID: <2e8acb36b74c83360ea46f8fb4da68d6.NginxMailingListEnglish@forum.nginx.org> Hello, I have a problem with file uploads, and googling and checking everything didn't help to solve it. The problem is that when a file larger than a few megs is being uploaded, the browser repeats the request after about 30 seconds, and on the second request, after another 30-40s, it throws a connection reset; nginx logs a 408 error on that POST request, with no other errors in nginx or in php. The settings related to file upload are: client_body_buffer_size 1m; client_header_buffer_size 128k; client_max_body_size 1000m; client_body_timeout 500; client_header_timeout 500; keepalive_timeout 500 500; send_timeout 500; keepalive_requests 100; tcp_nodelay on; reset_timedout_connection on; The server works with php (php-fpm) over a socket, standard configuration. nginx.conf: client_max_body_size 42m; php.ini: memory_limit = 64M post_max_size = 40M upload_max_filesize = 32M It works fine for me with those settings. I haven't specified any of the timeout settings you used. Have you edited php.ini for php-fpm?
If not - the default upload file size is 2M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238838,238840#msg-238840 Of course I set the correct php settings: memory_limit => 512M post_max_size => 1000M upload_max_filesize => 1000M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242188,242188#msg-242188 From nginx-forum at nginx.us Sat Aug 24 19:28:25 2013 From: nginx-forum at nginx.us (webmastersitesi) Date: Sat, 24 Aug 2013 15:28:25 -0400 Subject: Upload timeouts after about 30 seconds In-Reply-To: <2e8acb36b74c83360ea46f8fb4da68d6.NginxMailingListEnglish@forum.nginx.org> References: <2e8acb36b74c83360ea46f8fb4da68d6.NginxMailingListEnglish@forum.nginx.org> Message-ID: There is a problem with client_max_body_size when SSL is enabled. I just got the same problem on the latest nginx version, and it ignores this directive in secure connections. Still looking for a solution. Please help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242188,242189#msg-242189 From pug+nginx at felsing.net Sun Aug 25 06:53:57 2013 From: pug+nginx at felsing.net (Christian Felsing) Date: Sun, 25 Aug 2013 08:53:57 +0200 Subject: Fake Basic Auth Message-ID: <5219AA05.8050405@felsing.net> Hello, I am new to nginx and have the following problem: Nginx should be used as a reverse proxy and configured for client certificate authentication. The backoffice application supports basic auth only. The Apache 2.4 solution for that kind of problem is "Fake Basic Auth", so the backoffice application gets a remote_user and password generated from the client certificate presented by the user. Example: AuthBasicFake %{SSL_CLIENT_S_DN_CN} %{sha1:passphrase-%{SSL_CLIENT_S_DN_CN}} This sets the remote user to the CN from the client certificate. Is there a similar mechanism in Nginx?
best regards Christian From nginx-forum at nginx.us Sun Aug 25 07:14:55 2013 From: nginx-forum at nginx.us (etrader) Date: Sun, 25 Aug 2013 03:14:55 -0400 Subject: How to serve PHP files outside the public folder? Message-ID: <5d7c9b7913bab9ce7ffa69f4f01ec0ef.NginxMailingListEnglish@forum.nginx.org> For serving the PHP scripts, I use this location location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } now I want to keep a folder outside the public folder to be served as a location /private/ { /* serving static files from /private/$server_name/ */ location ~ \.php$ { /* serving PHP scripts from /private/$server_name/ */ } } How should I set this location to serve the files from outside the public folder? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242193,242193#msg-242193 From smallfish.xy at gmail.com Sun Aug 25 07:20:23 2013 From: smallfish.xy at gmail.com (smallfish) Date: Sun, 25 Aug 2013 15:20:23 +0800 Subject: Fake Basic Auth In-Reply-To: <5219AA05.8050405@felsing.net> References: <5219AA05.8050405@felsing.net> Message-ID: use ngx_lua for 401 auth example: http://chenxiaoyu.org/2012/02/08/nginx-lua-401-auth.html -- smallfish http://chenxiaoyu.org On Sun, Aug 25, 2013 at 2:53 PM, Christian Felsing wrote: > Hello, > > I am new to nginx and have following problem: > > Nginx should be used as a reverse proxy and configured for client > certificate authentication. Backoffice application supports basic auth > only. > Apache 2.4 solution for that kind of problems is "Fake Basic Auth" so > backoffice application gets a remote_user and password generated from > client certificate presented by user. > > Example: > AuthBasicFake %{SSL_CLIENT_S_DN_CN} > %{sha1:passphrase-%{SSL_CLIENT_S_DN_CN}} > This set remote user to CN from client certifiate. > > is there a similar mechanism in Nginx?
> > Does HttpLuaModule allow to fake a 401 authentication? > > best regards > Christian > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nhadie at gmail.com Sun Aug 25 23:58:16 2013 From: nhadie at gmail.com (ron ramos) Date: Mon, 26 Aug 2013 07:58:16 +0800 Subject: How to serve PHP files outside the public folder? In-Reply-To: <5d7c9b7913bab9ce7ffa69f4f01ec0ef.NginxMailingListEnglish@forum.nginx.org> References: <5d7c9b7913bab9ce7ffa69f4f01ec0ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, Maybe you can try something like this: location /private/ { try_files $uri @private; } location @private { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } Regards, Ron On Sun, Aug 25, 2013 at 3:14 PM, etrader wrote: > For serving the PHP scripts, I use this location > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include /etc/nginx/fastcgi_params; > } > > now I want to keep a folder outside the public folder to be served as a > > location /private/ { > /* serving static files from /private/$server_name/ */ > location ~ \.php$ { > /* serving PHP scripts from /private/$server_name/ */ > } > } > > How should set this location to serve the files from outside the public > folder? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,242193,242193#msg-242193 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stadtpirat11 at ymail.com Mon Aug 26 11:14:32 2013 From: stadtpirat11 at ymail.com (- -) Date: Mon, 26 Aug 2013 04:14:32 -0700 (PDT) Subject: Securing nginx: Workers per server block under specific user? Message-ID: <1377515672.31001.YahooMailNeo@web140502.mail.bf1.yahoo.com> Hello, I don't quite understand how this works. Until now I was running my websites under Cherokee Web Server. Cherokee ran under user www-data and all my websites shared the same permissions (www-data:www-data rwxrwx---). That worked well, but then I also realised: if someone were able to inject php code into one of my websites, he would have full read/write access to all of my sites. That would enable him to read my database passwords. For example using this line of code: `scandir("/usr/local/var/www/site2/config/database.php")`. Now, I said goodbye to Cherokee and am currently looking into nginx. The first thing I did was to restrict the permissions in the www folder: > drwxr-x--- 4 root root 4.0K Aug 16 14:30 . > drwxr-sr-x 7 root staff 4.0K Aug 15 15:02 .. > drwx------ 2 www-site1 www-site1 4.0K Aug 25 20:44 site1 > drwx------ 9 www-site2 www-site2 4.0K Aug 15 15:38 site2 Then I realised that I cannot spawn workers per server block. So as far as I understand, the user under which nginx is running (www-data) needs read access to the folders site1 and site2. So I would need to change the permissions to > drwxr-x--- 4 root root 4.0K Aug 16 14:30 . > drwxr-sr-x 7 root staff 4.0K Aug 15 15:02 .. > drwxr-xr-x 2 www-site1 www-data 4.0K Aug 25 20:44 site1 > drwxr-xr-x 9 www-site2 www-data 4.0K Aug 15 15:38 site2 That is really bad because I would have the same security problem as I had before with Cherokee. With one line of php he could read from any "site" folder (see above). I could tackle that problem by assigning rwx------ permissions to all files, but then I would probably be busier with changing file permissions than developing websites ...
-> Is there no way to have workers spawn per server block that run under a specific user? Say, 5 server blocks, 3 workers each? -> How did you solve this problem? Cheers Stadtpirat From vbart at nginx.com Mon Aug 26 11:23:33 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 26 Aug 2013 15:23:33 +0400 Subject: Securing nginx: Workers per server block under specific user? In-Reply-To: <1377515672.31001.YahooMailNeo@web140502.mail.bf1.yahoo.com> References: <1377515672.31001.YahooMailNeo@web140502.mail.bf1.yahoo.com> Message-ID: <201308261523.33769.vbart@nginx.com> On Monday 26 August 2013 15:14:32 - - wrote: [...] > That > is really bad because I would have the same security problem as I had > before with cherokee. With one line of php he could read from any "site" > folder (see above). I could tackle that problem by assigning rwx------ > permissions to all files, but then I would probably be busier with > changing file permissions that developing websites ... > Nginx doesn't execute php, so what is the problem then? wbr, Valentin V. Bartenev From akunz at wishmedia.de Mon Aug 26 11:30:26 2013 From: akunz at wishmedia.de (Alexander Kunz - Wishmedia GmbH) Date: Mon, 26 Aug 2013 13:30:26 +0200 Subject: Securing nginx: Workers per server block under specific user? In-Reply-To: <201308261523.33769.vbart@nginx.com> References: <1377515672.31001.YahooMailNeo@web140502.mail.bf1.yahoo.com> <201308261523.33769.vbart@nginx.com> Message-ID: <521B3C52.2030106@wishmedia.de> Am 26.08.2013 13:23, schrieb Valentin V. Bartenev: > On Monday 26 August 2013 15:14:32 - - wrote: > [...] >> That >> is really bad because I would have the same security problem as I had >> before with cherokee. With one line of php he could read from any "site" >> folder (see above). I could tackle that problem by assigning rwx------ >> permissions to all files, but then I would probably be busier with >> changing file permissions that developing websites ... 
>> > > Nginx doesn't execute php, so what is the problem then? > > wbr, Valentin V. Bartenev Try to use php-fpm; there you can define pools with a specific username for each pool. Kind regards Alexander Kunz From ben at indietorrent.org Mon Aug 26 14:37:57 2013 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 26 Aug 2013 10:37:57 -0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: References: <5217C5E8.2030100@indietorrent.org> Message-ID: <521B6845.408@indietorrent.org> Thanks for the suggestion, itpp2012. I tried adding those directives to the batch script that starts php-cgi.exe, but the problem persists. What I find strange is that the problem occurs only when I set peer verification to false: curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); When I set this value to true, I at least receive a response. Maxim, you said: "What URL is requested by your script? Symptoms described suggest you are requesting some php script from the same server, and 504 is likely due to only one php backend process." If this were the root cause, wouldn't the cURL call fail in the same way, regardless of the CURLOPT_SSL_VERIFYPEER value? In other words, it doesn't seem like changing this cURL option would change the number of backend processes required to handle the request(s). But I could be wrong. I'm pretty much out of ideas. Any further troubleshooting tips would be much appreciated. Thanks!
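Alexander's php-fpm suggestion above — one pool per site, each running under its own user — can be sketched as follows. The pool names, users, and socket paths here are assumptions for illustration, not a tested configuration:

```ini
; hypothetical /etc/php5/fpm/pool.d/site1.conf
[site1]
user = www-site1
group = www-site1
listen = /var/run/php-fpm-site1.sock
pm = dynamic
pm.max_children = 5

; hypothetical /etc/php5/fpm/pool.d/site2.conf
[site2]
user = www-site2
group = www-site2
listen = /var/run/php-fpm-site2.sock
pm = dynamic
pm.max_children = 5
```

Each nginx server block would then point its fastcgi_pass at the matching socket, so PHP for site1 runs as www-site1 and, given restrictive directory permissions, cannot read site2's files.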
-Ben On 8/24/2013 5:29 AM, itpp2012 wrote: > set PHP_FCGI_CHILDREN=0 > set PHP_FCGI_MAX_REQUESTS=10000 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242154,242181#msg-242181 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From luky-37 at hotmail.com Mon Aug 26 15:25:51 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 26 Aug 2013 17:25:51 +0200 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: <521B6845.408@indietorrent.org> References: <5217C5E8.2030100@indietorrent.org>, , <521B6845.408@indietorrent.org> Message-ID: Hi! > If this were the root cause, wouldn't the cURL call fail in the same way, > regardless of the CURLOPT_SSL_VERIFYPEER value? In other words, it > doesn't seem like changing this cURL option would change the number of > backend processes required to handle the request(s). But I could be wrong. Yes, there is a difference. CURLOPT_SSL_VERIFYPEER = true probably masks your real problem, because it fails at the SSL level (due to certificate validation failure; after all, that's why you disabled it, right?). So the HTTP request passes only when you disable certificate validation, which is why you see the 504 error only when it's disabled. That doesn't mean there is a problem with curl or SSL. It means there is a problem with your backend. > Any further troubleshooting tips would be much appreciated. Triple-check that your backend can handle multiple requests simultaneously and that your script doesn't somehow create a deadlock (requesting the output of itself). Check the FCGI logs. If that doesn't help, increase the debug levels on nginx and FCGI.
Regards, Lukas From pug+nginx at felsing.net Mon Aug 26 18:19:49 2013 From: pug+nginx at felsing.net (Christian Felsing) Date: Mon, 26 Aug 2013 20:19:49 +0200 Subject: Fake Basic Auth In-Reply-To: References: <5219AA05.8050405@felsing.net> Message-ID: <521B9C45.3060000@felsing.net> Sorry, that does not do what I need: proxy_pass http://myapache:8000; rewrite_by_lua ' ngx.var.remote_user = "user" ngx.var.remote_password = "secret" '; This should fake a 401 login but I get 2013/08/26 20:11:11 [error] 19175#0: *2 lua entry thread aborted: runtime error: [string "rewrite_by_lua"]:2: variable "remote_user" not changeable stack traceback: coroutine 0: [C]: ? [string "rewrite_by_lua"]:2: in function <[string "rewrite_by_lua"]:1>, client: 192.168.100.99, server: localhost, request: "GET /x.php HTTP/1.1", host: "test.example.net" Obviously Nginx does not like any changes to remote_user. In Apache, AuthBasicFake %{SSL_CLIENT_S_DN_CN} %{sha1:passphrase-%{SSL_CLIENT_S_DN_CN}} sets remote_user to the value passed in SSL_CLIENT_S_DN_CN and a fake password generated from SSL_CLIENT_S_DN_CN. I need something similar for nginx. regards Christian On 25.08.2013 09:20, smallfish wrote: > use ngx_lua for 401 auth > example: http://chenxiaoyu.org/2012/02/08/nginx-lua-401-auth.html From francis at daoine.org Mon Aug 26 22:08:37 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 26 Aug 2013 23:08:37 +0100 Subject: Fake Basic Auth In-Reply-To: <5219AA05.8050405@felsing.net> References: <5219AA05.8050405@felsing.net> Message-ID: <20130826220837.GK27161@craic.sysops.org> On Sun, Aug 25, 2013 at 08:53:57AM +0200, Christian Felsing wrote: Hi there, > Nginx should be used as a reverse proxy and configured for client > certificate authentication. Backoffice application supports basic auth only. > Apache 2.4 solution for that kind of problems is "Fake Basic Auth" so > backoffice application gets a remote_user and password generated from > client certificate presented by user.
So, in nginx and http terms, at the point where you "proxy_pass http://backoffice", you also want to "proxy_set_header Authorization" with the correct value. The correct value is "Basic " followed by the base64-encoding of user:pass, where "user" and "pass" are respectively the username and password that you want the backoffice application to see. Presumably you have a method of deriving the username from the client certificate, and you have a method for deriving the password for this username. I'm not aware of a distribution-nginx-config way of doing the base64 encoding. You could try using a part of a third-party module like http://wiki.nginx.org/HttpSetMiscModule, or perhaps you could use one of the language modules to do the conversion. (Or you could write a dedicated module to just do exactly what you want.) Another option, if you have a fixed set of client certificates, could be to use a "map" to hardcode the Authorization header value for each certificate, and then use that variable in the "proxy_set_header" line -- that would not need anything extra from nginx; and, as a bonus, whatever method you have to turn the certificate into a username can be opaque to nginx, so it can be as complicated as you like. f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Aug 26 22:12:15 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 26 Aug 2013 23:12:15 +0100 Subject: How to serve PHP files outside the public folder? 
In-Reply-To: <5d7c9b7913bab9ce7ffa69f4f01ec0ef.NginxMailingListEnglish@forum.nginx.org> References: <5d7c9b7913bab9ce7ffa69f4f01ec0ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130826221215.GL27161@craic.sysops.org> On Sun, Aug 25, 2013 at 03:14:55AM -0400, etrader wrote: > now I want to keep a folder outside the public folder to be served as a > > location /private/ { > /* serving static files from /private/$server_name/ */ > location ~ \.php$ { > /* serving PHP scripts from /private/$server_name/ */ > } > } > > How should set this location to serve the files from outside the public > folder? http://nginx.org/r/location Probably one of "location ^~ /private/"; or else "location ~ /private/*.\php$" before your "location ~ \.php$", should work. f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Aug 26 22:14:10 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 26 Aug 2013 23:14:10 +0100 Subject: How to serve PHP files outside the public folder? In-Reply-To: <20130826221215.GL27161@craic.sysops.org> References: <5d7c9b7913bab9ce7ffa69f4f01ec0ef.NginxMailingListEnglish@forum.nginx.org> <20130826221215.GL27161@craic.sysops.org> Message-ID: <20130826221410.GM27161@craic.sysops.org> On Mon, Aug 26, 2013 at 11:12:15PM +0100, Francis Daly wrote: > Probably one of "location ^~ /private/"; or else "location ~ > /private/*.\php$" before your "location ~ \.php$", should work. That's "^/private/*\.php$", of course. Fat fingers... 
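Putting the two suggestions together, a minimal sketch might look like the following. The alias path and the FastCGI backend address (127.0.0.1:9000) are invented for illustration and would need to match the actual layout:

```nginx
# Sketch only: serve /private/ from a directory outside the public
# root and hand its .php files to FastCGI. The alias path and the
# backend address are assumed values, not from the original posts.
location ^~ /private/ {
    alias /srv/private/$server_name/;

    location ~ \.php$ {
        include fastcgi_params;
        # $request_filename resolves against the alias above
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

The "^~" modifier makes this prefix location win over a server-level "location ~ \.php$", which is the point of the first suggestion.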
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Aug 27 05:11:05 2013 From: nginx-forum at nginx.us (dt0x) Date: Tue, 27 Aug 2013 01:11:05 -0400 Subject: Nginx as Reverse Proxy for Tomcat + SSL In-Reply-To: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> References: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <06344d7cce5d77fe9bef34f3880e2484.NginxMailingListEnglish@forum.nginx.org> Assuming that this happens all on one machine, Tomcat can be set to listen only on localhost e.g. 127.0.0.1:8080 in which case SSL from nginx reverse proxy becomes redundant. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,24126,242227#msg-242227 From pchychi at gmail.com Tue Aug 27 05:22:18 2013 From: pchychi at gmail.com (Payam Chychi) Date: Mon, 26 Aug 2013 22:22:18 -0700 Subject: Nginx as Reverse Proxy for Tomcat + SSL In-Reply-To: References: <7e2ba0da4e5398e2e546d4fb9763c5df.NginxMailingListEnglish@forum.nginx.org> Message-ID: SSL proxy with nginx: copy over the SSL keys from the end site to nginx. Now if you want SSL from nginx, simply serve the connection over HTTPS and sign a cert... What am I missing here? Are you looking for an actual config sample? -- Payam Chychi Network Engineer / Security Specialist On Wednesday, 21 August, 2013 at 4:05 AM, sajan tharayil wrote: > Hi Dounin, > > > > 3) Can I have an SSL from Client to Nginx and another between Nginx and Tomcat . > > Yes. > > How do we do this. I am trying to find a way to do this, either with > Haproxy or Nginx > > > Kind Regards > Sajan > > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hendry at dabase.com Tue Aug 27 08:43:12 2013 From: hendry at dabase.com (Kai Hendry) Date: Tue, 27 Aug 2013 16:43:12 +0800 Subject: VirtualDocumentRoot with 1.4.2 Message-ID: <20130827084312.GA26431@sg.webconverger.com> Hi there, I've tried to replicate my Apache VirtualDocumentRoot /srv/www/%0 to nginx. I have http://dabase.com/e/04055/ with server_name ~^(?<vhost>.*)$; root /srv/www/$vhost; access_log /var/log/nginx/$vhost.access.log; However it's still logging to /var/log/nginx/access.log instead of /var/log/nginx/$vhost.access.log. [root at sg ~]# cd /var/log/nginx/ [root at sg nginx]# inotifywait -r -m . Setting up watches. Beware: since -r was given, this may take a while! Watches established. ./ MODIFY access.log ./ MODIFY access.log `root /srv/www/$vhost;` works, but not `access_log /var/log/nginx/$vhost.access.log;`. What am I missing? Could my configuration upon http://dabase.com/e/04055/ be otherwise improved, without breaking it into individual server blocks? Many thanks, From mdounin at mdounin.ru Tue Aug 27 11:32:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2013 15:32:29 +0400 Subject: VirtualDocumentRoot with 1.4.2 In-Reply-To: <20130827084312.GA26431@sg.webconverger.com> References: <20130827084312.GA26431@sg.webconverger.com> Message-ID: <20130827113229.GS19334@mdounin.ru> Hello! On Tue, Aug 27, 2013 at 04:43:12PM +0800, Kai Hendry wrote: > Hi there, > > I've tried to replicate my Apache VirtualDocumentRoot /srv/www/%0 to > nginx. > > I have http://dabase.com/e/04055/ with > > server_name ~^(?<vhost>.*)$; > root /srv/www/$vhost; > access_log /var/log/nginx/$vhost.access.log; > > However it's still logging to /var/log/nginx/access.log instead of > /var/log/nginx/$vhost.access.log. > > [root at sg ~]# cd /var/log/nginx/ > [root at sg nginx]# inotifywait -r -m . > Setting up watches. Beware: since -r was given, this may take a while! > Watches established.
> ./ MODIFY access.log > ./ MODIFY access.log > > `root /srv/www/$vhost;` works, but not `access_log > /var/log/nginx/$vhost.access.log;`. > > What am I missing? http://nginx.org/r/access_log : The file path can contain variables (0.7.6+), but such logs have some : constraints: : : - the user whose credentials are used by worker processes should have : permissions to create files in a directory with such logs; : : - buffered writes do not work; : : - the file is opened and closed for each log write. However, since the : descriptors of frequently used files can be stored in a cache, writing to the : old file can continue during the time specified by the open_log_file_cache : directive's valid parameter : : - during each log write the existence of the request's root directory is : checked, and if it does not exist the log is not created. It is thus a good : idea to specify both root and access_log on the same level: -- Maxim Dounin http://nginx.org/en/donation.html From hendry at dabase.com Tue Aug 27 12:24:12 2013 From: hendry at dabase.com (Kai Hendry) Date: Tue, 27 Aug 2013 20:24:12 +0800 Subject: VirtualDocumentRoot with 1.4.2 In-Reply-To: <20130827113229.GS19334@mdounin.ru> References: <20130827084312.GA26431@sg.webconverger.com> <20130827113229.GS19334@mdounin.ru> Message-ID: <20130827122412.GA28464@sg.webconverger.com> On Tue, Aug 27, 2013 at 03:32:29PM +0400, Maxim Dounin wrote: > : - the user whose credentials are used by worker processes should have > : permissions to create files in a directory with such logs; /var/log/nginx has +x on http user, so that's fine. > : - buffered writes do not work; Not sure what that means. > : - the file is opened and closed for each log write. However, since the > : descriptors of frequently used files can be stored in a cache, writing to the > : old file can continue during the time specified by the open_log_file_cache > : directive's valid parameter IIUC there can be delay. No problem.
> : - during each log write the existence of the request's root directory is > : checked, and if it does not exist the log is not created. It is thus a good > : idea to specify both root and access_log on the same level: The root is served fine... I don't understand how I fix my problem. Or are you saying there isn't a way to fix it? From francis at daoine.org Tue Aug 27 12:28:42 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 27 Aug 2013 13:28:42 +0100 Subject: VirtualDocumentRoot with 1.4.2 In-Reply-To: <20130827122412.GA28464@sg.webconverger.com> References: <20130827084312.GA26431@sg.webconverger.com> <20130827113229.GS19334@mdounin.ru> <20130827122412.GA28464@sg.webconverger.com> Message-ID: <20130827122842.GN27161@craic.sysops.org> On Tue, Aug 27, 2013 at 08:24:12PM +0800, Kai Hendry wrote: > On Tue, Aug 27, 2013 at 03:32:29PM +0400, Maxim Dounin wrote: Hi there, > > : - the user whose credentials are used by worker processes should have > > : permissions to create files in a directory with such logs; > > /var/log/nginx has +x on http user, so that's fine. +x doesn't allow you to create files in a directory. +w allows you to create files in a directory. > I don't understand how I fix my problem. Does the error log give any indication of the problem? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Aug 27 13:37:06 2013 From: nginx-forum at nginx.us (milordk) Date: Tue, 27 Aug 2013 09:37:06 -0400 Subject: zero size buf in output(Bug?) In-Reply-To: References: Message-ID: [garbled Russian text]
[garbled Russian] zero size buf in output uname -a : FreeBSD srv.sportactions.ru 9.1-RELEASE-p5 FreeBSD 9.1-RELEASE-p5 #0: Sat Jul 27 01:14:23 UTC 2013 root at amd64 builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 nginx -v : nginx version: nginx/1.5.3 nginx -V : built by gcc 4.2.1 20070831 patched [FreeBSD] configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --with-google_perftools_module --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_perl_module --with-http_realip_module --with-http_stub_status_module --with-pcre [garbled Russian; approximately: Spawn-fcgi + Nginx is used; the server stops serving responses at intervals of 5-10 minutes, and at those times these messages appear...] Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231543,242242#msg-242242 From nginx-forum at nginx.us Tue Aug 27 13:43:27 2013 From: nginx-forum at nginx.us (milordk) Date: Tue, 27 Aug 2013 09:43:27 -0400 Subject: zero size buf in output(Bug?) In-Reply-To: References: Message-ID: <776c02b5fbf3e98b02300639dbc57d8f.NginxMailingListEnglish@forum.nginx.org> At the moment: Active Connections is around 700..800, of which 500 are in the Waiting state (according to the
stub) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231543,242245#msg-242245 From mdounin at mdounin.ru Tue Aug 27 14:06:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2013 18:06:04 +0400 Subject: nginx-1.5.4 Message-ID: <20130827140604.GW19334@mdounin.ru> Changes with nginx 1.5.4 27 Aug 2013 *) Change: the "js" extension MIME type has been changed to "application/javascript"; default value of the "charset_types" directive was changed accordingly. *) Change: now the "image_filter" directive with the "size" parameter returns responses with the "application/json" MIME type. *) Feature: the ngx_http_auth_request_module. *) Bugfix: a segmentation fault might occur on start or during reconfiguration if the "try_files" directive was used with an empty parameter. *) Bugfix: memory leak if relative paths were specified using variables in the "root" or "auth_basic_user_file" directives. *) Bugfix: the "valid_referers" directive incorrectly executed regular expressions if a "Referer" header started with "https://". Thanks to Liangbin Li. *) Bugfix: responses might hang if subrequests were used and an SSL handshake error happened during subrequest processing. Thanks to Aviram Cohen. *) Bugfix: in the ngx_http_autoindex_module. *) Bugfix: in the ngx_http_spdy_module. -- Maxim Dounin http://nginx.org/en/donation.html From hendry at dabase.com Tue Aug 27 14:53:45 2013 From: hendry at dabase.com (Kai Hendry) Date: Tue, 27 Aug 2013 22:53:45 +0800 Subject: VirtualDocumentRoot with 1.4.2 In-Reply-To: <20130827122842.GN27161@craic.sysops.org> References: <20130827084312.GA26431@sg.webconverger.com> <20130827113229.GS19334@mdounin.ru> <20130827122412.GA28464@sg.webconverger.com> <20130827122842.GN27161@craic.sysops.org> Message-ID: <20130827145345.GA31693@sg.webconverger.com> On Tue, Aug 27, 2013 at 01:28:42PM +0100, Francis Daly wrote: > +w allows you to create files in a directory. 
Sorry, I meant to say +w. I think I must have got confused with one of my server blocks and not the wildcard. Sorry to trouble you. It seems to be working. :-) From ben+nginx at list-subs.com Tue Aug 27 14:57:03 2013 From: ben+nginx at list-subs.com (Ben) Date: Tue, 27 Aug 2013 15:57:03 +0100 Subject: Help needed NGINX reverse proxy to NODE.JS Message-ID: <521CBE3F.4090008@list-subs.com> Hi, I've tried this with NGINX 1.4.1-1ppa1~precise and node v0.10.17 and just can't get it to work. What I've tried / what happens : (a) Yes, I have tested this directly between client code and node listening on 0.0.0.0 and it works as expected (b) I have tried a multitude of alternative configs... adding a path attribute to my node server config and tweaking proxy_pass http://node_admin accordingly. That doesn't work either. (c) When running via NGINX, I only ever get two alerts, the "WebSocket is supported by your Browser!" string and the "Connection is closed..." string. Nothing happens in between, and nothing on the node console either. (d) Nothing in the NGINX logs either. Help ! ;-) Thanks !
Simple "hello world" style node script :

var WebSocketServer = require('ws').Server
  , wss = new WebSocketServer({host:'localhost',port: 8080});
wss.on('connection', function(ws) {
    ws.on('message', function(message) {
        console.log('received: %s', message);
    });
    ws.send('something');
});

Simple NGINX conf :

upstream node_admin {
    server 127.0.0.1:8080;
    keepalive 64;
}

location /api {
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_pass http://node_admin;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}

Simple client : From mdounin at mdounin.ru Tue Aug 27 16:02:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2013 20:02:03 +0400 Subject: Help needed NGINX reverse proxy to NODE.JS In-Reply-To: <521CBE3F.4090008@list-subs.com> References: <521CBE3F.4090008@list-subs.com> Message-ID: <20130827160203.GD19334@mdounin.ru> Hello! On Tue, Aug 27, 2013 at 03:57:03PM +0100, Ben wrote: > Hi, > > I've tried this with NGINX 1.4.1-1ppa1~precise and node v0.10.17 and > just can't get it to work. > > What I've tried / what happens : > (a) Yes, I have tested this direct between client code and node > listening on 0.0.0.0 and it works as expected > (b) I have tried a multitude of alternative configs... adding a path > addribute to my node server config and tweaking proxy_pass > http://node_admin accordingly. That doesn't work either. > (c) When running via NGINX, I only ever get two alerts, the > "WebSocket is supported by your Browser!" string and the "Connection > is closed..." string. Nothing happens in between, and nothing on > the node console either. > (d) Nothing in the NGINX logs either. > > Help ! ;-) Nothing in nginx logs at all, even in access log? Sounds like a DNS problem...
-- Maxim Dounin http://nginx.org/en/donation.html From ben+nginx at list-subs.com Tue Aug 27 16:49:19 2013 From: ben+nginx at list-subs.com (Ben) Date: Tue, 27 Aug 2013 17:49:19 +0100 Subject: Help needed NGINX reverse proxy to NODE.JS In-Reply-To: <20130827160203.GD19334@mdounin.ru> References: <521CBE3F.4090008@list-subs.com> <20130827160203.GD19334@mdounin.ru> Message-ID: <521CD88F.9090307@list-subs.com> Nothing at all ... I promise you ! (I've been tail -f'ing the logs) , and I can promise you its not a DNS problem because I can see the NGINX default website on port 80 ;-) I found a workaround setting "proxy_buffering off;" in nginx makes it work again. Don't know if this is the way it's supposed to be and/or a recommended way to do things ?? > > Nothing in nginx logs at all, even in access log? Sounds like a > DNS problem... > From reallfqq-nginx at yahoo.fr Tue Aug 27 16:48:57 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 27 Aug 2013 12:48:57 -0400 Subject: zero size buf in output(Bug?) In-Reply-To: <776c02b5fbf3e98b02300639dbc57d8f.NginxMailingListEnglish@forum.nginx.org> References: <776c02b5fbf3e98b02300639dbc57d8f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, Isn't there a dedicated nginx-ru mailing list? :o) ? --- *B. R.* ** -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Aug 27 17:14:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2013 21:14:59 +0400 Subject: Help needed NGINX reverse proxy to NODE.JS In-Reply-To: <521CD88F.9090307@list-subs.com> References: <521CBE3F.4090008@list-subs.com> <20130827160203.GD19334@mdounin.ru> <521CD88F.9090307@list-subs.com> Message-ID: <20130827171458.GI19334@mdounin.ru> Hello! On Tue, Aug 27, 2013 at 05:49:19PM +0100, Ben wrote: > Nothing at all ... I promise you ! 
(I've been tail -f'ing the logs) > , and I can promise you its not a DNS problem because I can see the > NGINX default website on port 80 ;-) The location with websocket proxy you are testing is on port 80 too. Do you see other requests to the host in access log? > I found a workaround setting "proxy_buffering off;" in nginx makes > it work again. Don't know if this is the way it's supposed to be > and/or a recommended way to do things ?? The fact that it helps indicates that the connection is ok (that is, you should see it in access log once it completes - unless you disabled access logging or are looking into the wrong logs), but isn't considered to be upgraded to a websocket protocol for some reason. Most obvious reason I can think of is an old version of nginx actually running, the one without websocket proxy support (something before 1.3.13 instead of 1.4.1 you claim in your initial message). It's trivial to check by looking into Server header line returned. If still no luck, try configuring a debug log and obtaining one while trying a websocket request, see http://nginx.org/en/docs/debugging_log.html for details. -- Maxim Dounin http://nginx.org/en/donation.html From r1ch+nginx at teamliquid.net Tue Aug 27 21:12:00 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 27 Aug 2013 17:12:00 -0400 Subject: zero size buf in output(Bug?) In-Reply-To: References: <776c02b5fbf3e98b02300639dbc57d8f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I also just saw this today: 2013/08/28 06:05:36 [alert] 26208#0: *919486194 zero size buf in output t:1 r:0 f:0 000000000264D7A5 000000000264D7A5-000000000264D7A5 0000000000000000 0-0 while sending request to upstream Looking through logs, there are several similar lines, all to the same URL (fastcgi/PHP) with a HEAD request. On Tue, Aug 27, 2013 at 12:48 PM, B.R. wrote: > Hello, > > Isn't there a dedicated nginx-ru mailing list? :o) > ? > --- > *B.
R.* > ** > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Aug 28 12:28:10 2013 From: nginx-forum at nginx.us (bernado) Date: Wed, 28 Aug 2013 08:28:10 -0400 Subject: 502-Error but nothing in error.log (anymore) Message-ID: <300f117ac8a32a570bf8fcb66512c279.NginxMailingListEnglish@forum.nginx.org> Hi there, I'm using nginx on my arm5 to run a seafile-server and ran into a couple of problems just now. When I open a page that embeds a picture from a certain folder the picture won't show. When I copy the picture's URL and open that URL manually, I get the error: "An error occurred. Sorry, the page you are looking for is currently unavailable. Please try again later ..." As I've understood, that's a fallback template for all server-related errors. But when I open the error.log, it shows me nothing. Only a notice that some pid has been created, or similar. I haven't set a specific path to a log file in the nginx.conf, so it should log to the default location, which in my case is /var/log/nginx/error.log, and it used to do that back when I had some 403 permission errors with wrong ownership of files. So it's not the configuration's fault. It seems that it's just that particular kind of error that doesn't get properly logged. The access.log in that same folder gets filled with access logs; upon opening the image it responds with 502. Any help on how I could debug this further?
thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242295,242295#msg-242295 From ben at indietorrent.org Wed Aug 28 13:44:48 2013 From: ben at indietorrent.org (Ben Johnson) Date: Wed, 28 Aug 2013 09:44:48 -0400 Subject: 504 Gateway Time-out when calling curl_exec() in PHP with SSL peer verification (CURLOPT_SSL_VERIFYPEER) off In-Reply-To: References: <5217C5E8.2030100@indietorrent.org>, , <521B6845.408@indietorrent.org> Message-ID: <521DFED0.9020404@indietorrent.org> On 8/26/2013 11:25 AM, Lukas Tribus wrote: > Hi! > > >> If this were the root cause, wouldn't the cURL call fail in the way way, >> regardless of the CURLOPT_SSL_VERIFYPEER value? In other words, it >> doesn't seem like changing this cURL option would change the number of >> backend processes required to handle the request(s). But I could be wrong. > > Yes, it there is a difference. CURLOPT_SSL_VERIFYPEER = true probably masks > your real problem, because it fails at SSL level (due to certificate > validation failure; after all, thats why you disabled it, right?). That's correct! > So the HTTP request passes only when you disable certificate validation, > which is way you see the 504 error only when its disabled. That doesn't > mean there is a problem with curl or SSL. It means there is a problem > with your backend. > Okay; that makes sense. > > >> Any further troubleshooting tips would be much appreciated. > > Triple check that your backend can handle multiple requests simultanously > and that your script doesn't somehow create a deadlook (requesting the > output of itself). > Is there a prescribed mechanism for the former (ensuring that the backend can handle multiple requests simultaneously)? Or should I simply write a script that, for example, uses a combination of "while" and "sleep()" to force a lengthy execution time while outputting some type of progress to indicate that each instance of the script is "alive"? > Check FCGI logs.
If that doesn't help, increment the debug levels on nginx > and FCGI. > By FCGI logs, you mean the PHP logs, correct? Unfortunately, they reveal nothing, even at maximum verbosity. I'll try increasing nginx's logging verbosity, though. > > > > Regards, > > Lukas Thanks for your helpful insights here, Lukas! -Ben From aflexzor at gmail.com Wed Aug 28 18:03:48 2013 From: aflexzor at gmail.com (Alex Flex) Date: Wed, 28 Aug 2013 12:03:48 -0600 Subject: Add country region in logs. Message-ID: <521E3B84.7060407@gmail.com> Hello nginx! I am trying to identify all my visitors based on their continent region, and I want to log this info in each log entry. There is no geoip database for continent region so I want to manually map the countries to their regions. I already have the GeoIP db loaded, and an example map for a region as follows:

map $geoip_country_code $north_africa {
    default "";
    ##Northern Africa
    DZ 1; #"Algeria"
    EG 1; #"Egypt"
    EH 1; #"Western Sahara"
    LY 1; #"Libyan Arab Jamahiriya"
    MA 1; #"Morocco"
    SD 1; #"Sudan"
    TN 1; #"Tunisia"
}

Would anybody kindly offer me an example of how to generate a variable I can use in the logging that would print "NorthAfrica" if one of the countries in the map matched? Thanks Alex From pug+nginx at felsing.net Wed Aug 28 18:18:36 2013 From: pug+nginx at felsing.net (Christian Felsing) Date: Wed, 28 Aug 2013 20:18:36 +0200 Subject: Fake Basic Auth In-Reply-To: <20130826220837.GK27161@craic.sysops.org> References: <5219AA05.8050405@felsing.net> <20130826220837.GK27161@craic.sysops.org> Message-ID: <521E3EFC.90205@felsing.net> Thank you for your hint, which solved this problem with a little bit of Lua code. When the Lua code (security) tests are finished I will publish this code. cheers Christian On 27.08.2013 00:08, Francis Daly wrote: > The correct value is "Basic " followed by the base64-encoding of > user:pass, where "user" and "pass" are respectively the username and > password that you want the backoffice application to see.
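The header value quoted above can also be precomputed and wired in with a map, roughly as in the following sketch. The certificate DN, the credentials, and the upstream name are made-up placeholders; "Basic YWxpY2U6czNjcmV0" is the base64 encoding of "alice:s3cret", produced offline with e.g. printf 'alice:s3cret' | base64.

```nginx
# Sketch only: hardcode one Authorization value per known client
# certificate, as suggested earlier in this thread. The DN, the
# credentials, and the "backoffice" upstream are invented placeholders.
map $ssl_client_s_dn $backoffice_auth {
    default                 "";
    "/CN=alice.example.net" "Basic YWxpY2U6czNjcmV0";
}

server {
    listen 443 ssl;
    ssl_verify_client on;

    location / {
        # pass the precomputed Basic credentials to the backend
        proxy_set_header Authorization $backoffice_auth;
        proxy_pass http://backoffice;
    }
}
```

With a fixed set of client certificates this keeps the certificate-to-credentials mapping entirely inside the nginx config, at the cost of editing the map whenever a certificate is added.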
From francis at daoine.org Thu Aug 29 00:29:20 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Aug 2013 01:29:20 +0100 Subject: Add country region in logs. In-Reply-To: <521E3B84.7060407@gmail.com> References: <521E3B84.7060407@gmail.com> Message-ID: <20130829002920.GO27161@craic.sysops.org> On Wed, Aug 28, 2013 at 12:03:48PM -0600, Alex Flex wrote: Hi there, I haven't tested this, but... > I already have the GeoIP db loaded, and an example map for a region as > follows: > > map $geoip_country_code $north_africa { > default ""; > ##Northern Africa > DZ 1; #"Algeria" http://nginx.org/r/map The "value" does not have to just be "1". It could be, for example, "NorthAfrica". > Would anybody kindly offer me an example of how to generate a variable I > can use in the logging that would print "NorthAfrica" if one of the > countries in the map matched? $north_africa. But it might make more sense to call it $continent_region, and let it take values for the whole world. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Aug 29 09:01:01 2013 From: nginx-forum at nginx.us (christospap) Date: Thu, 29 Aug 2013 05:01:01 -0400 Subject: Server_Name regular expression Message-ID: I would like to write a regular expression that will match two server names. The server names are example.com and www.example.com. In particular, a regular expression for www or nothing. I wrote the following regular expression: (www\.|)example.com. Nginx configuration files support Perl-compatible regular expressions. While it should work, it doesn't seem to work. I cannot find the proper regular expression. Does anyone have an idea how to write the regular expression?
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242321,242321#msg-242321 From mdounin at mdounin.ru Thu Aug 29 09:37:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Aug 2013 13:37:52 +0400 Subject: Server_Name regular expression In-Reply-To: References: Message-ID: <20130829093752.GB22852@mdounin.ru> Hello! On Thu, Aug 29, 2013 at 05:01:01AM -0400, christospap wrote: > I would like to syntax a regular expression which will match two server > names. The server names are example.com and www.example.com. In particular, > a regular expression for www or nothing > > I wrote the following regular expression (www\.|)example.com Nginx > configuration file is compatible with Perl programming language. While it > should work, it doesn't seem to work. I cannot find the proper regular > expression. Does anyone have an idea how to synatx the regular expression? How to use regular expressions in server_name directive is documented here: http://nginx.org/r/server_name Note that regular expressions must be preceded with "~" to distinguish them from normal names. Syntax of regular expressions per se is documented in the PCRE library manual pages, see "man pcresyntax" for quick syntax reference. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Aug 29 14:42:37 2013 From: nginx-forum at nginx.us (spacecwoboy) Date: Thu, 29 Aug 2013 10:42:37 -0400 Subject: Check if variable exists in file, Using file contents for variable handling Message-ID: Is there a way to check a variable against file contents for processing? A couple scenarios below.
This is used here, but adding multiple agents can get burdensome: if ($http_user_agent ~ (agent1|agent2|Foo|Wget|Nmap|BadAgent) ) { return 403; } I'd like to maintain a file with all the variables (and custom-script the addition/removal of file entries) like this: if ($http_user_agent ~ (in.file(../../badAgents.txt) ) { return 403; } Or using file references for Allow/Deny: Allow ../../whitelist.txt Deny ../../badHosts.txt Or checking usernames against a whitelist/blacklist: if ( $arg.Username does.not.exist.in(../../allowedUsers.txt) ) if ( $arg.Username exists.in(../../blockedUsers.txt) ) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242333,242333#msg-242333 From yaoweibin at gmail.com Thu Aug 29 15:02:43 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Thu, 29 Aug 2013 23:02:43 +0800 Subject: [ANNOUNCE] Tengine-1.5.1 is released Message-ID: Hi folks, We are pleased to announce that Tengine-1.5.1 (stable version) has been released! You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tar ball directly: http://tengine.taobao.org/download/tengine-1.5.1.tar.gz This release fixes the bugs we found on the Tengine-1.5 branch. We are now shifting to the development of Tengine-2.0, in which we will introduce more advanced features, performance improvements, and security enhancements. Stay tuned :) The full change log follows below: *) Feature: added the directive 'retry_cached_connection' which could disable unconditional retries with a cached backend connection. (yaoweibin) *) Feature: added the argument of 'ncpu' to 'sysguard_load' directive. (yzprofile) *) Bugfix: fixed a bug in referer module that regex rules might be invalid with https requests. (lilbedwin) *) Bugfix: fixed a bug that the trim module might send a zero-size buffer. (taoyuanyuan) *) Bugfix: fixed a compile error when using the configure option '--without-dso'. (zhuzhaoyuan) *) Bugfix: fixed two compile warnings.
(zzjin, diwayou) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! Regards, -- Weibin Yao Developer @ Server Platform Team of Taobao From kworthington at gmail.com Thu Aug 29 16:02:20 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 29 Aug 2013 12:02:20 -0400 Subject: [nginx-announce] nginx-1.5.4 In-Reply-To: <20130827140612.GX19334@mdounin.ru> References: <20130827140612.GX19334@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.4 for Windows http://goo.gl/7UA8XZ (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Aug 27, 2013 at 10:06 AM, Maxim Dounin wrote: > Changes with nginx 1.5.4 27 Aug > 2013 > > *) Change: the "js" extension MIME type has been changed to > "application/javascript"; default value of the "charset_types" > directive was changed accordingly. > > *) Change: now the "image_filter" directive with the "size" parameter > returns responses with the "application/json" MIME type. > > *) Feature: the ngx_http_auth_request_module. > > *) Bugfix: a segmentation fault might occur on start or during > reconfiguration if the "try_files" directive was used with an empty > parameter. > > *) Bugfix: memory leak if relative paths were specified using variables > in the "root" or "auth_basic_user_file" directives. > > *) Bugfix: the "valid_referers" directive incorrectly executed regular > expressions if a "Referer" header started with "https://". > Thanks to Liangbin Li. 
> > *) Bugfix: responses might hang if subrequests were used and an SSL > handshake error happened during subrequest processing. > Thanks to Aviram Cohen. > > *) Bugfix: in the ngx_http_autoindex_module. > > *) Bugfix: in the ngx_http_spdy_module. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Aug 29 17:42:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Aug 2013 21:42:17 +0400 Subject: Check if variable exists in file, Using file contents for variable handling In-Reply-To: References: Message-ID: <20130829174216.GK22852@mdounin.ru> Hello! On Thu, Aug 29, 2013 at 10:42:37AM -0400, spacecwoboy wrote: > Is there a way to check a variable against file contents for processing? A > couple scenarios below. > > This is used here, but adding multiple agents can get burdensome: > if ($http_user_agent ~ (agent1|agent2|Foo|Wget|Nmap|BadAgent) ) { > return 403; > } > > I'd like to maintain a file with all the variables, (and custom script the > addition/removal of file entries) like this: > if ($http_user_agent ~ (in.file(../../badAgents.txt) ) { > return 403; > } > > > Or using file references for Allow/Deny: > Allow ../../whitelist.txt > Deny ../../badHosts.txt > > > Or Checking usernames against a whitelist/blacklist: > if ( $arg.Username does.not.exist.in(../../allowedUsers.txt) ) > if ( $arg.Username exists.in(../../blockedUsers.txt) ) The map module is probably what you are looking for, see here: http://nginx.org/en/docs/http/ngx_http_map_module.html Additionally, the "include" directive may be useful, see http://nginx.org/r/include. 
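For reference, the map-plus-include approach suggested above can be sketched as below. This is an illustration only; the file name and path `/etc/nginx/bad_agents.conf` are hypothetical, and the entries inside the included file must use map syntax (pattern on the left, value on the right), not bare agent strings:

```nginx
http {
    # Map the user agent to a flag. The entries can live in a separate
    # file pulled in via "include", so an external script can maintain
    # the list without touching the main configuration.
    map $http_user_agent $bad_agent {
        default  0;
        ~Wget    1;
        ~Nmap    1;
        # hypothetical file, one "~Pattern 1;" entry per line
        include  /etc/nginx/bad_agents.conf;
    }

    server {
        listen 80;
        location / {
            if ($bad_agent) {
                return 403;
            }
        }
    }
}
```

The same "include" mechanism covers the allow/deny scenario: a `location` can contain `include /etc/nginx/allowed_hosts.conf;` (again a hypothetical path), where the included file holds plain `allow`/`deny` directives regenerated by whatever script maintains the lists.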
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Aug 29 20:29:45 2013 From: nginx-forum at nginx.us (endo) Date: Thu, 29 Aug 2013 16:29:45 -0400 Subject: Strange SPDY behaviour about request time with static proxy cached content. Message-ID: Good time of day! We use nginx as a load balancer and reverse proxy for some static content (images etc). A problem appeared when we enabled SPDY: with SPDY enabled, cached content is served with a much greater request time and, I think, more slowly (according to the Chrome debug console). Here are some lines from access.log. Without SPDY, just SSL: request_time: "0.069" upstream_response_time: "0.069" "MISS" request_time: "0.370" upstream_response_time: "0.211" "MISS" request_time: "1.294" upstream_response_time: "1.200" "MISS" and from cache: request_time: "0.778" upstream_response_time: "-" "HIT" request_time: "0.938" upstream_response_time: "-" "HIT" SPDY enabled: request_time: "0.380" upstream_response_time: "0.120" "MISS" request_time: "1.181" upstream_response_time: "0.737" "MISS" Up to this point everything looks fine, but now for content from the cache: request_time: "10.389" upstream_response_time: "-" "HIT" request_time: "9.493" upstream_response_time: "-" "HIT" Here is a graphical illustration of the problem: SPDY was enabled for some time in the middle, but not during the periods on either side. It shows the average of the $request_time and $upstream_response_time variables every second: sum(time)/sum(requestcount) per second. The nginx cache is placed on a memory disk, /dev/md0, currently at 60% capacity. Env: FreeBSD amd64 nginx version: nginx/1.4.2 built by gcc 4.2.2 20070831 prerelease [FreeBSD] TLS SNI support enabled openssl-1.0.1e Any suggestions, thoughts, ideas? Thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242350,242350#msg-242350 From nginx-forum at nginx.us Thu Aug 29 20:39:34 2013 From: nginx-forum at nginx.us (endo) Date: Thu, 29 Aug 2013 16:39:34 -0400 Subject: Strange SPDY behaviour about request time with static proxy cached content. In-Reply-To: References: Message-ID: <1eecfc51ad072fdc108dd9c1daf26f19.NginxMailingListEnglish@forum.nginx.org> Sorry, I forgot the link to the image visualizing the problem: http://i59.fastpic.ru/big/2013/0830/87/689d2b6f84dfb88fe5a57b7ad60def87.png Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242350,242351#msg-242351 From contact at jpluscplusm.com Fri Aug 30 08:54:22 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 30 Aug 2013 09:54:22 +0100 Subject: Server_Name regular expression In-Reply-To: References: Message-ID: On 29 Aug 2013 10:01, "christospap" wrote: > > I would like to syntax a regular expression which will match two server > names. The server names are example.com and www.example.com. In particular, > a regular expression for www or nothing > > I wrote the following regular expression (www\.|)example.com Irrespective of your use of this in an nginx config, you have made a mistake constructing the regex. It's such a simple one that I feel relatively sure that you're a newcomer to such things, hence I would suggest you find a regex primer (book|website) and study it carefully. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From xkyanh at gmail.com Fri Aug 30 09:03:31 2013 From: xkyanh at gmail.com (Anh K. Huynh) Date: Fri, 30 Aug 2013 16:03:31 +0700 Subject: what is simplest way to convert Request-URI to lowercase? Message-ID: <20130830160331.54e97d15@icy.bar> Hello, We are using nginx heavily. We need to rewrite all request URIs into lowercase, e.g, http://foo.bar/ThiS_will_be_Rewritten/?q=Foobar will be translated into http://foo.bar/this_will_be_rewritten/?q=foobar I know some modules (Perl, Lua) can do this. My question is what the simplest way (module) to do that, because Perl/Lua seems to be overkill here ;) Thank you very much. -- I am ... 5.5 dog years old. 
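For reference, if the embedded perl module is available, a one-line `perl_set` handler is close to the simplest in-nginx option for lowercasing URIs. This is an untested sketch, assuming nginx was built with `--with-http_perl_module`; `$r->unparsed_uri` includes the query string, so `?q=Foobar` is lowercased along with the path:

```nginx
http {
    # Define a variable holding the lowercased request URI
    # (path plus query string).
    perl_set $request_uri_lc 'sub {
        my $r = shift;
        return lc($r->unparsed_uri);
    }';

    server {
        # Redirect only when the request actually contains an
        # uppercase letter, so already-lowercase requests are
        # served directly and no redirect loop can occur.
        if ($request_uri ~ "[A-Z]") {
            return 301 $request_uri_lc;
        }
    }
}
```

Whether an external 301 redirect (an extra round trip per mixed-case request) is acceptable, as opposed to an internal rewrite via Lua, depends on why the rewrite is needed.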
From jens.rantil at telavox.se Fri Aug 30 10:34:53 2013 From: jens.rantil at telavox.se (Jens Rantil) Date: Fri, 30 Aug 2013 10:34:53 +0000 Subject: Re: Server_Name regular expression In-Reply-To: References: Message-ID: <7b039ef178184c589dface21826c560c@AMSPR07MB132.eurprd07.prod.outlook.com> Hi, Maybe http://regex101.com/ could help? Cheers, Jens From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On behalf of Jonathan Matthews Sent: 30 August 2013 10:54 To: nginx at nginx.org Subject: Re: Server_Name regular expression On 29 Aug 2013 10:01, "christospap" > wrote: > > I would like to syntax a regular expression which will match two server > names. The server names are example.com and www.example.com. In particular, > a regular expression for www or nothing > > I wrote the following regular expression (www\.|)example.com Irrespective of your use of this in an nginx config, you have made a mistake constructing the regex. It's such a simple one that I feel relatively sure that you're a newcomer to such things, hence I would suggest you find a regex primer (book|website) and study it carefully. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Aug 31 12:55:34 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 31 Aug 2013 13:55:34 +0100 Subject: what is simplest way to convert Request-URI to lowercase? In-Reply-To: <20130830160331.54e97d15@icy.bar> References: <20130830160331.54e97d15@icy.bar> Message-ID: <20130831125534.GR27161@craic.sysops.org> On Fri, Aug 30, 2013 at 04:03:31PM +0700, Anh K. Huynh wrote: Hi there, > We are using nginx heavily. We need to rewrite all request URIs > into lowercase, e.g, > > http://foo.bar/ThiS_will_be_Rewritten/?q=Foobar > > will be translated into > > http://foo.bar/this_will_be_rewritten/?q=foobar Why? Depending on the answer, it may be more appropriate for your back-end processor to do the conversion instead of nginx. 
(Note: this isn't "please justify your needs"; this is "have you considered the possible alternatives".) If it turns out that nginx is the correct place to do this conversion, then... > I know some modules (Perl, Lua) can do this. My question is > what the simplest way (module) to do that, because Perl/Lua > seems to be overkill here ;) The simplest module is the one that you (arrange to) write that does exactly what you want and no more. There doesn't appear to be a default-distributed module that does that. You may be able to use or adapt the module named "Lower Upper Case" listed at http://wiki.nginx.org/3rdPartyModules to do what you want. There are the general-purpose embedded language modules that you could use -- you have to decide whether the run-time "overkill" overhead is more important than your build-time overhead to get the dedicated module written. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Aug 31 12:59:51 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 31 Aug 2013 13:59:51 +0100 Subject: Fake Basic Auth In-Reply-To: <521E3EFC.90205@felsing.net> References: <5219AA05.8050405@felsing.net> <20130826220837.GK27161@craic.sysops.org> <521E3EFC.90205@felsing.net> Message-ID: <20130831125951.GS27161@craic.sysops.org> On Wed, Aug 28, 2013 at 08:18:36PM +0200, Christian Felsing wrote: Hi there, > Thank you for your hint, which solved this problem with a little bit LUA > code. If LUA code (security) tests are finished I will publish this code. Good to know that you found a solution. I guess that the outline should work in most similar cases; but the details will likely differ each time. All the best, f -- Francis Daly francis at daoine.org From xsanch at gmail.com Sat Aug 31 21:46:52 2013 From: xsanch at gmail.com (Jorge Sanchez) Date: Sat, 31 Aug 2013 16:46:52 -0500 Subject: NGINX perl module to serve files Message-ID: Hello, I have created a perl NGINX module to serve static files on NGINX (mainly images). 
For security reasons I am generating an AES-CBC encrypted URL which I am decrypting in NGINX and serving the file via the NGINX perl module. The problem is that I sometimes get the response below, with the HTTP response code set to 000: XX.XX.XX.XX - - [01/Sep/2013:01:20:37 +0400] "GET /media/u5OU/NRkImrrwH/TThHe7hns5bOEv+Aou2/VJ8YD/ts= HTTP/1.1" *000* 39078 " http://XXXX/full/JcbyEJTb8nMh+YH0xSg1jgl4N7vWQi2xBPep7VcJmD8=" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Gecko/20100101 Firefox/23.0" The way I handle the URL in the perl module is: In case the file is found: $r->sendfile($fileresult[0]); $r->flush(); return OK; else: $r->status(404); return DECLINED; My question is whether I am sending the files correctly, or whether there is any other specific value I should send back from perl (besides returning OK). If needed I can send the nginx.conf. Thanks for your help. Regards, Jorge -------------- next part -------------- An HTML attachment was scrubbed... URL:
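For reference, a sketch of how such a handler is usually structured with the stock ngx_http_perl_module. All names here (`ServeMedia`, `resolve_path`) are hypothetical; the point is the not-found path: returning `DECLINED` after `$r->status(404)` tells nginx the handler refuses the request rather than producing a 404, so returning the `HTTP_NOT_FOUND` constant directly is the more conventional choice:

```perl
package ServeMedia;          # hypothetical module name

use nginx;                   # exports OK, DECLINED, HTTP_NOT_FOUND, ...
use strict;
use warnings;

sub handler {
    my $r = shift;

    # Decrypt/resolve the request URI into a filesystem path
    # (the AES-CBC step from the original post is elided here).
    my $path = resolve_path($r->uri);    # hypothetical helper

    if (defined $path && -f $path) {
        $r->send_http_header("image/jpeg");   # assumption: content type known
        return OK if $r->header_only;
        $r->sendfile($path);
        return OK;
    }

    # Instead of "$r->status(404); return DECLINED;", hand the
    # status straight back to nginx:
    return HTTP_NOT_FOUND;
}

1;
```

This is untested against the original poster's setup; a `000` in the status log field can also mean the client closed the connection before any response was sent, which is worth ruling out separately.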