From gfrankliu at gmail.com Tue Dec 1 08:49:50 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 1 Dec 2015 00:49:50 -0800 Subject: bind failed Message-ID: Hi, I was doing some tests today and have created a single test virtual host with listen 8181; and nginx runs fine (1.9.7). Now if I change the listen to only one interface ip: listen 192.168.10.10:8181 configtest shows fine but reload gives "bind failed" in the error log. Is this normal? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 1 13:00:59 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Dec 2015 16:00:59 +0300 Subject: bind failed In-Reply-To: References: Message-ID: <20151201130059.GM74233@mdounin.ru> Hello! On Tue, Dec 01, 2015 at 12:49:50AM -0800, Frank Liu wrote: > Hi, > > I was doing some tests today and have created a single test virtual host > with > listen 8181; > and nginx runs fine (1.9.7). Now if I change the listen to only one > interface ip: > listen 192.168.10.10:8181 > configtest shows fine but reload gives "bind failed" in the error log. > Is this normal? Short answer: yes, if you are using Linux. Long answer: Linux doesn't allow a listening socket on INADDR_ANY and one on a specific IP address to coexist on the same port, for "security" reasons. And this is exactly what happens when you try to reload the configuration - nginx still has an open listening socket on *:8181 and tries to open another one on 192.168.10.10:8181. As a result, the bind() system call fails due to the Linux limitation, and that's what you see in the error log. To switch from listening on * to an IP address on Linux you'll have to restart nginx; a reload won't work. The same procedure works fine on other OSes without such artificial limitations (e.g., FreeBSD). 
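The limitation is easy to reproduce outside nginx with two plain sockets. A minimal Python sketch (the OS picks the port rather than using 8181 from the report, and 127.0.0.1 stands in for the interface address; SO_REUSEADDR mirrors what nginx sets on its listening sockets):

```python
import errno
import socket

# First socket: wildcard listener, analogous to "listen 8181;".
wild = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wild.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
wild.bind(("0.0.0.0", 0))        # port 0: let the OS pick a free port
port = wild.getsockname()[1]
wild.listen(5)

# Second socket: a specific address on the same port, analogous to the
# "listen 192.168.10.10:8181;" that nginx tries to bind during reload.
spec = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
spec.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
    spec.bind(("127.0.0.1", port))
    result = "bind succeeded"    # e.g. FreeBSD
except OSError as exc:           # Linux refuses with EADDRINUSE
    assert exc.errno == errno.EADDRINUSE
    result = "bind failed"

print(result)
```

Closing the wildcard socket first (which is what a full restart does, and a reload does not) lets the specific bind succeed on Linux as well.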
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Dec 1 16:18:00 2015 From: nginx-forum at nginx.us (Ortal) Date: Tue, 01 Dec 2015 11:18:00 -0500 Subject: ETag from S3 did not match computed MD5 Message-ID: Hello, I am new to NGINX. I am using the boto service with NGINX. When I send a PUT request, I get a response with an ETag that does not match the MD5. My question is: what could be the reason for that? Thanks, Ortal Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263186,263186#msg-263186 From ytlec2014 at gmail.com Tue Dec 1 16:52:06 2015 From: ytlec2014 at gmail.com (Smart Goldman) Date: Wed, 2 Dec 2015 01:52:06 +0900 Subject: PHP and CGI on UserDir In-Reply-To: <20151129111019.GV3351@daoine.org> References: <20151129111019.GV3351@daoine.org> Message-ID: Hi Francis Daly and Aleksandar Lazic, I am sorry I am late. 2015-11-29 20:10 GMT+09:00 Francis Daly : > On Sun, Nov 29, 2015 at 05:04:50PM +0900, Smart Goldman wrote: > > Hi there, >> I try to enable PHP and CGI(Perl) on UserDir (/home/user/public_html) with >> nginx. >> But on my Chrome, PHP script is downloaded and CGI script shows me "404 Not >> Found" page. > > In nginx, one request is handled in one location. > > http://nginx.org/r/location describes how the one location is chosen > for a particular request. > > You have: > >> location / { >> location ~ ^/~(.+?)(/.*)?$ { >> location = /50x.html { >> location ~ (^~)*\.php$ { >> location ~ (^~)*\.pl|cgi$ { >> location ~ .*~.*\.php$ { >> location ~ .*~.*\.pl|cgi$ { > > According to the description, the requests /~user/index.cgi and > /~user/index.php are both handled in the second location there, > which says: > >> location ~ ^/~(.+?)(/.*)?$ { >> alias /home/$1/public_html$2; >> index index.html index.htm; >> autoindex on; > > which says "serve the file /home/user/public_html/index.cgi (or index.php) > from the filesystem, with no further processing". 
And that is what you > see -- one file does not exist, do you get 404; the other file does exist, > so you get it. > > To make things work, you will need to arrange your location{} blocks > so that the one that you want nginx to use to process a request, is the > one that nginx does choose to process a request. > > And then make sure that you know what mapping you want nginx to use for > *this* request should be handled by processing *this* file through *that* > fastcgi server (or whatever is appropriate). > > Good luck with it, Thank you for great help! I did not understand /~user/index.cgi and /~user/index.php are handled on "location ~ ^/~(.+?)(/.*)?$ {". I remade /etc/nginx/conf.d/default.conf as the following: server { listen 80; server_name localhost; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /var/www/html; index index.html index.htm; } location ~ ^/~(.+?)(/.*)?\.(php)$ { alias /home/$1/public_html$2.$3; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~ ^/~(.+?)(/.*)?\.(pl|cgi)$ { alias /home/$1/public_html$2.$3; fastcgi_pass 127.0.0.1:8999; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~ ^/~(.+?)(/.*)?$ { alias /home/$1/public_html$2; index index.html index.htm; autoindex on; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param 
SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} location ~ \.php$ { root /var/www/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~ \.pl|cgi$ { root /var/www/html; fastcgi_pass 127.0.0.1:8999; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } After that, I restarted nginx. Looks like PHP (/~user/index.php) finally works! But CGI (/~user/index.cgi) says "Error: No such CGI app - /home/user/public_html/index.cgi/~user/index.cgi may not exist or is not executable by this process." though I don't know why. 2015-12-01 1:47 GMT+09:00 Aleksandar Lazic : > Hi. > > Am 29-11-2015 12:02, schrieb Smart Goldman: >> >> Hi, thank you for great help, Aleksandar Lazic. >> I tried it. > > > How looks now your config? I remade as above config. >> PHP script shows me "File not found." and outputs the following log: >> 2015/11/29 05:50:15 [error] 5048#0: *6 FastCGI sent in stderr: "Primary >> script unknown" while reading response header from upstream, client: >> 119.105.136.26, server: localhost, request: "GET /~user/index.php >> HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000 [2]", host: >> "host.domain.com [4]" >> >> - I do not know how to fix it... >> >> CGI script shows me "Error: No such CGI app - >> /home//public_html/~user/index.cgi may not exist or is not executable by >> this process." and outputs nothing to error.log. >> >> - /home//public_html/~user/... I think this path is wrong and I tried to >> fix this path but I could not. /home/user/public_html/ should be correct >> path.. > > > Please run the debug log to see more. 
> > http://nginx.org/en/docs/debugging_log.html > > Due to the fact that I don't know if you use the centos packes or the nginx > package I suggest to install the following packages > > http://nginx.org/en/linux_packages.html#mainline > > and the nginx-debug and run the debug instance with the suggested settings > in > > http://nginx.org/en/docs/debugging_log.html I installed nginx with rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm and yum -y install nginx. Do I need to reinstall nginx for the debug? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Dec 1 20:50:05 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Dec 2015 20:50:05 +0000 Subject: PHP and CGI on UserDir In-Reply-To: References: <20151129111019.GV3351@daoine.org> Message-ID: <20151201205005.GW3351@daoine.org> On Wed, Dec 02, 2015 at 01:52:06AM +0900, Smart Goldman wrote: > 2015-11-29 20:10 GMT+09:00 Francis Daly : > > On Sun, Nov 29, 2015 at 05:04:50PM +0900, Smart Goldman wrote: Hi there, > location ~ ^/~(.+?)(/.*)?\.(php)$ { > alias /home/$1/public_html$2.$3; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; Delete that line - it does nothing useful here. (It does no harm, other than being a distraction.) > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; Change that line to be just fastcgi_param SCRIPT_FILENAME $document_root; "alias" is a bit funny. In this context, it means that $document_root corresponds to the file on the filesystem that you want the fastcgi server to process. I'm a bit surprised that the current version works -- perhaps your php config does not have cgi.fix_pathinfo=0 and so takes more than one guess at the file to process. > include /etc/nginx/fastcgi_params; > } > > location ~ ^/~(.+?)(/.*)?\.(pl|cgi)$ { > alias /home/$1/public_html$2.$3; > fastcgi_pass 127.0.0.1:8999; > fastcgi_index index.cgi; Delete. 
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; Change to remove $fastcgi_script_name, just like the previous case. Here, most likely php is not involved, and your fastcgi server just tries to process SCRIPT_FILENAME, which does not name a real file right now. > include /etc/nginx/fastcgi_params; > } > After that, I restarted nginx. Looks like PHP (/~user/index.php) finally > works! > But CGI (/~user/index.cgi) says "Error: No such CGI app - > /home/user/public_html/index.cgi/~user/index.cgi may not exist or is not > executable by this process." though I don't know why. I think that the explanations are above, along with the one necessary fix (for cgi/pl) and the one strongly suggested fix (for php). Good luck with it, f -- Francis Daly francis at daoine.org From aelston at aerohive.com Tue Dec 1 23:28:10 2015 From: aelston at aerohive.com (Aladdin Elston) Date: Tue, 1 Dec 2015 23:28:10 +0000 Subject: Unsubscribe Message-ID: Thank you, Aladdin On 12/1/15, 4:00 AM, "nginx on behalf of nginx-request at nginx.org" wrote: >Send nginx mailing list submissions to > nginx at nginx.org > >To subscribe or unsubscribe via the World Wide Web, visit > http://mailman.nginx.org/mailman/listinfo/nginx >or, via email, send a message with subject or body 'help' to > nginx-request at nginx.org > >You can reach the person managing the list at > nginx-owner at nginx.org > >When replying, please edit your Subject line so it is more specific >than "Re: Contents of nginx digest..." > > >Today's Topics: > > 1. Re: Basic auth is slow (Maxim Dounin) > 2. Re: nginx and oracle weblogic server (itpp2012) > 3. Re: 502 errors and request_time (Maxim Dounin) > 4. Re: PHP and CGI on UserDir (Aleksandar Lazic) > 5. Re: Basic auth is slow (Jo? ?d?m) > 6. 
bind failed (Frank Liu) > > >---------------------------------------------------------------------- > >Message: 1 >Date: Mon, 30 Nov 2015 16:12:10 +0300 >From: Maxim Dounin >To: nginx at nginx.org >Subject: Re: Basic auth is slow >Message-ID: <20151130131210.GF74233 at mdounin.ru> >Content-Type: text/plain; charset=utf-8 > >Hello! > >On Sat, Nov 28, 2015 at 06:18:54PM +0100, Jo? ?d?m wrote: > >> Hi, >> >> I just noticed that enabling basic authentication adds between 100 and >> 150 ms to my otherwise 30-40 ms page load time. Is this known >> behaviour? Is this somehow inherent or a design / implementation >> mistake? > >Basic authentication checks user password on each request. >Depending on a password hash used for a particular user in the >user file, it may take significant time - as password hashes >are designed to be CPU-intensive to prevent password recovery >attacks. Some additional information can be found here: > >https://en.wikipedia.org/wiki/Crypt_(C) > >Depending on your particular setup and possible risks, you may >consider using something less CPU-intensive as your password hash >function if a hash calculation takes 100ms. All crypt(3) schemes >as supported by your system are understood by nginx, as well as >some additional schemes for portability and debugging. See here >for more details: > >http://nginx.org/r/auth_basic_user_file > >-- >Maxim Dounin >http://nginx.org/ > > > >------------------------------ > >Message: 2 >Date: Mon, 30 Nov 2015 08:52:30 -0500 >From: "itpp2012" >To: nginx at nginx.org >Subject: Re: nginx and oracle weblogic server >Message-ID: > <6fee993a93365197b7da64b405034036.NginxMailingListEnglish at forum.nginx.org> > >Content-Type: text/plain; charset=UTF-8 > >Garcia Wrote: >------------------------------------------------------- >> Hi, >> Can Nginx work with oracle weblogic server properly? > >As a proxy nginx can easily handle weblogic servers, interfacing (API) nginx >with Oracle can be done with Lua. 
> >> Does Oracle have support for Nginx? > >It depends what kind of support you are looking for, ask Oracle to start >with. > >Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263159,263163#msg-263163 > > > >------------------------------ > >Message: 3 >Date: Mon, 30 Nov 2015 18:00:48 +0300 >From: Maxim Dounin >To: nginx at nginx.org >Subject: Re: 502 errors and request_time >Message-ID: <20151130150048.GI74233 at mdounin.ru> >Content-Type: text/plain; charset=us-ascii > >Hello! > >On Sat, Nov 28, 2015 at 03:02:33PM +0800, Shi wrote: > >> Hi: >> >> I've found 2 kinds of 502 errors in my server: >> >> connection reset by peer, it happens as request_time reach to 10s. >> >> connection timeout, it happens as request_time reach to 3s. >> >> Server is: >> >> centos , kernel(2.6.32) >> >> php5.2 php-fpm >> >> nginx 1.6.3. >> >> access logs outputs: >> [28/Nov/2015:14:41:08 +0800] "GET /ben2.php HTTP/1.1" 502 172 "-" "Apache-HttpClient/4.2.6 (java 1.5)" "-" 3.000 >> >> [28/Nov/2015:14:41:11 +0800] "GET /ben2.php HTTP/1.1" 502 172 "-" "Apache-HttpClient/4.2.6 (java 1.5)" "-" 10.000 >> error logs: >> 2015/11/28 14:41:11 [error] 12981#0: *798323 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: xx.xx.xx.xx, server: xx.xx.xx, request: "GET /ben2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xx.xx.xx" >> >> 2015/11/28 14:41:08 [error] 12981#0: *798215 connect() failed (110: Connection timed out) while connecting to upstream, client: xx.xx.xx.xx, server: xx.xx.xx, request: "GET /ben2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xx.xx.xx" >> >> >> >> I've tried "request_terminate_timeout, fastcgi_connect_timeout, fastcgi_read/write_timeout", but no help. >> >> What can I do next step ? > >The "connection reset by peer" means that the connection was >terminated by your backend, not by nginx. There isn't much you >can do on nginx side. 
Consider checking php-fpm logs instead, may >be you are hitting some limit like execution time limit or >something. > >The "connection timed out" means that nginx wasn't able to connect >to the backend in time, fastcgi_connection_timeout is something >you can tune - though the default is 60s, and it should be big >enough for normal use. Again, consider looking into your backend >to find out why connection takes so long - likely it's overloaded >and can't process connection requests in time. > >-- >Maxim Dounin >http://nginx.org/ > > > >------------------------------ > >Message: 4 >Date: Mon, 30 Nov 2015 17:47:16 +0100 >From: Aleksandar Lazic >To: Smart Goldman >Cc: nginx at nginx.org >Subject: Re: PHP and CGI on UserDir >Message-ID: <96697b73d09200e820ba5d685f4aaba6 at none.at> >Content-Type: text/plain; charset=US-ASCII; format=flowed > >Hi. > >Am 29-11-2015 12:02, schrieb Smart Goldman: >> Hi, thank you for great help, Aleksandar Lazic. >> I tried it. > >How looks now your config? > >> PHP script shows me "File not found." and outputs the following log: >> 2015/11/29 05:50:15 [error] 5048#0: *6 FastCGI sent in stderr: "Primary >> script unknown" while reading response header from upstream, client: >> 119.105.136.26, server: localhost, request: "GET /~user/index.php >> HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000 [2]", host: >> "host.domain.com [4]" >> >> - I do not know how to fix it... >> >> CGI script shows me "Error: No such CGI app - >> /home//public_html/~user/index.cgi may not exist or is not executable >> by >> this process." and outputs nothing to error.log. >> >> - /home//public_html/~user/... I think this path is wrong and I tried >> to >> fix this path but I could not. /home/user/public_html/ should be >> correct >> path.. > >Please run the debug log to see more. 
> >http://nginx.org/en/docs/debugging_log.html > >Due to the fact that I don't know if you use the centos packes or the >nginx package I suggest to install the following packages > >http://nginx.org/en/linux_packages.html#mainline > >and the nginx-debug and run the debug instance with the suggested >settings in > >http://nginx.org/en/docs/debugging_log.html > >BR Aleks > >> 2015-11-29 18:41 GMT+09:00 Aleksandar Lazic : >> >>> Hi Smart Goldman. >>> >>> Am 29-11-2015 09:04, schrieb Smart Goldman: >>> >>>> Hello. I am new here. >>>> >>>> I try to enable PHP and CGI(Perl) on UserDir >>>> (/home/user/public_html) >>>> with nginx. >>>> But on my Chrome, PHP script is downloaded and CGI script shows me >>>> "404 >>>> Not Found" page. >>>> Here's my configurations. What is wrong with my configurations? >>> >>> Try to use nested locations. >>> >>> http://nginx.org/en/docs/http/ngx_http_core_module.html#location >>> >>>> OS: Linux 3.10.0 / CentOS 7 64bit >>>> nginx version: 1.8.0 >>>> >>>> ---------------------------------------------- >>>> /etc/nginx/conf.d/default.conf: >>>> server { >>>> listen 80; >>>> server_name localhost; >>>> access_log /var/log/nginx/access.log; >>>> error_log /var/log/nginx/error.log; >>>> >>>> #charset koi8-r; >>>> #access_log /var/log/nginx/log/host.access.log main; >>>> >>>> location / { >>>> root /var/www/html; >>>> index index.html index.htm; >>>> } >>>> >>>> location ~ ^/~(.+?)(/.*)?$ { >>>> alias /home/$1/public_html$2; >>>> index index.html index.htm; >>>> autoindex on; >>> >>> include my_php_config.conf; >>> >>> include my_cgi_config.conf; >>> >>>> } >>>> >>>> #error_page 404 /404.html; >>>> >>>> # redirect server error pages to the static page /50x.html >>>> # >>>> error_page 500 502 503 504 /50x.html; >>>> location = /50x.html { >>>> root /var/www/html; >>>> } >>>> >>>> # proxy the PHP scripts to Apache listening on 127.0.0.1:80 [1] >>>> [1] >>>> # >>>> #location ~ \.php$ { >>>> # proxy_pass http://127.0.0.1; >>>> #} >>>> >>>> # pass the 
PHP scripts to FastCGI server listening on >>>> 127.0.0.1:9000 [2] >>>> [2] >>>> # >>>> #location ~ \.php$ { >>>> # root html; >>>> # fastcgi_pass 127.0.0.1:9000 [2] [2]; >>>> # fastcgi_index index.php; >>>> # fastcgi_param SCRIPT_FILENAME >>>> /scripts$fastcgi_script_name; >>>> # include fastcgi_params; >>>> #} >>>> >>>> location ~ (^~)*\.php$ { >>>> root /var/www/html; >>>> fastcgi_pass 127.0.0.1:9000 [2] [2]; >>>> fastcgi_index index.php; >>>> fastcgi_param SCRIPT_FILENAME >>>> $document_root$fastcgi_script_name; >>>> include /etc/nginx/fastcgi_params; >>>> } >>>> location ~ (^~)*\.pl|cgi$ { >>>> root /var/www/html; >>>> fastcgi_pass 127.0.0.1:8999 [3] [3]; >>>> fastcgi_index index.cgi; >>>> fastcgi_param SCRIPT_FILENAME >>>> $document_root$fastcgi_script_name; >>>> include /etc/nginx/fastcgi_params; >>>> } >>> >>> This block into "my_php_config.conf" >>> >>>> location ~ .*~.*\.php$ { >>>> alias /home/$1/public_html$2; >>>> fastcgi_pass 127.0.0.1:9000 [2] [2]; >>>> fastcgi_index index.php; >>>> fastcgi_param SCRIPT_FILENAME >>>> $document_root$fastcgi_script_name; >>>> include /etc/nginx/fastcgi_params; >>>> } >>> END >>> >>> This block into "my_cgi_config.conf" >>> >>>> location ~ .*~.*\.pl|cgi$ { >>>> alias /home/$1/public_html$2; >>>> fastcgi_pass 127.0.0.1:8999 [3] [3]; >>>> fastcgi_index index.cgi; >>>> fastcgi_param SCRIPT_FILENAME >>>> $document_root$fastcgi_script_name; >>>> include /etc/nginx/fastcgi_params; >>>> } >>> >>> END >>> >>>> # deny access to .htaccess files, if Apache's document root >>>> # concurs with nginx's one >>> >>> BR Aleks >> >> >> >> Links: >> ------ >> [1] http://127.0.0.1:80 >> [2] http://127.0.0.1:9000 >> [3] http://127.0.0.1:8999 >> [4] http://host.domain.com > > > >------------------------------ > >Message: 5 >Date: Mon, 30 Nov 2015 23:03:05 +0100 >From: Jo? ?d?m >To: nginx at nginx.org >Subject: Re: Basic auth is slow >Message-ID: > >Content-Type: text/plain; charset=UTF-8 > >Wow, I just realized how stupid my question was. 
I wasn't considering >the high iteration count I myself selected for hashing... Thanks, Maxim! > >Á > > > >------------------------------ > >Message: 6 >Date: Tue, 1 Dec 2015 00:49:50 -0800 >From: Frank Liu >To: nginx at nginx.org >Subject: bind failed >Message-ID: > >Content-Type: text/plain; charset="utf-8" > >Hi, > >I was doing some tests today and have created a single test virtual host >with >listen 8181; >and nginx runs fine (1.9.7). Now if I change the listen to only one >interface ip: >listen 192.168.10.10:8181 >configtest shows fine but reload gives "bind failed" in the error log. >Is this normal? > >Thanks! >Frank >-------------- next part -------------- >An HTML attachment was scrubbed... >URL: > >------------------------------ > >Subject: Digest Footer > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > >------------------------------ > >End of nginx Digest, Vol 74, Issue 1 >************************************ From nginx-forum at nginx.us Wed Dec 2 01:42:43 2015 From: nginx-forum at nginx.us (willchu@gmail.com) Date: Tue, 01 Dec 2015 20:42:43 -0500 Subject: nginx changes the hostname of proxy_pass into ip address unwantedly Message-ID: <2139732f626c7ba65ec4739586233155.NginxMailingListEnglish@forum.nginx.org> I have configured nginx to use mod_zip to zip up multiple files. I have also configured nginx so that it proxies to S3 when our server gets a request to /images. location /p/ { proxy_pass http://bucket-name.s3.amazonaws.com; } When I access a file via < server >/images/foo.png, everything works great. However, when I try using mod_zip, the http://bucket-name.s3.amazonaws.com gets rewritten to an IP address: http://54.231.176.194:80/foo.png S3 needs the bucket name in the URL and gives me a NoSuchBucket error when I try using the IP address. I get the following error: 2015/12/01 01:37:08 [error] 14955#0: *16 mod_zip: a subrequest returned 400, aborting... 
while reading response header from upstream, client: 172.31.10.100, server: convoygarage.com, request: "GET /api/get_zipfile HTTP/1.1", subrequest: "/p/foo.png", upstream: "http:// 54.231.176.194:80/foo.png", host: "convoygarage.com" Why is nginx or mod_zip rewriting the hostname portion of my s3 URL into an ip address? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263199,263199#msg-263199 From cprenzberg at gmail.com Wed Dec 2 11:31:10 2015 From: cprenzberg at gmail.com (Charles Nnamdi Akalugwu) Date: Wed, 2 Dec 2015 12:31:10 +0100 Subject: server_name within tcp server blocks Message-ID: Hi guys, I have the following tcp server block in my nginx.conf stream { upstream kafka_producer { server kafka.service.consul:9092; } server { listen 9092; server_name kafka.stream.mycompany.com; proxy_connect_timeout 10s; proxy_timeout 30s; proxy_pass kafka_producer; } } I would like that my kafka tcp stream is accessible using only the kafka.stream.mycompany.com:9092 address....just in the same way that it works with http server blocks. However I get the following error regarding the server_name: *"server_name" directive is not allowed here in /etc/nginx/nginx.conf* So who knows how I can simulate server_name within tcp server blocks? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ytlec2014 at gmail.com Wed Dec 2 13:04:25 2015 From: ytlec2014 at gmail.com (Smart Goldman) Date: Wed, 2 Dec 2015 22:04:25 +0900 Subject: PHP and CGI on UserDir In-Reply-To: <20151201205005.GW3351@daoine.org> References: <20151129111019.GV3351@daoine.org> <20151201205005.GW3351@daoine.org> Message-ID: Hi Francis, 2015-12-02 5:50 GMT+09:00 Francis Daly : > On Wed, Dec 02, 2015 at 01:52:06AM +0900, Smart Goldman wrote: >> 2015-11-29 20:10 GMT+09:00 Francis Daly : >> > On Sun, Nov 29, 2015 at 05:04:50PM +0900, Smart Goldman wrote: > > Hi there, > >> location ~ ^/~(.+?)(/.*)?\.(php)$ { >> alias /home/$1/public_html$2.$3; >> fastcgi_pass 127.0.0.1:9000; >> fastcgi_index index.php; > > Delete that line - it does nothing useful here. (It does no harm, other > than being a distraction.) > >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > Change that line to be just > > fastcgi_param SCRIPT_FILENAME $document_root; > > "alias" is a bit funny. In this context, it means that $document_root > corresponds to the file on the filesystem that you want the fastcgi > server to process. > > I'm a bit surprised that the current version works -- perhaps your php > config does not have > > cgi.fix_pathinfo=0 This value is commented out. $ grep cgi.fix_pathinfo php.ini ; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI. PHP's ;cgi.fix_pathinfo=1 I've never edited php.ini. > and so takes more than one guess at the file to process. > >> include /etc/nginx/fastcgi_params; >> } >> >> location ~ ^/~(.+?)(/.*)?\.(pl|cgi)$ { >> alias /home/$1/public_html$2.$3; >> fastcgi_pass 127.0.0.1:8999; >> fastcgi_index index.cgi; > > Delete. > >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > Change to remove $fastcgi_script_name, just like the previous case. > > Here, most likely php is not involved, and your fastcgi server just tries > to process SCRIPT_FILENAME, which does not name a real file right now. 
> >> include /etc/nginx/fastcgi_params; >> } > >> After that, I restarted nginx. Looks like PHP (/~user/index.php) finally >> works! >> But CGI (/~user/index.cgi) says "Error: No such CGI app - >> /home/user/public_html/index.cgi/~user/index.cgi may not exist or is not >> executable by this process." though I don't know why. > > I think that the explanations are above, along with the one necessary fix > (for cgi/pl) and the one strongly suggested fix (for php). > > Good luck with it, Great!! Both of CGI and PHP finally works!! I made configuration file by copying from configuration file which I found from somewhere and I did not understand what fastcgi_param means. Thanks so much for your help! I put successful configuration here again for future use. /etc/nginx/conf.d/default.conf server { listen 80; server_name localhost; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /var/www/html; index index.html index.htm; } location ~ ^/~(.+?)(/.*)?\.(php)$ { alias /home/$1/public_html$2.$3; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root; include /etc/nginx/fastcgi_params; } location ~ ^/~(.+?)(/.*)?\.(pl|cgi)$ { alias /home/$1/public_html$2.$3; fastcgi_pass 127.0.0.1:8999; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME $document_root; include /etc/nginx/fastcgi_params; } location ~ ^/~(.+?)(/.*)?$ { alias /home/$1/public_html$2; index index.html index.htm; autoindex on; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} location ~ \.php$ { root /var/www/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME 
$document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~ \.pl|cgi$ { root /var/www/html; fastcgi_pass 127.0.0.1:8999; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Dec 2 16:05:15 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Dec 2015 19:05:15 +0300 Subject: nginx changes the hostname of proxy_pass into ip address unwantedly In-Reply-To: <2139732f626c7ba65ec4739586233155.NginxMailingListEnglish@forum.nginx.org> References: <2139732f626c7ba65ec4739586233155.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151202160515.GV74233@mdounin.ru> Hello! On Tue, Dec 01, 2015 at 08:42:43PM -0500, willchu at gmail.com wrote: > I have configured nginx to use mod_zip to zip up multiple files. > > I also have configured nginx so that it proxies to s3 when our server gets a > request to /images. > > location /p/ { > proxy_pass http://bucket-name.s3.amazonaws.com; > } > When I access a file via < server >/images/foo.png, everything works great. > > However, when I try using mod_zip, the http://bucket-name.s3.amazonaws.com > gets rewritten to an ip address > > http://54.231.176.194:80/foo.png > > S3 needs the bucket-name in the url and gives me a NoSuchBucket error when I > try using the ip address > > I get the following error > > 2015/12/01 01:37:08 [error] 14955#0: *16 mod_zip: a subrequest returned 400, > aborting... while reading response header from upstream, client: > 172.31.10.100, server: convoygarage.com, request: "GET /api/get_zipfile > HTTP/1.1", subrequest: "/p/foo.png", upstream: "http:// > 54.231.176.194:80/foo.png", host: "convoygarage.com" > > Why is nginx or mod_zip rewriting the hostname portion of my s3 URL into an > ip address? 
Note that "upstream" as logged to the error log shows the IP address of the particular server used, not the host name used in the request. This is intentional behaviour, to make it easy to debug which backend server caused an error. To find out what was in fact sent in the Host request header you'll have to look into the debug log (or use tcpdump to see what is actually sent through the wire). -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 2 17:15:38 2015 From: nginx-forum at nginx.us (frederico) Date: Wed, 02 Dec 2015 12:15:38 -0500 Subject: Nginx Reverse proxy + RD Gateway Auth Problem In-Reply-To: <2afea5e7fd097102d9748590f45f3fa8.NginxMailingListEnglish@forum.nginx.org> References: <20141017125529.GC35211@mdounin.ru> <6fecef054759e9d9d01d938805bfc724.NginxMailingListEnglish@forum.nginx.org> <30a75ca5fa9ed66b7b9182e2b9faacd1.NginxMailingListEnglish@forum.nginx.org> <2afea5e7fd097102d9748590f45f3fa8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <197f2291055af69b4a4ceb1e3e9b9f1a.NginxMailingListEnglish@forum.nginx.org> Hi, I've tried a lot of commands; "stream" is not recognized, and I don't think it's possible to make it work. nginx needs a certificate and RD Gateway also needs a certificate, so there are 2 SSL connections between the client and the server; it can't work. I also tried with the same certificate on the 2 connections, but without success... 
:-( Regards, Fred Posted at Nginx Forum: https://forum.nginx.org/read.php?2,254095,263214#msg-263214 From nginx-forum at nginx.us Wed Dec 2 17:25:55 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 02 Dec 2015 12:25:55 -0500 Subject: Nginx Reverse proxy + RD Gateway Auth Problem In-Reply-To: <197f2291055af69b4a4ceb1e3e9b9f1a.NginxMailingListEnglish@forum.nginx.org> References: <20141017125529.GC35211@mdounin.ru> <6fecef054759e9d9d01d938805bfc724.NginxMailingListEnglish@forum.nginx.org> <30a75ca5fa9ed66b7b9182e2b9faacd1.NginxMailingListEnglish@forum.nginx.org> <2afea5e7fd097102d9748590f45f3fa8.NginxMailingListEnglish@forum.nginx.org> <197f2291055af69b4a4ceb1e3e9b9f1a.NginxMailingListEnglish@forum.nginx.org> Message-ID: frederico Wrote: ------------------------------------------------------- > Hi, > > I've tried a lot of commands, stream is not recognized and I don't > think it's possible to make it work. nginx need a certificate and RD > Gateway need also a certificate, so there are 2 SSL connection between > the client and the server, it's can't work. Then you have tried the wrong version: http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_pass Posted at Nginx Forum: https://forum.nginx.org/read.php?2,254095,263215#msg-263215 From r1ch+nginx at teamliquid.net Wed Dec 2 18:16:09 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 2 Dec 2015 19:16:09 +0100 Subject: server_name within tcp server blocks In-Reply-To: References: Message-ID: TCP has no concept of server names, so this is not possible. It only works in HTTP because the client sends the hostname it is trying to access as part of the request, allowing nginx to match it to a specific server block. 
On Wed, Dec 2, 2015 at 12:31 PM, Charles Nnamdi Akalugwu < cprenzberg at gmail.com> wrote: > Hi guys, > > I have the following tcp server block in my nginx.conf > > stream { > upstream kafka_producer { > > server kafka.service.consul:9092; > } > > server { > listen 9092; > server_name kafka.stream.mycompany.com; > proxy_connect_timeout 10s; > proxy_timeout 30s; > proxy_pass kafka_producer; > } > } > > I would like that my kafka tcp stream is accessible using only the > kafka.stream.mycompany.com:9092 address....just in the same way that it > works with http server blocks. > > However I get the following error regarding the server_name: > > *"server_name" directive is not allowed here in /etc/nginx/nginx.conf* > > So who knows how I can simulate server_name within tcp server blocks? > > Thanks! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 2 18:46:29 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 02 Dec 2015 13:46:29 -0500 Subject: server_name within tcp server blocks In-Reply-To: References: Message-ID: <79dfece019f7482acf49c89cccad65e8.NginxMailingListEnglish@forum.nginx.org> Richard Stanway Wrote: ------------------------------------------------------- > TCP has no concept of server names, so this is not possible. It only > works > in HTTP because the client sends the hostname it is trying to access > as To put into better wording, The 'hostname' technique has 2 parts, part 1 is a receiver (nginx) receiving a request containing a hostname which it can match (or not) to an item in its configuration, and part 2 the DNS name where this name is recorded against its IP address. With stream{} you can only rely on part 2. 
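The two-part distinction above suggests the only separation available in stream{}: since part 1 (name matching) does not exist there, each service has to be distinguished by what part 2 does give you, a distinct IP (or port) that DNS maps the name to. A hypothetical sketch based on the config from the original question (the 192.0.2.10 address is invented for illustration):

```nginx
stream {
    upstream kafka_producer {
        server kafka.service.consul:9092;
    }

    server {
        # No server_name here: stream{} only knows the IP:port the
        # client connected to. Point kafka.stream.mycompany.com at
        # this address in DNS and listen on it explicitly.
        listen 192.0.2.10:9092;
        proxy_connect_timeout 10s;
        proxy_timeout 30s;
        proxy_pass kafka_producer;
    }
}
```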
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263208,263219#msg-263219 From nginx-forum at nginx.us Thu Dec 3 06:05:07 2015 From: nginx-forum at nginx.us (WANJUNE) Date: Thu, 03 Dec 2015 01:05:07 -0500 Subject: Nginx node health check Message-ID: <2e039abacad130aeb4d75626d9644950.NginxMailingListEnglish@forum.nginx.org> Hi! We currently use NginX as an L4 switch. We have 1 single Upstream VIP which has 2 real nodes of Apache web server. If all nodes under the VIP work fine, we always get a normal response from the NginX whenever we try a ping test from outside the IDC. My question is: if all nodes under the VIP went down, does a ping test to the VIP return failure? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263225,263225#msg-263225 From zhanght1 at lenovo.com Thu Dec 3 06:10:35 2015 From: zhanght1 at lenovo.com (Felix HT1 Zhang) Date: Thu, 3 Dec 2015 06:10:35 +0000 Subject: Could Nginx stream support FTP PASSIVE? Message-ID: <3B8195E42ECF3D4DA1072EF35B4F39F8E26AC08F@CNMAILEX03.lenovo.com> Dears, Could Nginx stream support FTP PASSIVE?

#user nobody;
worker_processes 4;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    upstream port21 {
        server 10.122.x.x:21 weight=1;
        server 10.122.x.xx:21 weight=1;
    }
    server {
        listen 21;
        proxy_connect_timeout 60;
        proxy_pass port21;
    }
}

BR Felix zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From cprenzberg at gmail.com Thu Dec 3 10:11:19 2015 From: cprenzberg at gmail.com (Charles Nnamdi Akalugwu) Date: Thu, 3 Dec 2015 11:11:19 +0100 Subject: server_name within tcp server blocks Message-ID: Thanks a lot for the clarification guys! :) On Wed, Dec 2, 2015 at 7:46 PM, itpp2012 wrote: > Richard Stanway Wrote: > ------------------------------------------------------- > > TCP has no concept of server names, so this is not possible.
It only > > works > > in HTTP because the client sends the hostname it is trying to access > > as > > To put into better wording, > The 'hostname' technique has 2 parts, part 1 is a receiver (nginx) > receiving > a request containing a hostname which it can match (or not) to an item in > its configuration, and part 2 the DNS name where this name is recorded > against its IP address. With stream{} you can only rely on part 2. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263208,263219#msg-263219 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Dec 3 12:36:10 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 3 Dec 2015 17:36:10 +0500 Subject: SSL handshake issue !! Message-ID: Hi, We've been encountering this issue quite frequently. It looks like it is the reason for our drop in traffic as well. 2015/12/03 16:19:18 [crit] 26272#0: *176263213 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: 43.245.8.217, server: 10.0.0.52:443 Guys, any help on it? So far I am unable to find any fix for it. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Dec 3 13:27:21 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 03 Dec 2015 08:27:21 -0500 Subject: SSL handshake issue !!
In-Reply-To: References: Message-ID: <237d2a7a4d31b9df855a03336407668c.NginxMailingListEnglish@forum.nginx.org> https://www.ruby-forum.com/topic/6873127 http://serverfault.com/questions/663290/in-nginx-error-log-ssl-bytes-to-cipher-listinappropriate-fallback http://stackoverflow.com/questions/28010492/nginx-critical-error-with-ssl-handshaking Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263236,263237#msg-263237 From r1ch+nginx at teamliquid.net Thu Dec 3 15:05:50 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 3 Dec 2015 16:05:50 +0100 Subject: Could Nginx stream support FTP PASSIVE? In-Reply-To: <3B8195E42ECF3D4DA1072EF35B4F39F8E26AC08F@CNMAILEX03.lenovo.com> References: <3B8195E42ECF3D4DA1072EF35B4F39F8E26AC08F@CNMAILEX03.lenovo.com> Message-ID: Passive ports are dynamically allocated, so FTP with the stream module is unlikely to work at all. On Thu, Dec 3, 2015 at 7:10 AM, Felix HT1 Zhang wrote: > Dears, > > Could Nginx stream support FTP PASSIVE? > > > > #er nobody; > > worker_processes 4; > > > > #error_log logs/error.log; > > #error_log logs/error.log notice; > > #error_log logs/error.log info; > > > > #pid logs/nginx.pid; > > > > events { > > worker_connections 1024; > > } > > > > stream > > { > > upstream port21 > > { > > server 10.122.x.x:21 weight=1; > > server 10.122.x.xx:21 weight=1; > > } > > server > > { listen 21; > > proxy_connect_timeout 60; > > proxy_pass port21; > > } > > } > > BR > > Felix zhang > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zxcvbn4038 at gmail.com Thu Dec 3 18:14:12 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 3 Dec 2015 13:14:12 -0500 Subject: SPDY + HTTP/2 Message-ID: NGINX devs, I know you were very excited to remove SPDY support from NGINX, but for the next few years there are a lot of devices (mobile devices that can't upgrade, end users who aren't comfortable upgrading, etc) that are not going to have http/2 support. By removing SPDY support you've created a situation where we either have to penalize those users by forcing them to HTTP(S) connections, or we have to forego upgrading NGINX and not offer HTTP/2. Cloudflare is offering both SPDY and HTTP/2 - they are a huge NGINX shop but I'm not clear if they are using NGINX to do that or not. I'd like to encourage you to follow their lead and reinstate the SPDY support for a while (even if it's just a compile-time option that's disabled by default). -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Thu Dec 3 18:29:01 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 3 Dec 2015 21:29:01 +0300 Subject: SPDY + HTTP/2 In-Reply-To: References: Message-ID: <566089ED.9050001@nginx.com> Hello, On 12/3/15 9:14 PM, CJ Ess wrote: > NGINX devs, > > I know you were very excited to remove SPDY support from NGINX, but > for the next few years there are a lot of devices (mobile devices > that can't upgrade, end users who aren't comfortable upgrading, etc) > that are not going to have http/2 support.
> I think these "a lot" and "many years" are just an overestimation based on the marketing buzz around spdy, http2 and other so-called innovations: http://w3techs.com/technologies/details/ce-spdy/all/all http://w3techs.com/technologies/details/ce-http2/all/all As you see, spdy penetration is just 6% and it is starting to decline. http/2 is still ~2% but growing super quickly. And we do provide the 1.8 branch, which supports spdy. > Cloudflare is offering both SPDY and HTTP/2 - they are a huge NGINX > shop but I'm not clear if they are using NGINX to do that or not. > I'd like to encourage you to follow their lead and reinstate the > SPDY support for a while (even if its just a compile time option > thats disabled by default). > As for your request: can you explain how spdy deprecation affects you personally (not Cloudflare or other). -- Maxim Konovalov From patrickobrien at tetrisblocks.net Thu Dec 3 19:41:51 2015 From: patrickobrien at tetrisblocks.net (Patrick O'Brien) Date: Thu, 3 Dec 2015 11:41:51 -0800 Subject: Blocking unknown hostnames for SSL/TLS connections Message-ID: Hello, We're currently using nginx for SSL/TLS termination, which then proxies the request to a pair of internal load balancers. Since the TLS handshake is performed before nginx is able to figure out what hostname is being requested, except in cases where SNI is used, it will accept any request for any hostname and pass it along to our internal load balancers. This puts us in a situation where internal resources are allowed to be exposed externally, although in a roundabout way. For example, our internal load balancers have a pool called "news", which is accessible via news, or news.dc1.example.com and is intended to be internally accessible only. If you were to add our external IP address mapped to news.dc1.example.com and told curl to ignore the invalid cert, nginx will proxy this request along to our internal load balancers and the internal service will happily respond.
Here's a curl example of this hitting the internal healthcheck endpoint:

```
curl -k https://news.dc1.example.com
alive
```

Ideally this would be blocked at our ingress point, which is nginx. The only way around this that I've found so far is to inspect the $host variable in the server definition for the 443 blocks. The example below shows the check for the server block which is intended to respond to www.example.com and stg1.example.com only:

```
# if the request coming in doesn't match any of the hosts we know
# about, throw a 301 and rewrite to the default server.
if ($host !~ (^www.example.com$|^stg1.example.com$)) {
    return 301 https://stg1.example.com;
}
```

In our production environment we have a wildcard cert that covers as many as 6 externally available resources, so I am concerned with the performance hit of doing a check on every host. Is there a preferred method of dealing with an issue like this? I've read through the config pitfalls page on the readthedocs.org[0] page, and the If Is Evil page[1], so I am pretty positive the solution above is very inefficient. The pitfalls page even talks about the preferred alternative to an if statement for hostname matching[2], but this does not appear to cover TLS connections. Is there any other documentation that talks about this or could be useful? Is this something we need to just solve at the internal load balancer level? Doing a check there to ensure that some pools are accessible to external resources and others are not? I imagine that will just shift the inefficiency from nginx to the load balancers. Thanks! -pat 0 - http://ngx.readthedocs.org/en/latest/topics/tutorials/config_pitfalls.html 1 - https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/ 2 - http://ngx.readthedocs.org/en/latest/topics/tutorials/config_pitfalls.html#server-name-if From vbart at nginx.com Thu Dec 3 21:44:02 2015 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Fri, 04 Dec 2015 00:44:02 +0300 Subject: Blocking unknown hostnames for SSL/TLS connections In-Reply-To: References: Message-ID: <2821604.1Cv5aRJQZg@vbart-laptop> On Thursday 03 December 2015 11:41:51 Patrick O'Brien wrote: > Hello, > > We're currently using nginx for SSL/TLS termination, which then > proxies the request to a pair of internal pair of load balancers. > Since the TLS handshake is performed before nginx is able to figure > out what hostname is being requested, except in cases where SNI is > used, it will accept any request for any hostname and pass it along > to our internal load balancers. This puts us in a situation where > internal resources are allowed to be exposed externally, although in > a roundabout way. > > For example, our internal load balancers have a pool called "news", > which is accessible via news, or news.dc1.example.com and is intended > to be internally accessible only. If you were to add our external IP > address mapped to news.dc1.example.com and told curl to ignore the > invalid cert, nginx will proxy this request along to our internal > load balancers and the internal service will happily respond. Here's > a curl example of this hitting the internal healthcheck endpoint: > > ``` > curl -k https://news.dc1.example.com > alive > ``` > > Ideally this would be blocked at our ingress point, which is nginx. > > The only way around this that I've found so far is to inspect the > $host variable in the server definition for the 443 blocks. The > example below shows the check for the server block which is intended > to respond to www.example.com and stg1.example.com only: > > ``` > # if the request coming in doesn't match any of the hosts we know > # about, throw a 301 and rewrite to the default server. 
> if ($host !~ (^www.example.com$|^stg1.example.com$)) { > return 301 https://stg1.example.com; > } > ``` > > In our production environment we have a wildcard cert that covers as > many as 6 externally available resources, so I am concerned with the > performance hit of doing a check on every host. > > Is there a preferred method of dealing with an issue like this? I've > read through the config pitfalls page on the readthedocs.org[0] page, > and the If Is Evil page[1], so I am pretty positive the solution > above is very inefficient. The pitfalls page even talks about the > preferred alternative to an if statement for hostname matching[2], > but this does not appear to cover TLS connections. Is there any other > documentation that talks about this or could be useful? > [..] http://nginx.org/en/docs/http/server_names.html http://nginx.org/en/docs/http/request_processing.html It's as simple as:

ssl_certificate example.com.crt;
ssl_certificate_key example.com.key;

server {
    listen 443 ssl default_server;
    return 301 https://stg1.example.com;
}

server {
    listen 443 ssl;
    server_name stg1.example.com www.example.com ...;

    location / {
        proxy_pass ...;
    }
}

wbr, Valentin V. Bartenev From zxcvbn4038 at gmail.com Thu Dec 3 22:35:04 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 3 Dec 2015 17:35:04 -0500 Subject: SPDY + HTTP/2 In-Reply-To: <566089ED.9050001@nginx.com> References: <566089ED.9050001@nginx.com> Message-ID: Let me get back to you on that - we're going to send some traffic through Cloudflare and see how the traffic breaks out given the choice of all three protocols. On Thu, Dec 3, 2015 at 1:29 PM, Maxim Konovalov wrote: > Hello, > > On 12/3/15 9:14 PM, CJ Ess wrote: > > NGINX devs, > > > > I know you were very excited to remove SPDY support from NGINX, but > > for the next few years there are a lot of devices (mobile devices > > that can't upgrade, end users who aren't comfortable upgrading, etc) > > that are not going to have http/2 support.
By removing SPDY support > > you've created a situation where we either have to penalize those > > users by forcing them to HTTP(S) connections, or we have to forego > > upgrading NGINX and not offer HTTP/2. > > > I think these "a lot" and "many years" are just overestimation based > on the marketing buzz around spdy, http2 and other so-called > innovations: > > http://w3techs.com/technologies/details/ce-spdy/all/all > http://w3techs.com/technologies/details/ce-http2/all/all > > As you see, spdy penetration is just 6% and it starts to decline. > http/2 is still ~2% but growing super quickly. > > And we do provide 1.8 branch which supports spdy. > > > Cloudflare is offering both SPDY and HTTP/2 - they are a huge NGINX > > shop but I'm not clear if they are using NGINX to do that or not. > > I'd like to encourage you to follow their lead and reinstate the > > SPDY support for a while (even if its just a compile time option > > thats disabled by default). > > > As for your request: can you explain how spdy deprecation affects you > personally (not Cloudflare or other). > > -- > Maxim Konovalov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Dec 3 23:58:23 2015 From: nginx-forum at nginx.us (silentmiles) Date: Thu, 03 Dec 2015 18:58:23 -0500 Subject: *14177278 readv() failed (104: Connection reset by peer) while reading upstream In-Reply-To: References: Message-ID: <7e3b6b894a535cf4357145acb0074f83.NginxMailingListEnglish@forum.nginx.org> I have also started to see these errors in my log -- readv() failed (104: Connection reset by peer). I recently upgraded from nginx 1.6.2 to 1.8, and I think they started from this point. This is with a Linux/PHP-FPM setup. 
I've found a few mentions of the above error with this setup, but they're all related to back-end timeouts. None of my requests to the back-end run for more than a second or two, so I don't think this is a timeout issue for me. Has anyone else seen this issue occur recently, or can provide any pointers? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262035,263252#msg-263252 From reallfqq-nginx at yahoo.fr Fri Dec 4 01:03:46 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 4 Dec 2015 02:03:46 +0100 Subject: *14177278 readv() failed (104: Connection reset by peer) while reading upstream In-Reply-To: <7e3b6b894a535cf4357145acb0074f83.NginxMailingListEnglish@forum.nginx.org> References: <7e3b6b894a535cf4357145acb0074f83.NginxMailingListEnglish@forum.nginx.org> Message-ID: Beware the observation bias. Maybe try going back to v1.6.2 (or v1.6.3 for the latest 1.6)? These messages indicate the remote side resets the connection, not nginx, thus your PHP-FPM is responsible for that. Why? Ask it :o) --- *B. R.* On Fri, Dec 4, 2015 at 12:58 AM, silentmiles wrote: > I have also started to see these errors in my log -- readv() failed (104: > Connection reset by peer). I recently upgraded from nginx 1.6.2 to 1.8, and > I think they started from this point. > > This is with a Linux/PHP-FPM setup. I've found a few mentions of the above > error with this setup, but they're all related to back-end timeouts. None > of > my requests to the back-end run for more than a second or two, so I don't > think this is a timeout issue for me. > > Has anyone else seen this issue occur recently, or can provide any > pointers? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,262035,263252#msg-263252 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From patrickobrien at tetrisblocks.net Fri Dec 4 01:34:50 2015 From: patrickobrien at tetrisblocks.net (Patrick O'Brien) Date: Thu, 3 Dec 2015 17:34:50 -0800 Subject: Blocking unknown hostnames for SSL/TLS connections In-Reply-To: <2821604.1Cv5aRJQZg@vbart-laptop> References: <2821604.1Cv5aRJQZg@vbart-laptop> Message-ID: On Thu, Dec 3, 2015 at 1:44 PM, Valentin V. Bartenev wrote: > On Thursday 03 December 2015 11:41:51 Patrick O'Brien wrote: >> Hello, >> >> We're currently using nginx for SSL/TLS termination, which then >> proxies the request to a pair of internal pair of load balancers. >> Since the TLS handshake is performed before nginx is able to figure >> out what hostname is being requested, except in cases where SNI is >> used, it will accept any request for any hostname and pass it along >> to our internal load balancers. This puts us in a situation where >> internal resources are allowed to be exposed externally, although in >> a roundabout way. >> >> For example, our internal load balancers have a pool called "news", >> which is accessible via news, or news.dc1.example.com and is intended >> to be internally accessible only. If you were to add our external IP >> address mapped to news.dc1.example.com and told curl to ignore the >> invalid cert, nginx will proxy this request along to our internal >> load balancers and the internal service will happily respond. Here's >> a curl example of this hitting the internal healthcheck endpoint: >> >> ``` >> curl -k https://news.dc1.example.com >> alive >> ``` >> >> Ideally this would be blocked at our ingress point, which is nginx. >> >> The only way around this that I've found so far is to inspect the >> $host variable in the server definition for the 443 blocks. 
The >> example below shows the check for the server block which is intended >> to respond to www.example.com and stg1.example.com only: >> >> ``` >> # if the request coming in doesn't match any of the hosts we know >> # about, throw a 301 and rewrite to the default server. >> if ($host !~ (^www.example.com$|^stg1.example.com$)) { >> return 301 https://stg1.example.com; >> } >> ``` >> >> In our production environment we have a wildcard cert that covers as >> many as 6 externally available resources, so I am concerned with the >> performance hit of doing a check on every host. >> >> Is there a preferred method of dealing with an issue like this? I've >> read through the config pitfalls page on the readthedocs.org[0] page, >> and the If Is Evil page[1], so I am pretty positive the solution >> above is very inefficient. The pitfalls page even talks about the >> preferred alternative to an if statement for hostname matching[2], >> but this does not appear to cover TLS connections. Is there any other >> documentation that talks about this or could be useful? >> > [..] Hi Valentin, > > http://nginx.org/en/docs/http/server_names.html > http://nginx.org/en/docs/http/request_processing.html > > It's as simple as: > > ssl_certificate example.com.crt; > ssl_certificate_key example.com.key > > server { > listen 443 ssl default_server; > return 301 https://stg1.example.com; > } > > server { > listen 443 ssl; > server_name stg1.example.com www.example.com ...; > > location / { > proxy_pass ...; > } > } > It looks like this works unless you have multiple ssl server definitions which require different certs.
Here is what we ended up with (more or less):

# rewrite everything on 80 to https
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
    proxy_set_header Host $host;
}

# server definition for www
server {
    listen 443 ssl;
    server_name www.example.com example.com;
    access_log /var/log/nginx/ssl.access.log;
    error_log /var/log/nginx/ssl.error.log;
    ssl_certificate /etc/nginx/ssl/www.combined;
    ssl_certificate_key /etc/nginx/ssl/www.key;
    ...
}

# server definition for wildcard
server {
    listen 443 ssl;
    server_name foo.example.com bar.example.com;
    access_log /var/log/nginx/ssl.access.log;
    error_log /var/log/nginx/ssl.error.log;
    ssl_certificate /etc/nginx/ssl/wildcard.combined;
    ssl_certificate_key /etc/nginx/ssl/wildcard.key;
    ...
}

# catch all to force unknown hostnames to www
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate /etc/nginx/ssl/www.combined;
    ssl_certificate_key /etc/nginx/ssl/www.key;
    return 301 https://www.example.com;
}

I did some spot checking via curl and chrome and everything appears to be working how I expect it to. -pat > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Dec 4 12:57:42 2015 From: nginx-forum at nginx.us (x-man) Date: Fri, 04 Dec 2015 07:57:42 -0500 Subject: Nginx crashing every few hours???
Message-ID: <6fdc51cb16a5f2973f87f3d077b0a6dc.NginxMailingListEnglish@forum.nginx.org> Hi, for the last few years I have been using nginx on very busy websites and everything has always worked fine; I never had a problem with nginx crashing. But now I have moved my websites to a new server, and although I am using the same settings as on the old server, nginx is crashing every few hours. Here is the nginx -V info: nginx version: nginx/1.8.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) configure arguments: --with-http_stub_status_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_autoindex_module --without-http_browser_module --without-http_geo_module --without-http_empty_gif_module --without-http_memcached_module --without-http_ssi_module --with-debug centos-release-6-7.el6.centos.12.3.x86_64 And here is the log from the point where nginx stopped working: 2015/12/04 08:09:45 [debug] 100691#0: *780601 post rewrite phase: 3 2015/12/04 08:09:45 [debug] 100691#0: *780601 generic phase: 4 2015/12/04 08:09:45 [debug] 100691#0: *780601 generic phase: 5 2015/12/04 08:09:45 [debug] 100691#0: *780601 access phase: 6 2015/12/04 08:09:45 [debug] 100691#0: *780601 access phase: 7 2015/12/04 08:09:45 [debug] 100691#0: *780601 post access phase: 8 2015/12/04 08:09:45 [debug] 100691#0: *780601 content phase: 9 2015/12/04 08:09:45 [debug] 100691#0: *780601 content phase: 10 2015/12/04 08:09:45 [debug] 100691#0: *780601 http filename: "/home/mmmm/public_html/logo.gif" 2015/12/04 08:09:45 [debug] 100691#0: *780601 add cleanup: 0000000000A433F8 2015/12/04 08:09:45 [debug] 100691#0: *780601 cached open file: /home/mmmm/public_html/logo.gif, fd:49, c:1, e:0, u:50 2015/12/04 08:09:45 [debug] 100691#0: *780601 http static fd: 49 2015/12/04 08:09:45 [debug] 100691#0: *780601 http set discard body 2015/12/04 08:09:45 [debug] 100691#0: *780601 HTTP/1.1 200 OK 2015/12/04 08:09:45 [debug] 100691#0: *780601 write new buf t:1 f:0 0000000000A435D8, pos 0000000000A435D8, size: 234 file: 0, size: 0 2015/12/04
08:09:45 [debug] 100691#0: *780601 http write filter: l:0 f:0 s:234 2015/12/04 08:09:45 [debug] 100691#0: *780601 http output filter "/logo.gif?" 2015/12/04 08:09:45 [debug] 100691#0: *780601 http copy filter: "/logo.gif?" 2015/12/04 08:09:45 [debug] 100691#0: *780601 malloc: 0000000000A390D0:5126 2015/12/04 08:09:45 [debug] 100691#0: *780601 read: 49, 0000000000A390D0, 5126, 0 2015/12/04 08:09:45 [debug] 100691#0: *780601 write old buf t:1 f:0 0000000000A435D8, pos 0000000000A435D8, size: 234 file: 0, size: 0 2015/12/04 08:09:45 [debug] 100691#0: *780601 write new buf t:1 f:0 0000000000A390D0, pos 0000000000A390D0, size: 5126 file: 0, size: 0 2015/12/04 08:09:45 [debug] 100691#0: *780601 http write filter: l:1 f:0 s:5360 2015/12/04 08:09:45 [debug] 100691#0: *780601 http write filter limit 0 2015/12/04 08:09:45 [debug] 100691#0: *780601 writev: 5360 of 5360 2015/12/04 08:09:45 [debug] 100691#0: *780601 http write filter 0000000000000000 2015/12/04 08:09:45 [debug] 100691#0: *780601 http copy filter: 0 "/logo.gif?" 2015/12/04 08:09:45 [debug] 100691#0: *780601 http finalize request: 0, "/logo.gif?" 
a:1, c:1 2015/12/04 08:09:45 [debug] 100691#0: *780601 set http keepalive handler 2015/12/04 08:09:45 [debug] 100691#0: *780601 http close request 2015/12/04 08:09:45 [debug] 100691#0: *780601 http log handler 2015/12/04 08:09:45 [debug] 100691#0: *780601 run cleanup: 0000000000A433F8 2015/12/04 08:09:45 [debug] 100691#0: *780601 close cached open file: /home/mmmm/public_html/logo.gif, fd:49, c:0, u:50, 0 2015/12/04 08:09:45 [debug] 100691#0: *780601 free: 0000000000A390D0 2015/12/04 08:09:45 [debug] 100691#0: *780601 free: 0000000000A42180, unused: 5 2015/12/04 08:09:45 [debug] 100691#0: *780601 free: 0000000000A43190, unused: 2480 2015/12/04 08:09:45 [debug] 100691#0: *780601 free: 000000000097C060 2015/12/04 08:09:45 [debug] 100691#0: *780601 hc free: 0000000000000000 0 2015/12/04 08:09:45 [debug] 100691#0: *780601 hc busy: 0000000000000000 0 2015/12/04 08:09:45 [debug] 100691#0: *780601 tcp_nodelay 2015/12/04 08:09:45 [debug] 100691#0: *780601 reusable connection: 1 2015/12/04 08:09:45 [debug] 100691#0: *780601 event timer add: 86: 4000:1449212989792 2015/12/04 08:09:45 [debug] 100691#0: *780601 post event 00007F6CAB710EC8 2015/12/04 08:09:45 [debug] 100691#0: posted event 00007F6CAB710EC8 2015/12/04 08:09:45 [debug] 100691#0: *780601 delete posted event 00007F6CAB710EC8 2015/12/04 08:09:45 [debug] 100691#0: *780601 http keepalive handler 2015/12/04 08:09:45 [debug] 100691#0: *780601 malloc: 000000000097C060:1024 2015/12/04 08:09:45 [debug] 100691#0: *780601 recv: fd:86 -1 of 1024 2015/12/04 08:09:45 [debug] 100691#0: *780601 recv() not ready (11: Resource temporarily unavailable) 2015/12/04 08:09:45 [debug] 100691#0: *780601 free: 000000000097C060 2015/12/04 08:09:45 [debug] 100691#0: worker cycle 2015/12/04 08:09:45 [debug] 100691#0: accept mutex locked 2015/12/04 08:09:45 [debug] 100691#0: epoll timer: 164 2015/12/04 08:09:45 [notice] 100692#0: signal 15 (SIGTERM) received, exiting 2015/12/04 08:09:45 [info] 100692#0: epoll_wait() failed (4: Interrupted 
system call) 2015/12/04 08:09:45 [debug] 100692#0: timer delta: 65 2015/12/04 08:09:45 [notice] 100692#0: exiting 2015/12/04 08:09:45 [debug] 100692#0: flush files 2015/12/04 08:09:45 [debug] 100692#0: run cleanup: 00000000009F8A80 2015/12/04 08:09:45 [debug] 100692#0: cleanup resolver 2015/12/04 08:09:45 [debug] 100692#0: run cleanup: 000000000098A138 2015/12/04 08:09:45 [debug] 100692#0: open file cache cleanup 2015/12/04 08:09:45 [notice] 100691#0: signal 15 (SIGTERM) received, exiting 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/mmmm/public_html/wp-content/themes/u-design/scripts/admin/uploadify/uploadify.css 2015/12/04 08:09:45 [notice] 100690#0: signal 15 (SIGTERM) received, exiting 2015/12/04 08:09:45 [info] 100691#0: epoll_wait() failed (4: Interrupted system call) 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/mmmm/public_html/piwik.php 2015/12/04 08:09:45 [debug] 100691#0: timer delta: 18 2015/12/04 08:09:45 [debug] 100692#0: close cached open file: /home/mmmm/public_html/piwik.php, fd:-1, c:0, u:1, 1 2015/12/04 08:09:45 [notice] 100691#0: exiting 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/sharedt/public_html/some.php 2015/12/04 08:09:45 [debug] 100691#0: flush files 2015/12/04 08:09:45 [debug] 100690#0: wake up, sigio 0 2015/12/04 08:09:45 [debug] 100692#0: close cached open file: /home/sharedt/public_html/some.php, fd:161, c:0, u:11, 1 2015/12/04 08:09:45 [debug] 100691#0: run cleanup: 00000000009F8A80 2015/12/04 08:09:45 [debug] 100690#0: child: 0 100691 e:0 t:0 d:0 r:1 j:0 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/turboi/public_html/some.php 2015/12/04 08:09:45 [debug] 100691#0: cleanup resolver 2015/12/04 08:09:45 [debug] 100692#0: close cached open file: /home/turboi/public_html/some.php, fd:127, c:0, u:4, 1 2015/12/04 08:09:45 [debug] 100690#0: child: 1 100692 e:0 t:0 d:0 r:1 j:0 2015/12/04 08:09:45 [debug] 100691#0: run cleanup: 000000000098A138 
2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/turboi/public_html/i/some.php 2015/12/04 08:09:45 [debug] 100690#0: termination cycle: 50 2015/12/04 08:09:45 [debug] 100691#0: open file cache cleanup 2015/12/04 08:09:45 [debug] 100692#0: close cached open file: /home/turboi/public_html/i/some.php, fd:-1, c:0, u:1, 1 2015/12/04 08:09:45 [debug] 100691#0: delete cached open file: /home/mmmm/public_html/some.php 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/turboi/public_html/some.php 2015/12/04 08:09:45 [debug] 100691#0: close cached open file: /home/mmmm/public_html/some.php, fd:140, c:0, u:4, 1 2015/12/04 08:09:45 [debug] 100690#0: sigsuspend 2015/12/04 08:09:45 [debug] 100692#0: close cached open file: /home/turboi/public_html/some.php, fd:135, c:0, u:214, 1 2015/12/04 08:09:45 [debug] 100691#0: delete cached open file: /home/mmmm/public_html/images/vid.png 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/turboi/public_html/some.php 2015/12/04 08:09:45 [debug] 100691#0: close cached open file: /home/mmmm/public_html/images/vid.png, fd:119, c:0, u:2, 1 2015/12/04 08:09:45 [debug] 100692#0: close cached open file: /home/turboi/public_html/some.php, fd:79, c:0, u:223, 1 2015/12/04 08:09:45 [debug] 100691#0: delete cached open file: /home/mmmm/public_html/images/default.png 2015/12/04 08:09:45 [debug] 100691#0: close cached open file: /home/mmmm/public_html/images/default.png, fd:-1, c:0, u:1, 1 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/mmmm/public_html/some.php 2015/12/04 08:09:45 [debug] 100691#0: delete cached open file: /home/mmmm/public_html/piwik.php 2015/12/04 08:09:45 [debug] 100692#0: close cached open file: /home/mmmm/public_html/some.php, fd:55, c:0, u:111, 1 2015/12/04 08:09:45 [debug] 100691#0: close cached open file: /home/mmmm/public_html/piwik.php, fd:-1, c:0, u:1, 1 2015/12/04 08:09:45 [debug] 100692#0: delete cached open file: /home/mmmm/public_html/search.php 
2015/12/04 08:09:45 [debug] 100691#0: delete cached open file: /home/mmmm/public_html/wp-admin/admin-ajax.php Where can be problem? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263265,263265#msg-263265 From mdounin at mdounin.ru Fri Dec 4 14:43:01 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Dec 2015 17:43:01 +0300 Subject: Nginx crashing every few hours??? In-Reply-To: <6fdc51cb16a5f2973f87f3d077b0a6dc.NginxMailingListEnglish@forum.nginx.org> References: <6fdc51cb16a5f2973f87f3d077b0a6dc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151204144301.GJ74233@mdounin.ru> Hello! On Fri, Dec 04, 2015 at 07:57:42AM -0500, x-man wrote: > last few years I using nginx on very busy websites and all time all working > fine, never I don`t have problem with nginx crashing but now I move my > websites to new server and I using same settings like on old server but now > nginx crashing every few hours, here is nginx -V info: [...] > 2015/12/04 08:09:45 [notice] 100692#0: signal 15 (SIGTERM) received, exiting ... > 2015/12/04 08:09:45 [notice] 100692#0: exiting ... > 2015/12/04 08:09:45 [notice] 100691#0: signal 15 (SIGTERM) received, exiting ... > 2015/12/04 08:09:45 [notice] 100690#0: signal 15 (SIGTERM) received, exiting ... > 2015/12/04 08:09:45 [notice] 100691#0: exiting ... > 2015/12/04 08:09:45 [debug] 100690#0: wake up, sigio 0 ... > 2015/12/04 08:09:45 [debug] 100690#0: child: 0 100691 e:0 t:0 d:0 r:1 j:0 ... > 2015/12/04 08:09:45 [debug] 100690#0: child: 1 100692 e:0 t:0 d:0 r:1 j:0 [...] > Where can be problem? As per logs, someone or something send SIGTERM to all nginx processes, including master (pid 100690) and all workers (100691, 100692). Check your server to find out how this happened. Usually such things happen due to bugs in log rotation scripts or something similar. 
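Two quick checks along those lines (a sketch, not part of the original reply: the paths are typical Linux defaults, and the audit rule assumes auditd is available):

```shell
# 1) Look for rotation/cron jobs that might be signalling nginx.
#    /etc/logrotate.d and /etc/cron.d are the usual places, not the only ones.
grep -rl "nginx" /etc/logrotate.d/ /etc/cron.d/ 2>/dev/null \
    || echo "no rotation/cron job mentions nginx"

# 2) If nothing turns up, auditd can record every caller of kill(2):
#    auditctl -a always,exit -F arch=b64 -S kill -k nginx-sigterm
#    ausearch -k nginx-sigterm
```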
-- Maxim Dounin http://nginx.org/ From anil at saog.net Fri Dec 4 14:47:16 2015 From: anil at saog.net (=?UTF-8?B?QW7EsWwgw4dldGlu?=) Date: Fri, 4 Dec 2015 16:47:16 +0200 Subject: Nginx crashing every few hours??? In-Reply-To: <20151204144301.GJ74233@mdounin.ru> References: <6fdc51cb16a5f2973f87f3d077b0a6dc.NginxMailingListEnglish@forum.nginx.org> <20151204144301.GJ74233@mdounin.ru> Message-ID: Hello, It can also be oomkiller killing nginx due to low resources, you should monitor & check the other logs. 2015-12-04 16:43 GMT+02:00 Maxim Dounin : > Hello! > > On Fri, Dec 04, 2015 at 07:57:42AM -0500, x-man wrote: > > > last few years I using nginx on very busy websites and all time all > working > > fine, never I don`t have problem with nginx crashing but now I move my > > websites to new server and I using same settings like on old server but > now > > nginx crashing every few hours, here is nginx -V info: > > [...] > > > 2015/12/04 08:09:45 [notice] 100692#0: signal 15 (SIGTERM) received, > exiting > ... > > 2015/12/04 08:09:45 [notice] 100692#0: exiting > ... > > 2015/12/04 08:09:45 [notice] 100691#0: signal 15 (SIGTERM) received, > exiting > ... > > 2015/12/04 08:09:45 [notice] 100690#0: signal 15 (SIGTERM) received, > exiting > ... > > 2015/12/04 08:09:45 [notice] 100691#0: exiting > ... > > 2015/12/04 08:09:45 [debug] 100690#0: wake up, sigio 0 > ... > > 2015/12/04 08:09:45 [debug] 100690#0: child: 0 100691 e:0 t:0 d:0 r:1 j:0 > ... > > 2015/12/04 08:09:45 [debug] 100690#0: child: 1 100692 e:0 t:0 d:0 r:1 j:0 > > [...] > > > Where can be problem? > > As per logs, someone or something send SIGTERM to all nginx > processes, including master (pid 100690) and all workers (100691, > 100692). Check your server to find out how this happened. > Usually such things happen due to bugs in log rotation scripts or > something similar. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Dec 4 14:53:35 2015 From: nginx-forum at nginx.us (x-man) Date: Fri, 04 Dec 2015 09:53:35 -0500 Subject: Nginx crashing every few hours??? In-Reply-To: <20151204144301.GJ74233@mdounin.ru> References: <20151204144301.GJ74233@mdounin.ru> Message-ID: <89f13fe64b9150c27ef187ee183f6318.NginxMailingListEnglish@forum.nginx.org> I also thinking about it but I can`t find what doing it...on this server I have cpanel and csf firewall with processes monitoring but csf sending e-mail when do something... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263265,263270#msg-263270 From nginx-forum at nginx.us Fri Dec 4 14:55:37 2015 From: nginx-forum at nginx.us (x-man) Date: Fri, 04 Dec 2015 09:55:37 -0500 Subject: Nginx crashing every few hours??? In-Reply-To: References: Message-ID: <3a1a7dd9b5b734da0b9fc54f8dab0508.NginxMailingListEnglish@forum.nginx.org> egrep -i -r 'killed process' /var/log/ result nothing... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263265,263271#msg-263271 From nginx-forum at nginx.us Fri Dec 4 22:40:02 2015 From: nginx-forum at nginx.us (agruener) Date: Fri, 04 Dec 2015 17:40:02 -0500 Subject: OCSP malformedrequest with 1.9.7 and openssl 1.0.2e Message-ID: Hello, OCSP is not working on my raspberrypi2 with nginx 1.9.7 and OpenSSL 1.0.2e. I have compiled both together. 
tail /var/log/nginx/error.log 2015/12/04 22:28:21 [error] 14841#0: OCSP response not successful (1: malformedrequest) while requesting certificate status, responder: ocsp.startssl.com 2015/12/04 22:28:29 [error] 14841#0: OCSP response not successful (1: malformedrequest) while requesting certificate status, responder: ocsp.startssl.com 2015/12/04 22:28:30 [error] 14842#0: OCSP response not successful (1: malformedrequest) while requesting certificate status, responder: ocsp.startssl.com Got the ca-bundle.pem from https://www.startssl.com/certs/?C=S;O=D /etc/nginx/sites-enabled $ cat default .... # OCSP Stapling ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/nginx/my_ssl_certs/ca-bundle.pem; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 5s; ..... OCSP is not working after checks with sslabs and openssl e.g. echo QUIT | openssl s_client -connect www.mydomain.com:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update' According to https://www.ietf.org/rfc/rfc2560.txt the errors says: .... OCSPResponseStatus ::= ENUMERATED { malformedRequest (1), --Illegal confirmation request .... My StartSSL certificates are SHA2 (https://www.startssl.com/certs/class1/sha2/pem/) In /etc/nginx/sites-enabled/ I have more than one config / domain configured. But it does not matter wether I only configure OCSP in every single file or just default. 
I only found a Bug message here: " OpenSSL OCSP Bad Request" (http://jfcarter.net/~jimc/documents/bugfix/21-openssl-ocsp.html) saying you have to add: -header "HOST" "ocsp.startssl.com" My options for compiling openssl & nginx have been ./config --prefix=$STATICLIBSSL no-ssl2 no-ssl3 no-shared \ && make depend \ && make \ && make install_sw ./configure --with-cc-opt="-I $STATICLIBSSL/include -I/usr/include" \ --with-ld-opt="-L $STATICLIBSSL/lib -Wl,-rpath -lssl -lcrypto -ldl -lz" \ --sbin-path=/usr/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --pid-path=/var/run/nginx.pid \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --with-pcre=$BPATH/$VERSION_PCRE \ --with-http_ssl_module \ --with-http_v2_module \ --with-file-aio \ --with-ipv6 \ --with-http_gzip_static_module \ --with-http_stub_status_module \ --without-mail_pop3_module \ --without-mail_smtp_module \ --without-mail_imap_module \ && make && make install Any ideas ? Thanks in advance, Alexander Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263279,263279#msg-263279 From zxcvbn4038 at gmail.com Fri Dec 4 23:17:01 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Fri, 4 Dec 2015 18:17:01 -0500 Subject: SPDY + HTTP/2 In-Reply-To: References: Message-ID: Looks like Cloudflare patched SPDY support back into NGINX, and they will release the patch to everyone next year: https://blog.cloudflare.com/introducing-http2/#comment-2391853103 On Thu, Dec 3, 2015 at 1:14 PM, CJ Ess wrote: > NGINX devs, > > I know you were very excited to remove SPDY support from NGINX, but for > the next few years there are a lot of devices (mobile devices that can't > upgrade, end users who aren't comfortable upgrading, etc) that are not > going to have http/2 support. By removing SPDY support you've created a > situation where we either have to penalize those users by forcing them to > HTTP(S) connections, or we have to forego upgrading NGINX and not offer > HTTP/2. 
> > Cloudflare is offering both SPDY and HTTP/2 - they are a huge NGINX shop > but I'm not clear if they are using NGINX to do that or not. I'd like to > encourage you to follow their lead and reinstate the SPDY support for a > while (even if its just a compile time option thats disabled by default). > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Dec 5 03:32:48 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 5 Dec 2015 06:32:48 +0300 Subject: OCSP malformedrequest with 1.9.7 and openssl 1.0.2e In-Reply-To: References: Message-ID: <20151205033248.GN74233@mdounin.ru> Hello! On Fri, Dec 04, 2015 at 05:40:02PM -0500, agruener wrote: > OCSP is not working on my raspberrypi2 with nginx 1.9.7 and OpenSSL 1.0.2e. > I have compiled both together. > > tail /var/log/nginx/error.log > > 2015/12/04 22:28:21 [error] 14841#0: OCSP response not successful (1: > malformedrequest) while requesting certificate status, responder: > ocsp.startssl.com > 2015/12/04 22:28:29 [error] 14841#0: OCSP response not successful (1: > malformedrequest) while requesting certificate status, responder: > ocsp.startssl.com > 2015/12/04 22:28:30 [error] 14842#0: OCSP response not successful (1: > malformedrequest) while requesting certificate status, responder: > ocsp.startssl.com The message means that an OCSP request was successfully sent, but OCSP responder returned an error. This may be either due to OCSP response being indeed incorrect for some reason, or due to a problem on OCSP responder side. You may try the following: - check if OCSP requests from other clients (e.g., browsers) work; note that openssl's OCSP client will likely fail out of the box; - check if the same error occurs on x86 hosts for the same certificate or not; - try tcpdump'ing traffic between nginx and the OCSP responder to see what happens on the wire. 
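For the last point, a command sketch (the interface name and certificate file names are placeholders; the explicit HOST header matches the workaround cited earlier in the thread, in OpenSSL 1.0.x argument syntax):

```shell
# capture the exchange between nginx and the responder
tcpdump -i eth0 -s 0 -w ocsp.pcap host ocsp.startssl.com

# reproduce the OCSP request by hand for comparison
openssl ocsp -issuer sub.class1.server.ca.pem -cert www.mydomain.com.pem \
    -url http://ocsp.startssl.com -header "HOST" "ocsp.startssl.com" -text
```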
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sat Dec 5 09:20:32 2015 From: nginx-forum at nginx.us (agruener) Date: Sat, 05 Dec 2015 04:20:32 -0500 Subject: OCSP malformedrequest with 1.9.7 and openssl 1.0.2e In-Reply-To: <20151205033248.GN74233@mdounin.ru> References: <20151205033248.GN74233@mdounin.ru> Message-ID: <1a8c70023ae0637cc088c03c05d3daa0.NginxMailingListEnglish@forum.nginx.org> Dear Maxim, thanks for your ideas. I think I have not fully understand this matter, yet ;-) - check if OCSP requests from other clients (e.g., browsers) work; note that openssl's OCSP client will likely fail out of the box; ---> it does not work with openssl on Ubuntu 14.04 LTS (OpenSSL 1.0.1f 6 Jan 2014), openssl on raspberrypi2 (OpenSSL 1.0.2e) and Qualsys ssllabs (https://www.ssllabs.com/ssltest/). I do not get any errors on the other hand in Firefox or Chrome on Windows / Ubuntu / Android browsing to my websites. But I do not know how to do the same OCSP tests with my browsers. - check if the same error occurs on x86 hosts for the same certificate or not; --> I have to try this later, it is not that easy to set up here right now. - try tcpdump'ing traffic between nginx and the OCSP responder to see what happens on the wire. --> I have done it. It is showing some communication when I do the test with openssl, e.g. echo QUIT | openssl s_client -connect www.mydomain.com:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update' Pcap extraction show communication: .... . StartCom Ltd.1+0)..U..."Secure Digital Certificate Signing1806..U.../StartCom Class 1 Primary Intermediate Server CA0.. 151011024455Z.... ..... . ...M0..I0...g.....0..;..+......7...0..*0...+........"http://www.startssl.com/policy.pdf0....+.......0..0'. 
StartCom Certification Authority0.......This certificate was issued according to the Class 1 Validation requirements of the StartCom CA policy, reliance only for the intended purpose in compliance of the relying party obligations.05..U....0,0*.(.&.$http://crl.startssl.com/crt1-crl.crl0....+..........0.09..+.....0..-http://ocsp.startssl.com/sub/class1/server/ca0B..+.....0..6http://aia.startssl.com/certs/sub.class1.server.ca.crt0#..U....0...http://www.startssl.com/0.... But at the end of my pcap I have a TLSv1.2 Record Layer: Encrypted Alert Content Type: Alert (21) Version: TLS 1.2 (0x0303) Length: 26 Alert Message: Encrypted Alert followed by FIN, ACK Greetings, Alexander Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263279,263285#msg-263285 From gideon425.gb8 at mailnull.com Sat Dec 5 21:59:00 2015 From: gideon425.gb8 at mailnull.com (gideon425.gb8 at mailnull.com) Date: Sat, 5 Dec 2015 16:59:00 -0500 (EST) Subject: web application to call C code Message-ID: <20151205215900.48DA6510202@outside.256.com> I am writing a web application and am interested in whether I can use nginx. For each request to the web server, I want it to call some C code I have written. The C code will generate the HTML I want to send back. Is there a simple way to do this? ---------- This message was sent from a MailNull anti-spam account. You can get your free account and take control over your email by visiting the following URL. http://mailnull.com/ From nginx-forum at nginx.us Sat Dec 5 22:45:02 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 05 Dec 2015 17:45:02 -0500 Subject: web application to call C code In-Reply-To: <20151205215900.48DA6510202@outside.256.com> References: <20151205215900.48DA6510202@outside.256.com> Message-ID: <2f4cc48c8514e0cc1f864a3b44a41c03.NginxMailingListEnglish@forum.nginx.org> You're better off with Lua (embedded in nginx), search for OpenResty. 
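With the Lua module embedded in nginx (or the OpenResty bundle), a per-request handler can live straight in the configuration. A minimal sketch, assuming lua-nginx-module is compiled in; the location and HTML below are made up:

```nginx
location /hello {
    default_type text/html;
    # stand-in for the poster's C routine: build the response per request
    content_by_lua '
        ngx.say("<html><body>generated per request</body></html>")
    ';
}
```

Existing C code can then be reached from such a handler, for example through LuaJIT's FFI, which is one way to avoid writing a full nginx module.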
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263295,263296#msg-263296 From richard at kearsley.me Sun Dec 6 01:28:15 2015 From: richard at kearsley.me (Richard Kearsley) Date: Sun, 06 Dec 2015 01:28:15 +0000 Subject: output_buffers value breaks thread_pool/aio on 1.9.5+ Message-ID: <56638F2F.6090502@kearsley.me> Hi Since 1.9.5, *) Change: now the "output_buffers" directive uses two buffers by default. The two buffers do not work with thread_pool/aio, the connection is closed at 32,768 bytes (size of one buffer) These messages are shown in the error log: [alert] 126931#126931: task #106 already active - Changing output_buffers back to 1 32k avoids the problem - aio=off avoids the problem Many thanks Richard From nginx-forum at nginx.us Sun Dec 6 06:14:30 2015 From: nginx-forum at nginx.us (WANJUNE) Date: Sun, 06 Dec 2015 01:14:30 -0500 Subject: NginX SSL reverse mode, client ip address problem Message-ID: In NginX reverse mode, There is a problem that can't get real client's Ip address. If I use Http protocol, I can simply handle this problem with below http configuration. http { server { listen 80; location / { proxy_set_header X-forwarded-for; proxy_pass http://destAddress; } } } The problem is in SSL. I don't want to use http ssl listen becase of SSL handshaking burden on NginX. I decided to use stream codec like below. stream { upstream aa34 { zone first_row 64k; server google.com fail_timeout=5s; } server { listen 127.0.0.1:8081; location / { proxy_pass https://aa34; } } In this case, I think I can't specify any http related parameters like 'X-forwarded-for'. Is there any way to change source ip address of TCP/IP Protocol header(Ip Header) to client's real Ip ? Thanks. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263299,263299#msg-263299 From al-nginx at none.at Sun Dec 6 10:17:53 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 06 Dec 2015 11:17:53 +0100 Subject: NginX SSL reverse mode, client ip address problem In-Reply-To: References: Message-ID: <266531f557a7281e36c2c28b3c2cf356@none.at> Hi WANJUNE. On 06-12-2015 07:14, WANJUNE wrote: > In NginX reverse mode, > > There is a problem that can't get real client's Ip address. [snipp] > I don't want to use http ssl listen becase of SSL handshaking burden on > NginX. > > I decided to use stream codec like below. > > stream { > upstream aa34 { > zone first_row 64k; > server google.com fail_timeout=5s; > } > server { > listen 127.0.0.1:8081; > location / { > proxy_pass https://aa34; > } > } > In this case, I think I can't specify any http related parameters like > 'X-forwarded-for'. > Is there any way to change source ip address of TCP/IP Protocol > header(Ip > Header) to client's real Ip ? How about using the proxy protocol? http://www.haproxy.org/download/1.6/doc/proxy-protocol.txt This option was introduced in 1.9.2 ############## http://nginx.org/en/CHANGES Changes with nginx 1.9.2 16 Jun 2015 *) Feature: the "proxy_protocol" directive in the stream module. ############## It's not yet in the documentation but in the code ;-) http://nginx.org/en/docs/stream/ngx_stream_core_module.html I would suggest using the following line server fail_timeout=5s proxy_protocol; and on the origin server, in case it is nginx, this: http://nginx.org/en/docs/http/ngx_http_core_module.html#listen listen ..... proxy_protocol ....; If your destination server is not able to read the proxy protocol, then only DSR (Direct Server Return) will be able to show you the client IP. 
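Put together, that looks roughly like the sketch below (addresses and names are placeholders; note that in the stream module proxy_protocol is a directive of its own, and stream proxy_pass takes a plain address, no scheme):

```nginx
stream {
    upstream aa34 {
        server backend.example.com:443 fail_timeout=5s;
    }
    server {
        listen 127.0.0.1:8081;
        proxy_pass aa34;
        proxy_protocol on;   # prepend "PROXY TCP4 ..." to the upstream connection
    }
}

# and on the origin server, if it is nginx:
# server {
#     listen 443 ssl proxy_protocol;
#     ...
# }
```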
Cheers Aleks From al-nginx at none.at Sun Dec 6 10:24:28 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 06 Dec 2015 11:24:28 +0100 Subject: web application to call C code In-Reply-To: <20151205215900.48DA6510202@outside.256.com> References: <20151205215900.48DA6510202@outside.256.com> Message-ID: <9b64525d31f7b0deb371e203997be8a2@none.at> Hi. Am 05-12-2015 22:59, schrieb gideon425.gb8 at mailnull.com: > I am writing a web application and am interested in whether I can use > nginx. For each request for the web server, I want it to call some C > code I have written. The code C will generate the HTML I want to send > back. Is there a simple way to do this? I see at least 2 options. 1.) Write a nginx module like ngx_http_empty_gif_module or any other nginx module. http://nginx.org/en/docs/http/ngx_http_empty_gif_module.html http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_empty_gif_module.c 2.) use a more dedicated server like http://gwan.ch/developers On which platform do you plan to run your C-Code ? Cheers Aleks From nginx-forum at nginx.us Sun Dec 6 11:04:05 2015 From: nginx-forum at nginx.us (WANJUNE) Date: Sun, 06 Dec 2015 06:04:05 -0500 Subject: NginX SSL reverse mode, client ip address problem In-Reply-To: <266531f557a7281e36c2c28b3c2cf356@none.at> References: <266531f557a7281e36c2c28b3c2cf356@none.at> Message-ID: <3b9c4964cb049b00fa506c11c553f846.NginxMailingListEnglish@forum.nginx.org> Aleks, I'm really thank you for your timely response. I checked "proxy_protocol on;" option is working fine and watched the L4 machine send proxy protocol header like "PROXY TCP4 [Ip1] [Ip2] [Port1] [Port2]". Really thank you. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263299,263302#msg-263302 From bjunity at gmail.com Sun Dec 6 14:00:23 2015 From: bjunity at gmail.com (bjunity at gmail.com) Date: Sun, 6 Dec 2015 15:00:23 +0100 Subject: HTTP/2 Gateway Message-ID: Hi, i've tried to use nginx as http/2 gateway for backends which only supporting HTTP 1.1. If a backend is already HTTP/2 ready, than HTTP/2 should be used. When i test my configuration (simple proxy_pass / upstream), the connection from nginx to the backend is always HTTP 1.1 (even if the backend supports HTTP/2). Is there a possiblity to speak HTTP/2 also to the backends? I've used "proxy_http_version 1.1" in my configuration (no other setting (higher version or protocol auto-negotiation) available). Regards, Bjoern -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 7 01:57:50 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Dec 2015 04:57:50 +0300 Subject: OCSP malformedrequest with 1.9.7 and openssl 1.0.2e In-Reply-To: <1a8c70023ae0637cc088c03c05d3daa0.NginxMailingListEnglish@forum.nginx.org> References: <20151205033248.GN74233@mdounin.ru> <1a8c70023ae0637cc088c03c05d3daa0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151207015750.GO74233@mdounin.ru> Hello! On Sat, Dec 05, 2015 at 04:20:32AM -0500, agruener wrote: > Dear Maxim, > > thanks for your ideas. > > I think I have not fully understand this matter, yet ;-) > > - check if OCSP requests from other clients (e.g., browsers) work; > note that openssl's OCSP client will likely fail out of the box; > > ---> it does not work with openssl on Ubuntu 14.04 LTS (OpenSSL 1.0.1f 6 Jan > 2014), openssl on raspberrypi2 (OpenSSL 1.0.2e) and Qualsys ssllabs > (https://www.ssllabs.com/ssltest/). I do not get any errors on the other > hand in Firefox or Chrome on Windows / Ubuntu / Android browsing to my > websites. But I do not know how to do the same OCSP tests with my browsers. 
It looks like you've mistaken OCSP requests and OCSP stapling. You have to test OCSP requests from other clients, not if OCSP stapling is provided by your server. Note well that Browsers are not expected to show any errors if OCSP requests fail, and not all browsers will use OCSP by default or at all. You have to dump traffic between the browser and the OCSP responder to see what happens. [...] > - try tcpdump'ing traffic between nginx and the OCSP responder to see what > happens on the wire. > > --> I have done it. It is showing some communication when I do the test with > openssl, e.g. > > echo QUIT | openssl s_client -connect www.mydomain.com:443 -status 2> > /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update' > > > Pcap extraction show communication: > .... > . > StartCom Ltd.1+0)..U..."Secure Digital Certificate > Signing1806..U.../StartCom Class 1 Primary Intermediate Server CA0.. > 151011024455Z.... This seems to be traffic between openssl and nginx. You have to dump traffic between nginx and the OCSP responder (ocsp.startssl.com) to see OCSP requests from nginx and responses with errors. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Dec 7 02:13:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Dec 2015 05:13:55 +0300 Subject: HTTP/2 Gateway In-Reply-To: References: Message-ID: <20151207021354.GQ74233@mdounin.ru> Hello! On Sun, Dec 06, 2015 at 03:00:23PM +0100, bjunity at gmail.com wrote: > i've tried to use nginx as http/2 gateway for backends which only > supporting HTTP 1.1. If a backend is already HTTP/2 ready, than HTTP/2 > should be used. > > When i test my configuration (simple proxy_pass / upstream), the connection > from nginx to the backend is always HTTP 1.1 (even if the backend supports > HTTP/2). > > Is there a possiblity to speak HTTP/2 also to the backends? > > I've used "proxy_http_version 1.1" in my configuration (no other setting > (higher version or protocol auto-negotiation) available). 
No, talking to backends using the HTTP/2 protocol is not supported by the proxy module. -- Maxim Dounin http://nginx.org/ From martin at martin-wolfert.de Mon Dec 7 07:22:46 2015 From: martin at martin-wolfert.de (Martin Wolfert) Date: Mon, 7 Dec 2015 08:22:46 +0100 Subject: HTTP/2 stable status Message-ID: <566533C6.2070701@martin-wolfert.de> Hey, does anyone have experience with nginx 1.9.7 and http/2 in production environments? Means: is http/2 stable in 1.9.7? Best, Martin From nikolai at lusan.id.au Mon Dec 7 10:00:40 2015 From: nikolai at lusan.id.au (Nikolai Lusan) Date: Mon, 07 Dec 2015 20:00:40 +1000 Subject: IPv6, HTTPS, and SNI Message-ID: <1449482440.26958.17.camel@lusan.id.au> Hi, I am having an issue using https with multiple sites on ipv6 (nominally SNI). If I declare more than one listen directive for ipv6 on port 443 nginx refuses to start. The ipv4 configuration is fine, it's only an issue with ipv6. Nginx details: nginx version: nginx/1.9.7; built by gcc 4.9.2; built with OpenSSL 1.0.1k 8 Jan 2015; TLS SNI support enabled. Configuration looks like:

server {
    listen              80
    listen              [::]:80;
    listen              443 ssl;
    listen              [::]:443 ssl;
    server_name         my_site.com;
    ssl_certificate     my_site.com.crt;
    ssl_certificate_key my_site.com.key;
    ...
}
server {
    listen              80;
    listen              [::]:80;
    listen              443 ssl;
    listen              [::]:443 ssl;
    server_name         your_site.com;
    ssl_certificate     your_site.com.crt;
    ssl_certificate_key your_site.com.key;
    ...
}

Does anyone have an idea on why this might be occurring? In theory ipv6 shouldn't make a difference, and it sure as heck doesn't make a difference to the ipv4 configuration. -- Nikolai Lusan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From nginx-forum at nginx.us Mon Dec 7 10:58:28 2015 From: nginx-forum at nginx.us (pankaj.sain) Date: Mon, 07 Dec 2015 05:58:28 -0500 Subject: Can we limit a ngnix website bandwidth to 2 gb per day Message-ID: Can we limit a website's bandwidth (i.e. 2 GB per day)? I'm running multiple websites on a single webserver; one of these is heavily accessed and consuming huge bandwidth, and due to this the other websites suffer unnecessarily. Any insight will be helpful. Thanks, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263316,263316#msg-263316 From nginx-forum at nginx.us Mon Dec 7 11:32:32 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 07 Dec 2015 06:32:32 -0500 Subject: IPv6, HTTPS, and SNI In-Reply-To: <1449482440.26958.17.camel@lusan.id.au> References: <1449482440.26958.17.camel@lusan.id.au> Message-ID: <4a8bf3bf908582bc826b30a42cd14a67.NginxMailingListEnglish@forum.nginx.org> I seem to recall that with ipv6 you can't mix 80 with 443 in one server configuration, but I might be wrong here. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263315,263317#msg-263317 From luky-37 at hotmail.com Mon Dec 7 12:16:06 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 7 Dec 2015 13:16:06 +0100 Subject: IPv6, HTTPS, and SNI In-Reply-To: <1449482440.26958.17.camel@lusan.id.au> References: <1449482440.26958.17.camel@lusan.id.au> Message-ID: Hi, > listen 80; Afaik this will make nginx listen to both the IPv4 and IPv6 family. Specify the real IPv4 address you want to listen to, to avoid the IPv6 bind. > listen [::]:80; This will make nginx listen to both the IPv6 and IPv4 family. Specify ipv6only=on [1] as a keyword to avoid the IPv4 bind. Same goes for 443/ssl. 
Imho, what you want is just listen to both address-families (without declaring IPv6): listen 80; listen 443 ssl; Regards, Lukas [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#listen From mdounin at mdounin.ru Mon Dec 7 12:51:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Dec 2015 15:51:58 +0300 Subject: IPv6, HTTPS, and SNI In-Reply-To: <1449482440.26958.17.camel@lusan.id.au> References: <1449482440.26958.17.camel@lusan.id.au> Message-ID: <20151207125157.GV74233@mdounin.ru> Hello! On Mon, Dec 07, 2015 at 08:00:40PM +1000, Nikolai Lusan wrote: > I am having in issue using https with multiple sites on ipv6 (nominally > SNI). If I declare more than one listen directive for ipv6 on port 443 > nginx refuses to start. The ipv4 configuration is fine, it's only an > issue with ipv6. Please define "refuses to start". It should print error details to stdout if startup fails for some reason, and will log anything to error log as well. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Dec 7 13:00:50 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Dec 2015 16:00:50 +0300 Subject: IPv6, HTTPS, and SNI In-Reply-To: References: <1449482440.26958.17.camel@lusan.id.au> Message-ID: <20151207130050.GW74233@mdounin.ru> Hello! On Mon, Dec 07, 2015 at 01:16:06PM +0100, Lukas Tribus wrote: > > listen 80; > > Afaik this will make nginx listen to both IPv4 and IPv6 family. > > Specify the real IPv4 adress you want to listen to, to avoid the IPv6 bind. No, just a port means IPv4 wildcard address. > > listen [::]:80; > > This will make nginx to listen to both IPv6 and IPv4 family. > > Specify ipv6only=on [1] as a keyword to avoid the IPv4 bind. No, IPv6-and-IPv4 listen sockets will be created if and only if you'll explicitly set the ipv6only parameter to off. (Before nginx 1.3.4, the operation system setting was used as a default for ipv6only. This was proven to be a wrong approach, and now nginx forces ipv6only=on by default. 
See http://nginx.org/r/listen for some more details.) -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Dec 7 13:01:41 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Dec 2015 16:01:41 +0300 Subject: IPv6, HTTPS, and SNI In-Reply-To: <4a8bf3bf908582bc826b30a42cd14a67.NginxMailingListEnglish@forum.nginx.org> References: <1449482440.26958.17.camel@lusan.id.au> <4a8bf3bf908582bc826b30a42cd14a67.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151207130141.GX74233@mdounin.ru> Hello! On Mon, Dec 07, 2015 at 06:32:32AM -0500, itpp2012 wrote: > I seem to recall that with ipv6 you can't mix 80 with 443 in one server > configuration, but I might be wrong here. You recall it incorrectly. -- Maxim Dounin http://nginx.org/ From sca at andreasschulze.de Mon Dec 7 13:13:21 2015 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 07 Dec 2015 14:13:21 +0100 Subject: IPv6, HTTPS, and SNI In-Reply-To: <1449482440.26958.17.camel@lusan.id.au> Message-ID: <20151207141321.Horde.kdp5S4ujecW4rPMwUITPoxh@andreasschulze.de> Nikolai Lusan: > In theory ipv6 shouldn't make a difference, and it sure as heck > doesn't make a > difference to the ipv4 configuration. Maybe not what you expect/like to hear: Why does my head hurt if I run against a wall? -> simply don't do that. IPv6 is more then IPv4 with longer addresses. same here: generally there's no need for SNI on IPv6 Take one address per service and you're fine. Andreas From vbart at nginx.com Mon Dec 7 14:05:06 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 07 Dec 2015 17:05:06 +0300 Subject: output_buffers value breaks thread_pool/aio on 1.9.5+ In-Reply-To: <56638F2F.6090502@kearsley.me> References: <56638F2F.6090502@kearsley.me> Message-ID: <5235856.G9XxzyLFbO@vbart-workstation> On Sunday 06 December 2015 01:28:15 Richard Kearsley wrote: > Hi > Since 1.9.5, > *) Change: now the "output_buffers" directive uses two buffers by default. 
> > The two buffers do not work with thread_pool/aio, the connection is > closed at 32,768 bytes (size of one buffer) I've just tested and everything worked fine. It seems there are other factors. > > These messages are shown in the error log: > [alert] 126931#126931: task #106 already active > > - Changing output_buffers back to 1 32k avoids the problem > - aio=off avoids the problem > Could you provide full configuration and a debug log? http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From daniel at mostertman.org Mon Dec 7 14:16:48 2015 From: daniel at mostertman.org (=?UTF-8?Q?Dani=c3=abl_Mostertman?=) Date: Mon, 7 Dec 2015 15:16:48 +0100 Subject: Optional SSL on same port, or access control/log with stream protocol. Message-ID: <566594D0.40703@mostertman.org> Hi! I'm running an HTTP-based application (Plex) that decides whether or not to use SSL based on what the client decides to talk to it. I would like to be able to control what it does and who's able to connect to it a bit more, and I'd like to do that with nginx. I've tried disabling all SSL, which works, but then the frontend client that's loaded over SSL will obviously get mixed content problems in browsers. The frontend client is loaded in a location on my SSL-only website, so I'd prefer to not do that. I can also not enable SSL completely, since Android/iOS/Chromecast clients can't cope with the SSL yet. I've tried using the stream protocol, which obviously works fine, but it does not seem possible to log IP's or have access control with them at all, is that correct? I get not being able to log HTTP-information, but some details of the incoming connection are certainly known, and could therefore be logged? 
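For reference, this is the shape of the stream setup I tried (the addresses and ports here are placeholders, not my real ones):

stream {
    server {
        listen 32400;

        # raw passthrough: TLS and plain clients both reach the backend untouched
        proxy_pass 127.0.0.1:32401;

        # what I would like in addition, if the stream module supports it:
        # allow 192.168.1.0/24;
        # deny  all;
    }
}

The passthrough part works fine; it is the commented-out access control and any per-connection logging that I cannot find in the stream module.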
Kind regards, Daniël Mostertman From richard at kearsley.me Mon Dec 7 15:57:18 2015 From: richard at kearsley.me (Richard Kearsley) Date: Mon, 07 Dec 2015 15:57:18 +0000 Subject: output_buffers value breaks thread_pool/aio on 1.9.5+ In-Reply-To: <5235856.G9XxzyLFbO@vbart-workstation> References: <56638F2F.6090502@kearsley.me> <5235856.G9XxzyLFbO@vbart-workstation> Message-ID: <5665AC5E.8050809@kearsley.me> Hi Valentin Thanks for looking, I looked further and found that the cause is enabling (any) "body_filter_by_lua_file" script from the lua module while aio and multiple output_buffers are set. I'll send the report over to agentzh instead Thanks Richard On 07/12/15 14:05, Valentin V. Bartenev wrote: > On Sunday 06 December 2015 01:28:15 Richard Kearsley wrote: >> Hi >> Since 1.9.5, >> *) Change: now the "output_buffers" directive uses two buffers by default. >> >> The two buffers do not work with thread_pool/aio, the connection is >> closed at 32,768 bytes (size of one buffer) > I've just tested and everything worked fine. > It seems there are other factors. > >> These messages are shown in the error log: >> [alert] 126931#126931: task #106 already active >> >> - Changing output_buffers back to 1 32k avoids the problem >> - aio=off avoids the problem >> > Could you provide full configuration and a debug log? > http://nginx.org/en/docs/debugging_log.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From barry at automattic.com Mon Dec 7 17:53:33 2015 From: barry at automattic.com (Barry Abrahamson) Date: Mon, 7 Dec 2015 15:53:33 -0200 Subject: HTTP/2 stable status In-Reply-To: <566533C6.2070701@martin-wolfert.de> References: <566533C6.2070701@martin-wolfert.de> Message-ID: Yes, it's stable. We are running it on WordPress.com serving many hundreds of thousands of req/sec.
> On Dec 7, 2015, at 5:22 AM, Martin Wolfert wrote: > > Hey, > > does anyone have experience with nginx 1.9.7 and http/2 in production environments? > Means: is http/2 stable in 1.9.7 ? > > Best, > Martin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Barry Abrahamson | Systems Wrangler | Automattic Blog: http://barry.wordpress.com -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From nginx-forum at nginx.us Mon Dec 7 19:44:07 2015 From: nginx-forum at nginx.us (George) Date: Mon, 07 Dec 2015 14:44:07 -0500 Subject: HTTP/2 stable status In-Reply-To: References: Message-ID: <7de811566f5da50199f83e7a0fd4e4e8.NginxMailingListEnglish@forum.nginx.org> yup very stable for me on 1.9.7 + HTTP/2 + ngx_pagespeed :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263313,263337#msg-263337 From reallfqq-nginx at yahoo.fr Mon Dec 7 21:08:40 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 7 Dec 2015 22:08:40 +0100 Subject: location ^~ modifier Message-ID: Hello, I got kind of a newbie question: Does the ^~ location modifier find a matching string at the start of a URI? I naively thought it was just a variant of the classic prefix search, without any constraint on the placement of the matched string in a URI. Is there a simple way of matching the longest prefix location anywhere in a URI, discarding any regex location at the same level? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Tue Dec 8 05:36:40 2015 From: nginx-forum at nginx.us (theheadsage) Date: Tue, 08 Dec 2015 00:36:40 -0500 Subject: nginx ignoring config file when started via systemd Message-ID: <4054ffd603ce83bb436acaab5eb64831.NginxMailingListEnglish@forum.nginx.org> Hi Guys, Got a somewhat interesting bug with nginx, which is where my config is being ignored by nginx when it's started via systemd. Here's the config: server { listen 80 default_server; listen [::]:80 default_server; server_name _; location /nodejs/testnodebox { proxy_pass http://127.0.0.1:7000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } error_page 404 /404.html; location = /40x.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } Now, this config works fine on another server, but on this one, when started via systemd, visiting the URL results in the nginx 404 page. 2015/12/08 16:23:16 [error] 64787#0: *1 open() "/usr/share/nginx/html/nodejs/testnodebox" failed (2: No such file or directory), client: , server: _, request: "GET /nodejs/testnodebox HTTP/1.1", host: "" but if I kill nginx via systemctl stop nginx.service and start it manually via sudo nginx (in the /etc/nginx/ directory) it reads the config and proxies requests fine: - - [08/Dec/2015:16:34:30 +1100] "GET /nodejs/testnodebox HTTP/1.1" 200 33 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36" "-" I've tried enabling the debug log and that hasn't shown anything helpful. So I'm somewhat stuck. Any ideas on what might be causing this? Anywhere I should be looking? 
Thanks for your help, Daniel Sage Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263342,263342#msg-263342 From nginx-forum at nginx.us Tue Dec 8 06:28:41 2015 From: nginx-forum at nginx.us (mex) Date: Tue, 08 Dec 2015 01:28:41 -0500 Subject: nginx ignoring config file when started via systemd In-Reply-To: <4054ffd603ce83bb436acaab5eb64831.NginxMailingListEnglish@forum.nginx.org> References: <4054ffd603ce83bb436acaab5eb64831.NginxMailingListEnglish@forum.nginx.org> Message-ID: <79a724c5b6d2a6d9145b83ab39bd8291.NginxMailingListEnglish@forum.nginx.org> hi daniel, how did you install nginx: manually (self-compiled) or through your distro's repo? can you provide the nginx -V output? usually /etc/nginx/nginx.conf is the default config if not given; nginx -V will tell what defaults are used in your config. cheers, mex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263342,263343#msg-263343 From francis at daoine.org Tue Dec 8 08:40:48 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 8 Dec 2015 08:40:48 +0000 Subject: location ^~ modifier In-Reply-To: References: Message-ID: <20151208084048.GZ3351@daoine.org> On Mon, Dec 07, 2015 at 10:08:40PM +0100, B.R. wrote: Hi there, > Does the ^~ location modifier find a matching string at the start of a > URI? Yes. > I naively thought it was just a variant of the classic prefix search, Yes. > without any constraint on the placement of the matched string in a URI. No. > Is there a simple way of matching the longest prefix location anywhere in > a URI, discarding any regex location at the same level? "prefix" means "at the start". I'm not sure what you're asking. The first regex location that matches is used (with caveats). So if you have a regex location which is just your desired string, that is first in the config file, then it will be used ahead of any other regex locations that might have been used.
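For example, a sketch with two made-up regex locations:

===
location ~ /special/ {
    # checked first because it appears first in the config file;
    # the regex can match anywhere in the URI, not only at the start
    return 200 "regex /special/\n";
}

location ~ \.php$ {
    return 200 "regex php\n";
}
===

A request for /a/special/x.php is handled by the first block, purely because of its order in the file, even though the second regex also matches.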
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Dec 8 13:42:30 2015 From: nginx-forum at nginx.us (Ortal) Date: Tue, 08 Dec 2015 08:42:30 -0500 Subject: Sending the body Message-ID: <9905afc28c3d348a36cecc3f83bd43eb.NginxMailingListEnglish@forum.nginx.org> Hi, I am using "Emiller's Guide To Nginx Module Development"; in section "3.1.4. Sending the body", there is an allocation of ngx_chain_t out. I do not understand where this allocation is freed. Should there be a handler which frees the buf? Thanks, Ortal Levi Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263346,263346#msg-263346 From nginx-forum at nginx.us Tue Dec 8 14:03:52 2015 From: nginx-forum at nginx.us (huakaibird) Date: Tue, 08 Dec 2015 09:03:52 -0500 Subject: Nginx use ssl slow than ELB Message-ID: <919a2147ce447bfc3e778d8d7fe8d578.NginxMailingListEnglish@forum.nginx.org> Hi, I want to use nginx as an LB to replace AWS ELB, but found that it is much slower, and it affected web users: sometimes a user will encounter an access timeout. This is my configuration; please help check if something is wrong. I use ssl.
user nginx; worker_processes auto; worker_rlimit_nofile 65535; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { use epoll; worker_connections 65535; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main buffer=1m flush=5s; sendfile on; keepalive_timeout 60; client_max_body_size 0; server { listen 8080; root /usr/share/nginx/html; location = /nginx_status { stub_status on; access_log off; } location = /status.html { } } include /etc/nginx/test.d/test.conf; } test.conf: ssl_session_cache shared:SSL:10m; ssl_session_timeout 30m; upstream backend { server x.x.x.x; server x.x.x.x; check interval=30000 rise=3 fall=5 timeout=5000 type=http; check_http_send "HEAD /healthcheck HTTP/1.0\r\n\r\n"; } server { listen 80; listen 443 ssl; location / { proxy_pass http://backend; } keepalive_timeout 60; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_buffers 128 16k; client_body_buffer_size 2048k; underscores_in_headers on; ssl_certificate ssl/chained.crt; ssl_certificate_key ssl/key.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 
'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!DH:!EDH'; #ssl_ciphers HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM; ssl_prefer_server_ciphers on; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263347,263347#msg-263347 From rainer at ultra-secure.de Tue Dec 8 14:34:28 2015 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Tue, 08 Dec 2015 15:34:28 +0100 Subject: try_files and expires Message-ID: Hi, I have a case with a CMS (pimcore) and setting the expires directive. location / { # First attempt to serve request as file, then as directory, then fall back to index.php try_files $uri $uri/ /website/var/assets/$uri /index.php?$args; index index.php; } So, there are "assets" (pdfs, mostly but also some images) in a subdirectory. This works, but this directive: location ~* \.(jpe?g|gif|png|bmp|ico|css|js|pdf|zip|htm|html|docx?|xlsx?|pptx?|txt|wav|swf|avi|mp\d)$ { access_log off; log_not_found off; expires 1w; } breaks the first directive. Is there a way to have both? Rainer From mdounin at mdounin.ru Tue Dec 8 16:09:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Dec 2015 19:09:39 +0300 Subject: nginx-1.9.8 Message-ID: <20151208160938.GG74233@mdounin.ru> Changes with nginx 1.9.8 08 Dec 2015 *) Feature: pwritev() support. 
*) Feature: the "include" directive inside the "upstream" block. *) Feature: the ngx_http_slice_module. *) Bugfix: a segmentation fault might occur in a worker process when using LibreSSL; the bug had appeared in 1.9.6. *) Bugfix: nginx could not be built on OS X in some cases. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Tue Dec 8 17:33:52 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 8 Dec 2015 18:33:52 +0100 Subject: location ^~ modifier In-Reply-To: <20151208084048.GZ3351@daoine.org> References: <20151208084048.GZ3351@daoine.org> Message-ID: The location documentation states the following: *A location can either be defined by a prefix string, or by a regular expression.* Thus, ~ and ~* being regex modifiers, the rest is classified as 'prefix'. That means the following is a 'prefix' location block: location /whatever/ { } That said, this block will match the /whatever/ string anywhere in the URI string, not only at its start. As a consequence, to me, the meaning of 'prefix' was not tied to the location of the matched string in the URI, but rather a definition more like 'matching a string in the URI'. Without any modifier, a prefix location will be sensitive to the presence of regex locations, while with the ^~ modifier it also won't match the same cases. That modifier actually has two consequences, instead of only the first being documented. My brain hurts... Note that if you remove the capability of the default (without any modifier) location to match the string anywhere in the URI (thus becoming 'prefix' *per se*), you do not have any other means to match a string regardless of its position in the URI than regex ones... which are discouraged from being used (see recent answer from Maxim on a similar topic). Where is the exit of the maze again? --- *B. R.* On Tue, Dec 8, 2015 at 9:40 AM, Francis Daly wrote: > On Mon, Dec 07, 2015 at 10:08:40PM +0100, B.R.
wrote: > > Hi there, > > > Does the ^~ location modifier finds a matching string at the start of an > > URI? > > Yes. > > > I naively thought it was just a variant of the classic prefix search, > > Yes. > > > without any constraint on the placement of the matched string in an URI. > > No. > > > Is there a simple way of matching the longest prefix location anywhere in > > an URI, discarding any regex location at the same level? > > "prefix" means "at the start". > > I'm not sure what you're asking. > > The first regex location that matches is used (with caveats). So if you > have a regex location which is just your desired string, that is first in > the config file, then it will be used ahead of any other regex locations > that might have been used. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Dec 8 18:53:08 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 8 Dec 2015 13:53:08 -0500 Subject: [nginx-announce] nginx-1.9.8 In-Reply-To: <20151208160943.GH74233@mdounin.ru> References: <20151208160943.GH74233@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.8 for Windows http:// kevinworthington.com/nginxwin198 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Dec 8, 2015 at 11:09 AM, Maxim Dounin wrote: > Changes with nginx 1.9.8 08 Dec > 2015 > > *) Feature: pwritev() support. > > *) Feature: the "include" directive inside the "upstream" block. > > *) Feature: the ngx_http_slice_module. > > *) Bugfix: a segmentation fault might occur in a worker process when > using LibreSSL; the bug had appeared in 1.9.6. > > *) Bugfix: nginx could not be built on OS X in some cases. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Tue Dec 8 19:50:47 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 08 Dec 2015 20:50:47 +0100 Subject: How about to add splice Message-ID: <1f588540c9a5f7f221211bb8f2a405b0@none.at> Dear developer. Do you know the splice() feature in Linux? 
https://lwn.net/Articles/178199/ http://man7.org/linux/man-pages/man2/splice.2.html http://ogris.de/howtos/splice.html How about to add it ;-) BR Aleks From nginx-forum at nginx.us Wed Dec 9 07:58:50 2015 From: nginx-forum at nginx.us (theheadsage) Date: Wed, 09 Dec 2015 02:58:50 -0500 Subject: nginx ignoring config file when started via systemd In-Reply-To: <79a724c5b6d2a6d9145b83ab39bd8291.NginxMailingListEnglish@forum.nginx.org> References: <4054ffd603ce83bb436acaab5eb64831.NginxMailingListEnglish@forum.nginx.org> <79a724c5b6d2a6d9145b83ab39bd8291.NginxMailingListEnglish@forum.nginx.org> Message-ID: <91c0f9960f7ff910590f9c7f549235ae.NginxMailingListEnglish@forum.nginx.org> Hi mex, It was installed via the RHEL 7 repo. I've managed to solve the issue, and so for anyone else experiencing this odd behaviour the solution is to check the SELinux file context. My config was in a file that had "unconfined_u:object_r:var_t:s0" and it needed "unconfined_u:object_r:httpd_config_t:s0", so run "sudo restorecon -v /path/to/conf". Otherwise nginx just silently ignores the file, and no error shows up in the audit.log file. (or disable SELinux, but alas I can't) Regards, Daniel Sage Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263342,263359#msg-263359 From francis at daoine.org Wed Dec 9 08:47:05 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 9 Dec 2015 08:47:05 +0000 Subject: location ^~ modifier In-Reply-To: References: <20151208084048.GZ3351@daoine.org> Message-ID: <20151209084705.GA3351@daoine.org> On Tue, Dec 08, 2015 at 06:33:52PM +0100, B.R. wrote: Hi there, > That means the following is a 'prefix' location block: > location /whatever/ { > } > > That said, this block will match the /whatever/ string anywhere in the URI > string, not only at its start. No, it won't.
=== location / { return 200 "in location /\n"; } location /aaa/ { return 200 "in location /aaa/\n"; } === $ curl http://localhost/aaa/bbb/ in location /aaa/ $ curl http://localhost/bbb/aaa/ in location / > As a consequence, to me, the meaning of 'prefix' was not tied to the > location of the matched string in the URI, but rather a definition more > like 'matching a string in the URI'. No. "prefix" has its normal English language meaning. The documentation at http://nginx.org/r/location is correct. (I think the documentation there is *incomplete*, as it is not immediately clear to me how nested locations are searched. But that has been clarified on the mailing list recently, and that clarification matches what can be seen in tests.) > Where is the exit of the maze again? prefix matches -- without modifier, with modifier ^~, and (technically, probably) with modifier = -- are exact string matches at the start of the url. (And consequently should all start with the character "/".) If you want to match something that is not an exact string match at the start of the url, you must use something that is not a prefix match. f -- Francis Daly francis at daoine.org From 44699290 at qq.com Wed Dec 9 09:59:40 2015 From: 44699290 at qq.com (=?ISO-8859-1?B?WyRdQF5fXl87ZiA=?=) Date: Wed, 9 Dec 2015 17:59:40 +0800 Subject: the nginx server could not access win2012r2's request Message-ID: I installed an nginx service on a CentOS 7 server. I can visit the nginx web page from a Linux client, but I cannot visit the page from a win2012r2 client; it shows an ERR_CONNECTION_TIMED_OUT error. There is no error in nginx's logs. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dennisml at conversis.de Wed Dec 9 11:40:27 2015 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Wed, 9 Dec 2015 12:40:27 +0100 Subject: nginx 1.8.0 not logging properly Message-ID: <5668132B.2050404@conversis.de> Hi, I'm using the Nginx 1.8.0 rpm found on http://nginx.org/packages/centos/7/x86_64/RPMS/ with a config that has been working fine so far with Nginx 1.6.x. The problem is that sometimes I get 500 Internal Server Errors, but even though I have the line "error_log /var/log/nginx/error.log warn;" in nginx.conf, I don't get any log entries. When I restart Nginx I get the following in the error log: 2015/12/09 12:16:45 [alert] 32589#0: *1662117 open socket #10 left in connection 25 2015/12/09 12:16:45 [alert] 32589#0: *1662116 open socket #3 left in connection 43 2015/12/09 12:16:45 [alert] 32589#0: aborting Any ideas what might be going on here? Clearly Nginx can write to the log, yet for the 500 errors, for example, it just doesn't, which makes finding out what is causing these errors a bit problematic. If I can't find out how to fix this the only option I see is downgrading to 1.6.x again. Regards, Dennis From vbart at nginx.com Wed Dec 9 11:54:00 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 09 Dec 2015 14:54:00 +0300 Subject: How about to add splice In-Reply-To: <1f588540c9a5f7f221211bb8f2a405b0@none.at> References: <1f588540c9a5f7f221211bb8f2a405b0@none.at> Message-ID: <4279341.HZZb69u3hR@vbart-workstation> On Tuesday 08 December 2015 20:50:47 Aleksandar Lazic wrote: > Dear developer. > > Do you know the splice() feature in Linux? > > https://lwn.net/Articles/178199/ > http://man7.org/linux/man-pages/man2/splice.2.html > http://ogris.de/howtos/splice.html > > How about to add it ;-) > [..] Of course we know about splice(). You can find some mentions of it in the mailing list archive since 2006.
The problem with this syscall is that it's not easy to implement, while the performance benefits are questionable (AFAIK not all network cards work well with splice()) and use cases are limited. It only can be useful for proxying big amounts of data without any processing. But if you need compression, or TLS, or SSI, or even some simple substitution, then splice() cannot be used. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Dec 9 15:08:11 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Dec 2015 18:08:11 +0300 Subject: nginx-1.9.9 Message-ID: <20151209150811.GP74233@mdounin.ru> Changes with nginx 1.9.9 09 Dec 2015 *) Bugfix: proxying to unix domain sockets did not work when using variables; the bug had appeared in 1.9.8. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Wed Dec 9 15:50:41 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 9 Dec 2015 10:50:41 -0500 Subject: [nginx-announce] nginx-1.9.9 In-Reply-To: <20151209150815.GQ74233@mdounin.ru> References: <20151209150815.GQ74233@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.9 for Windows http://kevinworthington.com/nginxwin199 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Wed, Dec 9, 2015 at 10:08 AM, Maxim Dounin wrote: > Changes with nginx 1.9.9 09 Dec > 2015 > > *) Bugfix: proxying to unix domain sockets did not work when using > variables; the bug had appeared in 1.9.8. 
> > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 9 15:57:08 2015 From: nginx-forum at nginx.us (ibmed) Date: Wed, 09 Dec 2015 10:57:08 -0500 Subject: How to make nginx return response body for 4xx and 5xx? Message-ID: Looks like nginx removes the response body when the HTTP code is 4xx/5xx. Is there a way to make nginx proxy the original response body? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263391,263391#msg-263391 From cj.wijtmans at gmail.com Wed Dec 9 16:03:56 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Wed, 9 Dec 2015 17:03:56 +0100 Subject: 502 bad gateway Message-ID: Hello, I have a magento website working fine for months now. However, I recently installed a PDF plugin to send invoices with emails. The PDF URL is giving a 502 bad gateway "{admin}/sales_invoice/view/invoice_id/717/key/5650a40d93929f5db4383db2a169b08a/". I have checked around on the internet and can't find the issue here. No APC. Increasing the fastcgi buffers doesn't change anything. Checked nginx logs, magento logs and user logs but nothing. Could anyone point me in the right direction on how to figure this one out? From mdounin at mdounin.ru Wed Dec 9 16:23:08 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Dec 2015 19:23:08 +0300 Subject: How to make nginx return response body for 4xx and 5xx? In-Reply-To: References: Message-ID: <20151209162308.GV74233@mdounin.ru> Hello! On Wed, Dec 09, 2015 at 10:57:08AM -0500, ibmed wrote: > Looks like nginx removes the response body when the HTTP code is 4xx/5xx. > > Is there a way to make nginx proxy the original response body? This is what nginx does by default.
If you see response bodies removed in your setup, you've probably explicitly asked nginx to do so using the proxy_intercept_errors directive. See http://nginx.org/r/proxy_intercept_errors for details. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed Dec 9 16:31:06 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 9 Dec 2015 16:31:06 +0000 Subject: try_files and expires In-Reply-To: References: Message-ID: <20151209163106.GB3351@daoine.org> On Tue, Dec 08, 2015 at 03:34:28PM +0100, rainer at ultra-secure.de wrote: Hi there, > location / { > # First attempt to serve request as file, then as directory, > then fall back to index.php > try_files $uri $uri/ /website/var/assets/$uri /index.php?$args; > index index.php; > } > > So, there are "assets" (pdfs, mostly but also some images) in a > subdirectory. > > This works, but this directive: > > location ~* \.(jpe?g|gif|png|bmp|ico|css|js|pdf|zip|htm|html|docx?|xlsx?|pptx?|txt|wav|swf|avi|mp\d)$ > { > access_log off; > log_not_found off; > > expires 1w; > } > > breaks the first directive. > > Is there a way to have both? One request is handled in one location. Put the configuration that you want, in the location that is matched. It is not immediately clear to me what response you want for a request for /one.pdf. Perhaps add the try_files to the regex location? f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Dec 9 17:23:32 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Dec 2015 20:23:32 +0300 Subject: 502 bad gateway In-Reply-To: References: Message-ID: <20151209172332.GW74233@mdounin.ru> Hello! On Wed, Dec 09, 2015 at 05:03:56PM +0100, Christ-Jan Wijtmans wrote: > Hello, > > I have a magento website working fine for months now. > However i recently installed a PDF plugin to send invoices with emails. > The PDF url is giving a 502 bad gateway > "{admin}/sales_invoice/view/invoice_id/717/key/5650a40d93929f5db4383db2a169b08a/". 
> I have checked around on the internet and cant find the issue here. no > APC. increased fastcgi buffers doesnt change anything. checked nginx > logs, magento logs and user logs but nothing. > > Could anyone point me in the right direction on how to figure this one out? If 502 is returned by nginx, this means that an error happened while talking to an upstream server. Details about the error are expected to be available in the error log at the "error" level. If you don't see error in the error log - this means that either logging isn't properly configured (or the error was returned by the upstream server, not by nginx). Details on how to configure logging can be found here: http://nginx.org/r/error_log This article may be useful as well: http://nginx.org/en/docs/debugging_log.html -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Wed Dec 9 19:50:02 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 9 Dec 2015 20:50:02 +0100 Subject: location ^~ modifier In-Reply-To: <20151209084705.GA3351@daoine.org> References: <20151208084048.GZ3351@daoine.org> <20151209084705.GA3351@daoine.org> Message-ID: Thanks Francis, I directly went back to the basics nginx 101 lesson. I feel dumb all of a sudden... The maze was made of my misconceptions! I wonder where they came from. I was sure I tested what I said... which is impossible. --- *B. R.* On Wed, Dec 9, 2015 at 9:47 AM, Francis Daly wrote: > On Tue, Dec 08, 2015 at 06:33:52PM +0100, B.R. wrote: > > Hi there, > > > That means the following is a 'prefix' location block: > > location /whatever/ { > > } > > > > That said, this block will match the /whatever/ string anywhere in the > URI > > string, not only at its start. > > No, it won't. 
> > === > location / { > return 200 "in location /\n"; > } > > location /aaa/ { > return 200 "in location /aaa/\n"; > } > === > > $ curl http://localhost/aaa/bbb/ > in location /aaa/ > > $ curl http://localhost/bbb/aaa/ > in location / > > > As a consequence, to me, the meaning of 'prefix' was not tied to the > > location of the matched string in the URI, but rather a definition more > > like 'matching a string in the URI'. > > No. > > "prefix" has its normal English language meaning. The documentation at > http://nginx.org/r/location is correct. > > (I think the documentation there is *incomplete*, as it is not immediately > clear to me how nested locations are searched. But that has been clarified > on the mailing list recently, and that clarification matches what can > be seen in tests.) > > > Where is the exit of the maze again? > > prefix matches -- without modifier, with modifier ^~, and (technically, > probably) with modifier = -- are exact string matches at the start of > the url. (And consequently should all start with the character "/".) > > If you want to match something that is not an exact string match at the > start of the url, you must use something that is not a prefix match. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Wed Dec 9 20:48:38 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 09 Dec 2015 21:48:38 +0100 Subject: How about to add splice In-Reply-To: <4279341.HZZb69u3hR@vbart-workstation> References: <1f588540c9a5f7f221211bb8f2a405b0@none.at> <4279341.HZZb69u3hR@vbart-workstation> Message-ID: <895011b15cda93f60bba90017bfc09f5@none.at> Dear Valentin. Am 09-12-2015 12:54, schrieb Valentin V. Bartenev: > On Tuesday 08 December 2015 20:50:47 Aleksandar Lazic wrote: >> Dear developer. 
>> >> Do you know the splice() feature in Linux? >> >> https://lwn.net/Articles/178199/ >> http://man7.org/linux/man-pages/man2/splice.2.html >> http://ogris.de/howtos/splice.html >> >> How about to add it ;-) >> > [..] > > Of course we know about splice(). You can find some mentions of it > in the mailing list archive since 2006. > > The problem with this syscall is that it's not easy to implement, > while the performance benefits are questionable (AFAIK not all network > cards work well with splice()) and use cases are limited. > > It only can be useful for proxying big amounts of data without any > processing. But if you need compression, or TLS, or SSI, or even some > simple substitution, then splice() cannot be used. Thanks for answer. BR Aleks From maxim at nginx.com Wed Dec 9 22:15:27 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 10 Dec 2015 01:15:27 +0300 Subject: How about to add splice In-Reply-To: <4279341.HZZb69u3hR@vbart-workstation> References: <1f588540c9a5f7f221211bb8f2a405b0@none.at> <4279341.HZZb69u3hR@vbart-workstation> Message-ID: <5668A7FF.70706@nginx.com> On 12/9/15 2:54 PM, Valentin V. Bartenev wrote: > On Tuesday 08 December 2015 20:50:47 Aleksandar Lazic wrote: >> Dear developer. >> >> Do you know the splice() feature in Linux? >> >> https://lwn.net/Articles/178199/ >> http://man7.org/linux/man-pages/man2/splice.2.html >> http://ogris.de/howtos/splice.html >> >> How about to add it ;-) >> > [..] > > Of course we know about splice(). You can find some mentions of it > in the mailing list archive since 2006. > > The problem with this syscall is that it's not easy to implement, > while the performance benefits are questionable (AFAIK not all network > cards work well with splice()) and use cases are limited. > > It only can be useful for proxying big amounts of data without any > processing. But if you need compression, or TLS, or SSI, or even some > simple substitution, then splice() cannot be used. 
> It should fit in our stream quite nicely. -- Maxim Konovalov From purpleritza at gmail.com Wed Dec 9 23:58:29 2015 From: purpleritza at gmail.com (=?UTF-8?B?R29yYW4gVGVwxaFpxIc=?=) Date: Thu, 10 Dec 2015 00:58:29 +0100 Subject: HTTP/2 stable status In-Reply-To: <7de811566f5da50199f83e7a0fd4e4e8.NginxMailingListEnglish@forum.nginx.org> References: <7de811566f5da50199f83e7a0fd4e4e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Too bad ngx_pagespeed doesn't work on FreeBSD :(( On Dec 7, 2015 8:44 PM, "George" wrote: > yup very stable for me on 1.9.7 + HTTP/2 + ngx_pagespeed :) > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263313,263337#msg-263337 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Dec 10 10:21:23 2015 From: nginx-forum at nginx.us (jonefee) Date: Thu, 10 Dec 2015 05:21:23 -0500 Subject: nginx return http code 498 Message-ID: <4f3fdd096eef77dcfa3128e3b1df4071.NginxMailingListEnglish@forum.nginx.org> I am using nginx as a reverse proxy server (let's call it Server A). Server A's upstream is also an nginx server (let's call it Server B), backed by a Jetty server (let's call it Server C) serving on port 8080. We have a lot of Server A instances, and each of them uses the same set of Server B machines as upstreams; each Server B uses its local Jetty server as its own upstream. Server A and Server B work on port 80. I found a few HTTP 498 response codes from Server B about 2 or 3 days ago. Googling turned up little information; does anyone have any idea?
server A config: location /views/3.0/ { if ( $query_string ~* ^(.*)req_times=[2-3](.*)$ ){ return 444; } proxy_pass http://views_java_sec_cnc; proxy_set_header REMOTE_ADDR $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_next_upstream http_500 http_502 http_504; limit_req zone=java burst=5 nodelay; proxy_next_upstream_tries 1; } Server B config: # views location /views/ { proxy_pass http://views_server; proxy_redirect off; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_http_version 1.1; proxy_set_header Connection ""; #proxy_pass_request_body off; #proxy_cache IFACE; limit_req zone=views-java burst=5 nodelay; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263399,263399#msg-263399 From mdounin at mdounin.ru Thu Dec 10 13:17:52 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Dec 2015 16:17:52 +0300 Subject: How about to add splice In-Reply-To: <5668A7FF.70706@nginx.com> References: <1f588540c9a5f7f221211bb8f2a405b0@none.at> <4279341.HZZb69u3hR@vbart-workstation> <5668A7FF.70706@nginx.com> Message-ID: <20151210131752.GA74233@mdounin.ru> Hello! On Thu, Dec 10, 2015 at 01:15:27AM +0300, Maxim Konovalov wrote: > On 12/9/15 2:54 PM, Valentin V. Bartenev wrote: > > On Tuesday 08 December 2015 20:50:47 Aleksandar Lazic wrote: > >> Dear developer. > >> > >> Do you know the splice() feature in Linux? > >> > >> https://lwn.net/Articles/178199/ > >> http://man7.org/linux/man-pages/man2/splice.2.html > >> http://ogris.de/howtos/splice.html > >> > >> How about to add it ;-) > >> > > [..] > > > > Of course we know about splice(). You can find some mentions of it > > in the mailing list archive since 2006. > > > > The problem with this syscall is that it's not easy to implement, > > while the performance benefits are questionable (AFAIK not all network > > cards work well with splice()) and use cases are limited. 
> > > > It only can be useful for proxying big amounts of data without any > > processing. But if you need compression, or TLS, or SSI, or even some > > simple substitution, then splice() cannot be used. > > > It should fit in our stream quite nicely. Not really, as stream is able to do SSL encoding and decoding. -- Maxim Dounin http://nginx.org/ From luky-37 at hotmail.com Thu Dec 10 14:31:23 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 10 Dec 2015 15:31:23 +0100 Subject: How about to add splice In-Reply-To: <20151210131752.GA74233@mdounin.ru> References: <1f588540c9a5f7f221211bb8f2a405b0@none.at>, <4279341.HZZb69u3hR@vbart-workstation> <5668A7FF.70706@nginx.com>,<20151210131752.GA74233@mdounin.ru> Message-ID: Hi Maxim, >>> It only can be useful for proxying big amounts of data without any >>> processing. But if you need compression, or TLS, or SSI, or even some >>> simple substitution, then splice() cannot be used. >>> >> It should fit in our stream quite nicely. > > Not really, as stream is able to do SSL encoding and decoding. We also have sendfile() in nginx despite the fact that it can't be used on every single occasion, such as TLS sessions or when gzipping. I am aware that sendfile() is extremely useful in some configurations; I'm just saying the fact that it isn't able to work with TLS wasn't a showstopper for it, so at the very least it shouldn't be the only showstopper for splice(). splice() would do something similar to sendfile(). HAProxy doesn't always splice() either, but only when it makes sense [1]. Regarding TLS: there are some efforts in both Linux [2] and FreeBSD [3] to implement TLS in-kernel, leveraging kernel features like sendfile() while sending the data through an in-kernel crypto stack. I'm not saying splice() is a must-have; I certainly don't need it personally, but there are use cases out there.
Regards, Lukas [1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-option%20splice-auto [2] https://lwn.net/Articles/666509/ [3] https://people.freebsd.org/~rrs/asiabsd_2015_tls.pdf From mdounin at mdounin.ru Thu Dec 10 15:36:54 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Dec 2015 18:36:54 +0300 Subject: How about to add splice In-Reply-To: References: <1f588540c9a5f7f221211bb8f2a405b0@none.at> <4279341.HZZb69u3hR@vbart-workstation> <5668A7FF.70706@nginx.com> <20151210131752.GA74233@mdounin.ru> Message-ID: <20151210153654.GE74233@mdounin.ru> Hello! On Thu, Dec 10, 2015 at 03:31:23PM +0100, Lukas Tribus wrote: > >>> It only can be useful for proxying big amounts of data without any > >>> processing. But if you need compression, or TLS, or SSI, or even some > >>> simple substitution, then splice() cannot be used. > >>> > >> It should fit in our stream quite nicely. > > > > Not really, as stream is able to do SSL encoding and decoding. > > We also have sendfile() in nginx despite the fact that it can't be used > in every single occasion, such as TLS sessions or when gzipping. For sure it can be handled like sendfile(), and only used when possible. I'm just saying that even with the stream module the question is still the same as outlined by Valentin - questionable potential benefits vs. added complexity. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 10 16:12:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Dec 2015 19:12:55 +0300 Subject: nginx return http code 498 In-Reply-To: <4f3fdd096eef77dcfa3128e3b1df4071.NginxMailingListEnglish@forum.nginx.org> References: <4f3fdd096eef77dcfa3128e3b1df4071.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151210161254.GF74233@mdounin.ru> Hello! On Thu, Dec 10, 2015 at 05:21:23AM -0500, jonefee wrote: > i am using nginx as a reverse proxy server(Let 's call it Server A). 
server > A's upstream server is also a nginx server((Let 's call it Server B) backend > with a jetty server((Let 's call it Server C) served at port 8080. we have > a lot of Server A and each one of them use the same bunch of server B as > upstream, each server B use its local jetty server as its own upstream > server. server A and Server B work at port 80. > > i found few http response code 498 from server B about 2 or 3 days ago . > google on it found little information ,any one has some idea ? 498 is not something nginx returns unless explicitly configured to do so. Unless you see something like "return 498" in your nginx configs, try looking into your backend server code instead. -- Maxim Dounin http://nginx.org/ From gfrankliu at gmail.com Thu Dec 10 16:34:19 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 10 Dec 2015 08:34:19 -0800 Subject: Next upstream based on custom http code Message-ID: Hi There are a few options for when to try the next upstream: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream Is it possible to configure a custom HTTP code, so that an upstream server can send that code when it wants nginx to try the next upstream? Thanks Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From cj.wijtmans at gmail.com Thu Dec 10 17:04:49 2015 From: cj.wijtmans at gmail.com (Christ-Jan Wijtmans) Date: Thu, 10 Dec 2015 18:04:49 +0100 Subject: 502 bad gateway In-Reply-To: <20151209172332.GW74233@mdounin.ru> References: <20151209172332.GW74233@mdounin.ru> Message-ID: Hello Maxim, Thank you for the response. I was assuming the error_log in the block was properly set up, but apparently it was not. I finally got an error message: "2015/12/10 18:02:13 [error] 30341#0: *10 recv() failed (104: Connection reset by peer) while reading response header from upstream". Probably php-fpm related? I will continue to dig around.
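For anyone else chasing a 502 like this, the usual first step is making sure the virtual host actually logs errors somewhere you are watching. A minimal sketch, where the log path and the php-fpm socket address are assumptions rather than the poster's actual setup:

```nginx
server {
    listen 80;
    server_name example.com;

    # A per-server error log; without one, errors land in the global log
    # (or somewhere you are not looking), which is what hid the 502 cause above.
    error_log /var/log/nginx/example.com.error.log warn;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # "recv() failed (104: Connection reset by peer) while reading
        # response header from upstream" here usually means the FastCGI
        # backend (php-fpm) crashed or closed the connection mid-request.
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```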
From nginx-forum at nginx.us Thu Dec 10 18:45:04 2015 From: nginx-forum at nginx.us (ibmed) Date: Thu, 10 Dec 2015 13:45:04 -0500 Subject: How to make nginx return response body for 4xx and 5xx? In-Reply-To: References: Message-ID: <757b98b168a54e9b34c3e505d26abca1.NginxMailingListEnglish@forum.nginx.org> Sorry to bother everyone. Nginx proxies everything just fine. The issue was at the upstream side. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263391,263406#msg-263406 From reallfqq-nginx at yahoo.fr Thu Dec 10 19:12:28 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 10 Dec 2015 20:12:28 +0100 Subject: Next upstream based on custom http code In-Reply-To: References: Message-ID: Like... 503? To me 'server wants to make another upstream dealing with the request' sounds very much like 'Service Unavailable'. --- *B. R.* On Thu, Dec 10, 2015 at 5:34 PM, Frank Liu wrote: > Hi > > There are a few options for when to try next upstream : > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream > > Is it possible to configure a custom http code so that upstream servers > can send that code if it wants to send nginx to upstream ? > > Thanks > Frank > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Thu Dec 10 20:20:07 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 10 Dec 2015 12:20:07 -0800 Subject: Next upstream based on custom http code In-Reply-To: References: Message-ID: No, 503 may be a legitimate error from upstream that nginx needs to pass to client. I am thinking some unused code , say, 590. On Thursday, December 10, 2015, B.R. wrote: > Like... 503? > To me 'server wants to make another upstream dealing with the request' > sounds very much like 'Service Unavailable'. > --- > *B. 
R.* > > On Thu, Dec 10, 2015 at 5:34 PM, Frank Liu > wrote: > >> Hi >> >> There are a few options for when to try next upstream : >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream >> >> Is it possible to configure a custom http code so that upstream servers >> can send that code if it wants to send nginx to upstream ? >> >> Thanks >> Frank >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Dec 11 08:23:05 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 11 Dec 2015 09:23:05 +0100 Subject: Next upstream based on custom http code In-Reply-To: References: Message-ID: If the upstream refuses to process a request, you might wish to emulate an unavailable service or a lack of response (timeout). Backend up and working are expected to process requests. Switching between legitimate errors and faked one will be done by monitoring backend logs. There is no such thing as a 'Coffee Break' HTTP code. :oP --- *B. R.* On Thu, Dec 10, 2015 at 9:20 PM, Frank Liu wrote: > No, 503 may be a legitimate error from upstream that nginx needs to pass > to client. > I am thinking some unused code , say, 590. > > > On Thursday, December 10, 2015, B.R. wrote: > >> Like... 503? >> To me 'server wants to make another upstream dealing with the request' >> sounds very much like 'Service Unavailable'. >> --- >> *B. R.* >> >> On Thu, Dec 10, 2015 at 5:34 PM, Frank Liu wrote: >> >>> Hi >>> >>> There are a few options for when to try next upstream : >>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream >>> >>> Is it possible to configure a custom http code so that upstream servers >>> can send that code if it wants to send nginx to upstream ? 
>>> >>> Thanks >>> Frank >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Fri Dec 11 09:05:40 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 11 Dec 2015 01:05:40 -0800 Subject: Next upstream based on custom http code In-Reply-To: References: Message-ID: Here is my flow: client - nginx - upstream - real upstream. The upstream gets its response from the "real" upstream. So if the real upstream is wrong, nginx will get a standard 5xx from the upstream, and in this case I don't really want nginx to try the next upstream, because it will hit the same bad real upstream with the same error code. In other cases, where the real upstream is good but one upstream is under too much load, or wants to finish its in-flight requests and then go down for maintenance, is it possible for the upstream to send a soft error code to nginx to tell it to try the next upstream? On Friday, December 11, 2015, B.R. wrote: > If the upstream refuses to process a request, you might wish to emulate an > unavailable service or a lack of response (timeout). Backend up and working > are expected to process requests. > Switching between legitimate errors and faked one will be done by > monitoring backend logs. > > There is no such thing as a 'Coffee Break' HTTP code. :oP > --- > *B. R.* > > On Thu, Dec 10, 2015 at 9:20 PM, Frank Liu > wrote: > >> No, 503 may be a legitimate error from upstream that nginx needs to pass >> to client. >> I am thinking some unused code , say, 590. >> >> >> On Thursday, December 10, 2015, B.R. > > wrote: >> >>> Like... 503?
>>> To me 'server wants to make another upstream dealing with the request' >>> sounds very much like 'Service Unavailable'. >>> --- >>> *B. R.* >>> >>> On Thu, Dec 10, 2015 at 5:34 PM, Frank Liu wrote: >>> >>>> Hi >>>> >>>> There are a few options for when to try next upstream : >>>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream >>>> >>>> Is it possible to configure a custom http code so that upstream servers >>>> can send that code if it wants to send nginx to upstream ? >>>> >>>> Thanks >>>> Frank >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From full.vladd at gmail.com Fri Dec 11 12:14:17 2015 From: full.vladd at gmail.com (Vlad Fulgeanu) Date: Fri, 11 Dec 2015 14:14:17 +0200 Subject: How to set up nginx for file uploading Message-ID: Hi everyone! I am having some trouble setting up nginx for file uploading. I am using nginx as a proxy in front of my nodejs server (that has hapi as server framework). Here is the nginx.conf file's portion for this server: http://dpaste.com/0VJKE5K The problem is that I get > No 'Access-Control-Allow-Origin' header is present on the requested > resource. Origin 'https://test.project.com' is therefore not allowed > access. > immediately after sending the pre-flight request when uploading the file. Can anyone please help me? Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r1ch+nginx at teamliquid.net Fri Dec 11 14:27:40 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 11 Dec 2015 15:27:40 +0100 Subject: How to set up nginx for file uploading In-Reply-To: References: Message-ID: Your config doesn't appear to add any Access-Control-Allow-Origin header, so unless your backend is adding this, you will need to add an appropriate Access-Control-Allow-Origin header. On Fri, Dec 11, 2015 at 1:14 PM, Vlad Fulgeanu wrote: > Hi everyone! > > I am having some trouble setting up nginx for file uploading. > > I am using nginx as a proxy in front of my nodejs server (that has hapi as > server framework). > > Here is the nginx.conf file's portion for this server: > http://dpaste.com/0VJKE5K > > The problem is that I get > >> No 'Access-Control-Allow-Origin' header is present on the requested >> resource. Origin 'https://test.project.com' is therefore not allowed >> access. >> > immediately after sending the pre-flight request when uploading the file. > > Can anyone please help me? > Thanks in advance. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From full.vladd at gmail.com Fri Dec 11 14:50:28 2015 From: full.vladd at gmail.com (Vlad Fulgeanu) Date: Fri, 11 Dec 2015 16:50:28 +0200 Subject: How to set up nginx for file uploading In-Reply-To: References: Message-ID: I added these in "location /upload/preview": add_header 'Access-Control-Allow-Origin' 'https://test.project.com'; > add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, > X-Requested-With, Cache-Control, If-None-Match'; And now it gives me: No 'Access-Control-Allow-Origin' header is present on the requested > resource. Origin 'https://test.project.com' is therefore not allowed > access. The response had HTTP status code 503. 
As a response to the actual request (not the pre-flight) On Fri, Dec 11, 2015 at 4:27 PM, Richard Stanway < r1ch+nginx at teamliquid.net> wrote: >> Your config doesn't appear to add any Access-Control-Allow-Origin header, >> so unless your backend is adding this, you will need to add an >> appropriate Access-Control-Allow-Origin header. >> >> On Fri, Dec 11, 2015 at 1:14 PM, Vlad Fulgeanu >> wrote: >> >>> Hi everyone! >>> >>> I am having some trouble setting up nginx for file uploading. >>> >>> I am using nginx as a proxy in front of my nodejs server (that has hapi >>> as server framework). >>> >>> Here is the nginx.conf file's portion for this server: >>> http://dpaste.com/0VJKE5K >>> >>> The problem is that I get >>> >>>> No 'Access-Control-Allow-Origin' header is present on the requested >>>> resource. Origin 'https://test.project.com' is therefore not allowed >>>> access. >>>> >>> immediately after sending the pre-flight request when uploading the file. >>> >>> Can anyone please help me? >>> Thanks in advance. >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Fri Dec 11 16:07:50 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 11 Dec 2015 17:07:50 +0100 Subject: How to set up nginx for file uploading In-Reply-To: References: Message-ID: "The response had HTTP status code 503. " It looks like your backend is failing, as it's returning an HTTP 503 error and likely not including the correct headers. You should look into why your backend is returning a 503, as this doesn't seem like an nginx issue any more.
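One nginx-side detail that often bites in exactly this situation: by default, add_header only attaches headers to 2xx and 3xx responses, so a 503 from the backend is forwarded without the CORS headers even when they are configured. Since nginx 1.7.5, the "always" parameter changes that. A sketch reusing the headers from the thread; the proxy_pass target is a placeholder, not the poster's actual backend:

```nginx
location /upload/preview {
    # "always" makes nginx attach these headers to error responses
    # (4xx/5xx) as well, not only to 2xx/3xx ones.
    add_header 'Access-Control-Allow-Origin' 'https://test.project.com' always;
    add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, X-Requested-With, Cache-Control, If-None-Match' always;

    proxy_pass http://127.0.0.1:8000;  # placeholder backend address
}
```

This does not fix the underlying 503, but it lets the browser surface the real status code instead of masking it behind a CORS failure.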
On Fri, Dec 11, 2015 at 3:50 PM, Vlad Fulgeanu wrote: > I added these in "location /upload/preview": > > add_header 'Access-Control-Allow-Origin' 'https://test.project.com'; >> add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, >> X-Requested-With, Cache-Control, If-None-Match'; > > > And now it gives me: > > No 'Access-Control-Allow-Origin' header is present on the requested >> resource. Origin 'https://test.project.com' is therefore not allowed >> access. The response had HTTP status code 503. > > > As a response to the actual request (not the pre-flight) > > On Fri, Dec 11, 2015 at 4:27 PM, Richard Stanway < > r1ch+nginx at teamliquid.net> wrote: > >> Your config doesn't appear to add any Access-Control-Allow-Origin header, >> so unless your backend is adding this, you will need to add an >> appropriate Access-Control-Allow-Origin header. >> >> On Fri, Dec 11, 2015 at 1:14 PM, Vlad Fulgeanu >> wrote: >> >>> Hi everyone! >>> >>> I am having some trouble setting up nginx for file uploading. >>> >>> I am using nginx as a proxy in front of my nodejs server (that has hapi >>> as server framework). >>> >>> Here is the nginx.conf file's portion for this server: >>> http://dpaste.com/0VJKE5K >>> >>> The problem is that I get >>> >>>> No 'Access-Control-Allow-Origin' header is present on the requested >>>> resource. Origin 'https://test.project.com' is therefore not allowed >>>> access. >>>> >>> immediately after sending the pre-flight request when uploading the file. >>> >>> Can anyone please help me? >>> Thanks in advance. 
>>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholas.capo at gmail.com Fri Dec 11 22:03:15 2015 From: nicholas.capo at gmail.com (Nicholas Capo) Date: Fri, 11 Dec 2015 22:03:15 +0000 Subject: HTTP/2 Gateway In-Reply-To: <20151207021354.GQ74233@mdounin.ru> References: <20151207021354.GQ74233@mdounin.ru> Message-ID: Is HTTP/2 proxy support planned for the near future? Nicholas On Sun, Dec 6, 2015 at 8:14 PM Maxim Dounin wrote: > Hello! > > On Sun, Dec 06, 2015 at 03:00:23PM +0100, bjunity at gmail.com wrote: > > > i've tried to use nginx as http/2 gateway for backends which only > > supporting HTTP 1.1. If a backend is already HTTP/2 ready, than HTTP/2 > > should be used. > > > > When i test my configuration (simple proxy_pass / upstream), the > connection > > from nginx to the backend is always HTTP 1.1 (even if the backend > supports > > HTTP/2). > > > > Is there a possiblity to speak HTTP/2 also to the backends? > > > > I've used "proxy_http_version 1.1" in my configuration (no other setting > > (higher version or protocol auto-negotiation) available). > > No, talking to backends using the HTTP/2 protocol is not supported > by proxy module. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chinix at gmail.com Sat Dec 12 15:05:40 2015 From: chinix at gmail.com (chinix at gmail.com) Date: Sat, 12 Dec 2015 23:05:40 +0800 Subject: nginx-1.9.8 In-Reply-To: <20151208160938.GG74233@mdounin.ru> References: <20151208160938.GG74233@mdounin.ru> Message-ID: <566C37C4.10802@gmail.com> An exciting feature, the ngx_http_slice_module. I tested it, but it opens too many upstream connections; can I limit the number of connections? On 15/12/9 12:09, Maxim Dounin wrote: > Changes with nginx 1.9.8 08 Dec 2015 > > *) Feature: pwritev() support. > > *) Feature: the "include" directive inside the "upstream" block. > > *) Feature: the ngx_http_slice_module. > > *) Bugfix: a segmentation fault might occur in a worker process when > using LibreSSL; the bug had appeared in 1.9.6. > > *) Bugfix: nginx could not be built on OS X in some cases. > > From pluknet at nginx.com Sat Dec 12 16:29:15 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Sat, 12 Dec 2015 19:29:15 +0300 Subject: nginx-1.9.8 In-Reply-To: <566C37C4.10802@gmail.com> References: <20151208160938.GG74233@mdounin.ru> <566C37C4.10802@gmail.com> Message-ID: <566C4B5B.1090108@nginx.com> On 12.12.2015 18:05, chinix at gmail.com wrote: > An exciting feature, the ngx_http_slice_module. I tested it, but it opens > too many upstream connections; can I limit the number of connections? > You could cache upstream connections. See for details: http://nginx.org/r/keepalive From full.vladd at gmail.com Sat Dec 12 17:11:06 2015 From: full.vladd at gmail.com (Vlad Fulgeanu) Date: Sat, 12 Dec 2015 19:11:06 +0200 Subject: How to set up nginx for file uploading In-Reply-To: References: Message-ID: The problem is that I have tested the application locally (without nginx) and the uploading works just fine. That's why I think it has something to do with the way nginx is configured. On Fri, Dec 11, 2015 at 6:07 PM, Richard Stanway wrote: > "The response had HTTP status code 503.
" > > It looks your backend is failing, as it's returning a HTTP/503 error and > likely not including the correct headers. You should look into why your > backed is returning a 503 as this doesn't seem like an nginx issue any more. > > On Fri, Dec 11, 2015 at 3:50 PM, Vlad Fulgeanu > wrote: > >> I added these in "location /upload/preview": >> >> add_header 'Access-Control-Allow-Origin' 'https://test.project.com'; >>> add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, >>> X-Requested-With, Cache-Control, If-None-Match'; >> >> >> And now it gives me: >> >> No 'Access-Control-Allow-Origin' header is present on the requested >>> resource. Origin 'https://test.project.com' is therefore not allowed >>> access. The response had HTTP status code 503. >> >> >> As a response to the actual request (not the pre-flight) >> >> On Fri, Dec 11, 2015 at 4:27 PM, Richard Stanway < >> r1ch+nginx at teamliquid.net> wrote: >> >>> Your config doesn't appear to add any Access-Control-Allow-Origin >>> header, so unless your backend is adding this, you will need to add an >>> appropriate Access-Control-Allow-Origin header. >>> >>> On Fri, Dec 11, 2015 at 1:14 PM, Vlad Fulgeanu >>> wrote: >>> >>>> Hi everyone! >>>> >>>> I am having some trouble setting up nginx for file uploading. >>>> >>>> I am using nginx as a proxy in front of my nodejs server (that has hapi >>>> as server framework). >>>> >>>> Here is the nginx.conf file's portion for this server: >>>> http://dpaste.com/0VJKE5K >>>> >>>> The problem is that I get >>>> >>>>> No 'Access-Control-Allow-Origin' header is present on the requested >>>>> resource. Origin 'https://test.project.com' is therefore not allowed >>>>> access. >>>>> >>>> immediately after sending the pre-flight request when uploading the >>>> file. >>>> >>>> Can anyone please help me? >>>> Thanks in advance. 
>>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Dec 12 19:45:45 2015 From: nginx-forum at nginx.us (pjudt) Date: Sat, 12 Dec 2015 14:45:45 -0500 Subject: How to preserve nginx url from being redirected (302) from back end tomcat server? Message-ID: <2baea8e146f4c62e6714c286fb8a1937.NginxMailingListEnglish@forum.nginx.org> Newbie here, working with nginx for a couple of weeks. Any guidance really appreciated. Nginx is not preserving the URL when the backend Tomcat server redirects. Nginx URL: //develop-application.example.com/ Backend Tomcat URL: //application.example.com/ Tomcat redirects to URL: //application.example.com/application I want the client browser URL to always remain https://develop-application.example.com/application no matter what Tomcat returns.
Config: server { listen 443 proxy_protocol; set_real_ip_from 0.0.0.0/0; real_ip_header proxy_protocol; access_log /var/log/nginx/develop-application.example.com.https.access.log elb_log; server_name develop-application.example.com; ssl on; ssl_certificate /etc/nginx/ssl/example.com.crt.chained; ssl_certificate_key /etc/nginx/ssl/example.com.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; location / { proxy_pass https://application.example.com/; proxy_redirect https://application.example.com/ https://develop-application.example.com/; proxy_set_header Host $http_host; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263441,263441#msg-263441 From reallfqq-nginx at yahoo.fr Sat Dec 12 19:56:00 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 12 Dec 2015 20:56:00 +0100 Subject: How to preserve nginx url from being redirected (302) from back end tomcat server? In-Reply-To: <2baea8e146f4c62e6714c286fb8a1937.NginxMailingListEnglish@forum.nginx.org> References: <2baea8e146f4c62e6714c286fb8a1937.NginxMailingListEnglish@forum.nginx.org> Message-ID: Your proxy_redirect rule does not match what is returned from your backend. Remove the "https" prefix maybe? Since you want to redirect to the current hostname, you could also use the $host variable. Then, since your server block is configured to only listen on SSL port, you could also take advantage of the $scheme variable. proxy_redirect //application.example.com/ $scheme://$host/; --- *B. R.* On Sat, Dec 12, 2015 at 8:45 PM, pjudt wrote: > Newby working for a couple weeks. Any guidance really appreciated. > > Nginx is not preserving the url from redirected backend tomcat server. 
> > Nginx url: //develop-application.example.com/ > > Backend tomcat url: //application.example.com/ > > tomcat redirects url: //application.example.com/application > > I want the client browser url to always remain > https://develop-application.example.com/application no matter what returns > from tomcat. > > Config: > > server { > listen 443 proxy_protocol; > set_real_ip_from 0.0.0.0/0; > real_ip_header proxy_protocol; > access_log > /var/log/nginx/develop-application.example.com.https.access.log elb_log; > > server_name develop-application.example.com; > > ssl on; > ssl_certificate /etc/nginx/ssl/example.com.crt.chained; > ssl_certificate_key /etc/nginx/ssl/example.com.key; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; > ssl_prefer_server_ciphers on; > ssl_session_cache shared:SSL:10m; > > location / { > > proxy_pass https://application.example.com/; > proxy_redirect https://application.example.com/ > https://develop-application.example.com/; > proxy_set_header Host $http_host; > > } > } > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263441,263441#msg-263441 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jackcowlick at gmx.com Sun Dec 13 16:10:27 2015 From: jackcowlick at gmx.com (jack cowlick) Date: Sun, 13 Dec 2015 17:10:27 +0100 Subject: Request configuration file review Message-ID: An HTML attachment was scrubbed... URL: From adrian.datri.guiran at gmail.com Sun Dec 13 21:32:12 2015 From: adrian.datri.guiran at gmail.com (Adrian D'Atri-Guiran) Date: Sun, 13 Dec 2015 13:32:12 -0800 Subject: Is stale-while-revalidate (RFC 5861) currently supported in nginx? Message-ID: There is a very old post from September, 2011 which says it is not supported, I am just wondering if anything has changed since then? 
http://mailman.nginx.org/pipermail/nginx/2011-September/028926.html Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nzt4567 at gmx.com Sun Dec 13 23:31:11 2015 From: nzt4567 at gmx.com (Tomas Kvasnicka) Date: Mon, 14 Dec 2015 00:31:11 +0100 Subject: Using uninitialized variable value Message-ID: <75AD16DC-63A4-4A4F-8896-71D4BA7A821F@gmx.com> Hi, we are getting quite a lot of these warnings about using uninitialized variables, although we have a 'set' directive for each variable in every server block. I understand that it happens when the request is not valid and the rewrite module handler does not run, but this also happens on valid requests (valid headers, valid request line, correctly detected & set server block, ...) AND we are unable to reproduce it. Issuing the same request with the same headers to the same instance of nginx is unfortunately not enough to regenerate the warning. Does anybody have some examples of situations in which the rewrite handler will not run, and therefore these variables will appear uninitialized? Or can you point me to what additional info I can provide? FYI: I know I can mute the warnings, but when the variables are uninitialized and they are used in the 'if=' parameter of the 'access_log' directive, it does not work as it should :/ Regards, Tomas Kvasnicka From nginx-forum at nginx.us Mon Dec 14 02:27:10 2015 From: nginx-forum at nginx.us (winghokwan) Date: Sun, 13 Dec 2015 21:27:10 -0500 Subject: how to use keepalive with Nginx reverse proxy? In-Reply-To: <20140403130231.GL34696@mdounin.ru> References: <20140403130231.GL34696@mdounin.ru> Message-ID: I have the same question; is there any update on it? I am using Lua and Redis for the reverse proxy lookup and don't define an upstream. How can we use keepalive?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,248924,263444#msg-263444 From nginx-forum at nginx.us Mon Dec 14 02:28:27 2015 From: nginx-forum at nginx.us (winghokwan) Date: Sun, 13 Dec 2015 21:28:27 -0500 Subject: how to use keepalive with Nginx reverse proxy? In-Reply-To: References: <20140403130231.GL34696@mdounin.ru> Message-ID: I have the same issue. I use Lua and Redis for the reverse proxy lookup and don't use an upstream. How to use keepalive then? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,248924,263445#msg-263445 From junk at slact.net Mon Dec 14 05:48:05 2015 From: junk at slact.net (nobody) Date: Mon, 14 Dec 2015 05:48:05 +0000 Subject: [ANN] Nchan: a pubsub server module for Websocket, EventSource, Long-Poll and more Message-ID: <566E5815.2070106@slact.net> The module formerly known as NGINX HTTP Push Module is now Nchan. Nchan is a light, scalable, and flexible pub/sub server for the modern web. It can be configured as a standalone server, or as a shim between an application and tens, thousands, or millions of live subscribers. It can buffer messages in memory, on-disk, or via Redis. All connections are handled asynchronously and distributed among any number of worker processes. It can also scale to many nginx instances with Redis. Messages are published to channels with HTTP POST requests or websockets, and subscribed to via websockets, long-polling, EventSource (SSE), old-fashioned interval polling, and more. Any location can be a subscriber endpoint for up to 4 channels. Each subscriber can be optionally authenticated via a custom application url, and an events meta channel is available for debugging. This is a beta release, and I'm looking for feedback on the module code, functionality, as well as the documentation.
https://nchan.slact.net From full.vladd at gmail.com Mon Dec 14 09:43:48 2015 From: full.vladd at gmail.com (Vlad Fulgeanu) Date: Mon, 14 Dec 2015 11:43:48 +0200 Subject: How to set up nginx for file uploading In-Reply-To: References: Message-ID: I managed to make some changes (I was using another config file, although "nginx -t" was showing the correct file, which is very strange) and I don't get HTTP 503 back. The config file now looks like this: http://dpaste.com/18J54VV The problem I have now is that I get > 500 (Internal Server Error) as a response to the POST request. I can see that the files are actually uploaded to "/home/project/previews/uploaded-files", but for whatever reason they don't get passed to the backend server, because the backend doesn't log anything (though it says that it's an Internal Server Error). On Sat, Dec 12, 2015 at 7:11 PM, Vlad Fulgeanu wrote: > The problem is that I have tested the application locally (without nginx) > and the uploading works just fine. > That's why I think it has something to do with the way nginx is configured. > > On Fri, Dec 11, 2015 at 6:07 PM, Richard Stanway > wrote: >> >> "The response had HTTP status code 503. " >> >> It looks like your backend is failing, as it's returning an HTTP 503 error and >> likely not including the correct headers. You should look into why your >> backend is returning a 503, as this doesn't seem like an nginx issue any more. >> >> On Fri, Dec 11, 2015 at 3:50 PM, Vlad Fulgeanu >> wrote: >>> >>> I added these in "location /upload/preview": >>> >>>> add_header 'Access-Control-Allow-Origin' 'https://test.project.com'; >>>> add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, >>>> X-Requested-With, Cache-Control, If-None-Match'; >>> >>> >>> And now it gives me: >>> >>>> No 'Access-Control-Allow-Origin' header is present on the requested >>>> resource. Origin 'https://test.project.com' is therefore not allowed access. >>>> The response had HTTP status code 503.
>>> >>> As a response to the actual request (not the pre-flight) >>> >>> On Fri, Dec 11, 2015 at 4:27 PM, Richard Stanway >>> wrote: >>>> >>>> Your config doesn't appear to add any Access-Control-Allow-Origin >>>> header, so unless your backend is adding this, you will need to add an >>>> appropriate Access-Control-Allow-Origin header. >>>> >>>> On Fri, Dec 11, 2015 at 1:14 PM, Vlad Fulgeanu >>>> wrote: >>>>> >>>>> Hi everyone! >>>>> >>>>> I am having some trouble setting up nginx for file uploading. >>>>> >>>>> I am using nginx as a proxy in front of my nodejs server (that has hapi >>>>> as server framework). >>>>> >>>>> Here is the nginx.conf file's portion for this server: >>>>> http://dpaste.com/0VJKE5K >>>>> >>>>> The problem is that I get >>>>>> >>>>>> No 'Access-Control-Allow-Origin' header is present on the requested >>>>>> resource. Origin 'https://test.project.com' is therefore not allowed access. >>>>> >>>>> immediately after sending the pre-flight request when uploading the >>>>> file. >>>>> >>>>> Can anyone please help me? >>>>> Thanks in advance. From mdounin at mdounin.ru Mon Dec 14 15:00:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Dec 2015 18:00:38 +0300 Subject: Is stale-while-revalidate (RFC 5861) currently supported in nginx?
In-Reply-To: References: Message-ID: <20151214150038.GN74233@mdounin.ru> Hello! On Sun, Dec 13, 2015 at 01:32:12PM -0800, Adrian D'Atri-Guiran wrote: > There is a very old post from September, 2011 which says it is not > supported, I am just wondering if anything has changed since then? > > http://mailman.nginx.org/pipermail/nginx/2011-September/028926.html Using stale responses in nginx is controlled by the proxy_cache_use_stale directive, which predated RFC 5861 and works slightly differently. See here for details: http://nginx.org/r/proxy_cache_use_stale No support for Cache-Control extensions defined by RFC 5861 is currently implemented. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Dec 14 16:17:40 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Dec 2015 19:17:40 +0300 Subject: HTTP/2 Gateway In-Reply-To: References: <20151207021354.GQ74233@mdounin.ru> Message-ID: <20151214161740.GP74233@mdounin.ru> Hello! On Fri, Dec 11, 2015 at 10:03:15PM +0000, Nicholas Capo wrote: > Is HTTP/2 proxy support planned for the near future? Short answer: No, there are no plans. Long answer: There is almost no sense in implementing it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on the number of simultaneous requests - and there is no such limit when talking to your own backends.
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Dec 14 16:29:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Dec 2015 19:29:21 +0300 Subject: how to use keepalive with Nginx reverse proxy? In-Reply-To: References: <20140403130231.GL34696@mdounin.ru> Message-ID: <20151214162920.GQ74233@mdounin.ru> Hello! On Sun, Dec 13, 2015 at 09:28:27PM -0500, winghokwan wrote: > I have the same issue. I use Lua and Redis for the reverse proxy lookup and > don't use upstream. How to use keepalive then? The only way now is to define upstream{} blocks in advance, and then provide the appropriate upstream{} block name from your dynamic lookup code. -- Maxim Dounin http://nginx.org/ From gfrankliu at gmail.com Mon Dec 14 17:57:02 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 14 Dec 2015 09:57:02 -0800 Subject: HTTP/2 Gateway In-Reply-To: <20151214161740.GP74233@mdounin.ru> References: <20151207021354.GQ74233@mdounin.ru> <20151214161740.GP74233@mdounin.ru> Message-ID: "multiplexing" seems to be a good use case for upstream proxying. We don't have control over how fast end users adopt HTTP/2, so we may still have tons of HTTP/1.x requests coming in, but we can certainly upgrade upstream servers that we control to support HTTP/2. If the nginx upstream proxy module can also support HTTP/2, we will be able to take advantage of "multiplexing" the connection between nginx and upstream. Thanks! Frank On Mon, Dec 14, 2015 at 8:17 AM, Maxim Dounin wrote: > Hello! > > On Fri, Dec 11, 2015 at 10:03:15PM +0000, Nicholas Capo wrote: > > > Is HTTP/2 proxy support planned for the near future? > > Short answer: > > No, there are no plans. > > Long answer: > > There is almost no sense to implement it, as the main HTTP/2 > benefit is that it allows multiplexing many requests within a > single connection, thus [almost] removing the limit on number of > simalteneous requests - and there is no such limit when talking to > your own backends.
Moreover, things may even become worse when > using HTTP/2 to backends, due to single TCP connection being used > instead of multiple ones. > > On the other hand, implementing HTTP/2 protocol and request > multiplexing within a single connection in the upstream module > will require major changes to the upstream module. > > Due to the above, there are no plans to implement HTTP/2 support > in the upstream module, at least in the foreseeable future. If > you still think that talking to backends via HTTP/2 is something > needed - feel free to provide patches. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Dec 14 18:15:18 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 14 Dec 2015 21:15:18 +0300 Subject: HTTP/2 Gateway In-Reply-To: References: <20151214161740.GP74233@mdounin.ru> Message-ID: <2080721.vXL7gCjpdl@vbart-workstation> On Monday 14 December 2015 09:57:02 Frank Liu wrote: > "multiplexing" seems to be a good use case for upstream proxying. We don't > have control how fast end users adopting HTTP/2, so we may still have tons > of HTTP/1.x requests coming in, but we can certainly upgrade upstream > servers that we control to support HTTP/2. If nginx upstream proxy module > can also support HTTP/2, we will be able to take advantage of > "multiplexing" the connection between nginx and upstream. > [..] What's the advantage? wbr, Valentin V. Bartenev From mdounin at mdounin.ru Mon Dec 14 18:19:36 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Dec 2015 21:19:36 +0300 Subject: HTTP/2 Gateway In-Reply-To: References: <20151207021354.GQ74233@mdounin.ru> <20151214161740.GP74233@mdounin.ru> Message-ID: <20151214181936.GT74233@mdounin.ru> Hello! 
On Mon, Dec 14, 2015 at 09:57:02AM -0800, Frank Liu wrote: > "multiplexing" seems to be a good use case for upstream proxying. We don't > have control how fast end users adopting HTTP/2, so we may still have tons > of HTTP/1.x requests coming in, but we can certainly upgrade upstream > servers that we control to support HTTP/2. If nginx upstream proxy module > can also support HTTP/2, we will be able to take advantage of > "multiplexing" the connection between nginx and upstream. As already said, the "advantage" is mostly nonexistent when considering connections between nginx and upstreams, or even a disadvantage in some cases. But if you think that it will be beneficial - feel free to provide patches. -- Maxim Dounin http://nginx.org/ From nicholas.capo at gmail.com Mon Dec 14 18:24:01 2015 From: nicholas.capo at gmail.com (Nicholas Capo) Date: Mon, 14 Dec 2015 18:24:01 +0000 Subject: HTTP/2 Gateway In-Reply-To: <2080721.vXL7gCjpdl@vbart-workstation> References: <20151214161740.GP74233@mdounin.ru> <2080721.vXL7gCjpdl@vbart-workstation> Message-ID: My specific use case is to support an HTTP/2 application behind a load balancer (reverse proxy). Also as a backend LB between services that could use a long running HTTP/2 connection to do their communication. Places where I need an LB, but also know that both ends would /prefer/ to use HTTP/2. Nicholas On Mon, Dec 14, 2015 at 12:15 PM Valentin V. Bartenev wrote: > On Monday 14 December 2015 09:57:02 Frank Liu wrote: > > "multiplexing" seems to be a good use case for upstream proxying. We > don't > > have control how fast end users adopting HTTP/2, so we may still have > tons > > of HTTP/1.x requests coming in, but we can certainly upgrade upstream > > servers that we control to support HTTP/2. If nginx upstream proxy module > > can also support HTTP/2, we will be able to take advantage of > > "multiplexing" the connection between nginx and upstream. > > > [..] > > What's the advantage? > > wbr, Valentin V.
Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sunil at mesosphere.io Tue Dec 15 00:26:33 2015 From: sunil at mesosphere.io (Sunil Shah) Date: Mon, 14 Dec 2015 16:26:33 -0800 Subject: rewrite causing $ to be escaped Message-ID: Hi, We're running Jenkins behind an Nginx reverse proxy and see issues where Jenkins running in Tomcat returns 404 because it receives URLs that have $ encoded as %24. From the Tomcat logs: 10.0.7.212 - - [15/Dec/2015:00:15:22 +0000] "POST /%24stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render HTTP/1.0" 404 992 If we access Tomcat directly, it does not return a 404 and $ is not encoded: 10.0.7.212 - - [15/Dec/2015:00:16:54 +0000] "POST /$stapler/bound/95ed7e0f-f703-458a-a456-8bf729670a4a/render HTTP/1.1" 200 437 When I log the $request_uri and $uri variables, it doesn't look like the $ is being encoded: Dec 15 00:15:22 ip-10-0-7-212.us-west-2.compute.internal nginx[2732]: ip-10-0-7-212.us-west-2.compute.internal nginx: [15/Dec/2015:00:15:22 +0000] Hi Sunil! Request URI: /service/jenkins/$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render is now URI: /$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render However, it looks like the rewrite rule in our location block is causing the $ to be encoded. If I comment out this block or change the regex so it doesn't match, this works. 
Here's the relevant configuration block: location ~ ^/service/jenkins/(?<path>.*) { rewrite ^/service/jenkins/?.*$ /$path break; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://10.0.0.63:10115; proxy_redirect http://$host/service/jenkins/ /service/jenkins/; proxy_redirect http://$host/ /service/jenkins/; } Is there a way to bypass this encoding (an equivalent of Apache's noescape flag)? Thanks in advance, Sunil -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikolai at lusan.id.au Tue Dec 15 01:10:42 2015 From: nikolai at lusan.id.au (Nikolai Lusan) Date: Tue, 15 Dec 2015 11:10:42 +1000 Subject: Mixing fcgi and fcgiwrap Message-ID: <1450141842.3409.2.camel@lusan.id.au> Hi, I have an issue where I need to move a site that has a legacy C cgi interface onto a server that already has FCGI hosts. Rather than trying to mix slow cgi and fast cgi I was thinking of using fcgiwrap ... does anyone know if this plays well with a regular fcgi setup on the same server? -- Nikolai Lusan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mdounin at mdounin.ru Tue Dec 15 01:25:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2015 04:25:45 +0300 Subject: rewrite causing $ to be escaped In-Reply-To: References: Message-ID: <20151215012545.GU74233@mdounin.ru> Hello! On Mon, Dec 14, 2015 at 04:26:33PM -0800, Sunil Shah wrote: > We're running Jenkins behind an Nginx reverse proxy and see issues where > Jenkins running in Tomcat returns 404 because it receives URLs that have $ > encoded as %24.
> From the Tomcat logs: > 10.0.7.212 - - [15/Dec/2015:00:15:22 +0000] "POST > /%24stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render HTTP/1.0" 404 > 992 > > If we access Tomcat directly, it does not return a 404 and $ is not encoded: > 10.0.7.212 - - [15/Dec/2015:00:16:54 +0000] "POST > /$stapler/bound/95ed7e0f-f703-458a-a456-8bf729670a4a/render HTTP/1.1" 200 > 437 > > When I log the $request_uri and $uri variables, it doesn't look like the $ > is being encoded: > Dec 15 00:15:22 ip-10-0-7-212.us-west-2.compute.internal nginx[2732]: > ip-10-0-7-212.us-west-2.compute.internal nginx: [15/Dec/2015:00:15:22 > +0000] Hi Sunil! Request URI: > /service/jenkins/$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render > is now URI: /$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render > > However, it looks like the rewrite rule in our location block is causing > the $ to be encoded. If I comment out this block or change the regex so it > doesn't match, this works. > > Here's the relevant configuration block: > location ~ ^/service/jenkins/(?<path>.*) { > rewrite ^/service/jenkins/?.*$ /$path break; The '$' character isn't encoded by nginx on rewrites. If you see it encoded - probably something else did it. Just tested it here, and such a configuration sends the following request line to the upstream server: GET /$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render HTTP/1.0 That is, '$' is not escaped. If in doubt, try looking into debug logs to see what nginx actually sent to the upstream server, see http://nginx.org/en/docs/debugging_log.html for details. Note well that a better solution for such a URI change would be to use a static location and proxy_pass with a URI component, like this: location /service/jenkins/ { proxy_pass http://10.0.0.63:10115/; ... } (Note trailing "/" in proxy_pass.) No rewrites, no regular expressions, the same result. May need adjustments if you use other conflicting regular expressions in your config.
See http://nginx.org/r/location for details on location matching, and http://nginx.org/r/proxy_pass for details on proxy_pass. [...] -- Maxim Dounin http://nginx.org/ From bramverdonck at telenet.be Tue Dec 15 09:13:35 2015 From: bramverdonck at telenet.be (Bram Verdonck) Date: Tue, 15 Dec 2015 10:13:35 +0100 Subject: Load balancing based on hash of the domain In-Reply-To: <5DB424CE-6065-48AE-88ED-9DB90BBE41ED@telenet.be> References: <5DB424CE-6065-48AE-88ED-9DB90BBE41ED@telenet.be> Message-ID: <3488FB85-5A7B-4402-A677-60C3DFD34353@telenet.be> Hi all, I wish to use load balancing in front of a shared webhosting cluster with multiple domains. To make optimal use of resources, it would make sense to do load balancing based on domain instead of random or based on IP. I noticed that there is a hash parameter but I'm unable to use $server_name or $host as a parameter, only $request_uri and $remote_addr. Is there any way that I could use $server_name? In my opinion it makes no sense to randomize all requests for the different domains and would make sense to load balance based on a hash of the domain. Regards, Bram From sunil at mesosphere.io Tue Dec 15 09:28:04 2015 From: sunil at mesosphere.io (Sunil Shah) Date: Tue, 15 Dec 2015 01:28:04 -0800 Subject: rewrite causing $ to be escaped In-Reply-To: <20151215012545.GU74233@mdounin.ru> References: <20151215012545.GU74233@mdounin.ru> Message-ID: Thanks Maxim! You're right - we were looking in the wrong place - the rewrite rule was just a red herring. On Mon, Dec 14, 2015 at 5:25 PM, Maxim Dounin wrote: > Hello! > > On Mon, Dec 14, 2015 at 04:26:33PM -0800, Sunil Shah wrote: > > > We're running Jenkins behind an Nginx reverse proxy and see issues where > > Jenkins running in Tomcat returns 404 because it receives URLs that have > $ > > encoded as %24.
From the Tomcat logs: > > 10.0.7.212 - - [15/Dec/2015:00:15:22 +0000] "POST > > /%24stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render HTTP/1.0" > 404 > > 992 > > > > If we access Tomcat directly, it does not return a 404 and $ is not > encoded: > > 10.0.7.212 - - [15/Dec/2015:00:16:54 +0000] "POST > > /$stapler/bound/95ed7e0f-f703-458a-a456-8bf729670a4a/render HTTP/1.1" 200 > > 437 > > > > When I log the $request_uri and $uri variables, it doesn't look like the > $ > > is being encoded: > > Dec 15 00:15:22 ip-10-0-7-212.us-west-2.compute.internal nginx[2732]: > > ip-10-0-7-212.us-west-2.compute.internal nginx: [15/Dec/2015:00:15:22 > > +0000] Hi Sunil! Request URI: > > > /service/jenkins/$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render > > is now URI: /$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render > > > > However, it looks like the rewrite rule in our location block is causing > > the $ to be encoded. If I comment out this block or change the regex so > it > > doesn't match, this works. > > > > Here's the relevant configuration block: > > location ~ ^/service/jenkins/(?.*) { > > rewrite ^/service/jenkins/?.*$ /$path break; > > The '$' character isn't encoded by nginx on rewrites. If you see > it encoded - probably something else did it. Just tested it here, > and such a configuration sends the following request line to > upstream server: > > GET /$stapler/bound/c43ae9fc-dcca-4fbe-b247-82279fa65d55/render HTTP/1.0 > > That is, '$' is not escaped. > > If in doubt, try looking into debug logs to see what nginx > actually sent to upstream server, see > http://nginx.org/en/docs/debugging_log.html for details. > > Note well that better solution for such URI change would be to use > static location and proxy_pass with URI component, like this: > > location /service/jenkins/ { > proxy_pass http://10.0.0.63:10115/; > ... > } > > (Note tralinig "/" in proxy_pass.) > > No rewrites, no regular expressions, the same result. 
> May need > adjustments if you use other conflicting regular expressions in > your config. See http://nginx.org/r/location for details on > location matching, and http://nginx.org/r/proxy_pass for details > on proxy_pass. > > [...] > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Dec 15 09:40:04 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Dec 2015 12:40:04 +0300 Subject: HTTP/2 Gateway In-Reply-To: References: <2080721.vXL7gCjpdl@vbart-workstation> Message-ID: <5454809.Qy8znxMuo4@vbart-workstation> On Monday 14 December 2015 18:24:01 Nicholas Capo wrote: > My specific use case is to support an HTTP/2 application behind a load > balancer (reverse proxy). > > Also as a backend LB between services that could use a long running HTTP/2 > connection to do their communication. > > Places where I need an LB, but also know that both ends would /prefer/ to > use HTTP/2. > Can you name such applications that are only able to talk HTTP/2? HTTP/2 isn't a better, shinier version of HTTP; it's a completely different transport layer with a number of disadvantages as well. wbr, Valentin V. Bartenev From a.portnov at ism-ukraine.com Tue Dec 15 09:44:45 2015 From: a.portnov at ism-ukraine.com (Aleksey Portnov) Date: Tue, 15 Dec 2015 09:44:45 +0000 Subject: using variables in certificate path names Message-ID: <7B29E79534A00243AF7D95A3D81DC84AE78E9430@corp-exch03.corp.ism.nl> Hello! Is it possible and correct something like: server { listen 1.1.1.1:443 ssl; server_name sitename.de sitename.fr sitename.nl; root /var/www/vhosts/Live/public_html; ssl_certificate /etc/ssl/web/$host.pem; ssl_certificate_key /etc/ssl/web/$host.key; ... #common part for all sites ...
} -- Sincerely yours, Alexey Portnov -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Dec 15 09:53:42 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Dec 2015 12:53:42 +0300 Subject: using variables in certificate path names In-Reply-To: <7B29E79534A00243AF7D95A3D81DC84AE78E9430@corp-exch03.corp.ism.nl> References: <7B29E79534A00243AF7D95A3D81DC84AE78E9430@corp-exch03.corp.ism.nl> Message-ID: <1815750.3OYXC67gMp@vbart-workstation> On Tuesday 15 December 2015 09:44:45 Aleksey Portnov wrote: > Hello! > > Is it possible and correct something like: > > server { > listen 1.1.1.1:443 ssl; > > server_name sitename.de sitename.fr sitename.nl; > root /var/www/vhosts/Live/public_html; > > ssl_certificate /etc/ssl/web/$host.pem; > ssl_certificate_key /etc/ssl/web/$host.key; > > ... > #commont part for all sites > ... > } > Currently it's not possible. Certificates and keys are loaded while reading configuration. wbr, Valentin V. Bartenev From vbart at nginx.com Tue Dec 15 09:58:40 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Dec 2015 12:58:40 +0300 Subject: Load balancing based on hash of the domain In-Reply-To: <3488FB85-5A7B-4402-A677-60C3DFD34353@telenet.be> References: <5DB424CE-6065-48AE-88ED-9DB90BBE41ED@telenet.be> <3488FB85-5A7B-4402-A677-60C3DFD34353@telenet.be> Message-ID: <2031248.YAk4HBQ74G@vbart-workstation> On Tuesday 15 December 2015 10:13:35 Bram Verdonck wrote: > Hi all, > > > I wish to use load balancing in front of a shared webhosting cluster with multiple domains. > To make optimal use of resources, it would make sense to do load balancing based on domain instead of random or based on IP. > I noticed that there is a hash parameter but I'm unable to use $server_name or $host as a parameter, only $request_uri and $remote_addr. > > Is there any way that I could use $server_name?
> > In my opinion it makes no sense to randomize all requests for the different domains and would make sense to load balance based on a hash of the domain. > What's the problem with the "hash" directive of upstream block? Why can't you use it? http://nginx.org/r/hash wbr, Valentin V. Bartenev From maxim at nginx.com Tue Dec 15 10:01:15 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 15 Dec 2015 13:01:15 +0300 Subject: using variables in certificate path names In-Reply-To: <1815750.3OYXC67gMp@vbart-workstation> References: <7B29E79534A00243AF7D95A3D81DC84AE78E9430@corp-exch03.corp.ism.nl> <1815750.3OYXC67gMp@vbart-workstation> Message-ID: <566FE4EB.3090705@nginx.com> On 12/15/15 12:53 PM, Valentin V. Bartenev wrote: > On Tuesday 15 December 2015 09:44:45 Aleksey Portnov wrote: >> Hello! >> >> Is it possible and correct something like: >> >> server { >> listen 1.1.1.1:443 ssl; >> >> server_name sitename.de sitename.fr sitename.nl; >> root /var/www/vhosts/Live/public_html; >> >> ssl_certificate /etc/ssl/web/$host.pem; >> ssl_certificate_key /etc/ssl/web/$host.key; >> >> ... >> #commont part for all sites >> ... >> } >> > > Currently it's not possible. Certificates and keys > are loaded while reading configuration. > .. and we are working on a similar feature. -- Maxim Konovalov From bramverdonck at telenet.be Tue Dec 15 10:56:43 2015 From: bramverdonck at telenet.be (Bram) Date: Tue, 15 Dec 2015 11:56:43 +0100 Subject: Load balancing based on hash of the domain In-Reply-To: <2031248.YAk4HBQ74G@vbart-workstation> References: <5DB424CE-6065-48AE-88ED-9DB90BBE41ED@telenet.be> <3488FB85-5A7B-4402-A677-60C3DFD34353@telenet.be> <2031248.YAk4HBQ74G@vbart-workstation> Message-ID: <9614B2E3-968B-4988-A863-C8B80F7E4A66@telenet.be> Problem is that it will not accept anything besides $remote_addr and $request_uri. 
For example:

upstream loadbalancer {
    hash $server_name consistent:
    #hash $request_uri consistent;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

Will fail with:

invalid number of arguments in "hash" directive in /etc/nginx/nginx-loadbalancing.conf:6

> On 15 Dec 2015, at 10:58, Valentin V. Bartenev wrote: > > On Tuesday 15 December 2015 10:13:35 Bram Verdonck wrote: >> Hi all, >> >> >> I wish to use load balancing in front of a shared webhosting cluster with multiple domains. >> To make optimal use of resources, it would make sense to do load balancing based on domain instead of random or based on IP. >> I noticed that there is a hash parameter but I'm unable to use $server_name or $host as a parameter, only $request_uri and $remote_addr. >> >> Is there any way that I could use $server_name? >> >> In my opinion it makes no sense to randomize all requests for the different domains and would make sense to load balance based on a hash of the domain. >> > > What's the problem with the "hash" directive of upstream block? > Why can't you use it? > > http://nginx.org/r/hash > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Dec 15 11:23:43 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Dec 2015 11:23:43 +0000 Subject: Load balancing based on hash of the domain In-Reply-To: <9614B2E3-968B-4988-A863-C8B80F7E4A66@telenet.be> References: <5DB424CE-6065-48AE-88ED-9DB90BBE41ED@telenet.be> <3488FB85-5A7B-4402-A677-60C3DFD34353@telenet.be> <2031248.YAk4HBQ74G@vbart-workstation> <9614B2E3-968B-4988-A863-C8B80F7E4A66@telenet.be> Message-ID: <20151215112343.GA19381@daoine.org> On Tue, Dec 15, 2015 at 11:56:43AM +0100, Bram wrote: Hi there, > Problem is that it will not accept anything besides $remote_addr and $request_uri.
> For example: > > upstream loadbalancer { > hash $server_name consistent: Use ; not : > #hash $request_uri consistent; > server 10.0.0.1:8080; Line 6 is probably that one, because that is where the "hash" directive terminates. > server 10.0.0.2:8080; > } > > Will fail with: > > invalid number of arguments in "hash" directive in /etc/nginx/nginx-loadbalancing.conf:6 f -- Francis Daly francis at daoine.org From vbart at nginx.com Tue Dec 15 11:26:35 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Dec 2015 14:26:35 +0300 Subject: Load balancing based on hash of the domain In-Reply-To: <9614B2E3-968B-4988-A863-C8B80F7E4A66@telenet.be> References: <5DB424CE-6065-48AE-88ED-9DB90BBE41ED@telenet.be> <2031248.YAk4HBQ74G@vbart-workstation> <9614B2E3-968B-4988-A863-C8B80F7E4A66@telenet.be> Message-ID: <2824393.zCXN8bUDKB@vbart-workstation> On Tuesday 15 December 2015 11:56:43 Bram wrote: > Problem is that it will not accept anything besides $remote_addr and $request_uri. > For example: > > upstream loadbalancer { > hash $server_name consistent: There's a typo: you've used a colon at the end of the directive. wbr, Valentin V. Bartenev From bramverdonck at telenet.be Tue Dec 15 11:31:34 2015 From: bramverdonck at telenet.be (Bram) Date: Tue, 15 Dec 2015 12:31:34 +0100 Subject: Load balancing based on hash of the domain In-Reply-To: <2824393.zCXN8bUDKB@vbart-workstation> References: <5DB424CE-6065-48AE-88ED-9DB90BBE41ED@telenet.be> <2031248.YAk4HBQ74G@vbart-workstation> <9614B2E3-968B-4988-A863-C8B80F7E4A66@telenet.be> <2824393.zCXN8bUDKB@vbart-workstation> Message-ID: <341BE7D9-DFB7-42F3-B983-3BA57DB3649F@telenet.be> Wow, was struggling with that for a week already. Can't believe that was it. Thank you Valentin and Francis! > On 15 Dec 2015, at 12:26, Valentin V. Bartenev wrote: > > On Tuesday 15 December 2015 11:56:43 Bram wrote: >> Problem is that it will not accept anything besides $remote_addr and $request_uri.
>> For example: >> >> upstream loadbalancer { >> hash $server_name consistent: > > There's a typo: you've used a colon at the end of the directive. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From zxcvbn4038 at gmail.com Tue Dec 15 17:06:58 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 15 Dec 2015 12:06:58 -0500 Subject: HTTP/2 Gateway In-Reply-To: <5454809.Qy8znxMuo4@vbart-workstation> References: <2080721.vXL7gCjpdl@vbart-workstation> <5454809.Qy8znxMuo4@vbart-workstation> Message-ID: I think what they are asking is to support the transport layer so that they don't have to support both protocols on whatever endpoint they are developing. Maybe I'm wrong and someone has grand plans about multiplexing requests to an upstream with http/2, but I haven't seen anyone ask for that explicitly yet. On Tue, Dec 15, 2015 at 4:40 AM, Valentin V. Bartenev wrote: > On Monday 14 December 2015 18:24:01 Nicholas Capo wrote: > > My specific use case is to support an HTTP/2 application behind a load > > balancer (reverse proxy). > > > > Also as a backend LB between services that could use a long running > HTTP/2 > > connection to do their communication. > > > > Places where I need an LB, but also know that both ends would /prefer/ to > > use HTTP/2. > > > > Can you name such applications that are only able to talk HTTP/2? > > HTTP/2 isn't a better and shiny version of HTTP, it's completely > different transport layer with a number of disadvantages as well. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicholas.capo at gmail.com Tue Dec 15 17:31:50 2015 From: nicholas.capo at gmail.com (Nicholas Capo) Date: Tue, 15 Dec 2015 17:31:50 +0000 Subject: HTTP/2 Gateway In-Reply-To: References: <2080721.vXL7gCjpdl@vbart-workstation> <5454809.Qy8znxMuo4@vbart-workstation> Message-ID: > Can you name such applications that are only able to talk HTTP/2? The developers I support would like to use GRPC [1] which is HTTP/2 only. I need to provide an HA/LB system to support them. I'm not saying this would be easy to implement, only that I need it :-) Thank you for your help, Nicholas [1] https://github.com/grpc/grpc On Tue, Dec 15, 2015 at 11:07 AM CJ Ess wrote: > I think what they are asking is to support the transport layer so that > they don't have to support both protocols on whatever endpoint they are > developing. > > Maybe I'm wrong and someone has grand plans about multiplexing requests to > an upstream with http/2, but I haven't seen anyone ask for that explicitly > yet. > > > On Tue, Dec 15, 2015 at 4:40 AM, Valentin V. Bartenev > wrote: > >> On Monday 14 December 2015 18:24:01 Nicholas Capo wrote: >> > My specific use case is to support an HTTP/2 application behind a load >> > balancer (reverse proxy). >> > >> > Also as a backend LB between services that could use a long running >> HTTP/2 >> > connection to do their communication. >> > >> > Places where I need an LB, but also know that both ends would /prefer/ >> to >> > use HTTP/2. >> > >> >> Can you name such applications that are only able to talk HTTP/2? >> >> HTTP/2 isn't a better and shiny version of HTTP, it's completely >> different transport layer with a number of disadvantages as well. >> >> wbr, Valentin V. 
Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Dec 15 18:04:20 2015 From: nginx-forum at nginx.us (chuang39) Date: Tue, 15 Dec 2015 13:04:20 -0500 Subject: How is the module type value (NGX_HTTP_MODULE) generated? Message-ID: Hi, I am curious how a module type value like NGX_HTTP_MODULE in ngx_http_config.h is generated. Thanks! #define NGX_HTTP_MODULE 0x50545448 /* "HTTP" */ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263474,263474#msg-263474 From jansen at sport-thieme.net Wed Dec 16 12:59:05 2015 From: jansen at sport-thieme.net (Frank Jansen) Date: Wed, 16 Dec 2015 13:59:05 +0100 Subject: SSL-Session-Cache on non standard port Message-ID: SSL-Session-Cache is not working on a non-standard port. I tried the solutions mentioned in http://forum.nginx.org/read.php?2,152294,152294 - configuring ssl_session_cache outside of server blocks - configuring ssl_session_cache inside every server block But the cache is only working on vhosts with port 443, not with hosts on 444. Is this a bug or am I missing something? Mit freundlichen Grüßen / Best Regards i. A. / p.p. Frank Jansen Sport-Thieme GmbH Marketing * E-Commerce Cloud-Management & Systemadministration Telefon: +49 (0) 30 - 610 704 57 Telefax: +49 (0) 30 - 610 704 70 Ehrenbergstr. 19 D-10245 Berlin E-Mail: jansen at sport-thieme.net http://www.sport-thieme.com Sitz der Gesellschaft: 38368 Grasleben Geschäftsführer: Maximilian Hohe Handelsregister: AG Braunschweig - HRB 100733 Ust.-IdNr.: DE815548051 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Wed Dec 16 13:28:51 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Dec 2015 16:28:51 +0300 Subject: SSL-Session-Cache on non standard port In-Reply-To: References: Message-ID: <20151216132851.GS74233@mdounin.ru> Hello! On Wed, Dec 16, 2015 at 01:59:05PM +0100, Frank Jansen wrote: > SSL-Session-Cache is not working on a non-standard port. > > I tried the solutions mentioned in > http://forum.nginx.org/read.php?2,152294,152294 > > - configuring ssl_session_cache outside of server blocks > - configuring ssl_session_cache inside every server block > > But the cache is only working on vhosts with port 443, not with hosts on > 444. > > Is this a bug or am I missing something? How do you test it? For nginx there is no difference between 443 and 444 ports. Though browsers may be picky and refuse to reuse sessions to non-standard ports. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Dec 16 13:56:39 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 16 Dec 2015 16:56:39 +0300 Subject: How is the module type value (NGX_HTTP_MODULE) generated? In-Reply-To: References: Message-ID: <10515881.F3jVJYonIS@vbart-laptop> On Tuesday 15 December 2015 13:04:20 chuang39 wrote: > Hi, I am curious how a module type value like NGX_HTTP_MODULE in > ngx_http_config.h is generated. Thanks! > #define NGX_HTTP_MODULE 0x50545448 /* "HTTP" */ > It's "HTTP" in ASCII. wbr, Valentin V. Bartenev From v.d.petrov at gmail.com Wed Dec 16 15:56:02 2015 From: v.d.petrov at gmail.com (Vsevolod Petrov) Date: Wed, 16 Dec 2015 18:56:02 +0300 Subject: preserve client source address when proxying to upstream Message-ID: Hello, proxy_bind directive allows to specify source IP address for proxied connections. This directive can be set to local IP address. I'm wondering if there's a way to set $remote_addr as proxy_bind address? Or any other non-local IP address? The idea is to see original client source IP address at the server site.
While it's not http traffic I cannot use XFF header. Destination MAC address in the response packet from the server is set to nginx server interface address. So, there's no problem at layer 2 communication. Can nginx listen for responses coming to non-local destination address? Thanks in advance! -- Vsevolod Petrov -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 16 16:52:16 2015 From: nginx-forum at nginx.us (mex) Date: Wed, 16 Dec 2015 11:52:16 -0500 Subject: using nginx to mitigate the latest joomla-vuln - discussion In-Reply-To: <04c58cc7e2c7f11102a74232f3c6e567.NginxMailingListEnglish@forum.nginx.org> References: <04c58cc7e2c7f11102a74232f3c6e567.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7946c75da5d16fc6cb8b7f223d33c635.NginxMailingListEnglish@forum.nginx.org> this one: https://www.nginx.com/blog/new-joomla-exploit-cve-2015-8562/ I'd suggest changing the ua-detection from "JDatabaseDriverMysql" to a regex detecting the PHP object injection, to cover additional attack vectors (like my gurus @ emergingthreats said: "mitigation against the vuln, not the exploit, you should create" :D). I'd also suggest deleting the bare "O:" detection, which will lead to a lot of false positives, as will using "{" alone.

http {
    map $http_user_agent $blocked_ua {
        "~O:\+?\d+:.*:\+?\d+:{(s|S):\+?\d+:.*;.*}" 1;
        default 0;
    }
    ...
    server {
        ...
        if ($blocked_ua) { return 403; }
        ...
    }
    ...
}

cheers, mex p.s. repost, because of forum-snafu Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263478,263483#msg-263483 From mdounin at mdounin.ru Wed Dec 16 16:56:05 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Dec 2015 19:56:05 +0300 Subject: preserve client source address when proxying to upstream In-Reply-To: References: Message-ID: <20151216165605.GV74233@mdounin.ru> Hello!
On Wed, Dec 16, 2015 at 06:56:02PM +0300, Vsevolod Petrov wrote: > Hello, > > proxy_bind directive allows to specify source IP address for proxied > connections. > This directive can be set to local IP address. > > I'm wondering if there's a way to set $remote_addr as proxy_bind address? > Or any other non-local IP address? > > The idea is to see original client source IP address at the server site. > While it's not http traffic I cannot use XFF header. > > Destination MAC address in the response packet from the server is set to > nginx server interface address. So, there's no problem at layer 2 > communication. > > Can nginx listen for responses coming to non-local destination address? In theory this is possible with appropriate OS-level support, and as long as you are able to route packets properly. In particular, this should be possible on OpenBSD using SO_BINDANY, on FreeBSD using IP_BINDANY, and on Linux using IP_TRANSPARENT/IP_FREEBIND. An earlier attempt to make it work on nginx can be found here (OpenBSD-specific patch): http://mailman.nginx.org/pipermail/nginx-devel/2010-October/000533.html As far as I understand, doing proper support should be mostly trivial now with variables support in proxy_bind. -- Maxim Dounin http://nginx.org/ From maxim at nginx.com Wed Dec 16 17:08:00 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 16 Dec 2015 20:08:00 +0300 Subject: using nginx to mitigate the latest joomla-vuln - discussion In-Reply-To: <7946c75da5d16fc6cb8b7f223d33c635.NginxMailingListEnglish@forum.nginx.org> References: <04c58cc7e2c7f11102a74232f3c6e567.NginxMailingListEnglish@forum.nginx.org> <7946c75da5d16fc6cb8b7f223d33c635.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56719A70.4080209@nginx.com> On 12/16/15 7:52 PM, mex wrote: > this one: https://www.nginx.com/blog/new-joomla-exploit-cve-2015-8562/ > [...] Thanks for the feedback. I passed your comment to the appropriate people.
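The user-agent pattern proposed for the map block in this thread can be sanity-checked outside nginx before deploying it. A quick sketch in Python; the sample strings below are invented for illustration and are not real exploit payloads:

```python
import re

# The pattern from the proposed "map $http_user_agent" block, minus the
# leading "~" (which only marks it as a regex in nginx and is not part
# of the expression itself).
pattern = re.compile(r'O:\+?\d+:.*:\+?\d+:{(s|S):\+?\d+:.*;.*}')

# Made-up samples: the first mimics the shape of a serialized-PHP-object
# payload, the second is an ordinary browser user-agent string.
malicious_ua = 'O:21:"JDatabaseDriverMysqli":3:{s:4:"test";s:3:"abc";}'
benign_ua = 'Mozilla/5.0 (X11; Linux x86_64) Firefox/42.0'

print(bool(pattern.search(malicious_ua)))  # True
print(bool(pattern.search(benign_ua)))     # False
```

Running a few known-good user agents through the same check is a cheap way to estimate the false-positive rate before putting the rule in production.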
-- Maxim Konovalov From nginx-forum at nginx.us Wed Dec 16 20:38:51 2015 From: nginx-forum at nginx.us (no.1) Date: Wed, 16 Dec 2015 15:38:51 -0500 Subject: Reverse proxy to QNAP does not work In-Reply-To: <20151129105117.GU3351@daoine.org> References: <20151129105117.GU3351@daoine.org> Message-ID: <81e6f6672d4c1029321ab3aeed97e890.NginxMailingListEnglish@forum.nginx.org> Hi Francis, thanks for the details. I guess the trick is first to bypass a QNAP internal redirect to the NAS GUI (if possible, or to integrate it on the first request). And second to adapt the login request the right way. (btw: I try to avoid changing the qnap service because of the regular QNAP firmware updates.) For the first topic, the internal network traffic analysis from Firefox shows a lot of GET requests and some POSTs regarding the login: On the address bar a request of http://qnap/ will be redirected to http://qnap:8080/ >> http://qnap:8080/redirect.html?count=0.xxxx >> http://qnap:8080/cgi-bin/QTS.cgi?count=yyyyyy and finally to http://qnap:8080/cgi-bin/ which "delivers" the login page. So, the service to enter the qnap is available at http://qnap:8080/cgi-bin/ (There are also other services available, like the photo station (e.g. http://qnap:8080/photo) or the music station, if activated.) I don't know why QNAP uses a redirect (I guess it has something to do with the QNAP webserver), so I concentrate on the second point: "Bypassing the redirect by requesting the login page directly" (which could be a good workaround). So I tried it with http://qnap:8080/cgi-bin/login.html locally, which leads successfully to the login page http://qnap:8080/cgi-bin/. Changing the location part as suggested into: location ^~ /nas/ { proxy_pass http://qnap:8080/cgi-bin/; proxy_set_header X-Real-IP $remote_addr; } and trying the address https://example.com/nas/login.html results in a broken login page with a lot of 404 errors.
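A common cause of such 404s is the backend emitting absolute asset paths ("/css/...", "/js/...") that lose the /nas/ prefix once proxied under a sub-path. One workaround people often try is rewriting those paths with sub_filter. A rough, untested sketch, assuming the prefixes below actually match what the QNAP pages emit (note that multiple sub_filter directives per location need nginx 1.9.4+, not the 1.6.2 mentioned earlier in the thread):

```nginx
location ^~ /nas/ {
    proxy_pass http://qnap:8080/cgi-bin/;
    proxy_set_header X-Real-IP $remote_addr;

    # sub_filter cannot rewrite compressed bodies, so request an
    # uncompressed response from the backend.
    proxy_set_header Accept-Encoding "";

    # Rewrite absolute links so they stay under /nas/.
    sub_filter_types text/html text/css application/javascript;
    sub_filter 'href="/' 'href="/nas/';
    sub_filter 'src="/'  'src="/nas/';
    sub_filter_once off;
}
```

This only covers links visible in the markup; paths built by JavaScript at runtime would still bypass the rewrite.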
Comparing the HTTP headers of both login pages, the one from the internal request contains a bit more. There is an additional Connection "Keep-Alive" and a Keep-Alive with "timeout=15, max=97" and another server name, which is "http server 1.0" (that should be the QNAP internal server description) instead of "nginx/1.6.2" via the external request. The content type of the GET requests doesn't change using the external address; it stays on html. Internal requests show different content types like css, js, xml, jpg. Can anybody tell me why external requests stay on content type "html"? Does it have something to do with Cache-Control? I would appreciate any hint. Kind regards no.1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263081,263492#msg-263492 From nginx-forum at nginx.us Thu Dec 17 05:44:01 2015 From: nginx-forum at nginx.us (pumbac) Date: Thu, 17 Dec 2015 00:44:01 -0500 Subject: POST request body manipulation In-Reply-To: <55392330.5090400@nems.it> References: <55392330.5090400@nems.it> Message-ID: I have the same requirement to map $request_body. $request_body is logged to my access_log correctly via the proxy_pass setting. I need to map $request_body to determine which part of it should be logged, as sometimes it is too large to be logged, e.g. when uploading files. Any suggestions? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,258335,263495#msg-263495 From nginx-forum at nginx.us Thu Dec 17 07:45:56 2015 From: nginx-forum at nginx.us (juanin) Date: Thu, 17 Dec 2015 02:45:56 -0500 Subject: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:64 Message-ID: I have this error when using the php http module. http_send_content_disposition('a.pdf', true); http_send_content_type("application/x-octetstream"); http_throttle(0.1, 40480); http_send_file('/pathtopdf/a.pdf'); Any help? Is there a way to increase the memory?
Tried to add push_stream_shared_memory_size 64M; in the conf file but I get unknown directive "push_stream_shared_memory_size" in /etc/nginx/nginx.conf:40 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263496,263496#msg-263496 From nginx-forum at nginx.us Thu Dec 17 09:55:31 2015 From: nginx-forum at nginx.us (Cugar15) Date: Thu, 17 Dec 2015 04:55:31 -0500 Subject: Building a redundant mail service In-Reply-To: <541AC9DE.7000305@noa.gr> References: <541AC9DE.7000305@noa.gr> Message-ID: <9926525244aa70d41a70ea3d6bb2cbd2.NginxMailingListEnglish@forum.nginx.org> Hi Nick, have you ever figured this out? If so, would it be possible to post your solution? Thanks, Cugar15 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,253381,263498#msg-263498 From vbart at nginx.com Thu Dec 17 11:51:49 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 17 Dec 2015 14:51:49 +0300 Subject: POST request body manipulation In-Reply-To: <55392330.5090400@nems.it> References: <55392330.5090400@nems.it> Message-ID: <2020210.I8nkHRLrYT@vbart-workstation> On Thursday 23 April 2015 18:52:00 Sandro Bordacchini wrote: > Hello everyone, > > i have a problem in configuring Nginx. > > I have a location that serves as a proxy for a well-specified url "/login". > This location can receive both GET and POST request. > GET request have no body and should be proxied to a default and > well-know host. > POST request contains the host to be proxied to in their body > (extractable by a regexp). 
> > To avoid use of "if", i was using a map: > > map $request_body $target_tenant_loginbody { > ~*account=(.*)%40(?P.*)&password.* $body_tenant; > default default.example.com; > } > > location /login { > echo_read_request_body; > > proxy_pass http://$target_tenant_loginbody:9000; > > # Debug > proxy_set_header X-Debug-Routing-Value > $target_tenant_loginbody; > > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > } > > This is not working (works with the GETs but not with the POSTs), seems > that the map returns always the default value even if the regexp works > (tested on regex101.com). > After a few tests, i understood that $request_body is empty or > non-initialized. I tried also with $echo_request_body, that seems > correctly initialized in location context but not in the map. > > I read about a lot of issues and people having problem with empty > $request_body. > > Maybe is there another approach you could direct me to? > [..] The $request_body variable is empty when the body doesn't fit into the "client_body_buffer_size", or if the "client_body_in_file_only" is enabled. Also I suggest to turn on the "client_body_in_single_buffer" directive. http://nginx.org/r/client_body_buffer_size http://nginx.org/r/client_body_in_file_only http://nginx.org/r/client_body_in_single_buffer wbr, Valentin V. Bartenev From vbart at nginx.com Thu Dec 17 11:57:58 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 17 Dec 2015 14:57:58 +0300 Subject: POST request body manipulation In-Reply-To: <2020210.I8nkHRLrYT@vbart-workstation> References: <55392330.5090400@nems.it> <2020210.I8nkHRLrYT@vbart-workstation> Message-ID: <6748860.vRUnDlmvTi@vbart-workstation> On Thursday 17 December 2015 14:51:49 Valentin V. Bartenev wrote: > On Thursday 23 April 2015 18:52:00 Sandro Bordacchini wrote: > > Hello everyone, > > > > i have a problem in configuring Nginx. 
> > > > I have a location that serves as a proxy for a well-specified url "/login". > > This location can receive both GET and POST request. > > GET request have no body and should be proxied to a default and > > well-know host. > > POST request contains the host to be proxied to in their body > > (extractable by a regexp). > > > > To avoid use of "if", i was using a map: > > > > map $request_body $target_tenant_loginbody { > > ~*account=(.*)%40(?P.*)&password.* $body_tenant; > > default default.example.com; > > } > > > > location /login { > > echo_read_request_body; > > > > proxy_pass http://$target_tenant_loginbody:9000; > > > > # Debug > > proxy_set_header X-Debug-Routing-Value > > $target_tenant_loginbody; > > > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For > > $proxy_add_x_forwarded_for; > > } > > > > This is not working (works with the GETs but not with the POSTs), seems > > that the map returns always the default value even if the regexp works > > (tested on regex101.com). > > After a few tests, i understood that $request_body is empty or > > non-initialized. I tried also with $echo_request_body, that seems > > correctly initialized in location context but not in the map. > > > > I read about a lot of issues and people having problem with empty > > $request_body. > > > > Maybe is there another approach you could direct me to? > > > [..] > > The $request_body variable is empty when the body doesn't fit into the > "client_body_buffer_size", or if the "client_body_in_file_only" is enabled. > > Also I suggest to turn on the "client_body_in_single_buffer" directive. > > http://nginx.org/r/client_body_buffer_size > http://nginx.org/r/client_body_in_file_only > http://nginx.org/r/client_body_in_single_buffer > [..] But in your case it's empty because the $target_tenant_loginbody variable is evaluated before the body has been read. So you can't use it in the proxy_pass directive. wbr, Valentin V. 
Bartenev From v.d.petrov at gmail.com Thu Dec 17 12:00:51 2015 From: v.d.petrov at gmail.com (Vsevolod Petrov) Date: Thu, 17 Dec 2015 15:00:51 +0300 Subject: preserve client source address when proxying to upstream Message-ID: Thanks for pointing me in the right direction, Maxim! I've found a number of posts where people are discussing nginx acting as a listener at 0.0.0.0:80/0 for outbound traffic, making the system able to review every outgoing packet. In this way nginx can act as a transparent proxy that does not perform destination address translation. What I'm asking for is special handling for inbound packets. I still want nginx to perform destination address translation, but I need to keep the original source address in the packet. As far as I understood, both scenarios rely on using IP_TRANSPARENT/IP_FREEBIND on Linux, as you mentioned previously. While there's no complete solution at the moment, I think that it's a great idea to add such functions in the future, at least in the commercial version of nginx. On the other hand, positioning nginx as an ADC solution requires giving administrators more control over application delivery, and translating source/destination addresses/ports is just a necessary option. -- Vsevolod Petrov 2015-12-16 19:56 GMT+03:00 Maxim Dounin : > Hello! > > On Wed, Dec 16, 2015 at 06:56:02PM +0300, Vsevolod Petrov wrote: > > > Hello, > > > > proxy_bind directive allows to specify source IP address for proxied > > connections. > > This directive can be set to local IP address. > > > > I'm wondering if there's a way to set $remote_addr as proxy_bind address? > > Or any other non-local IP address? > > > > The idea is to see original client source IP address at the server site. > > While it's not http traffic I cannot use XFF header. > > > > Destination MAC address in the response packet from the server is set to > > nginx server interface address. So, there's no problem at layer 2
> > > > Can nginx listen for responses coming to non-local destination address? > > In theory this is possible with appropriate OS-level support, and > as long as you are able to route packets properly. In particular, > this should be possible on OpenBSD using SO_BINDANY, on FreeBSD > using IP_BINDANY, and on Linux using IP_TRANSPARENT/IP_FREEBIND. > > An erlier attempt to make it work on nginx can be found here > (OpenBSD-specific patch): > > http://mailman.nginx.org/pipermail/nginx-devel/2010-October/000533.html > > As far as I understand, doing proper support should be mostly > trivial now with variables support in proxy_bind. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Thu Dec 17 15:44:55 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 17 Dec 2015 16:44:55 +0100 Subject: using variables in certificate path names In-Reply-To: <566FE4EB.3090705@nginx.com> References: <7B29E79534A00243AF7D95A3D81DC84AE78E9430@corp-exch03.corp.ism.nl> <1815750.3OYXC67gMp@vbart-workstation> <566FE4EB.3090705@nginx.com> Message-ID: <134c4378eba8ae999eec5f3fdbb5a8f6@none.at> Am 15-12-2015 11:01, schrieb Maxim Konovalov: > On 12/15/15 12:53 PM, Valentin V. Bartenev wrote: >> On Tuesday 15 December 2015 09:44:45 Aleksey Portnov wrote: >>> Hello! >>> >>> Is it possible and correct something like: >>> >>> server { >>> listen 1.1.1.1:443 ssl; >>> >>> server_name sitename.de sitename.fr sitename.nl; >>> root /var/www/vhosts/Live/public_html; >>> >>> ssl_certificate /etc/ssl/web/$host.pem; >>> ssl_certificate_key /etc/ssl/web/$host.key; [snipp] >> Currently it's not possible. Certificates and keys >> are loaded while reading configuration. >> > .. and we are working on a similar feature. 
Due to the fact that I'm not sure if it's possible I ask ;-) Is it possible to load the certificates from $ENV{'CERT_PATH'}? Sorry if I missed it in the doc. BR aleks From nginx-forum at nginx.us Thu Dec 17 23:36:35 2015 From: nginx-forum at nginx.us (pumbac) Date: Thu, 17 Dec 2015 18:36:35 -0500 Subject: POST request body manipulation In-Reply-To: <2020210.I8nkHRLrYT@vbart-workstation> References: <2020210.I8nkHRLrYT@vbart-workstation> Message-ID: in my case, I can log the $request_body in the access_log via proxy_pass, even when uploading files more then 1MB. I just wanted to limit the size of the $request_body in the log. Here is my nginx.conf: http { include mime.types; default_type application/octet-stream; map $request_body $request_body_short { "~^(?.*filename.*)Content-Type" $SHORT; default $request_body; } log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" $request_time ' '"$request_body_short" $http_uid "$http_build" "$http_appversion" "$uid_got" "$host"'; access_log logs/access.log main; .... } if I changed the map default value to something like 'wrong', the logged $request_body would be 'wrong'. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,258335,263535#msg-263535 From lists.md at gmail.com Fri Dec 18 17:55:47 2015 From: lists.md at gmail.com (Marcelo MD) Date: Fri, 18 Dec 2015 15:55:47 -0200 Subject: High number connections-writing stuck Message-ID: Hi, Recently we added a 'thread_pool' directive to our main configuration. A few hours later we saw a huge increase in the connections_writing stat as reported by stub_status module. This number reached +- 3800 and is stuck there since. The server in question is operating normally, but this is very strange. Any hints on what this could be? 
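For context, a 'thread_pool' setup of the kind described above is normally declared in the main context and referenced from a location via "aio threads". A sketch only; the pool name and sizes here are invented for illustration and are not taken from the affected servers:

```nginx
# main context, outside the http {} block
thread_pool io_pool threads=32 max_queue=65536;

http {
    server {
        listen 8080;

        location /downloads/ {
            # offload blocking disk reads to the named pool
            # (requires nginx built --with-threads, available since 1.7.11)
            aio threads=io_pool;
        }
    }
}
```

With "aio threads" enabled, requests waiting on a thread from the pool can show up differently in stub_status than plain blocking reads would, which is worth keeping in mind when comparing the two servers' counters.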
Some info: - Here is a graph of the stats reported, for a server with thread_pool and another without: http://imgur.com/a/lF2EL - I don`t have older data anymore, but the jump from <100 to +- 3800 connections_writing happened in two sharp jumps. The first one following a reload; - The machines' hardware and software are identical except for the thread_pool directive in their nginx.conf. They live in two different data centers; - Both machines are performing normally. Nothing unusual in CPU or RAM usage. Nginx performance is about the same. - Reloading Nginx with 'nginx -s reload' does nothing. Restarting the process brings connections_writing down. Debug stuff: mallmann# uname -a Linux xxx 3.8.13-98.5.2.el6uek.x86_64 #2 SMP Tue Nov 3 18:32:04 PST 2015 x86_64 x86_64 x86_64 GNU/Linux mallmann# nginx -V nginx version: nginx/1.8.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-ipv6 --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-mail_ssl_module --with-pcre 
--with-google_perftools_module --add-module=/builddir/build/BUILD/nginx-1.8.0/headers-more-nginx-module-0.25 --add-module=/builddir/build/BUILD/nginx-1.8.0/ngx_http_bytes_filter_module --add-module=/builddir/build/BUILD/nginx-1.8.0/echo-nginx-module-0.55 --with-threads --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E' Affected server: mallmann# lsof -n -u nginx | awk '{print $5}' | sort | uniq -c | sort -nr 4172 REG 2140 IPv4 100 unix 30 CHR 20 DIR 20 0000 3 sock 1 TYPE mallmann# curl http://127.0.0.1/status Active connections: 5924 server accepts handled requests 5864099 5864099 15527178 Reading: 0 Writing: 3883 Waiting: 2040 Normal server: mallmann# lsof -n -u nginx | awk '{print $5}' | sort | uniq -c | sort -nr 4454 REG 1967 IPv4 100 unix 30 CHR 20 DIR 20 0000 1 unknown 1 TYPE 1 sock mallmann# curl http://127.0.0.1/status Active connections: 2096 server accepts handled requests 1136132 1136132 3464904 Reading: 0 Writing: 107 Waiting: 1989 -- Marcelo Mallmann Dias -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Dec 18 19:08:49 2015 From: nginx-forum at nginx.us (Cugar15) Date: Fri, 18 Dec 2015 14:08:49 -0500 Subject: smtp proxy with postfix Message-ID: <9bb0ef121aed72198de0f37e2bc2ed7c.NginxMailingListEnglish@forum.nginx.org> Hello, I'd like to build a smtp Proxy with nginx (v1.8.0) and postfix (v2.9.6) on Debian7. Somehow I'm stuck with the following problem: 1) Configuration1: smtp_auth login plain cram-md5; xclient on; ==> a) My mail client can authenticate (IP:yy.yyy.yy.yy), send email and receive email (imap) - even with tls mail.log: connect from nginx_prox.de[xx.xxx.xx.xx] client=unknown[yy.yyy.yy.yy], sasl_method=XCLIENT, sasl_username=my_username b) But no emails from others are received - obviously everybody has to authenticate!! 
2) Configuration2: smtp_auth none; xclient on; ==> creates an open relay! In Postfix, I have set: smtpd_authorized_xclient_hosts = xx.xxx.xx.xx What I'd like to achieve is the current postfix behaviour: 1) Receive emails from every Sender 2) Only authorized users can send emails from outside the Network Help is appreciated... I found bits and pieces in the Forum and other places - but nothing seems to be consistent... Thanks, Norbert Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263559,263559#msg-263559 From info at elsitar.com Sat Dec 19 16:11:51 2015 From: info at elsitar.com (Xavier Cardil Coll) Date: Sat, 19 Dec 2015 17:11:51 +0100 Subject: How to fix nginx server returning 302 redirect instead of 200 Message-ID: On this setup, there is a server directive listening on port 80 that redirects www to non-www and port 80 to 443. The second server is Nginx as SSL terminator, so it's an SSL virtual host that proxies the request to Varnish, and the last server is the last host on the chain, that processes and serves back the requests. Now, when I bypass the chain and do a `curl -v 127.0.0.1:8081` ( this is the backend vhost, the last in the chain ) I get a 302 redirect instead of a 200. This is causing problems on my CMS and also with Varnish communicating to the backend. This is the curl response : * Rebuilt URL to: 127.0.0.1:8081/ * Hostname was NOT found in DNS cache * Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.38.0 > Host: 127.0.0.1:8081 > Accept: */* > < HTTP/1.1 302 Found * Server nginx/1.9.9 is not blacklisted < Server: nginx/1.9.9 < Date: Sat, 19 Dec 2015 16:04:14 GMT < Content-Type: text/html < Transfer-Encoding: chunked < Connection: keep-alive < X-Powered-By: HHVM/3.11.0 < Vary: Accept-Encoding < Location: https://domain.com < * Connection #0 to host 127.0.0.1 left intact And this is my nginx configuration : server { listen 80; server_name www.domain.com; return 301 $scheme://domain.com$request_uri; if ($scheme = http) { return 301 https://$server_name$request_uri; } } server { listen 443 default_server ssl http2; server_name domain.com; access_log off; ssl_certificate /etc/ssl/private/cert_chain.crt; ssl_certificate_key /etc/ssl/private/server.key; if ($allow = no) { return 403; } if ($bad_referer) { return 444; } location / { proxy_pass http://127.0.0.1:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Forwarded-Port 443; proxy_set_header Host $host; proxy_redirect off; proxy_set_header HTTPS "on"; } } server { listen 127.0.0.1:8081; root /var/www/domain.com/wordpress; index index.php index.html index.htm; server_name domain.com; error_log /var/log/nginx/upstream.log info; if ($allow = no) { return 403; } if ($bad_referer) { return 444; } location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf|css|js)$ { add_header Cache-Control "public, max-age=600"; add_header Access-Control-Allow-Headers "X-Requested-With"; add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS"; add_header Access-Control-Allow-Origin "*"; access_log off; } client_body_buffer_size 124K; client_header_buffer_size 1k; client_max_body_size 100m; large_client_header_buffers 4 16k; error_page 404 
/404.html; gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types application/json application/x-javascript application/xml text/javascript text/plain text/css application/javascript text/xml application/xml+rss; try_files $uri $uri/ /index.php?$args; # Rewrites for Yoast SEO XML Sitemap rewrite ^/sitemap_index.xml$ /index.php?sitemap=1 last; rewrite ^/([^/]+?)-sitemap([0-9]+)?.xml$ /index.php?sitemap=$1&sitemap_n=$2 last; include hhvm.conf; # include domain.com-ps.conf; # include multisite.conf; rewrite /wp-admin$ $scheme://$server_name$uri/ permanent; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } if ($bad_client) { return 403; } #location / { #try_files $uri $uri/ /index.php?$args; #} } -- ELSITAR -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Dec 19 16:44:03 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 19 Dec 2015 17:44:03 +0100 Subject: smtp proxy with postfix In-Reply-To: <9bb0ef121aed72198de0f37e2bc2ed7c.NginxMailingListEnglish@forum.nginx.org> References: <9bb0ef121aed72198de0f37e2bc2ed7c.NginxMailingListEnglish@forum.nginx.org> Message-ID: smtp_auth set + xclient on smtp_auth none + xclient off ? --- *B. R.* On Fri, Dec 18, 2015 at 8:08 PM, Cugar15 wrote: > Hello, > > I'd like to build a smtp Proxy with nginx (v1.8.0) and postfix (v2.9.6) > on > Debian7. > > Somehow I'm stuck with the following problem: > > 1) Configuration1: > smtp_auth login plain cram-md5; > xclient on; > > ==> a) My mail client can authenticate (IP:yy.yyy.yy.yy), send email > and > receive email (imap) - even with tls > mail.log: > connect from nginx_prox.de[xx.xxx.xx.xx] > client=unknown[yy.yyy.yy.yy], sasl_method=XCLIENT, > sasl_username=my_username > > b) But no emails from others are received - obviously everybody > has to authenticate!! 
> > 2) Configuration2: > smtp_auth none; > xclient on; > > ==> creates an open relay! > > In Postfix, I have set: smtpd_authorized_xclient_hosts = xx.xxx.xx.xx > > What I'd like to achive is the current postfix behaviour: > 1) Receive emails from every Sender > 2) Only authorized users can send emails from outside the Network > > Help is appreciated... I found bits and pieces in the Forum and other > places > - but nothing seems to be consistent... > > Thanks, > Norbert > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263559,263559#msg-263559 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Sat Dec 19 17:06:55 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Sat, 19 Dec 2015 18:06:55 +0100 Subject: How to fix nginx server returning 302 redirect instead of 200 In-Reply-To: References: Message-ID: This doesn't seem to be an nginx issue. The presence of the "X-Powered-By: HHVM/3.11.0" in your response means your backend is the one issuing the 302 redirect, so you should investigate that instead of nginx. On Sat, Dec 19, 2015 at 5:11 PM, Xavier Cardil Coll wrote: > On this setup, there is a server directive listening to port 80 that > returns www to non www and returns 80 to 443. > > The second server is Nginx as SSL terminator, so it's an SSL virtual host, > that proxies the request to Varnish, and the last server is the last host > on the chain, that processes and serves back the requests. > > Now, when I bypass the chain and do a `curl -v 127.0.0.1:8081` ( this is > the backend vhost, the last in the chain ) I get a 302 redirect instead a > 200. This is causing problems on my CMS and also with Varnish communicating > to the backend. 
> > This is the curl response : > > * Rebuilt URL to: 127.0.0.1:8081/ > * Hostname was NOT found in DNS cache > * Trying 127.0.0.1... > * Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0) > > GET / HTTP/1.1 > > User-Agent: curl/7.38.0 > > Host: 127.0.0.1:8081 > > Accept: */* > > > < HTTP/1.1 302 Found > * Server nginx/1.9.9 is not blacklisted > < Server: nginx/1.9.9 > < Date: Sat, 19 Dec 2015 16:04:14 GMT > < Content-Type: text/html > < Transfer-Encoding: chunked > < Connection: keep-alive > < X-Powered-By: HHVM/3.11.0 > < Vary: Accept-Encoding > < Location: https://domain.com > < > * Connection #0 to host 127.0.0.1 left intact > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Dec 19 17:08:19 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 19 Dec 2015 18:08:19 +0100 Subject: High number connections-writing stuck In-Reply-To: References: Message-ID: From what you provided, I do not see anything unexpected... Enabling threads usage with thread_pool and aio threads enables asynchronous file reading/writing. The connections_writing stat indicates... the number of connections to which the server is writing a response. Since you are working asynchronously, if the request takes much less time to process than the response takes to be sent, it is normal that this number increases, until reaching a balance point. You saw this number increasing when you reloaded the configuration -> shows the new configuration has been applied immediately. You saw this number drop on restart -> restarting kills the current processes and the attached connections. Nothing to see on CPU & RAM -> they are probably not the bottleneck, and one nginx strength is to have a minimal footprint there, compared to other Web server alternatives. Have a look at I/O wait to check if the worker processes might be having a hard time getting content they seek from the disk.
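That I/O-wait check can be sketched roughly like this. This is a self-contained illustration using fabricated `/proc/stat` samples (the numbers are made up); on a real Linux host you would take two live readings a second apart. Field 6 of the `cpu` line is cumulative iowait ticks:

```shell
# Fabricated /proc/stat "cpu" samples, as if taken one second apart.
# Real usage: sample1=$(grep '^cpu ' /proc/stat); sleep 1; sample2=$(grep '^cpu ' /proc/stat)
sample1="cpu 1000 0 500 8000 300 0 50 0 0 0"
sample2="cpu 1010 0 505 8060 330 0 51 0 0 0"

# Column 6 is cumulative iowait ticks; columns 2-8 cover user..softirq.
printf '%s\n%s\n' "$sample1" "$sample2" | awk '
    NR == 1 { io1 = $6; for (i = 2; i <= 8; i++) t1 += $i }
    NR == 2 { io2 = $6; for (i = 2; i <= 8; i++) t2 += $i }
    END     { printf "iowait share: %.1f%%\n", 100 * (io2 - io1) / (t2 - t1) }'
# prints: iowait share: 28.3%
```

A persistently high share here would support the theory that workers are waiting on the disk rather than on clients.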
Less often, it might also be congestion in backend communication if content is being read from some. To me, it is just a matter of the ratio of time to process a request to time to serve the answer. Multithreading means a worker might be answering several connections at the same time. If I am right about nginx' design, events might be processed in any order, but each worker still only does one thing at a time and only switches tasks when the current one is finished/waiting on something else. Multithreading changes the whole behavior. Hope to help, --- *B. R.* On Fri, Dec 18, 2015 at 6:55 PM, Marcelo MD wrote: > Hi, > > Recently we added a 'thread_pool' directive to our main configuration. A > few hours later we saw a huge increase in the connections_writing stat as > reported by stub_status module. This number reached +- 3800 and is stuck > there since. The server in question is operating normally, but this is very > strange. > > Any hints on what this could be? > > > Some info: > > - Here is a graph of the stats reported, for a server with thread_pool and > another without: http://imgur.com/a/lF2EL > > - I don`t have older data anymore, but the jump from <100 to +- 3800 > connections_writing happened in two sharp jumps. The first one following a > reload; > > - The machines' hardware and software are identical except for the > thread_pool directive in their nginx.conf. They live in two different data > centers; > > - Both machines are performing normally. Nothing unusual in CPU or RAM > usage. Nginx performance is about the same. > > - Reloading Nginx with 'nginx -s reload' does nothing. Restarting the > process brings connections_writing down.
> > > Debug stuff: > > mallmann# uname -a > Linux xxx 3.8.13-98.5.2.el6uek.x86_64 #2 SMP Tue Nov 3 18:32:04 PST 2015 > x86_64 x86_64 x86_64 GNU/Linux > > mallmann# nginx -V > nginx version: nginx/1.8.0 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) > built with OpenSSL 1.0.1e-fips 11 Feb 2013 > TLS SNI support enabled > configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-ipv6 > --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_xslt_module > --with-http_image_filter_module --with-http_geoip_module > --with-http_sub_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_degradation_module --with-http_stub_status_module > --with-http_perl_module --with-mail --with-mail_ssl_module --with-pcre > --with-google_perftools_module > --add-module=/builddir/build/BUILD/nginx-1.8.0/headers-more-nginx-module-0.25 > --add-module=/builddir/build/BUILD/nginx-1.8.0/ngx_http_bytes_filter_module > --add-module=/builddir/build/BUILD/nginx-1.8.0/echo-nginx-module-0.55 > --with-threads --with-debug --with-cc-opt='-O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E' > > > Affected server: > > mallmann# lsof -n -u nginx | awk '{print $5}' | sort | uniq -c | sort -nr > 4172 REG > 2140 IPv4 > 100 unix > 30 CHR > 20 
DIR > 20 0000 > 3 sock > 1 TYPE > mallmann# curl http://127.0.0.1/status > Active connections: 5924 > server accepts handled requests > 5864099 5864099 15527178 > Reading: 0 Writing: 3883 Waiting: 2040 > > > Normal server: > > mallmann# lsof -n -u nginx | awk '{print $5}' | sort | uniq -c | sort -nr > 4454 REG > 1967 IPv4 > 100 unix > 30 CHR > 20 DIR > 20 0000 > 1 unknown > 1 TYPE > 1 sock > mallmann# curl http://127.0.0.1/status > Active connections: 2096 > server accepts handled requests > 1136132 1136132 3464904 > Reading: 0 Writing: 107 Waiting: 1989 > > -- > Marcelo Mallmann Dias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gglater62 at gmail.com Sat Dec 19 18:21:21 2015 From: gglater62 at gmail.com (gg) Date: Sat, 19 Dec 2015 19:21:21 +0100 Subject: using variables in certificate path names In-Reply-To: <566FE4EB.3090705@nginx.com> References: <566FE4EB.3090705@nginx.com> Message-ID: <20151219182121.GA5887@t41> On Tue, Dec 15, 2015 at 01:01:15PM +0300, Maxim Konovalov wrote: > On 12/15/15 12:53 PM, Valentin V. Bartenev wrote: > > On Tuesday 15 December 2015 09:44:45 Aleksey Portnov wrote: > >> Hello! > >> > >> Is it possible and correct something like: > >> > >> server { > >> listen 1.1.1.1:443 ssl; > >> > >> server_name sitename.de sitename.fr sitename.nl; > >> root /var/www/vhosts/Live/public_html; > >> > >> ssl_certificate /etc/ssl/web/$host.pem; > >> ssl_certificate_key /etc/ssl/web/$host.key; > >> > >> ... > >> #commont part for all sites > >> ... > >> } > >> > > > > Currently it's not possible. Certificates and keys > > are loaded while reading configuration. > > > .. and we are working on a similar feature. I have similar problem. There is: server { listen 1.1.1.1 listen 1.1.1.1:443 server_name _ ; .... } and many locations there. 
Number of different hostnames might be thousands. Some of them, (hundreds) might have certificates. How to serve them with nginx. Generate separate server block {} for every certificate is not a solution. From reallfqq-nginx at yahoo.fr Sat Dec 19 21:02:43 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 19 Dec 2015 22:02:43 +0100 Subject: using variables in certificate path names In-Reply-To: <20151219182121.GA5887@t41> References: <566FE4EB.3090705@nginx.com> <20151219182121.GA5887@t41> Message-ID: Generating a separate server block for every certificate is the solution. Since you are probably not managing the configuration by hand at this scale, use your favourite configuration management tool with a well-cooked template to generate nginx' configuration. --- *B. R.* On Sat, Dec 19, 2015 at 7:21 PM, gg wrote: > On Tue, Dec 15, 2015 at 01:01:15PM +0300, Maxim Konovalov wrote: > > On 12/15/15 12:53 PM, Valentin V. Bartenev wrote: > > > On Tuesday 15 December 2015 09:44:45 Aleksey Portnov wrote: > > >> Hello! > > >> > > >> Is it possible and correct something like: > > >> > > >> server { > > >> listen 1.1.1.1:443 ssl; > > >> > > >> server_name sitename.de sitename.fr sitename.nl; > > >> root /var/www/vhosts/Live/public_html; > > >> > > >> ssl_certificate /etc/ssl/web/$host.pem; > > >> ssl_certificate_key /etc/ssl/web/$host.key; > > >> > > >> ... > > >> #commont part for all sites > > >> ... > > >> } > > >> > > > > > > Currently it's not possible. Certificates and keys > > > are loaded while reading configuration. > > > > > .. and we are working on a similar feature. > > I have similar problem. > There is: > server { > listen 1.1.1.1 > listen 1.1.1.1:443 > server_name _ ; > .... > } > > and many locations there. Number of different hostnames might be thousands. > Some of them, (hundreds) might have certificates. How to serve them with > nginx. > Generate separate server block {} for every certificate is not a solution. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From garbytrash at gmail.com Sun Dec 20 09:44:03 2015 From: garbytrash at gmail.com (Zenny) Date: Sun, 20 Dec 2015 10:44:03 +0100 Subject: mistserver features in nginx Message-ID: Hi, Just reading the following two documents and the first link refers to some of the nginx features. https://mistserver.org/comparison https://mistserver.org/guides/ProFeatures_2.4.pdf But there is stuff which I think is incorrect, like the recording option which the nginx_rtmp module already has (https://github.com/arut/nginx-rtmp-module/wiki/Directives#record) but the first document states it does not have! Can you add more to this thread which the first document claims "does not have in nginx", but otherwise? Thanks. Happy Holidays, /z From vbart at nginx.com Sun Dec 20 12:46:39 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 20 Dec 2015 15:46:39 +0300 Subject: mistserver features in nginx In-Reply-To: References: Message-ID: <4536183.qiJKzNUTAD@vbart-laptop> On Sunday 20 December 2015 10:44:03 Zenny wrote: > Hi, > > Just reading the following two documents and the first link refers to > some of the nginx features. > > https://mistserver.org/comparison > https://mistserver.org/guides/ProFeatures_2.4.pdf > > But there is stuff which I think is incorrect, like the recording option > which the nginx_rtmp module already has > (https://github.com/arut/nginx-rtmp-module/wiki/Directives#record) but > the first document states it does not have! > > Can you add more to this thread which the first document claims "does > not have in nginx", but otherwise? > [..] It's worth adding nginx plus to the comparison, since it has enhancements in this area.
See: http://nginx.org/en/docs/http/ngx_http_f4f_module.html http://nginx.org/en/docs/http/ngx_http_hls_module.html http://nginx.org/en/docs/http/ngx_http_mp4_module.html#mp4_limit_rate There is also nginx plus package with the RTMP module: https://www.nginx.com/products/technical-specs/ (check the "NGINX Plus Extras Package" at the bottom of the page) wbr, Valentin V. Bartenev From vbart at nginx.com Sun Dec 20 13:04:57 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 20 Dec 2015 16:04:57 +0300 Subject: High number connections-writing stuck In-Reply-To: References: Message-ID: <3722539.PurOuhtCdi@vbart-laptop> On Friday 18 December 2015 15:55:47 Marcelo MD wrote: > Hi, > > Recently we added a 'thread_pool' directive to our main configuration. A > few hours later we saw a huge increase in the connections_writing stat as > reported by stub_status module. This number reached +- 3800 and is stuck > there since. The server in question is operating normally, but this is very > strange. > > Any hints on what this could be? > > > Some info: > > - Here is a graph of the stats reported, for a server with thread_pool and > another without: http://imgur.com/a/lF2EL > > - I don`t have older data anymore, but the jump from <100 to +- 3800 > connections_writing happened in two sharp jumps. The first one following a > reload; > > - The machines' hardware and software are identical except for the > thread_pool directive in their nginx.conf. They live in two different data > centers; > > - Both machines are performing normally. Nothing unusual in CPU or RAM > usage. Nginx performance is about the same. > > - Reloading Nginx with 'nginx -s reload' does nothing. Restarting the > process brings connections_writing down. [..] As I understand from your message everything works well. You should also check the error_log, if it doesn't have anything suspicious then there is nothing to worry about. 
The increased number of writing connections can be explained by increased concurrency. Now nginx processing cycle doesn't block on disk and can accept more connections at the same time. All the connections that were waiting in listen socket before are waiting now in thread pool. wbr, Valentin V. Bartenev From garbytrash at gmail.com Sun Dec 20 17:08:08 2015 From: garbytrash at gmail.com (Zenny) Date: Sun, 20 Dec 2015 18:08:08 +0100 Subject: mistserver features in nginx In-Reply-To: <4536183.qiJKzNUTAD@vbart-laptop> References: <4536183.qiJKzNUTAD@vbart-laptop> Message-ID: On 12/20/15, Valentin V. Bartenev wrote: > On Sunday 20 December 2015 10:44:03 Zenny wrote: >> Hi, >> >> Just reading the following two documents and the first link refers to >> the some of the nginx features. >> >> https://mistserver.org/comparison >> https://mistserver.org/guides/ProFeatures_2.4.pdf >> >> But there are stuffs which I think are incorrect like recording option >> which nginx_rtmp module already has >> (https://github.com/arut/nginx-rtmp-module/wiki/Directives#record) but >> the first document states it does not have! >> >> Can you add more to this thread which the first document claims "does >> not have in nginx", but otherwise? >> > [..] > > It's worth to add nginx plus to the comparison, since it has enchantments > in this area. > > See: > > http://nginx.org/en/docs/http/ngx_http_f4f_module.html > http://nginx.org/en/docs/http/ngx_http_hls_module.html > http://nginx.org/en/docs/http/ngx_http_mp4_module.html#mp4_limit_rate Thanks Valentin. It would be nicer if you can specify which feature of the mistserver (in the mistserver comparison list) are replaced by above. Expecting more inputs flowing in! > > There is also nginx plus package with the RTMP module: > https://www.nginx.com/products/technical-specs/ (check the "NGINX Plus > Extras Package" at the bottom of the page) Binaries are not something people may be interested in this list, I guess. 
However, free flow of information is always good. ;-) Spacibo! > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From adam at jooadam.hu Sun Dec 20 23:17:56 2015 From: adam at jooadam.hu (Joó Ádám) Date: Mon, 21 Dec 2015 00:17:56 +0100 Subject: Nginx Ubuntu package policy Message-ID: Hi, Is there any document specifying Nginx's policy on the pre-built Ubuntu packages? I'd like to know how it is decided which Ubuntu releases are supported, from which point, for how long, with which Nginx releases (features/security), and if one can rely on packages once published remaining available. Thanks, Ádám -------------- next part -------------- An HTML attachment was scrubbed... URL: From thresh at nginx.com Mon Dec 21 08:49:09 2015 From: thresh at nginx.com (Konstantin Pavlov) Date: Mon, 21 Dec 2015 11:49:09 +0300 Subject: Nginx Ubuntu package policy In-Reply-To: References: Message-ID: <5677BD05.3000709@nginx.com> Hi Ádám, On 21/12/2015 02:17, Joó Ádám wrote: > Hi, > > Is there any document specifying Nginx's policy on the pre-built Ubuntu > packages? I'd like to know how it is decided which Ubuntu releases are > supported, from which point, for how long, with which Nginx releases > (features/security), and if one can rely on packages once published > remaining available. The packages are built for Ubuntu releases currently supported by Canonical. Once a new distribution is out, the packages are built for it, both stable and mainline. Once an old distribution is EOLed, the new packages are no longer built for it but the old ones still remain in the repo. The list of the supported distros is available on http://nginx.org/en/linux_packages.html.
-- Konstantin Pavlov From nginx-forum at nginx.us Mon Dec 21 14:39:05 2015 From: nginx-forum at nginx.us (vedranf) Date: Mon, 21 Dec 2015 09:39:05 -0500 Subject: Range + If-Range requests not working with proxy cache hits Message-ID: <9d3e145268a200c1c58c739d85e6469e.NginxMailingListEnglish@forum.nginx.org> Hello, I'm using proxy_cache module and I noticed nginx replies with whole response and 200 OK status on requests such as this and for content that is already in cache: User-Agent: curl/7.26.0 Accept: */* Range:bytes=128648358-507448924 If-Range: Thu, 26 Nov 2015 13:48:46 GMT However, If I remove the "If-Range" request header, I get the correct content range in return. I enabled debug logging and was focused on this part of the code: 0207 if_range_time = ngx_parse_http_time(if_range->data, if_range->len); 0208 0209 ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, 0210 "http ir:%d lm:%d", 0211 if_range_time, r->headers_out.last_modified_time); 0212 0213 if (if_range_time != r->headers_out.last_modified_time) { 0214 goto next_filter; 0215 } which I found to output: 2015/12/21 12:55:18 [debug] 13727#13727: *716934 http ir:1448545726 lm:0 2015/12/21 12:55:18 [debug] 13727#13727: *716934 posix_memalign: 0000000004F32F50:4096 @16 2015/12/21 12:55:18 [debug] 13727#13727: *716934 HTTP/1.1 200 OK Server: nginx Date: Mon, 21 Dec 2015 12:55:18 GMT Content-Type: video/mp4 Content-Length: 507448925 Connection: keep-alive X-Powered-By: PHP/5.3.28 Cache-Control: max-age=10368000, public Last-Modified: Thu, 26 Nov 2015 13:48:46 GMT ... As well as: $ head -20 /home/disk2/cache/e2/9a/611e43135479fdcc9e8eb5e507349ae2|grep -a Last Last-Modified: Thu, 26 Nov 2015 13:48:46 GMT Given the info above, I'm confused about the "lm:0". How come is it 0 when in both cached file header and in the final response correctly set Last-Modified value. Am I missing something, do you know what might be causing this? 
Nginx is 1.8, proxy_force_ranges is unset and proxy_set_header doesn't contain any relevant values. It doesn't seem to be a big issue as adding Etag appears to help, but just wondering if you have any idea? Thanks, Vedran Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263611,263611#msg-263611 From adam at jooadam.hu Mon Dec 21 23:06:00 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Tue, 22 Dec 2015 00:06:00 +0100 Subject: Nginx Ubuntu package policy In-Reply-To: <5677BD05.3000709@nginx.com> References: <5677BD05.3000709@nginx.com> Message-ID: Thanks for the answer, Konstantin! ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Dec 22 00:29:37 2015 From: nginx-forum at nginx.us (naq90) Date: Mon, 21 Dec 2015 19:29:37 -0500 Subject: Setting up Reverse Proxy with Namecheap Message-ID: I'm running a site on my local network which I would like to access via a reverse proxy to a domain I purchased on namecheap. I'm using nginx on my freenas machine. The site is running locally at 192.168.1.7:8080. I do not know if my conf is correct or how to set up namecheap so that I can go to mysite.mydomain.com or mydomain.com/mysite. Any help would be greatly appreciated. Below is the conf I made. 
worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name www.mydomain.com; location /mysite { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://192.168.1.7:8080; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263618,263618#msg-263618 From dewanggaba at xtremenitro.org Tue Dec 22 02:53:09 2015 From: dewanggaba at xtremenitro.org (Dewangga Alam) Date: Tue, 22 Dec 2015 09:53:09 +0700 Subject: is Nginx Plus R7 support ALPN? Message-ID: <5678BB15.3000708@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! I've tried Nginx Plus R7, but it come with openssl 1.0.1e which is doesn't support ALPN extension, is there any info to get ALPN extension? My box using CentOS 7 and using custom openssl 1.0.2d. $ openssl version OpenSSL 1.0.2d-fips 9 Jul 2015 Any info and hints are appreciated. 
Many thanks and happy holiday :) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJWeLsVAAoJEOV/0iCgKM1wLcoP/AronQBvSMLYoONrMuaOw31x CD8tm27kzwzhJbDrWb4sq5uQtYcON/DXc3M4k7fsF5u3WXENkeqnqc/s/0Os5iOl 5p5+NFoJ97T+uA7U1W3mWfIGQejDZ9RQFvdT1MQ9U7aVcXukBjpb2sx2JgeRgiWZ n4lHXeNnVbLzKplzjeIZliDGVzo1eaGFFx1LAMZ4wuA5B7DnwE27OdRJa3oZk5VC g8gpZK18V77Eoimdf8JWWJ+1Zin+GxdgNae6du7Hv0Mxn5nygIUWVDYjcTYqPlFw oIbyWzAQOFs/A5BN8tzOC+wfXcSxB75xvImJIl9Oasek6/pCQdRyNHSRFGK+jK5u Gza+ac/5cfVSAaG5/NRhIx4NUgyRUU0GwbjqWNxJ5qeeq75ZbQK1XLZQ5ZTKrQ1m sDzoBcJW9fzTa+XC1weuejOLuTNYX4SO2QK01tzEmDoH98W4KHWpHPFV6xKMES+K uR3mrCM0ViPNWAzImc4NDxUtEBi4J20ZL/G0D7YQmDeL8TSQcUN/awK43MqGPhrS sLRYq2YAU9zDJg/N4YPhMgrSSnB92YRVO49fJfMcN7hmzviBZ1BkjRz7HLY2CNL7 2f4jdcTqFzEExaiEiaz8MSBnttThJRILR9VLhsQaRc7XuYu4CoocQUcz9nd0jYk5 hC/8ViBCLQAmEeGcpZwe =NNh/ -----END PGP SIGNATURE----- From ruz at sports.ru Tue Dec 22 11:43:11 2015 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Tue, 22 Dec 2015 14:43:11 +0300 Subject: Cache doesn't expire as should Message-ID: Hi, I stuck in analyzing this and can not repeat in a test environment. We see '/' location "freeze" from time to time as nginx stops updating cache. This lasts for 30 minutes - an hour. Regular requests flow: HIT ... HIT UPDATING UPDATING EXPIRED (every 30 seconds) HIT ... So under normal conditions a page is served from the cache, cache record expires, next request goes to upstream locking the cache record, requests still served from cache with UPDATING cache status while UPDATING request is working, on return from the upstream cache is updated, unlocked and EXPIRED cache status show up in the log. Back to step one - serving from cache. Abnormal flow we see under nginx 1.6.x HIT ... HIT UPDATING STALE UPDATING UPDATING STALE UPDATING STALE STALE .... continues for a while ... HIT We first noticed this when attempted upgrade to 1.8.0 where situation was much worse: HIT ... HIT UPDATING UPDATING UPDATING UPDATING .... 
continues until the cache is cleared ...

We downgraded nginx back to 1.6, and after some time realized that this situation also happens on 1.6, but the page doesn't freeze forever. I'm not saying it only happens with '/'; it's just the busiest page, and we probably don't notice other pages. On 1.8 this also happened on top-level sections like /football/. I feel it's related to the amount of pressure on the page.

Our error log has "ignore long locked inactive cache entry" alerts, but I really couldn't match them to the "defreeze" event. The access log has STALE/UPDATING requests between the alert and the EXPIRED (cache updating) request.

Any help on hunting it down would be appreciated.

-- 
Ruslan Zakirov
Web services development team lead
+7(916) 597-92-69, ruz @

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Dec 22 12:42:42 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 22 Dec 2015 07:42:42 -0500
Subject: Cache doesn't expire as should
In-Reply-To:
References:
Message-ID: <094b5657c672fd69b265d88b30a33cf2.NginxMailingListEnglish@forum.nginx.org>

Your (pastebin) config? Cache situation? (hard disks, cache configuration, etc.)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263625,263626#msg-263626

From ruz at sports.ru Tue Dec 22 13:06:39 2015
From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=)
Date: Tue, 22 Dec 2015 16:06:39 +0300
Subject: Cache doesn't expire as should
In-Reply-To: <094b5657c672fd69b265d88b30a33cf2.NginxMailingListEnglish@forum.nginx.org>
References: <094b5657c672fd69b265d88b30a33cf2.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On Tue, Dec 22, 2015 at 3:42 PM, itpp2012 wrote:
> Your (pastebin) config?
> Cache situation? (hard disks, cache configuration, etc.)

https://gist.github.com/ruz/05456767750715f6b54e

I pasted the relevant configuration there and can add more on request; our complete config is 4k lines split into 200 files.
Hosts are FreeBSD 8.4-RELEASE-p4 with the cache on a memory disk with UFS. nginx/1.6.3

-- 
Ruslan Zakirov
Web services development team lead
+7(916) 597-92-69, ruz @

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Dec 22 13:39:36 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 22 Dec 2015 08:39:36 -0500
Subject: Cache doesn't expire as should
In-Reply-To:
References:
Message-ID: <090fdcc750671c6ac812c900126aa7d3.NginxMailingListEnglish@forum.nginx.org>

Have you tried without locking?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263625,263629#msg-263629

From mdounin at mdounin.ru Tue Dec 22 13:49:00 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 Dec 2015 16:49:00 +0300
Subject: Cache doesn't expire as should
In-Reply-To:
References:
Message-ID: <20151222134900.GM74233@mdounin.ru>

Hello!

On Tue, Dec 22, 2015 at 02:43:11PM +0300, Ruslan Zakirov wrote:

[...]

> Our error log has "ignore long locked inactive cache entry" alerts, but I
> really couldn't match it to "defreeze" event. Access log has STALE/UPDATING
> requests between the alert and EXPIRED (cache updating) request.

The "ignore long locked inactive cache entry" alerts indicate that a cache entry was locked by some request and wasn't unlocked for a long time. The alert is expected to appear if a cache node is locked for the cache inactive time (as set by proxy_cache_path inactive=, 10 minutes by default). The most likely reason is that a worker died or was killed while holding a lock on a cache node (i.e., while a request was waiting for a new response from a backend).

Trivial things to consider:

- check logs for segmentation faults;
- if you are using 3rd party modules / patches, try without them;
- make sure you don't kill worker processes yourself or via automation scripts (in particular, don't try to terminate old worker processes after a configuration reload).
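[Editor's note: for readers following the thread, the directives discussed above fit together roughly as follows. This is an illustrative sketch only — the zone name, paths, upstream address, and timings are invented, not taken from the poster's configuration.]

```nginx
# Sketch with a hypothetical cache zone; all values are illustrative.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=main:64m
                 inactive=10m max_size=1g;  # inactive= is the timer behind
                                            # "ignore long locked" alerts

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache main;
        proxy_cache_valid 200 30s;          # entry expires every 30 seconds
        proxy_cache_use_stale updating;     # serve stale (UPDATING) while one
                                            # request refreshes the entry
        proxy_cache_lock on;                # collapse concurrent misses on
                                            # entries not yet in the cache
    }
}
```

With such a configuration, the HIT / UPDATING / EXPIRED sequence described earlier in the thread is the expected steady-state pattern for a busy page.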
-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Tue Dec 22 16:01:19 2015
From: nginx-forum at nginx.us (snieuwen)
Date: Tue, 22 Dec 2015 11:01:19 -0500
Subject: serve precompressed files without also serving their uncompressed counterparts
Message-ID:

Hi,

Is it possible to serve precompressed files without serving their uncompressed counterparts?

For example: /var/www/ contains index.html.gz, but no index.html. How do I configure nginx to respond with index.html.gz when the client supports gzip, or let nginx decompress the file on the fly when the client does not support gzip?

Based on this Server Fault answer, http://serverfault.com/a/611757, I am currently using the following configuration:

location / {
    try_files $uri $uri/ @application;
    root /var/www;
    gzip_static on;
    gunzip on;
}

@application configures the application server. When I try to get the index.html page, nginx returns a 403 Forbidden error.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263641,263641#msg-263641

From vbart at nginx.com Tue Dec 22 16:05:18 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 22 Dec 2015 19:05:18 +0300
Subject: serve precompressed files without also serving their uncompressed counterparts
In-Reply-To:
References:
Message-ID: <6944632.amnOE2xKDj@vbart-workstation>

On Tuesday 22 December 2015 11:01:19 snieuwen wrote:
> Hi,
>
> Is it possible to serve precompressed files without serving their
> uncompressed counterparts?
>
> For example:
> /var/www/ contains index.html.gz, but no index.html. How do I configure
> nginx to respond with index.html.gz when the client supports gzip or let
> nginx decompress the file on the fly when the client does not support gzip?
>
> Based on this answer on stackoverflow http://serverfault.com/a/611757, I am
> currently using the following configuration:
>
> location / {
>     try_files $uri $uri/ @application;
>     root /var/www;
>     gzip_static on;
>     gunzip on;
> }
>
> @application configures the application server.
> When I try get the index.html page, nginx return a 403 forbidden error. > [..] gzip_static always; See the documentation: nginx.org/r/gzip_static wbr, Valentin V. Bartenev From vbart at nginx.com Tue Dec 22 16:15:53 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 22 Dec 2015 19:15:53 +0300 Subject: serve precompressed files without also serving their uncompressed counterparts In-Reply-To: <6944632.amnOE2xKDj@vbart-workstation> References: <6944632.amnOE2xKDj@vbart-workstation> Message-ID: <2836599.sPa4DWRgSt@vbart-workstation> On Tuesday 22 December 2015 19:05:18 Valentin V. Bartenev wrote: > On Tuesday 22 December 2015 11:01:19 snieuwen wrote: > > Hi, > > > > Is it possible to serve precompressed files without serving their > > uncompressed counterparts? > > > > For example: > > /var/www/ contains index.html.gz, but no index.html. How do I configure > > nginx to respond with index.html.gz when the client supports gzip or let > > nginx decompress the file on the fly when the client does not support gzip? > > > > Based on this answer on stackoverflow http://serverfault.com/a/611757, I am > > currently using the following configuration: > > > > location / { > > try_files $uri $uri/ @application; > > root /var/www; > > gzip_static on; > > gunzip on; > > } > > > > @application configures the application server. > > When I try get the index.html page, nginx return a 403 forbidden error. > > > [..] > > gzip_static always; > > See the documentation: nginx.org/r/gzip_static > [..] But your problem is caused by "try_files", since you have configured it to check "$uri" and "$uri/" instead of "$uri.gz". The configuration below should work for you: location / { root /var/www; gzip_static always; gunzip on; error_page 404 = @application; } wbr, Valentin V. 
Bartenev From mdounin at mdounin.ru Tue Dec 22 16:24:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Dec 2015 19:24:29 +0300 Subject: serve precompressed files without also serving their uncompressed counterparts In-Reply-To: <2836599.sPa4DWRgSt@vbart-workstation> References: <6944632.amnOE2xKDj@vbart-workstation> <2836599.sPa4DWRgSt@vbart-workstation> Message-ID: <20151222162429.GQ74233@mdounin.ru> Hello! On Tue, Dec 22, 2015 at 07:15:53PM +0300, Valentin V. Bartenev wrote: > On Tuesday 22 December 2015 19:05:18 Valentin V. Bartenev wrote: > > On Tuesday 22 December 2015 11:01:19 snieuwen wrote: > > > Hi, > > > > > > Is it possible to serve precompressed files without serving their > > > uncompressed counterparts? > > > > > > For example: > > > /var/www/ contains index.html.gz, but no index.html. How do I configure > > > nginx to respond with index.html.gz when the client supports gzip or let > > > nginx decompress the file on the fly when the client does not support gzip? > > > > > > Based on this answer on stackoverflow http://serverfault.com/a/611757, I am > > > currently using the following configuration: > > > > > > location / { > > > try_files $uri $uri/ @application; > > > root /var/www; > > > gzip_static on; > > > gunzip on; > > > } > > > > > > @application configures the application server. > > > When I try get the index.html page, nginx return a 403 forbidden error. > > > > > [..] > > > > gzip_static always; > > > > See the documentation: nginx.org/r/gzip_static > > > [..] > > But your problem is caused by "try_files", since you have configured > it to check "$uri" and "$uri/" instead of "$uri.gz". > > The configuration below should work for you: > > location / { > root /var/www; > > gzip_static always; > gunzip on; > > error_page 404 = @application; > } Likely 403 is returned because there is no index file (http://nginx.org/r/index), and the request is to "/", not to "/index.html". I don't think there is a good solution. 
-- 
Maxim Dounin
http://nginx.org/

From ruz at sports.ru Tue Dec 22 22:55:01 2015
From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=)
Date: Wed, 23 Dec 2015 01:55:01 +0300
Subject: Cache doesn't expire as should
In-Reply-To: <090fdcc750671c6ac812c900126aa7d3.NginxMailingListEnglish@forum.nginx.org>
References: <090fdcc750671c6ac812c900126aa7d3.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On Tue, Dec 22, 2015 at 4:39 PM, itpp2012 wrote:
> Have you tried without locking?

No, as it would be an unacceptable load increase on the backends when a cache record expires.

-- 
Ruslan Zakirov
Web services development team lead
+7(916) 597-92-69, ruz @

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Wed Dec 23 08:48:39 2015
From: nginx-forum at nginx.us (Ortal)
Date: Wed, 23 Dec 2015 03:48:39 -0500
Subject: Multiple responses to one request
Message-ID:

Hello,

I am writing an NGINX module, and I would like to know if there is an option to send the client a response in parts. Meaning, in the case of large files my server returns the response to the GET request in parts, and I would like to send each part to the user without saving it to a temp file. I would like each part of the response to be sent as it is ready, so the client will get a few responses to the same request.

Thanks,
Ortal Levi

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263649,263649#msg-263649

From ruz at sports.ru Wed Dec 23 08:57:53 2015
From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=)
Date: Wed, 23 Dec 2015 11:57:53 +0300
Subject: Cache doesn't expire as should
In-Reply-To: <20151222134900.GM74233@mdounin.ru>
References: <20151222134900.GM74233@mdounin.ru>
Message-ID:

On Tue, Dec 22, 2015 at 4:49 PM, Maxim Dounin wrote:
> Hello!
>
> On Tue, Dec 22, 2015 at 02:43:11PM +0300, Ruslan Zakirov wrote:
>
> [...]
> > Our error log has "ignore long locked inactive cache entry" alerts, but I
> > really couldn't match it to "defreeze" event. Access log has STALE/UPDATING
> > requests between the alert and EXPIRED (cache updating) request.
>
> The "ignore long locked inactive cache entry" alerts indicate that
> a cache entry was locked by some request and wasn't unlocked for
> a long time. The alert is expected to appear if a cache node is
> locked for the cache inactive time (as set by proxy_cache_path
> inactive=, 10 minutes by default).

Inactive is defined in the config, but it's set to the default 10m. What happens with requests after this time? Do they hit the backend and update the cache? Do they use the stale version?

I'm going to check the "long locked" messages in the log to see how many were for the "/" location. The hash should be the same if we didn't change the cache key, right?

> The most likely reason is that a worker died or was killed
> while holding a lock on a cache node (i.e., while a request was
> waiting for a new response from a backend).

Shouldn't there be a record in the error log? The error log level is at warn.

> Trivial things to consider:
>
> - check logs for segmentation faults;

No seg faults in the logs.

> - if you are using 3rd party modules / patches, try without them;

From the FreeBSD port; updated the gist [1] with `nginx -V` output.

> - make sure you don't kill worker processes yourself or via automation
> scripts (in particular, don't try to terminate old worker processes
> after a configuration reload).

One recent appearance of the problem was at 1:30AM and I checked the logs for crazy midnight deploys - nothing. Also, we don't use anything custom to restart nginx, just regular service management tools.

[1] https://gist.github.com/ruz/05456767750715f6b54e

> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Ruslan Zakirov
Web services development team lead
+7(916) 597-92-69, ruz @

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lenaigst at maelenn.org Wed Dec 23 12:21:50 2015
From: lenaigst at maelenn.org (Thierry)
Date: Wed, 23 Dec 2015 14:21:50 +0200
Subject: nginx modsecurity on Debian 8
Message-ID: <1221722568.20151223142150@maelenn.org>

Hi,

A bit lost ... I know nothing about nginx; I am more comfortable with Apache2. I am using an email server that uses nginx on Debian 8, and I need to install ModSecurity as a module. I have understood that I need to compile from the working directory of nginx:

./configure --add-module=/opt/ModSecurity-nginx

But how do I deal with that if nginx has been installed from a binary (Debian package)?

I have followed these instructions:

$ sudo dnf install gcc-c++ flex bison curl-devel curl yajl yajl-devel GeoIP-devel doxygen
$ cd /opt/
$ git clone https://github.com/SpiderLabs/ModSecurity
$ cd ModSecurity
$ git checkout libmodsecurity
$ sh build.sh
$ ./configure
$ make
$ make install
$ cd /opt/
$ git clone https://github.com/SpiderLabs/ModSecurity-nginx
$ cd /opt/Modsecurity-nginx
$ git checkout experimental
$ cd /opt/
*******************************************************************
$ wget http://nginx.org/download/nginx-1.9.2.tar.gz
$ tar -xvzf nginx-1.9.2.tar.gz
$ yum install zlib-devel
*******************************************************************
$ ./configure --add-module=/opt/ModSecurity-nginx

Everything went fine until the last ./configure. I didn't apply what's between the " *** " markers because my nginx server is already installed and working.

Any ideas ?
Thx -- Cordialement, Thierry e-mail : lenaigst at maelenn.org From anoopalias01 at gmail.com Wed Dec 23 12:34:24 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 23 Dec 2015 18:04:24 +0530 Subject: nginx modsecurity on Debian 8 In-Reply-To: <1221722568.20151223142150@maelenn.org> References: <1221722568.20151223142150@maelenn.org> Message-ID: nginx -V will show configure arguments. You need to add mod_sec at the beginning of whatever is in there. On Wed, Dec 23, 2015 at 5:51 PM, Thierry wrote: > Hi, > > A bit lost ... > I know nothing concerning nginx, I am more confortable with Apache2. > I am using an email server who is using nginx on debian 8. > I would need to install modsecurity as module. > I have understood that I need to compile from the working directory of > nginx .... > > ./configure --add-module=/opt/ModSecurity-nginx > > But how to deal with it if nginx as been installed from binary (debian > package) ? > > I have followed these instructions: > > $ sudo dnf install gcc-c++ flex bison curl-devel curl yajl yajl-devel > GeoIP-devel doxygen > $ cd /opt/ > $ git clone https://github.com/SpiderLabs/ModSecurity > $ cd ModSecurity > $ git checkout libmodsecurity > $ sh build.sh > $ ./configure > $ make > $ make install > $ cd /opt/ > $ git clone https://github.com/SpiderLabs/ModSecurity-nginx > $ cd /opt/Modsecurity-nginx > $ git checkout experimental > $ cd /opt/ > ******************************************************************* > $ wget http://nginx.org/download/nginx-1.9.2.tar.gz > $ tar -xvzf nginx-1.9.2.tar.gz > $ yum install zlib-devel > ******************************************************************* > $ ./configure --add-module=/opt/ModSecurity-nginx > > > > Everything went fine until the last ./configure .... > I didn't apply what's between " *** " because my nginx server is > already installed and working. > > Any ideas ? 
> > Thx > -- > Cordialement, > Thierry e-mail : lenaigst at maelenn.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lenaigst at maelenn.org Wed Dec 23 12:44:46 2015 From: lenaigst at maelenn.org (Thierry) Date: Wed, 23 Dec 2015 14:44:46 +0200 Subject: nginx modsecurity on Debian 8 In-Reply-To: References: <1221722568.20151223142150@maelenn.org> Message-ID: <10156766.20151223144446@maelenn.org> What I have ... Could you please explain to me what do I have to do ? I do not understand ... Sorry nginx version: nginx/1.6.2 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-auth-pam --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-dav-ext-module --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-echo 
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module > nginx -V will show configure arguments. You need to add mod_sec at > the beginning of whatever is in there.? > On Wed, Dec 23, 2015 at 5:51 PM, Thierry wrote: > Hi, > > A bit lost ... > I know nothing concerning nginx, I am more confortable with Apache2. > I am using an email server who is using nginx on debian 8. > I would need to install modsecurity as module. > I have understood that I need to compile from the working directory of > nginx .... > > ./configure --add-module=/opt/ModSecurity-nginx > > But how to deal with it if nginx as been installed from binary (debian > package) ? > > I have followed these instructions: > > ?$ sudo dnf install gcc-c++ flex bison curl-devel curl yajl yajl-devel GeoIP-devel doxygen > $ cd /opt/ > $ git clone https://github.com/SpiderLabs/ModSecurity > $ cd ModSecurity > $ git checkout libmodsecurity > $ sh build.sh > $ ./configure > $ make > $ make install > $ cd /opt/ > $ git clone https://github.com/SpiderLabs/ModSecurity-nginx > $ cd /opt/Modsecurity-nginx > $ git checkout experimental > $ cd /opt/ > ******************************************************************* > $ wget http://nginx.org/download/nginx-1.9.2.tar.gz > $ tar -xvzf nginx-1.9.2.tar.gz > $ yum install zlib-devel > ******************************************************************* > $ ./configure --add-module=/opt/ModSecurity-nginx > > > > Everything went fine until the last ./configure .... > I? didn't? apply? what's? between? " *** " because my nginx server is > already installed and working. > > Any ideas ? > > Thx > -- > Cordialement, > ?Thierry? ? ? ? ? ? ? ? ? ? ? ? ? 
e-mail : lenaigst at maelenn.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From anoopalias01 at gmail.com Wed Dec 23 12:52:57 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 23 Dec 2015 18:22:57 +0530 Subject: nginx modsecurity on Debian 8 In-Reply-To: <10156766.20151223144446@maelenn.org> References: <1221722568.20151223142150@maelenn.org> <10156766.20151223144446@maelenn.org> Message-ID: append the configure argument you already mentioned ./configure --add-module=/opt/ModSecurity-nginx with the --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-auth-pam --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-dav-ext-module --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-echo --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module ## One problem I see here is that you need to place the 
modules added there in their exact path like for example /tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair .Otherwise you will have to modify those path accordingly. you need to install build deps for nginx too Also you might be able to use 1.8.0 stable version Follow - https://www.digitalocean.com/community/tutorials/how-to-add-ngx_pagespeed-module-to-nginx-in-debian-wheezy . The difference is you are adding mod_sec instead of pagespeed . On Wed, Dec 23, 2015 at 6:14 PM, Thierry wrote: > What I have ... > Could you please explain to me what do I have to do ? I do not understand > ... > Sorry > > nginx version: nginx/1.6.2 > TLS SNI support enabled > configure arguments: --with-cc-opt='-g -O2 -fstack-protector-strong > -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' > --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx > --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock > --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit > --with-ipv6 --with-http_ssl_module --with-http_stub_status_module > --with-http_realip_module --with-http_auth_request_module > --with-http_addition_module --with-http_dav_module --with-http_geoip_module > --with-http_gzip_static_module --with-http_image_filter_module > --with-http_spdy_module --with-http_sub_module --with-http_xslt_module > --with-mail --with-mail_ssl_module > --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-auth-pam > --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-dav-ext-module > --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-echo > --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair > 
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module > > > nginx -V will show configure arguments. You need to add mod_sec at > > the beginning of whatever is in there. > > > > > > On Wed, Dec 23, 2015 at 5:51 PM, Thierry wrote: > > > Hi, > > > > A bit lost ... > > I know nothing concerning nginx, I am more confortable with Apache2. > > I am using an email server who is using nginx on debian 8. > > I would need to install modsecurity as module. > > I have understood that I need to compile from the working directory of > > nginx .... > > > > ./configure --add-module=/opt/ModSecurity-nginx > > > > But how to deal with it if nginx as been installed from binary (debian > > package) ? > > > > I have followed these instructions: > > > > $ sudo dnf install gcc-c++ flex bison curl-devel curl yajl yajl-devel > GeoIP-devel doxygen > > $ cd /opt/ > > $ git clone https://github.com/SpiderLabs/ModSecurity > > $ cd ModSecurity > > $ git checkout libmodsecurity > > $ sh build.sh > > $ ./configure > > $ make > > $ make install > > $ cd /opt/ > > $ git clone https://github.com/SpiderLabs/ModSecurity-nginx > > $ cd /opt/Modsecurity-nginx > > $ git checkout experimental > > $ cd /opt/ > > ******************************************************************* > > $ wget http://nginx.org/download/nginx-1.9.2.tar.gz > > $ tar -xvzf nginx-1.9.2.tar.gz > > $ yum install zlib-devel > > ******************************************************************* > > $ ./configure --add-module=/opt/ModSecurity-nginx > > > > > > > > Everything went fine until the last ./configure .... > > I didn't apply what's between " *** " because my nginx server is > > already installed and working. > > > > Any ideas ? 
> > Thx
> > --
> > Cordialement,
> > Thierry e-mail : lenaigst at maelenn.org
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
*Anoop P Alias*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ruz at sports.ru Wed Dec 23 12:54:31 2015
From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=)
Date: Wed, 23 Dec 2015 15:54:31 +0300
Subject: x-accel-redirect enables caching for POST requests
Message-ID:

Hi,

If location /a/ redirects a POST request to location /b/ using X-Accel-Redirect, then the result of the POST request is cached. Tested and reproduced with nginx 1.6.3 and 1.9.7.

Setup: https://gist.github.com/ruz/4a66ee78fedf27181799

Logs from hitting /a/:

[1450873503.675] 200 HIT 127.0.0.1 "POST /a/ HTTP/1.1" [0.001, 0.001] {127.0.0.1:5000}
[1450873504.113] 200 HIT 127.0.0.1 "POST /a/ HTTP/1.1" [0.001, 0.001] {127.0.0.1:5000}
[1450873504.529] 200 HIT 127.0.0.1 "POST /a/ HTTP/1.1" [0.002, 0.002] {127.0.0.1:5000}
[1450873567.648] 200 EXPIRED 127.0.0.1 "POST /a/ HTTP/1.1" [0.001 : 0.002, 0.003] {127.0.0.1:5000 : 127.0.0.1:5000}

Logs from hitting /b/ directly:

[1450875056.289] 200 - 127.0.0.1 "POST /b/ HTTP/1.1" [0.005, 0.005] {127.0.0.1:5000}
[1450875058.073] 200 - 127.0.0.1 "POST /b/ HTTP/1.1" [0.001, 0.001] {127.0.0.1:5000}

Looks like a bug to me. Am I missing something?

-- 
Ruslan Zakirov
Web services development team lead
+7(916) 597-92-69, ruz @

-------------- next part --------------
An HTML attachment was scrubbed...
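[Editor's note: by default nginx only caches GET and HEAD responses (proxy_cache_methods), so one possible belt-and-braces workaround while the reported behaviour is investigated is an explicit cache bypass for other methods in the redirect target. This is a sketch under assumptions: the zone name "one" and the upstream address are guesses based on the log excerpts, not taken from the poster's gist.]

```nginx
# Workaround sketch -- zone name and upstream address are assumptions.
map $request_method $skip_cache {
    default 1;      # POST, PUT, ...: bypass the cache and don't store
    GET     0;
    HEAD    0;
}

server {
    location /b/ {
        proxy_cache        one;
        proxy_cache_bypass $skip_cache;  # don't answer non-GET/HEAD from cache
        proxy_no_cache     $skip_cache;  # don't store non-GET/HEAD responses
        proxy_pass         http://127.0.0.1:5000;
    }
}
```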
URL: From nikolai at lusan.id.au Wed Dec 23 13:03:10 2015 From: nikolai at lusan.id.au (Nikolai Lusan) Date: Wed, 23 Dec 2015 23:03:10 +1000 Subject: nginx modsecurity on Debian 8 In-Reply-To: <1221722568.20151223142150@maelenn.org> References: <1221722568.20151223142150@maelenn.org> Message-ID: <1450875790.29197.10.camel@lusan.id.au> Greetings, On Wed, 2015-12-23 at 14:21 +0200, Thierry wrote: > A bit lost ... > I know nothing concerning nginx, I am more confortable with Apache2. > I am using an email server who is using nginx on debian 8. > I would need to install modsecurity as module. > I have understood that I need to compile from the working directory > of? > nginx .... FWIW I am in a similar boat. Apache has been my weapon of choice for a long time, I have inherited a system where they prefer nginx. We are a Debian shop, using Jessie (8) on production systems. I use the packages from the nginx repositories rather than the Debian builds.? deb http://nginx.org/packages/mainline/debian/ jessie nginx # nginx -V nginx version: nginx/1.9.7 built by gcc 4.9.2 (Debian 4.9.2-10)? 
built with OpenSSL 1.0.1k 8 Jan 2015
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-threads --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6

If that is of any help :)

-- 
Nikolai Lusan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:

From lenaigst at maelenn.org Wed Dec 23 15:09:26 2015
From: lenaigst at maelenn.org (Thierry)
Date: Wed, 23 Dec 2015 17:09:26 +0200
Subject: nginx modsecurity on Debian 8
In-Reply-To: <1450875790.29197.10.camel@lusan.id.au>
References: <1221722568.20151223142150@maelenn.org> <1450875790.29197.10.camel@lusan.id.au>
Message-ID: <1987470511.20151223170926@maelenn.org>

Hi Nikolai,

It seems a bit tricky to me. I'm not going to do anything, because I do not want to break something that is already working perfectly....
Why is it so complicated to install a module for nginx?

Thanks anyway, and happy Christmas.

On Wednesday, 23 December 2015 at 15:03:10, you wrote:

> Greetings,
>
> On Wed, 2015-12-23 at 14:21 +0200, Thierry wrote:
>> A bit lost ...
>> I know nothing concerning nginx, I am more comfortable with Apache2.
>> I am using an email server that uses nginx on Debian 8.
>> I would need to install modsecurity as a module.
>> I have understood that I need to compile from the working directory
>> of nginx ....
>
> FWIW I am in a similar boat. Apache has been my weapon of choice for a
> long time, I have inherited a system where they prefer nginx. We are a
> Debian shop, using Jessie (8) on production systems. I use the packages
> from the nginx repositories rather than the Debian builds.
>
> deb http://nginx.org/packages/mainline/debian/ jessie nginx
>
> # nginx -V
> nginx version: nginx/1.9.7
> built by gcc 4.9.2 (Debian 4.9.2-10)
> built with OpenSSL 1.0.1k 8 Jan 2015
> TLS SNI support enabled
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-threads --with-stream --with-stream_ssl_module --with-mail
--with-mail_ssl_module --with-file-aio --with-http_v2_module > --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat > -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' > --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6 > If that is of any help :) -- Regards, Thierry e-mail: lenaigst at maelenn.org From mdounin at mdounin.ru Wed Dec 23 15:35:57 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Dec 2015 18:35:57 +0300 Subject: Cache doesn't expire as should In-Reply-To: References: <20151222134900.GM74233@mdounin.ru> Message-ID: <20151223153557.GR74233@mdounin.ru> Hello! On Wed, Dec 23, 2015 at 11:57:53AM +0300, Руслан Закиров wrote: > On Tue, Dec 22, 2015 at 4:49 PM, Maxim Dounin wrote: > > > Hello! > > > > On Tue, Dec 22, 2015 at 02:43:11PM +0300, Руслан Закиров wrote: > > > > [...] > > > > > Our error log has "ignore long locked inactive cache entry" alerts, but I > > > really couldn't match it to "defreeze" event. Access log has > > STALE/UPDATING > > > requests between the alert and EXPIRED (cache updating) request. > > > > The "ignore long locked inactive cache entry" alerts indicate that > > a cache entry was locked by some request, and wasn't unlocked for > > a long time. The alert is expected to appear if a cache node is > > locked for cache inactive time (as set by proxy_cache_path > > inactive=, 10 minutes by default). > > > Inactive is defined in the config, but it's set to default 10m. > > What happens with requests after this time? Do they hit backend and update > cache? Do they use stale version? As long as an entry is locked and not accessed for the inactive time, it will cause the alert in question to be logged by the cache manager. Nothing else happens - that is, the entry is still locked, and if it contains information that the entry is currently being updated, requests to this entry will return stale content with UPDATING status as per proxy_cache_use_stale.
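The directives Maxim refers to fit together roughly like this (a minimal sketch; the cache path, zone name, and timings are hypothetical, not taken from the poster's configuration):

```nginx
# entries not accessed for the "inactive" time become candidates for
# removal by the cache manager; a node still locked at that point is
# what produces the "ignore long locked inactive cache entry" alert
proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my_zone:10m inactive=10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache my_zone;
        # serve stale content (logged as UPDATING) while one request
        # refreshes an expired entry
        proxy_cache_use_stale updating;
    }
}
```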
This alert may in theory happen in normal situation if a backend response takes longer than the inactive time set. But in general it indicates that there is a problem. (Note well that "locked" here should not be confused with request-level locks as in proxy_cache_lock.) > I'm going to check "long locked" messages in the log to see how many was > for "/" location. > The hash should be the same if we didn't change cache key, right? Any message indicate that there is a problem. If the "/" location is requested often, it'll likely won't be mentioned in the "ignore long locked" messages. > > Most likely reason is that a worker died or was killed > > while holding a lock on a cache node (i.e., while a request was > > waiting for a new response from a backend). > > > > Shouldn't be there a record in error log? Error log level at warn. There should be something like "worker process exited" messages, though at least some of them may be on the "notice" level (e.g., if you force nginx worker to quit by sending SIGTERM to it directly). > > Trivial things to consider: > > > > - check logs for segmentation faults; > > no seg faults in logs Any other alerts, crit or emerg messages? Any alerts about socket leaks on configuration reloads ("open socket left..."), sockets in CLOSED state? How long relevant requests to upstream server take (i.e., make sure that the behaviour observed isn't a result of an upstream server just being very slow)? > - if you are using 3rd party modules / patches, try without them; > > > > from freebsd port, updated gist [1] with `nginx -V` output > [1] https://gist.github.com/ruz/05456767750715f6b54e The only suspicious thing I see there is "ssi on" and "proxy_cache_lock" at the same time. There were some fixes related to proxy_cache_lock with subrequests in more recent versions. 
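For reference, the combination flagged as suspicious above ("ssi on" together with "proxy_cache_lock") looks like this in a configuration (a hypothetical sketch, not the poster's actual file):

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache my_zone;
    ssi on;                       # SSI includes are processed as subrequests
    proxy_cache_lock on;          # only one request per key populates the cache
    proxy_cache_lock_timeout 5s;  # cap how long other requests wait for the lock
}
```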
> > - make sure you don't kill worker processes yourself or using some > > automation scripts (in particular, don't try to terminate old > > worker processes after a configuration reload). > > > > One recent appearance of the problem was at 1:30AM and I checked > logs for crazy midnight deploys - nothing. > > Also, we don't use anything custom to restart nginx, just regular services > management tools. Just a regular kill will be enough to screw up things. On the other hand, it may be an nginx bug as well. Note though, that the version you are using is quite old. Debugging anything but 1.9.x hardly makes sense. You may want to try upgrading to the latest nginx-devel port. Note well that if a problem happens, it's a good idea to either completely restart nginx or do the binary upgrade procedure ("service nginx upgrade") to make sure shared memory structures are intact. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Dec 23 15:49:02 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Dec 2015 18:49:02 +0300 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: References: Message-ID: <20151223154902.GS74233@mdounin.ru> Hello! On Wed, Dec 23, 2015 at 03:54:31PM +0300, Руслан Закиров wrote: > Hi, > > If location /a/ redirects a POST request to location /b/ using > X-Accel-Redirect then the result of the POST request is cached. > > Tested this and reproduced with nginx 1.6.3 and 1.9.7.
> > Setup: > https://gist.github.com/ruz/4a66ee78fedf27181799 > > Logs from hitting /a/: > [1450873503.675] 200 HIT 127.0.0.1 "POST /a/ HTTP/1.1" [0.001, 0.001] { > 127.0.0.1:5000} > [1450873504.113] 200 HIT 127.0.0.1 "POST /a/ HTTP/1.1" [0.001, 0.001] { > 127.0.0.1:5000} > [1450873504.529] 200 HIT 127.0.0.1 "POST /a/ HTTP/1.1" [0.002, 0.002] { > 127.0.0.1:5000} > [1450873567.648] 200 EXPIRED 127.0.0.1 "POST /a/ HTTP/1.1" [0.001 : 0.002, > 0.003] {127.0.0.1:5000 : 127.0.0.1:5000} > > Logs from hitting /b/ directly: > [1450875056.289] 200 - 127.0.0.1 "POST /b/ HTTP/1.1" [0.005, 0.005] { > 127.0.0.1:5000} > [1450875058.073] 200 - 127.0.0.1 "POST /b/ HTTP/1.1" [0.001, 0.001] { > 127.0.0.1:5000} > > Looks like a bug to me. Do I miss something? X-Accel-Redirect changes a request from POST to GET. -- Maxim Dounin http://nginx.org/ From ruz at sports.ru Wed Dec 23 16:10:43 2015 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Wed, 23 Dec 2015 19:10:43 +0300 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: <20151223154902.GS74233@mdounin.ru> References: <20151223154902.GS74233@mdounin.ru> Message-ID: On Wed, Dec 23, 2015 at 6:49 PM, Maxim Dounin wrote: > X-Accel-Redirect changes a request from POST to GET. > No, it doesn't. Getting request method POST on the backend and even form data is intact. -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruz at sports.ru Wed Dec 23 16:15:26 2015 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Wed, 23 Dec 2015 19:15:26 +0300 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: References: <20151223154902.GS74233@mdounin.ru> Message-ID: On Wed, Dec 23, 2015 at 7:10 PM, ?????? ??????? wrote: > > On Wed, Dec 23, 2015 at 6:49 PM, Maxim Dounin wrote: > >> X-Accel-Redirect changes a request from POST to GET. 
>> > > No, it doesn't. Getting request method POST on the backend and even form > data is intact. > Output from the psgi app (updated gist with data dumper): 127.0.0.1 - - [23/Dec/2015:19:05:46 +0300] "POST /a/ HTTP/1.0" 200 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36" { CONTENT_LENGTH => 7, CONTENT_TYPE => 'application/json, application/x-www-form-urlencoded', HTTP_ACCEPT => '*/*', HTTP_ACCEPT_ENCODING => 'gzip, deflate', HTTP_ACCEPT_LANGUAGE => 'en-US,en;q=0.8,ru;q=0.6', HTTP_CACHE_CONTROL => 'no-cache', HTTP_CONNECTION => 'close', HTTP_HOST => '127.0.0.1:5000', HTTP_ORIGIN => 'chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm', HTTP_PRAGMA => 'no-cache', HTTP_USER_AGENT => 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36', PATH_INFO => '/b/', QUERY_STRING => '', REMOTE_ADDR => '127.0.0.1', REMOTE_PORT => 56137, REQUEST_METHOD => 'POST', REQUEST_URI => '/b/', SCRIPT_NAME => '', SERVER_NAME => 0, SERVER_PORT => 5000, SERVER_PROTOCOL => 'HTTP/1.0', 'psgi.errors' => *::STDERR, 'psgi.input' => bless( \*{'Stream::Buffered::PerlIO::$io'}, 'FileHandle' ), 'psgi.multiprocess' => '', 'psgi.multithread' => '', 'psgi.nonblocking' => '', 'psgi.run_once' => '', 'psgi.streaming' => 1, 'psgi.url_scheme' => 'http', 'psgi.version' => [ 1, 1 ], 'psgix.harakiri' => 1, 'psgix.input.buffered' => 1, 'psgix.io' => bless( \*Symbol::GEN5, 'IO::Socket::INET' ) } x=y&y=z 127.0.0.1 - - [23/Dec/2015:19:05:46 +0300] "POST /b/ HTTP/1.0" 200 22 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36" -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Dec 23 16:19:04 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Dec 2015 19:19:04 +0300 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: References: <20151223154902.GS74233@mdounin.ru> Message-ID: <20151223161904.GT74233@mdounin.ru> Hello! On Wed, Dec 23, 2015 at 07:10:43PM +0300, ?????? ??????? wrote: > On Wed, Dec 23, 2015 at 6:49 PM, Maxim Dounin wrote: > > > X-Accel-Redirect changes a request from POST to GET. > > > > No, it doesn't. Getting request method POST on the backend and even form > data is intact. It does, http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l2501: if (r->method != NGX_HTTP_HEAD) { r->method = NGX_HTTP_GET; } Though it looks like it only does so for nginx itself, and this indeed looks like a bug. The code should be similar to one in error_page handling: if (r->method != NGX_HTTP_HEAD) { r->method = NGX_HTTP_GET; r->method_name = ngx_http_core_get_method; } If you want to get request and request method intact, use X-Accel-Redirect to a named location instead. -- Maxim Dounin http://nginx.org/ From ruz at sports.ru Wed Dec 23 16:39:25 2015 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Wed, 23 Dec 2015 19:39:25 +0300 Subject: Cache doesn't expire as should In-Reply-To: <20151223153557.GR74233@mdounin.ru> References: <20151222134900.GM74233@mdounin.ru> <20151223153557.GR74233@mdounin.ru> Message-ID: On Wed, Dec 23, 2015 at 6:35 PM, Maxim Dounin wrote: > Hello! > > On Wed, Dec 23, 2015 at 11:57:53AM +0300, ?????? ??????? wrote: > > > On Tue, Dec 22, 2015 at 4:49 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Tue, Dec 22, 2015 at 02:43:11PM +0300, ?????? ??????? wrote: > > > > > > [...] > > > > > > > Our error log has "ignore long locked inactive cache entry" alerts, > but I > > > > really couldn't match it to "defreeze" event. 
Access log has > > > STALE/UPDATING > > > > requests between the alert and EXPIRED (cache updating) request. > > > > > > The "ignore long locked inactive cache entry" alerts indicate that > > > a cache entry was locked by some request, and wasn't unlocked for > > > a long time. The alert is expected to appear if a cache node is > > > locked for cache inactive time (as set by proxy_cache_path > > > inactive=, 10 minutes by default). > > > > > > Inactive is defined in the config, but it's set to default 10m. > > > > What happens with requests after this time? Do they hit backend and > update > > cache? Do they use stale version? > > As long as an entry is locked and not accessed for the inactive > time, it will cause the alert in question to be logged by cache > manager. Nothing else happens - that is, the entry is still > locked, and if it contains information that the entry is currently > being updated, requests to this entry will return stale content > with UPDATING status as per proxy_cache_use_stale. > > This alert may in theory happen in normal situation if a backend > response takes longer than the inactive time set. But in general > it indicates that there is a problem. > uwsgi on backends has harakiri setting at 9 seconds and 10 seconds timeout in nginx. Index page takes 0.2-0.7 seconds to render. (Note well that "locked" here should not be confused with > request-level locks as in proxy_cache_lock.) > > > I'm going to check "long locked" messages in the log to see how many was > > for "/" location. > > The hash should be the same if we didn't change cache key, right? > > Any message indicate that there is a problem. If the "/" location > is requested often, it'll likely won't be mentioned in the "ignore long > locked" messages. > > > > Most likely reason is that a worker died or was killed > > > while holding a lock on a cache node (i.e., while a request was > > > waiting for a new response from a backend). 
> > > > > > > Shouldn't be there a record in error log? Error log level at warn. > > There should be something like "worker process exited" messages, > though at least some of them may be on the "notice" level (e.g., > if you force nginx worker to quit by sending SIGTERM to it > directly). > > > > Trivial things to consider: > > > > > > - check logs for segmentation faults; > > > > no seg faults in logs > > Any other alerts, crit or emerg messages? Any alerts about > socket leaks on configuration reloads ("open socket left..."), > sockets in CLOSED state? How long relevant requests to upstream > server take (i.e., make sure that the behaviour observed isn't a > result of an upstream server just being very slow)? > Looks like we have some misconfiguration and I get the following: 2015/12/17 01:49:13 [*crit*] 5232#0: *599273664 rename() "/var/nginx/cache/0080683483" to "/var/www/storage/img/ru-cyber/answers/10-5.jpg" failed (13: Permission denied) while reading upstream, client: 188.115.151.67, server: www.sports.ru, request: "GET /storage/img/ru-cyber/answers/10-5.jpg HTTP/1.1", upstream: " http://192.168.1.10:80/storage/img/ru-cyber/answers/10-5.jpg", host: " www.sports.ru", referrer: "http://cyber.sports.ru/dota2/1034151992.html" 2015/12/17 01:50:17 [*crit*] 5229#0: *599313395 open() "/var/www/storage/img/fantasy/shirts/ule/slo.png" failed (13: Permission denied), client: 192.168.1.10, server: www.sports.ru, request: "GET /storage/img/fantasy/shirts/ule/slo.png HTTP/1.0", host: "www.sports.ru", referrer: " http://fantasy-h2h.ru/analytics/fantasy_statistics/liga_europa_2015" > > - if you are using 3rd party modules / patches, try without them; > > > > > > > from freebsd port, updated gist [1] with `nginx -V` output > > [1] https://gist.github.com/ruz/05456767750715f6b54e > > The only suspicious thing I see there is "ssi on" and > "proxy_cache_lock" at the same time. There were some fixes > related to proxy_cache_lock with subrequests in more recent > versions. 
> > > > - make sure you don't kill worker processes yourself or using some > > > automation scripts (in particular, don't try to terminate old > > > worker processes after a configuration reload). > > > > > > > One recent appearance of the problem was at 1:30AM and I checked > > logs for crazy midnight deploys - nothing. > > > > Also, we don't use anything custom to restart nginx, just regular > services > > management tools. > > Just a regular kill will be enough screw up things. On the other > hand, it may be an nginx bug as well. > > Note though, that the version you are using is quite old. > Debugging anything but 1.9.x hardly make sense. You may want to > try upgrading to the latest nginx-devel port. > As I said before 1.8.0 made things worse. May be we will try nginx 1.9.x on one of frontends and see how it goes. > > Note well that if a problem happens, it's a good idea to either > completely restart nginx or do binary upgrade procedure ("service > nginx upgrade") to make sure shared memory structures are intact. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 23 16:47:18 2015 From: nginx-forum at nginx.us (Alt) Date: Wed, 23 Dec 2015 11:47:18 -0500 Subject: nginx modsecurity on Debian 8 In-Reply-To: <1987470511.20151223170926@maelenn.org> References: <1987470511.20151223170926@maelenn.org> Message-ID: Hello Thierry, Here's a quick howto build a nginx debian package, I hope it's clear and that I'm not making mistakes. First, you need to get the source of nginx and others files to build the package. 
You can probably do something like "apt-get source nginx", but I prefer to go on this page: https://packages.debian.org/jessie/nginx and manually download the 3 files on the right. Something like "wget http://http.debian.net/debian/pool/main/n/nginx/nginx_1.6.2-5.dsc http://http.debian.net/debian/pool/main/n/nginx/nginx_1.6.2.orig.tar.gz http://http.debian.net/debian/pool/main/n/nginx/nginx_1.6.2-5.debian.tar.xz". Then, you uncompress everything with: "dpkg-source -x nginx_1.6.2-5.dsc" Then "cd nginx_1.6.2-5" Here, you will have to do something to add ModSecurity. Normally, you add a 3rd party module by adding something like "--add-module=full/path/to/the/module-source" in the "debian/rules" file (where there are others parameters like "--with-ipv6" or "--with-http_ssl_module"). Check the "debian/rules" and add your parameter only to the flavor you will use (full, light,...). Or add the parameter for each of them if you are not sure. I don't know if ModSecurity need something special. Last step, execute: "dpkg-buildpackage -B -uc" to compile everything and build the ".deb" packages. Note that you will get several of them: full, extras, light, with or without debug,... (regarding this flavors, see previous step: where you added the parameter). Then install your newly created package with: "dpkg -i nginx-THE-FLAVOR-YOU-WANT.deb" PS: I'm really sorry if there are some mistakes (maybe in the filenames?), I just wrote the instructions from memory. PS2: I suggest you first do all this steps, without the one regarding ModSecurity (so without editing "debian/rules"), just to be sure everything goes well. Best Regards. 
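The debian/rules change described above can be illustrated with a sed one-liner. The rules snippet below is a simplified stand-in for the real file, and the module path is hypothetical:

```sh
# create a throwaway stand-in for the configure line found in debian/rules
cat > rules.snippet <<'EOF'
	./configure $(common_configure_flags) --with-ipv6 --with-http_ssl_module
EOF

# append the third-party module to the configure invocation, as the howto describes
sed -i 's|--with-http_ssl_module|--with-http_ssl_module --add-module=/usr/src/modsecurity/nginx/modsecurity|' rules.snippet

cat rules.snippet
```

In the real file the flavor (full, light, ...) each has its own configure block, so the substitution would be applied to the flavor actually installed.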
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263657,263671#msg-263671 From lenaigst at maelenn.org Wed Dec 23 17:31:04 2015 From: lenaigst at maelenn.org (Thierry) Date: Wed, 23 Dec 2015 19:31:04 +0200 Subject: nginx modsecurity on Debian 8 In-Reply-To: References: <1987470511.20151223170926@maelenn.org> Message-ID: <491297238.20151223193104@maelenn.org> Hello Alt, Thanks a lot ... but I might be mistaken. My nginx is already working, as it has already been compiled with a certain number of modules, and I do not want to break something. If I do what you said, might that happen? Thanks. On Wednesday, 23 December 2015 at 18:47:18, you wrote: > Hello Thierry, > Here's a quick howto build a nginx debian package, I hope it's clear and > that I'm not making mistakes. > First, you need to get the source of nginx and others files to build the > package. You can probably do something like "apt-get source nginx", but I > prefer to go on this page: https://packages.debian.org/jessie/nginx and > manually download the 3 files on the right. > Something like "wget > http://http.debian.net/debian/pool/main/n/nginx/nginx_1.6.2-5.dsc > http://http.debian.net/debian/pool/main/n/nginx/nginx_1.6.2.orig.tar.gz > http://http.debian.net/debian/pool/main/n/nginx/nginx_1.6.2-5.debian.tar.xz". > Then, you uncompress everything with: "dpkg-source -x nginx_1.6.2-5.dsc" > Then "cd nginx_1.6.2-5" > Here, you will have to do something to add ModSecurity. Normally, you add a > 3rd party module by adding something like > "--add-module=full/path/to/the/module-source" in the "debian/rules" file > (where there are others parameters like "--with-ipv6" or > "--with-http_ssl_module"). > Check the "debian/rules" and add your parameter only to the flavor you will > use (full, light,...). Or add the parameter for each of them if you are not > sure. > I don't know if ModSecurity need something special.
> Last step, execute: "dpkg-buildpackage -B -uc" to compile everything and > build the ".deb" packages. Note that you will get several of them: full, > extras, light, with or without debug,... (regarding this flavors, see > previous step: where you added the parameter). > Then install your newly created package with: "dpkg -i > nginx-THE-FLAVOR-YOU-WANT.deb" > PS: I'm really sorry if there are some mistakes (maybe in the filenames?), I > just wrote the instructions from memory. > PS2: I suggest you first do all this steps, without the one regarding > ModSecurity (so without editing "debian/rules"), just to be sure everything > goes well. > Best Regards. > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263657,263671#msg-263671 > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Cordialement, Thierry e-mail : lenaigst at maelenn.org From me at myconan.net Wed Dec 23 17:33:11 2015 From: me at myconan.net (nanaya) Date: Thu, 24 Dec 2015 02:33:11 +0900 Subject: nginx modsecurity on Debian 8 In-Reply-To: <491297238.20151223193104@maelenn.org> References: <1987470511.20151223170926@maelenn.org> <491297238.20151223193104@maelenn.org> Message-ID: <1450891991.3964843.475076946.07810FA3@webmail.messagingengine.com> On Thu, Dec 24, 2015, at 02:31, Thierry wrote: > Bonjour Alt, > > Thx a lot .... But, I might mistaken ... > My nginx is already working, as already been compiled with a certain > number of modules ... I do not want to break something ... > If I am doing what you said ... It might happen no ? > nginx doesn't have support for loadable modules yet so any modules you want to add/remove requires recompiling nginx. 
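Since recompiling is required, the usual approach is to re-run configure with the flags the running binary reports via `nginx -V`, plus the new module. A sketch using a saved copy of that output (the sample output and the module path are stand-ins, not the poster's real values):

```sh
# a trimmed stand-in for the output of `nginx -V` (which goes to stderr)
cat > nginx-V.txt <<'EOF'
nginx version: nginx/1.9.7
configure arguments: --prefix=/etc/nginx --with-http_ssl_module
EOF

# pull out the existing configure arguments and append the new module
args=$(sed -n 's/^configure arguments: //p' nginx-V.txt)
echo "./configure $args --add-module=/usr/src/modsecurity/nginx/modsecurity"
# prints: ./configure --prefix=/etc/nginx --with-http_ssl_module --add-module=/usr/src/modsecurity/nginx/modsecurity
```

Rebuilding with the same --conf-path should not overwrite an existing nginx.conf, but a backup of /etc/nginx beforehand is still wise.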
From lenaigst at maelenn.org Wed Dec 23 18:38:10 2015 From: lenaigst at maelenn.org (Thierry) Date: Wed, 23 Dec 2015 20:38:10 +0200 Subject: nginx modsecurity on Debian 8 In-Reply-To: <1450891991.3964843.475076946.07810FA3@webmail.messagingengine.com> References: <1987470511.20151223170926@maelenn.org> <491297238.20151223193104@maelenn.org> <1450891991.3964843.475076946.07810FA3@webmail.messagingengine.com> Message-ID: <1791920703.20151223203810@maelenn.org> Hello nanaya, OK, but if I recompile everything, do I lose the current nginx config? On Wednesday, 23 December 2015 at 19:33:11, you wrote: > On Thu, Dec 24, 2015, at 02:31, Thierry wrote: >> Bonjour Alt, >> >> Thx a lot .... But, I might mistaken ... >> My nginx is already working, as already been compiled with a certain >> number of modules ... I do not want to break something ... >> If I am doing what you said ... It might happen no ? >> > nginx doesn't have support for loadable modules yet so any modules you > want to add/remove requires recompiling nginx. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Regards, Thierry e-mail: lenaigst at maelenn.org From ruz at sports.ru Wed Dec 23 21:16:43 2015 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Thu, 24 Dec 2015 00:16:43 +0300 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: <20151223161904.GT74233@mdounin.ru> References: <20151223154902.GS74233@mdounin.ru> <20151223161904.GT74233@mdounin.ru> Message-ID: On Wed, Dec 23, 2015 at 7:19 PM, Maxim Dounin wrote: > On Wed, Dec 23, 2015 at 07:10:43PM +0300, Руслан Закиров wrote: > > > On Wed, Dec 23, 2015 at 6:49 PM, Maxim Dounin > wrote: > > > > > X-Accel-Redirect changes a request from POST to GET. > > > > > > > No, it doesn't. Getting request method POST on the backend and even form > > data is intact.
> > It does, > http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l2501: > > if (r->method != NGX_HTTP_HEAD) { > r->method = NGX_HTTP_GET; > } > > Though it looks like it only does so for nginx itself, and this > indeed looks like a bug. The code should be similar to one in > Would you create an issue for this in tracker or do I need to so it doesn't disappear in archives? > error_page handling: > > if (r->method != NGX_HTTP_HEAD) { > r->method = NGX_HTTP_GET; > r->method_name = ngx_http_core_get_method; > } > > If you want to get request and request method intact, use > X-Accel-Redirect to a named location instead. Understood and it works for our case. We actually don't want to send POST requests to location that issues internal redirects (our HRU service), but we haven't found good way to avoid it. -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 23 21:27:29 2015 From: nginx-forum at nginx.us (Alt) Date: Wed, 23 Dec 2015 16:27:29 -0500 Subject: nginx modsecurity on Debian 8 In-Reply-To: <491297238.20151223193104@maelenn.org> References: <491297238.20151223193104@maelenn.org> Message-ID: Hello Thierry, Just rebuilding a Debian package and installing it shouldn't break anything. But a problem or mistake can always happen, so I don't recommend doing eveything I said in my previous message on your production server. I don't think you want to spend your XMas fixing your server :-) So compile, package and test nginx on a test server (a virtual machine for example). And anyway, you really must have a backup of your production server (with all your config files), because shit can happen (mistake of an admin, hardware failure, a hack,...) and you could lose everything. If you want to keep the modules already compiled in, you should add ModSecurity to the same flavor you used on your server. 
If you installed the package "nginx-full" flavor, you should add ModSecurity to "nginx-full", rebuild the packages and install your new "nginx-full.deb". Best Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263657,263676#msg-263676 From nginx-forum at nginx.us Wed Dec 23 21:30:05 2015 From: nginx-forum at nginx.us (Alt) Date: Wed, 23 Dec 2015 16:30:05 -0500 Subject: nginx modsecurity on Debian 8 In-Reply-To: <1791920703.20151223203810@maelenn.org> References: <1791920703.20151223203810@maelenn.org> Message-ID: Hello again :-) As said in my last message, in theory you shouldn't lose your configuration. But : backup, backup and backup :-) And compile and test on a test server, not on a production server :-) Best Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263657,263677#msg-263677 From lists at ruby-forum.com Thu Dec 24 04:46:04 2015 From: lists at ruby-forum.com (Roman Ameth) Date: Thu, 24 Dec 2015 05:46:04 +0100 Subject: Live rtmp broadcast with Nginx and ffmpeg Message-ID: <4d386e53bb9002dcec8b69769bc29415@ruby-forum.com> Hello and good time of day. I trying to create live stream on Win 7 Prof licensed with Nginx. Vod working fine, but live not, and i can't understand why. http://puu.sh/m5PBf/0e4550407b.png - stream from virtual cam sent to ffmpeg. ffmpeg -f dshow -i video="e2eSoft VCam" -c:v libx264 -an -f flv "rtmp://192.168.0.155/live/video live=1" If i feed url rtmp://192.168.0.155/live/video to ffplay i can see stream playing. But all other players cant grab video data from this url. nginx.conf : http://pastebin.com/DipyiHUW Thanks in advance. -- Posted via http://www.ruby-forum.com/. From dewanggaba at xtremenitro.org Thu Dec 24 09:01:54 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 24 Dec 2015 16:01:54 +0700 Subject: Exclude specific location from cache Message-ID: <567BB482.50302@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! 
Currently my configuration looks like this : map $request_method $purge_method { PURGE 1; default 0; } map $arg_geoloc $bypass { default 1; 1 0; } # Exclude from cache # Expected URL http://domain.tld/ location = / { proxy_pass [..]; proxy_no_cache $bypass; proxy_cache_bypass $bypass; } # Expected URL http://domain.tld/indeks/* # Expected URL http://domain.tld/kanal/* location ~ ^/(indeks|kanal)/ { proxy_pass [..]; proxy_no_cache $bypass; proxy_cache_bypass $bypass; } # The rest are cached # Expected URL http://domain.tld/* (except the condition above) location / { proxy_pass [..]; proxy_cache my_cache; proxy_cache_purge $purge_method; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_valid 200 206 302 1h; proxy_cache_valid any 3s; proxy_cache_lock on; proxy_cache_revalidate on; proxy_cache_min_uses 10; } My goal is, exclude the homepage, indeks and kanal pages, then cache the rest URL. Am I right with the configuration above? Any help and advice are appreciated. 
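The matching order that decides which of the three blocks above handles a URL is: exact match first, then regex locations, then the longest prefix. A small shell model of that selection for these particular patterns (an illustration of the matching rules only, not of nginx internals):

```sh
# simplified model of nginx location selection for the config above:
# exact "location = /" wins, then the regex, then the "location /" prefix
route() {
    case "$1" in
        "/") echo "bypass: location = /" ;;                   # exact match
        /indeks/*|/kanal/*) echo "bypass: regex location" ;;  # ~ ^/(indeks|kanal)/
        *) echo "cache: location /" ;;                        # prefix fallback
    esac
}

route /              # bypass: location = /
route /kanal/news    # bypass: regex location
route /article/42    # cache: location /
```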
-----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJWe7R/AAoJEOV/0iCgKM1w+o4P/RuZ3NJ1BiQ1B30kapHsAHhm ylW7O8E0Qx5vLF8aR0BzI3TTTsHEva9jXOrHwOf1+CiT+z/07IBvhjqDHOKhKdr5 46kMM9m/7L5ZEmH+AWM/InDQWgVkvP1UEJkYNoOAGB/NkT+zVv7g+MvJEmcf/abP BJxektr8j8NldhNx5QiSZswz2AChpg67wh/aWX0Q4vtytZOYn+1/lxMteQjrRBQa Ub75uw5yiPIKLTABdEBUJV4ulI+yel8VD6o0LOc1xbt4MUh/31Vp8kN5YziOhOK8 DB5bvCdLTFHyu8Bv8sAEmhuPxJiJ9Y9oL/HHVp5NTkDqNle7Otnr/f25jxnSFcYK 9OLY5UbWw6We8nJMk1psKbVWXPMdUBnAIvdRfIMcgHHb7QFmxxBfoCLiN7Xpt60g v4o7pGa5Oz/OVE/XdUT79uLE9vbRdFM8ZoXxkLa2mTPvUzndXuqyzgx+2GAlghb8 5te+y/aCskYAL9b/retRQrp/dBBe3TqY9Ni8CTuoZ+SgUOhk07aYIGB7Q7pEY5Sd cauWrxT4ijthE/GOM3NP7lVEtq5Cz+XDb9n5V2PFqxYi1LsLix6sYCIYwnYkVAqV E44MAuViVFpj+NULcriNnU3HNJB6OWQdoEcNXodHUwm71k8ctPUSBrQK5w6c42xP R6qDdDpqFU0oKoooXRHG =tzpt -----END PGP SIGNATURE----- From dewanggaba at xtremenitro.org Thu Dec 24 09:24:12 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 24 Dec 2015 16:24:12 +0700 Subject: Exclude specific location from cache In-Reply-To: <567BB482.50302@xtremenitro.org> References: <567BB482.50302@xtremenitro.org> Message-ID: <567BB9BC.8000203@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Sorry I forgot something, I'm using Nginx Plus R7. 
nginx version: nginx/1.9.4 (nginx-plus-http2-r7-p1) built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --build=nginx-plus-http2-r7-p1 --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-threads --with-file-aio --with-ipv6 --with-stream --with-stream_ssl_module --with-http_f4f_module --with-http_session_log_module --with-http_hls_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' On 12/24/2015 04:01 PM, Dewangga Bachrul Alam wrote: > Hello!
> > Currently my configuration looks like this : > > map $request_method $purge_method { PURGE 1; default 0; } > > map $arg_geoloc $bypass { default 1; 1 0; } > > # Exclude from cache # Expected URL http://domain.tld/ location = / > { proxy_pass [..]; proxy_no_cache $bypass; proxy_cache_bypass > $bypass; } > > # Expected URL http://domain.tld/indeks/* # Expected URL > http://domain.tld/kanal/* location ~ ^/(indeks|kanal)/ { proxy_pass > [..]; proxy_no_cache $bypass; proxy_cache_bypass $bypass; } > > # The rest are cached # Expected URL http://domain.tld/* (except > the condition above) location / { proxy_pass [..]; proxy_cache > my_cache; proxy_cache_purge $purge_method; > proxy_cache_use_stale error timeout updating http_500 http_502 > http_503 http_504; proxy_cache_valid 200 206 302 1h; > proxy_cache_valid any 3s; proxy_cache_lock on; > proxy_cache_revalidate on; proxy_cache_min_uses 10; } > > My goal is, exclude the homepage, indeks and kanal pages, then > cache the rest URL. Am I right with the configuration above? > > Any help and advice are appreciated. 
From maxim at nginx.com Thu Dec 24 09:28:26 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 24 Dec 2015 12:28:26 +0300 Subject: Exclude specific location from cache In-Reply-To: <567BB9BC.8000203@xtremenitro.org> References: <567BB482.50302@xtremenitro.org> <567BB9BC.8000203@xtremenitro.org> Message-ID: <567BBABA.3010906@nginx.com>

On 12/24/15 12:24 PM, Dewangga Bachrul Alam wrote:
> Sorry I forgot something, I'm using Nginx Plus R7.
> [...]

Hi Dewangga,

please open a support ticket. Our engineers will help you with that in a minute.
-- Maxim Konovalov

From dewanggaba at xtremenitro.org Thu Dec 24 09:48:17 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 24 Dec 2015 16:48:17 +0700 Subject: Exclude specific location from cache In-Reply-To: <567BBABA.3010906@nginx.com> References: <567BB482.50302@xtremenitro.org> <567BB9BC.8000203@xtremenitro.org> <567BBABA.3010906@nginx.com> Message-ID: <567BBF61.8040906@xtremenitro.org>

Hello!

On 12/24/2015 04:28 PM, Maxim Konovalov wrote:
> On 12/24/15 12:24 PM, Dewangga Bachrul Alam wrote:
>> Sorry I forgot something, I'm using Nginx Plus R7.
>
> [...]
>
> Hi Dewangga,
>
> please open a support ticket. Our engineers will help you with
> that in a minute.

Re-send to nginx-plus support :) Thank you Maxim.

[..]

From mdounin at mdounin.ru Thu Dec 24 18:24:04 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Dec 2015 21:24:04 +0300 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: References:
<20151223154902.GS74233@mdounin.ru> <20151223161904.GT74233@mdounin.ru> Message-ID: <20151224182404.GZ74233@mdounin.ru>

Hello!

On Thu, Dec 24, 2015 at 12:16:43AM +0300, ?????? ??????? wrote:

> On Wed, Dec 23, 2015 at 7:19 PM, Maxim Dounin wrote:
> > On Wed, Dec 23, 2015 at 07:10:43PM +0300, ?????? ??????? wrote:
> > > On Wed, Dec 23, 2015 at 6:49 PM, Maxim Dounin wrote:
> > > > X-Accel-Redirect changes a request from POST to GET.
> > >
> > > No, it doesn't. Getting request method POST on the backend and even form
> > > data is intact.
> >
> > It does, http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l2501:
> >
> >     if (r->method != NGX_HTTP_HEAD) {
> >         r->method = NGX_HTTP_GET;
> >     }
> >
> > Though it looks like it only does so for nginx itself, and this
> > indeed looks like a bug. The code should be similar to one in
>
> Would you create an issue for this in the tracker, or do I need to, so it
> doesn't disappear in the archives?

No real need to open tickets. I've submitted a patch for an internal review here. Just in case, patch below.

# HG changeset patch
# User Maxim Dounin
# Date 1450981046 -10800
#      Thu Dec 24 21:17:26 2015 +0300
# Node ID 10e233c763566b8466f6e302511094866a14e77a
# Parent  78b4e10b4367b31367aad3c83c9c3acdd42397c4
Upstream: fixed changing method on X-Accel-Redirect.

Previously, only r->method was changed, resulting in handling of a request as GET within nginx itself, but not in requests to proxied servers. See http://mailman.nginx.org/pipermail/nginx/2015-December/049518.html.
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2499,6 +2499,7 @@ ngx_http_upstream_process_headers(ngx_ht

     if (r->method != NGX_HTTP_HEAD) {
         r->method = NGX_HTTP_GET;
+        r->method_name = ngx_http_core_get_method;
     }

     ngx_http_internal_redirect(r, &uri, &args);

-- Maxim Dounin http://nginx.org/

From martin.grotzke at googlemail.com Thu Dec 24 22:20:04 2015 From: martin.grotzke at googlemail.com (Martin Grotzke) Date: Thu, 24 Dec 2015 23:20:04 +0100 Subject: SSI URI path decoding/encoding issue In-Reply-To: References: Message-ID:

Hi,

we experienced an issue with SSIs (include virtual): nginx at first decodes the URI path and encodes it before passing the request to the upstream. If an URI path segment contains %2F (encoded slash), the decoded slash is not encoded again so that the request sent to the upstream differs from the original request. Other characters are not encoded as expected as well (at least the backtick, I've not checked it for others).

I've described this issue more detailed here (also with debug logs): https://inoio.de/blog/2015/12/22/integrating-SCSs-be-careful-with-SSIs/

Does it make sense to submit an issue for this, or is there an explanation for this behavior / is it intentional?

Thanks && cheers, Martin

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From agentzh at gmail.com Fri Dec 25 06:39:15 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 24 Dec 2015 22:39:15 -0800 Subject: [ANN] OpenResty 1.9.7.1 released Message-ID:

Hi folks

I am happy to announce the new formal release, 1.9.7.1, of the OpenResty web platform based on NGINX and Lua: https://openresty.org/#Download

Both the (portable) source code distribution and the Win32 binary distribution are provided there. Special thanks go to all our contributors and users around the globe.
This is the first OpenResty formal version based on the NGINX 1.9.7 core. One highlight of this version is the reduction of in-pool memory allocations in the cosocket implementation to support long-running requests (like C2000K in a single box, as tested by one of our team members). Also we include various small features like ngx.worker.id(), ngx.worker.count(), ngx.timer.pending_count(), ngx.timer.running_count(), ngx.redirect(uri, 307), and etc.

Changes since the last (formal) release, 1.9.3.2:

* upgraded the Nginx core to 1.9.7.
    * see the changes here:
* "./configure": now we automatically set the environment "MACOSX_DEPLOYMENT_TARGET" to the current Mac OS X version (unless the environment is already set) to ensure the LuaJIT build uses the current versions of the system libraries. thanks bsyk for the report.
* win32: use Windows line breaks in the "resty" script file of the binary distribution.
* win32: upgraded pcre to 8.38 and openssl to 1.0.2e.
* win32: enabled ngx_http_realip_module, ngx_http_addition_module ngx_http_sub_module, and ngx_http_stub_status_module in the win32 binary package by default.
* upgraded the ngx_lua module to 0.9.20.
    * feature: added new API functions ngx.worker.count() and ngx.worker.id() for returning the total count of nginx worker processes and the ordinal number (0, 1, 2, and etc) of the current worker. thanks YuanSheng Wang for the patch. also added pure C API for them.
    * feature: added new API functions ngx.timer.pending_count() and ngx.timer.running_count(). thanks Simon Eskildsen for the patch.
    * feature: added new config directive access_by_lua_no_postpone. thanks Delta Yeh for the patch.
    * feature: added new constant "ngx.HTTP_TEMPORARY_REDIRECT" (307) and support for 307 in ngx.redirect(). thanks RocFang for the patch.
    * feature: added new API function ngx.req.is_internal() for testing if the current request is an internal request. thanks Ruoshan Huang for the patch.
    * feature: added many more HTTP status constants as "ngx.HTTP_XXX". thanks Vadim A. Misbakh-Soloviov for the patch.
    * bugfix: bogus "nginx.conf" parse failure "Lua code block missing the "}" character" might happen when there are many Lua code blocks inlined. thanks Andreas Lubbe for the report.
    * bugfix: bogus "subrequests cycle" errors might occur with nginx 1.9.5+ due to the recent changes in the nginx core.
    * bugfix: ngx.req.get_uri_args/ngx.req.get_post_args: avoided allocating a zero-size buffer in the nginx memory pool since it might cause problems. thanks Chuanwen Chen for the report and patch.
    * bugfix: modifying the built-in header "X-Forwarded-For" via ngx.req.set_header() or ngx.req.clear_header() might not take effect in some parts of the nginx core (like $proxy_add_x_forwarded_for). thanks aviramc for the patch.
    * bugfix: we lacked detailed context info in error messages due to use of disabled Lua API in body_filter_by_lua*. thanks Dejiang Zhu for the patch.
    * bugfix: fixed a potential data alignment issue in the ngx.var setter API.
    * bugfix: we had data alignment issues in the subrequest API which can explode on systems like ARM. thanks Stefan Parvu for providing the test environment.
    * bugfix: there was a data alignment issue in the tcpsock:setkeepalive() implementation which might lead to crashes on ARM systems. thanks Stefan Parvu for the report.
    * bugfix: fixed C compiler warnings "comparison between signed and unsigned integer expressions" on Windows.
    * optimize: avoided allocating in the nginx request memory pool in stream-typed cosockets' receive*() methods. thanks Lourival Vieira Neto for the patch.
    * optimize: reduced memory allocations in stream-typed cosockets. thanks Dejiang Zhu for the patch.
    * avoided allocating the host name buffer when getting peers from the connection pool.
    * recycled the stream cosockets' request cleanup records.
    * doc: documented the minimum size threshold in lua_shared_dict. thanks mlr3000 for the original patch.
* upgraded the lua-resty-core library to 0.1.3.
    * Makefile: added support for relative paths in "LUA_LIB_DIR".
    * minor code adjustments from Aapo Talvensaari.
* upgraded the ngx_headers_more module to 0.29.
    * bugfix: changing the built-in header "X-Forwarded-For" via more_set_input_headers or more_clear_input_headers might not take effect in some parts of the nginx core (like $proxy_add_x_forwarded_for).
* upgraded the lua-resty-redis library to 0.22.
    * tweaked Makefile to allow relative paths in "LUA_LIB_DIR" when "DESTDIR" is not specified.
    * optimize: moved string concatenation for the Redis request construction onto the C land (taking advantage of the feature that cosockets' send method accepts a table of strings). thanks Dejiang Zhu for the patch.
    * optimize: minor optimizations from Aapo Talvensaari.
* upgraded resty-cli to 0.05.
    * bugfix: resty: nginx might report the error "The system cannot find the file specified" in "CreateFile()" on Windows XP. thanks cover_eye for the report.
* upgraded LuaJIT to v2.1-20151219: https://github.com/openresty/luajit2/tags
    * Makefile: ensure we always install the symbolic link for the "luajit" file.
    * imported Mike Pall's latest changes:
        * FFI: Fix SPLIT pass for CONV i64.u64.
        * x64 LJ_GC64: Fix stack growth in vararg function setup.
        * DynASM/x86: Add rdpmc instruction.
        * OSX: Switch to Clang as the default compiler.
        * iOS: Disable os.execute() when building for iOS >= 8.0.
        * x86/x64: Disassemble AVX AVX2 instructions.
        * DynASM/x86: Add AVX and AVX2 opcodes.
        * DynASM/x86: Add AES-NI opcodes.
        * DynASM/x86: Restrict shld/shrd to operands with same width.
        * DynASM/x86: Fix some SSE instruction templates.
        * Fix pairs() recording.
        * FFI: Fix ipairs() recording.
        * Drop marks from replayed instructions when sinking.

The HTML version of the change log with lots of helpful hyper-links can be browsed here: https://openresty.org/#ChangeLog1009007

OpenResty (aka.
ngx_openresty) is a full-fledged web platform by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: https://qa.openresty.org

Enjoy! And Merry Christmas if you celebrate it.

Best regards, -agentzh

From nginx-forum at forum.nginx.org Sat Dec 26 11:42:44 2015 From: nginx-forum at forum.nginx.org (Nginx Forum) Date: Sat, 26 Dec 2015 06:42:44 -0500 Subject: Nginx redirect/rewrite Rule Message-ID: <4dd4d3ce38c2c229c1fcf625c6ff1360.NginxMailingListEnglish@forum.nginx.org>

Nginx redirect/rewrite Rule

Old Url: http://www.mydomain.com.br/forum/elsword-downloads-de-cheats-utilitarios/2369461-26-04-revolution-trainer-elsword.html

1. /forum/ - the folder where vbulletin is installed
2. /elsword-downloads-de-cheats-utilitarios/ - the forum name part of the topic URL, which will not appear in xenforo
3. 2369461 - the ID that has to appear in the xenforo url
4. -26-04-revolution-trainer-elsword.html - the topic name; whatever is accessed here does not matter to xenforo, because with the right ID it corrects the topic name in the URL.

New Url: http://www.mydomain.com.br/threads/26-04-revolution-trainer-elsword.2369461/

1. /threads/ - xenforo automatically adds this to the address when accessing a topic.
2. 26-04-revolution-trainer-elsword - the topic name; even if it is wrong, the xenforo system corrects it
3.
2369461 - Most importantly, the topic ID

More examples:

Old Url: http://www.mydomain.com.br/forum/resolvidos/2343690-reposicao-dos-meus-posts.html
New Url: http://www.mydomain.com.br/threads/reposição-dos-meus-posts.2343690/

Old Url: http://www.mydomain.com.br/forum/league-of-legends-downloads-de-cheats-utilitarios/2516190-drophack-1-3-funcional-apenas-por-30-dias-aproveite-o-mais-rapido-possivel-veja-ma.html
New Url: http://www.mydomain.com.br/threads/drophack-1-3-funcional-apenas-por-30-dias-aproveite-o-mais-rapido-possivel-veja-ma.2516190/

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263698,263698#msg-263698

From igal at lucee.org Sun Dec 27 06:14:01 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Sat, 26 Dec 2015 22:14:01 -0800 Subject: Building nginx from Source on Windows Message-ID: <567F81A9.8010902@lucee.org>

Hi,

I posted a question at http://stackoverflow.com/questions/34478016/building-nginx-from-source-on-windows -- copied below:

I'm trying to build nginx from source on Windows.
I got the following done:

1) Installed mingw, gcc, and msys

2) Downloaded the nginx source code

3) Ran the following in the msys console from the nginx source folder:

$ auto/configure --with-cc=gcc --without-http_rewrite_module --without-http_gzip_module (output omitted)
$ make -f objs/Makefile (output omitted)
$ make install -f objs/Makefile (output omitted)

This produced the nginx.exe file in the objs folder, but when I tried to run it I get the following error:

$ nginx.exe
nginx: [alert] could not open error log file: CreateFile() "/usr/local/nginx/logs/error.log" failed (3: The system cannot find the path specified)
2015/12/26 21:49:25 [emerg] 10200#9700: CreateFile() "/usr/local/nginx/conf/nginx.conf" failed (3: The system cannot find the path specified)

But when I run "ls /usr/local/nginx" I see that the conf and logs directories are there, and the conf directory has some files in it:

$ ls -l conf
total 34
-rw-r--r-- 1 Admin Administrators 1077 Dec 26 21:30 fastcgi.conf
-rw-r--r-- 1 Admin Administrators 1077 Dec 26 21:30 fastcgi.conf.default
-rw-r--r-- 1 Admin Administrators 1007 Dec 26 21:30 fastcgi_params
-rw-r--r-- 1 Admin Administrators 1007 Dec 26 21:30 fastcgi_params.default
-rw-r--r-- 1 Admin Administrators 2837 Dec 26 21:30 koi-utf
-rw-r--r-- 1 Admin Administrators 2223 Dec 26 21:30 koi-win
-rw-r--r-- 1 Admin Administrators 3957 Dec 26 21:30 mime.types
-rw-r--r-- 1 Admin Administrators 3957 Dec 26 21:30 mime.types.default
-rw-r--r-- 1 Admin Administrators 2656 Dec 26 21:30 nginx.conf
-rw-r--r-- 1 Admin Administrators 2656 Dec 26 21:30 nginx.conf.default
-rw-r--r-- 1 Admin Administrators 636 Dec 26 21:30 scgi_params
-rw-r--r-- 1 Admin Administrators 636 Dec 26 21:30 scgi_params.default
-rw-r--r-- 1 Admin Administrators 664 Dec 26 21:30 uwsgi_params
-rw-r--r-- 1 Admin Administrators 664 Dec 26 21:30 uwsgi_params.default
-rw-r--r-- 1 Admin Administrators 3610 Dec 26 21:30 win-utf

I tried to run "$ chmod -R 0777 conf" but that did not seem to make a
difference.

What am I doing wrong? And how can I change it so that the logs and conf files will be searched in the local directory of nginx.exe and not in the /usr/local/nginx folder (which I found after some searching at msys\1.0\local\nginx)?

Thanks!

-- Igal Sapir Lucee Core Developer Lucee.org

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From shahzaib.cb at gmail.com Sun Dec 27 18:36:29 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 27 Dec 2015 23:36:29 +0500 Subject: Slow mp4 buffer over SSL !! Message-ID:

Hi,

We've shifted our static content to SSL recently and found that mp4 streaming is drastically slow over SSL (around 90KBps on a 4Mbps connection), and if we test the same video over HTTP it gives us the full 400+KBps speed. Here is the SSL config (virtual-ssl.conf):

server {
    listen 443 spdy;
    ssl on;
    server_name cw004.domain.net www.cw004.domain.net;
    ssl_certificate /etc/ssl/certs/domain/domain-combined.crt;
    ssl_certificate_key /etc/ssl/certs/domain/domain.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4';
    ssl_prefer_server_ciphers on;

    location / {
        root /videos;
        index index.html index.htm index.php;
    }

    location ~ \.(flv)$ {
        flv;
        root /videos;
        expires 7d;
        include hotlink.inc;
    }

    include thumbs.inc;

    #location ~ \.(jpg)$ {
    #    root /videos;
    #    try_files $uri /files/thumbs/no_thumb.jpg;
    #}

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1y;
        log_not_found off;
    }

    location ~ \.(mp4)$ {
        mp4;
        mp4_buffer_size 4M;
        mp4_max_buffer_size 10M;
        expires 1y;
        add_header Cache-Control "public";
        root /videos;
        include hotlink.inc;
    }

    # pass the PHP scripts to FastCGI server listening on unix:/var/run/www.socket
    location ~ \.php$ {
        root /videos;
        fastcgi_pass unix:/var/run/www.socket;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}

-------------------------

Is there any optimization being missed for SSL?

Thanks.
Shahzaib

-------------- next part -------------- An HTML attachment was scrubbed...
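
For throughput problems like the one described above, two standard TLS-side knobs in the ngx_http_ssl_module are worth checking before anything else. The values below are illustrative starting points for a download-heavy server, not a guaranteed fix for this particular setup:

```nginx
server {
    listen 443 ssl;

    # nginx encrypts output in chunks of ssl_buffer_size (16k by default).
    # For large sequential transfers such as mp4 streaming, a bigger buffer
    # reduces per-record overhead; smaller buffers favor time-to-first-byte.
    ssl_buffer_size 64k;

    # Reuse TLS sessions so repeated requests skip the full handshake.
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # Prefer an AES-GCM suite; on CPUs with AES-NI it is much cheaper
    # than CBC-plus-HMAC ciphers.
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5;
}
```

It is also worth confirming with "openssl speed -evp aes-128-gcm" on the server that the OpenSSL build actually uses hardware AES; a software-only AES path can easily cap a single connection at the kind of rate reported above.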
URL:

From igal at lucee.org Sun Dec 27 19:44:16 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Sun, 27 Dec 2015 11:44:16 -0800 Subject: Building nginx from Source on Windows In-Reply-To: <567F81A9.8010902@lucee.org> References: <567F81A9.8010902@lucee.org> Message-ID: <56803F90.6050107@lucee.org>

update: so I ran the following command:

auto/configure --with-cc=gcc --prefix= --conf-path=conf/nginx.conf --pid-path=nginx.pid --http-log-path=logs/access.log --error-log-path=logs/error.log --without-http_gzip_module --without-http_rewrite_module

and it creates the objs/Makefile (35kb) and Makefile (1kb) and directory structure of the sources. there is also an autoconf.err file with a single line in it: checking for gcc -pipe switch

running "make -f objs/Makefile" populates the directory structure with .o files, and produces nginx.exe which is about 3.14 MB (3,293,712 bytes). can I just use that file or do I still need to run "make install"?

running "make install -f objs/Makefile" generates an error:

test -d '' || mkdir -p ''
mkdir: `': No such file or directory
make: *** [install] Error 1

running the nginx.exe file creates two processes named nginx, but requests to localhost:8080 (I changed the port in nginx.conf) hang, and error.log shows the following:

2015/12/27 11:22:48 [alert] 10032#9464: GetQueuedCompletionStatus() returned operation error (1236: The network connection was aborted by the local system)
2015/12/27 11:22:48 [alert] 10032#9464: *1 connection already closed while waiting for request, client: 127.0.0.1, server: 0.0.0.0:8080

On 12/26/2015 10:14 PM, Igal @ Lucee.org wrote:
> Hi,
>
> I posted a question at
> http://stackoverflow.com/questions/34478016/building-nginx-from-source-on-windows
> -- copied below:
>
> I'm trying to build nginx from source on Windows.
I got the following > done: > > 1) Installed mingw, gcc, and msys > > 2) Downloaded the nginx source code > > 3) Ran the following in the msys console from the nginx source folder: > > |$ auto/configure --with-cc=gcc --without-http_rewrite_module > --without-http_gzip_module (output omitted) $ make -f objs/Makefile > (output omitted) $ make install -f objs/Makefile (output omitted) | > > This produced the nginx.exe file in the objs folder, but when I tried > to run it I get the following error: > > |$ nginx.exe nginx: [alert] could not open error log file: CreateFile() > "/usr/local/nginx/logs/error.log" failed (3: The system cannot find > the path specified) 2015/12/26 21:49:25 [emerg] 10200#9700: > CreateFile() "/usr/local/nginx/conf/nginx.conf" failed (3: The system > cannot find the path specified) | > > But when I run|ls /usr/local/nginx|I see that > the|conf|and|logs|directories are there, and the conf directory has > some files in it: > > |$ ls -l conf total 34 -rw-r--r-- 1 Admin Administrators 1077 Dec 26 > 21:30 fastcgi.conf -rw-r--r-- 1 Admin Administrators 1077 Dec 26 21:30 > fastcgi.conf.default -rw-r--r-- 1 Admin Administrators 1007 Dec 26 > 21:30 fastcgi_params -rw-r--r-- 1 Admin Administrators 1007 Dec 26 > 21:30 fastcgi_params.default -rw-r--r-- 1 Admin Administrators 2837 > Dec 26 21:30 koi-utf -rw-r--r-- 1 Admin Administrators 2223 Dec 26 > 21:30 koi-win -rw-r--r-- 1 Admin Administrators 3957 Dec 26 21:30 > mime.types -rw-r--r-- 1 Admin Administrators 3957 Dec 26 21:30 > mime.types.default -rw-r--r-- 1 Admin Administrators 2656 Dec 26 21:30 > nginx.conf -rw-r--r-- 1 Admin Administrators 2656 Dec 26 21:30 > nginx.conf.default -rw-r--r-- 1 Admin Administrators 636 Dec 26 21:30 > scgi_params -rw-r--r-- 1 Admin Administrators 636 Dec 26 21:30 > scgi_params.default -rw-r--r-- 1 Admin Administrators 664 Dec 26 21:30 > uwsgi_params -rw-r--r-- 1 Admin Administrators 664 Dec 26 21:30 > uwsgi_params.default -rw-r--r-- 1 Admin Administrators 3610 Dec 26 > 
21:30 win-utf |
>
> I tried to run "$ chmod -R 0777 conf" but that did not seem to make a
> difference.
>
> What am I doing wrong? And how can I change it so that the logs and
> conf files will be searched in the local directory of nginx.exe and
> not in the /usr/local/nginx folder (which I found after some searching
> at msys\1.0\local\nginx)?
>
> Thanks!
>
> --
>
> Igal Sapir
> Lucee Core Developer
> Lucee.org

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From info at elsitar.com Mon Dec 28 02:52:51 2015 From: info at elsitar.com (Xavier Cardil Coll) Date: Mon, 28 Dec 2015 03:52:51 +0100 Subject: Nginx catching all domains without wildcard Message-ID:

My Nginx config is catching all subdomains without specifying a wildcard. I have created a special config for each subdomain, but it seems that all subdomains pass through the main domain configuration. I have discovered this by removing the subdomain configuration files from nginx.conf and watching how it still catches all the subdomains: when I send a request to uk.domain.com, instead of Nginx catching the subdomain configuration, it goes through the main domain configuration. This is causing trouble with applying mod_pagespeed individually to each of the sites and also managing the GA universal code.

The config for the main domain is this: http://pastebin.com/4AvGgRCc

http config: http://pastebin.com/CkJLHyHH

If I haven't set up a wildcard, how come the main domain config is catching all subdomains??

Thanks

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From reallfqq-nginx at yahoo.fr Mon Dec 28 12:41:16 2015 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 28 Dec 2015 13:41:16 +0100 Subject: Nginx catching all domains without wildcard In-Reply-To: References: Message-ID:

In the configuration you provided, there are 3 server blocks:
- 80: www.domain.com
- 443 (default server): domain.com
- 127.0.0.1:8081: elsitar.com

There is no server block defined for uk.domain.com (your main config file has an include directive for a /etc/nginx/uk.domain.com file, which is commented out though), thus a default server block will be selected.

On port 443, for all IP addresses, the domain.com server block will be chosen as it is defined as being the default. On port 80, there is no default server explicitly defined, thus the first (and only) one will be used. On 8081, for the localhost IP address 127.0.0.1, as for 80, there is no explicit default server, so the first (and only) server block defined for the IP:port pair will be selected as the default.

There is no magical behavior, please make sure you understand the docs. There is even a webpage dedicated to server names: http://nginx.org/en/docs/http/server_names.html

--- *B. R.*

On Mon, Dec 28, 2015 at 3:52 AM, Xavier Cardil Coll wrote:
> My Nginx config is catching all subdomains without specifying a wildcard.
> I have created an special config for each subdomain, but seems that all
> subdomains pass trough the main domain configuration. I have discovered
> this by removing the subdomains configuration files from nginx.conf and
> watching how it still catches all the subdomains, so when I send a request
> to uk.domain.com, instead of Nginx catching the subdomain configuration,
> goes through the main domain configuration. This is causing trouble with
> applying mod_pagespeed individually to each of the sites and also managing
> the GA universal code.
> The config for the main domain is this :
>
> http://pastebin.com/4AvGgRCc
>
> http config :
>
> http://pastebin.com/CkJLHyHH
>
> If I haven't setup a wildcard, how come the main domain config is catching
> > Thanks > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 28 14:53:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Dec 2015 17:53:03 +0300 Subject: Building nginx from Source on Windows In-Reply-To: <56803F90.6050107@lucee.org> References: <567F81A9.8010902@lucee.org> <56803F90.6050107@lucee.org> Message-ID: <20151228145303.GT74233@mdounin.ru> Hello! On Sun, Dec 27, 2015 at 11:44:16AM -0800, Igal @ Lucee.org wrote: > update: so I ran the following command: > > auto/configure --with-cc=gcc --prefix= --conf-path=conf/nginx.conf > --pid-path=nginx.pid --http-log-path=logs/access.log > --error-log-path=logs/error.log--without-http_gzip_module > --without-http_rewrite_module The "--with-select_module" option is missing, and that's what causes your troubles. [...] > populates the directory structure with .o files, and produces nginx.exe > which is about 3.14 MB (3,293,712 bytes). can I just use that file or do I > still need to run "make install"? running Running "make install" won't do anything good with an empty prefix. Just run the nginx.exe binary. See also this howto article: http://nginx.org/en/docs/howto_build_on_win32.html It describes how to build nginx using MS Visual C as currently done for official win32 builds. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Dec 28 16:25:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Dec 2015 19:25:03 +0300 Subject: SSI URI path decoding/encoding issue In-Reply-To: References: Message-ID: <20151228162503.GU74233@mdounin.ru> Hello! On Thu, Dec 24, 2015 at 11:20:04PM +0100, Martin Grotzke wrote: > we experienced an issue with SSIs (include virtual): nginx at first decodes > the URI path and encodes it before passing the request to the upstream. 
> If an URI path segment contains %2F (encoded slash), the decoded slash is not
> encoded again so that the request sent to the upstream differs from the
> original request. Other characters are not encoded as expected as well (at
> least the backtick, I've not checked it for others).
>
> I've described this issue more detailed here (also with debug logs):
> https://inoio.de/blog/2015/12/22/integrating-SCSs-be-careful-with-SSIs/
>
> Does it make sense to submit an issue for this, or is there an explanation
> for this behavior / is it intentional?

This behaviour is somewhat intentional. Internal handling of URIs in nginx is in an unencoded form, and more or less the only sensible handling of %2F when you need it unencoded (e.g., to properly check a location prefix or to map to a file) is to decode it to "/". And after it's unencoded there is no way to encode it back properly if you'll need to send it somewhere else with proxy_pass.

As a result, when you modify an URI with %2F within nginx by using rewrite (or construct such an URI with SSI), the special meaning of %2F is lost. It's only preserved if you pass the original client URI unmodified - without any rewrites and using proxy_pass without an URI component.

-- Maxim Dounin http://nginx.org/

From nick at lifebloodnetworks.com Mon Dec 28 19:43:00 2015 From: nick at lifebloodnetworks.com (Nicholas J Ingrassellino) Date: Mon, 28 Dec 2015 14:43:00 -0500 Subject: Reverse Proxy File Upload Failure Message-ID: <568190C4.6080408@lifebloodnetworks.com>

I am running Nginx v1.8.0 to serve all my static files and node.js to do the dynamic stuff. One of the things Nginx needs to do is proxy files over to node.js, which stores the file. node.js seems to get ~26K of the file and then nothing (not even sure the connection closes when the data "finishes" sending).
My configuration is so:

location /attachment_upload/ {
    client_body_temp_path /tmp/;
    client_body_in_file_only clean;
    client_body_buffer_size 256k;
    client_max_body_size 1g;
    proxy_set_header X-FILE $request_body_file;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_pass http://10.10.1.20:8090/attachment_upload/;
    expires epoch;
}

I had set up curl to see if I could get more information. It returns right away (so it does not look like a time-out to me) without error. I have checked the Nginx error logs and do not see anything in there. Not sure what to check next... From igal at lucee.org Mon Dec 28 19:44:32 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Mon, 28 Dec 2015 11:44:32 -0800 Subject: Building nginx from Source on Windows In-Reply-To: <20151228145303.GT74233@mdounin.ru> References: <567F81A9.8010902@lucee.org> <56803F90.6050107@lucee.org> <20151228145303.GT74233@mdounin.ru> Message-ID: <56819120.5090609@lucee.org> Thank you, Maxim! I prefer to build with gcc if that is possible. MS Visual C is a mess IMO. As a Java developer I'm used to more intuitive tools and options, I guess. I will try to add --with-select_module and hopefully that will do the trick. I will report back once I have some findings. On 12/28/2015 6:53 AM, Maxim Dounin wrote: > Hello! > > On Sun, Dec 27, 2015 at 11:44:16AM -0800, Igal @ Lucee.org wrote: > >> update: so I ran the following command: >> >> auto/configure --with-cc=gcc --prefix= --conf-path=conf/nginx.conf >> --pid-path=nginx.pid --http-log-path=logs/access.log >> --error-log-path=logs/error.log --without-http_gzip_module >> --without-http_rewrite_module > The "--with-select_module" option is missing, and that's what > causes your troubles. > > [...] > >> populates the directory structure with .o files, and produces nginx.exe >> which is about 3.14 MB (3,293,712 bytes). can I just use that file or do I >> still need to run "make install"? > Running "make install" won't do anything good with an empty > prefix.
Just run the nginx.exe binary. > > See also this howto article: > > http://nginx.org/en/docs/howto_build_on_win32.html > > It describes how to build nginx using MS Visual C as currently > done for official win32 builds. > From mdounin at mdounin.ru Mon Dec 28 19:49:24 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Dec 2015 22:49:24 +0300 Subject: Building nginx from Source on Windows In-Reply-To: <56819120.5090609@lucee.org> References: <567F81A9.8010902@lucee.org> <56803F90.6050107@lucee.org> <20151228145303.GT74233@mdounin.ru> <56819120.5090609@lucee.org> Message-ID: <20151228194924.GV74233@mdounin.ru> Hello! On Mon, Dec 28, 2015 at 11:44:32AM -0800, Igal @ Lucee.org wrote: > Thank you, Maxim! > > I prefer to build with gcc if that is possible. MS Visual C is a mess IMO. > As a Java developer I'm used to more intuitive tools and options, I guess. > > I will try to add --with-select_module and hopefully that will do the trick. > I will report back once I have some findings. MinGW's gcc is expected to work fine as well. The difference is that you have to specify --with-cc=gcc (instead of cl) and run make instead of nmake. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Dec 28 21:50:23 2015 From: nginx-forum at forum.nginx.org (sumeetmaru) Date: Mon, 28 Dec 2015 16:50:23 -0500 (EST) Subject: nginx configuration with self signed certificates - getting error In-Reply-To: References: Message-ID: Please if anyone could take a look and reply? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263695,263727#msg-263727 From 2305958068 at qq.com Tue Dec 29 09:07:37 2015 From: 2305958068 at qq.com (=?gb18030?B?zOy608qvIC0g0Lu98MX0?=) Date: Tue, 29 Dec 2015 17:07:37 +0800 Subject: how to add in section. How do I do it? Thanks! Jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aapo.talvensaari at gmail.com Tue Dec 29 11:18:42 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Tue, 29 Dec 2015 13:18:42 +0200 Subject: how to add > > > in section. How do I do it? > Okay, I will do it for you: If you later need to remove it, it is quite simple: -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcgeopc at gmail.com Tue Dec 29 12:47:24 2015 From: pcgeopc at gmail.com (Geo P.C.) Date: Tue, 29 Dec 2015 18:17:24 +0530 Subject: Through Nginx how can we hide a part of the url? Message-ID: We are using a WordPress application with Nginx as the web server. Currently we access the profile page as http://geo.mysite.com/members/geouser/ and it is working fine. Can anyone please help me to accomplish the following: when we access the profile page, in WordPress it should go to the url *http://geo.mysite.com/members/geouser/*, but in the browser's address bar it should not display something like *http://geo.mysite.com/members/geouser123* or *http://geo.mysite.com/members/user123/* etc. So what we need is for the browser to either hide or change the real name, while inside the application it should still point correctly. Can anyone please help on it. Thanks Geo -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 29 13:17:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Dec 2015 16:17:03 +0300 Subject: how to add > > > in section. How do I do it? Assuming you want to do it using nginx, try something like this: sub_filter '' ''; See docs here for details: http://nginx.org/en/docs/http/ngx_http_sub_module.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Dec 29 13:58:32 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Dec 2015 16:58:32 +0300 Subject: nginx configuration with self signed certificates - getting error In-Reply-To: References: Message-ID: <20151229135832.GY74233@mdounin.ru> Hello!
On Mon, Dec 28, 2015 at 04:50:23PM -0500, sumeetmaru wrote: > Please if anyone could take a look and reply? The messages you are seeing suggest that you've failed to provide proper trusted certificates via the proxy_ssl_trusted_certificate and ssl_client_certificate files. An example configuration which demonstrates how to configure mutual authentication using a self-signed certificate:

server {
    listen 8080;
    location / {
        proxy_pass https://127.0.0.1:8443/empty;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate test.crt;
        proxy_ssl_name "test.example.com";
        proxy_ssl_certificate test.crt;
        proxy_ssl_certificate_key test.key;
    }
}

server {
    listen 8443 ssl;
    ssl_certificate test.crt;
    ssl_certificate_key test.key;
    ssl_verify_client on;
    ssl_client_certificate test.crt;
    location / {
        return 204;
    }
}

-- Maxim Dounin http://nginx.org/ From gaccardo at gmail.com Tue Dec 29 16:28:42 2015 From: gaccardo at gmail.com (Guido) Date: Tue, 29 Dec 2015 13:28:42 -0300 Subject: Help trying to understand a rewrite rule Message-ID: Hi everyone, I have 2 graphites running with uWSGI on two separate servers, both listening on port 3031. On a third server I also have nginx installed. In that nginx, I have this configuration:

upstream graphite { server 10.0.2.22:3031; }
upstream graphite2 { server 10.0.2.21:3031; }

server {
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass graphite;
    }
    location /graphite02 {
        rewrite /graphite02/(.+) /$1 break;
        include uwsgi_params;
        uwsgi_pass graphite2;
    }
}

My intention is: * Every request http://nginx_ip/ goes to the first graphite * Every request http://nginx_ip/graphite02/ goes to the second one BUT as / not /graphite02 My configuration doesn't do what I need; instead every request http://nginx_ip/graphite02/ goes to the FIRST graphite as / Also, I've tried modifying this: ... location /graphite02 { uwsgo_param PATH_INFO /; include uwsgi_params; uwsgi_pass graphite2; } ... With no success. Can you help me understand where my problem is?
Perhaps I misunderstood completely the way that rewrites work. Thanks! -- -- Guido Accardo -- "... What we know is a drop, what we ignore is the ocean ..." Isaac Newton -------------- next part -------------- An HTML attachment was scrubbed... URL: From aapo.talvensaari at gmail.com Tue Dec 29 17:02:09 2015 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Tue, 29 Dec 2015 19:02:09 +0200 Subject: how to add > > > > > > in section. How do I do it? > > Assuming you want to do it using nginx, try something like this: > > sub_filter '' > ''; > > See docs here for details: > > http://nginx.org/en/docs/http/ngx_http_sub_module.html > Here is another way to do it: https://developers.google.com/speed/pagespeed/module/filter-insert-ga It relies on an even bigger 3rd-party module that is also not included or compiled in by default. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ml at coldnorthadmin.com Wed Dec 30 04:23:56 2015 From: ml at coldnorthadmin.com (Laurent Dumont) Date: Tue, 29 Dec 2015 23:23:56 -0500 Subject: Nginx Load Balancer - Random file named "download" when using Chrome. Message-ID: <56835C5C.1020604@coldnorthadmin.com> Hi guys, We are using Nginx 1.4.6 as a Load Balancer for our website. Sometimes, when a user loads the website, the browser will start downloading a file named "download" which contains the HTTP Response headers. So far, what we know is that it only happens when the browser is Chrome and it's maybe once every 60-70 page loads. I've narrowed it down to the LB since loading the page directly on the app server never triggers a download. You also see the following error : "Resource interpreted as Document but transferred with MIME type application/octet-stream:" in the Chrome console. Any ideas?
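The mechanics behind this symptom can be sketched in Python: for its own responses nginx picks a Content-Type by file extension (via mime.types) and falls back to default_type when nothing matches, and the stock default is application/octet-stream, which Chrome treats as a file download. This is only an illustrative sketch; the mapping below is a tiny excerpt, not the real mime.types table.

```python
# Sketch of nginx's Content-Type selection (illustrative excerpt only).
# With the stock default_type of application/octet-stream, a URI with no
# known extension is offered to the browser as a download.
MIME_TYPES = {
    ".html": "text/html",
    ".css": "text/css",
    ".js": "application/javascript",
}
DEFAULT_TYPE = "application/octet-stream"  # nginx's out-of-the-box default_type

def content_type(uri):
    """Mimic the extension lookup nginx performs when serving a file."""
    for ext, ctype in MIME_TYPES.items():
        if uri.endswith(ext):
            return ctype
    return DEFAULT_TYPE  # no match: browsers save the response as a file
```

On a proxied setup the backend's own Content-Type header normally takes precedence, so an intermittent "download" usually means the backend sometimes omits or mangles that header rather than nginx misclassifying the URI.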
From 2305958068 at qq.com Wed Dec 30 06:50:24 2015 From: 2305958068 at qq.com (=?gb18030?B?zOy608qvIC0g0Lu98MX0?=) Date: Wed, 30 Dec 2015 14:50:24 +0800 Subject: proxy Message-ID: In my local machine browser, I want to open http://localhost/abc to proxy the real http://www.abc.com/ as well as proxying all child resources, such as http://localhost/abc/bcd/a.html to proxy http://www.abc.com/bcd/a.html http://localhost/abc/bc.html to proxy http://www.abc.com/bc.html ... How do I do this? Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Dec 30 07:47:33 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 30 Dec 2015 12:47:33 +0500 Subject: Nginx + Php-fpm Ownership issue !! Message-ID: Hi, We've installed nginx + php-fpm on FreeBSD OS and both of them are running as the www user / group. Here is the config : NGINX : user www ; PHP-FPM : [www] listen = /var/run/www.socket user = www group = www listen.owner = www listen.group = www listen.allowed_clients = 127.0.0.1 ------------------------------------------------ According to these configs, any files / directories should now be created with www:www ownership in the webroot directory by nginx/php-fpm, but that's not happening: newly uploaded files via nginx are being created with root:www ownership, due to which most uploads are failing.
Here is the permission failure on (/videos/files/logs/2015/12/30/full-145145901836a71.log) : 2015/12/30 12:03:49 [error] 976#0: *1502344 FastCGI sent in stderr: "PHP message: PHP Warning: file_put_contents(/videos/files/logs/2015/12/30/full-145145901836a71.log): failed to open stream: Permission denied in /videos/functions.php on line 759" while reading response header from upstream, client: 5.254.102.94, server: cw005.videos.com, request: "POST /actions/file_uploader.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/www.socket:", host: " cw005.videos.com", referrer: "http://domain.com/upload" ----------------------- Now if I check the ownership of the newly created "/videos/files/logs/2015/12/30/" by the webserver, it is as follows : root:www /videos/files/logs/2015/12/30/ It should have been www:www. What are we doing wrong? Please help. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From artemrts at ukr.net Wed Dec 30 07:59:48 2015 From: artemrts at ukr.net (wishmaster) Date: Wed, 30 Dec 2015 09:59:48 +0200 Subject: Nginx + Php-fpm Ownership issue !! In-Reply-To: References: Message-ID: <1451462262.194733307.1pxdh8tv@frv34.fwdcdn.com> Hi, > > Hi, > > > We've installed nginx + php-fpm on FreeBSD OS and both of them are listening on www user / group. Here is the config : > > NGINX : > > user www ; > > > > PHP-FPM : > > > [www] > listen = /var/run/www.socket > user = www > group = www > listen.owner = www > listen.group = www > listen.allowed_clients = 127.0.0.1 > I think you have a mistake in the ownership settings. Below is my config for an nginx + php-fpm setup:
;;;;;;;;;;;;;;;;;;;;;;;; MY pool for Joomla CMS ;;;;;;;;;; [joomla1] user = www-joomla1 group = www-joomla1 listen = /var/run/php-fpm-joomla1.sock listen.owner = www-joomla1 listen.group = www listen.mode = 0660 From shahzaib.cb at gmail.com Wed Dec 30 08:37:16 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 30 Dec 2015 13:37:16 +0500 Subject: Nginx + Php-fpm Ownership issue !! In-Reply-To: <1451462262.194733307.1pxdh8tv@frv34.fwdcdn.com> References: <1451462262.194733307.1pxdh8tv@frv34.fwdcdn.com> Message-ID: Thanks for reply. However, our developer just notified us that the directory with root owner was created by a cron which ran by user root and created that issue though I've slightly modified nginx user directive with following : former user www ; later user www www; Regards. Shahzaib On Wed, Dec 30, 2015 at 12:59 PM, wishmaster wrote: > Hi, > > > > Hi, > > > > > > We've installed nginx + php-fpm on FreeBSD OS and both of them are > listening on www user / group. Here is the config : > > > > NGINX : > > > > user www ; > > > > > > > > PHP-FPM : > > > > > > [www] > > listen = /var/run/www.socket > > user = www > > group = www > > listen.owner = www > > listen.group = www > > listen.allowed_clients = 127.0.0.1 > > > > I think you have mistake in the owner ownership. > Below my config for nginx + php-fpm bundle.. > > ;;;;;;;;;;;;;;;;;;;;;;;; MY pool for Joomla CMS ;;;;;;;;;; > > [joomla1] > > user = www-joomla1 > group = www-joomla1 > listen = /var/run/php-fpm-joomla1.sock > > listen.owner = www-joomla1 > listen.group = www > listen.mode = 0660 > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at ruby-forum.com Wed Dec 30 08:42:46 2015 From: lists at ruby-forum.com (Tony Tong) Date: Wed, 30 Dec 2015 09:42:46 +0100 Subject: gzip compression not working In-Reply-To: References: Message-ID: I found a free online service to minify js http://www.online-code.net/minify-js.html and compress css http://www.online-code.net/minify-css.html, so it will reduce the size of the web page. -- Posted via http://www.ruby-forum.com/. From reallfqq-nginx at yahoo.fr Wed Dec 30 10:52:05 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 30 Dec 2015 11:52:05 +0100 Subject: Nginx Load Balancer - Random file named "download" when using Chrome. In-Reply-To: <56835C5C.1020604@coldnorthadmin.com> References: <56835C5C.1020604@coldnorthadmin.com> Message-ID: nginx serves content, but does not generate it. If users download anything, it is either related to the application or to the client (browser module?). I suggest you check both sides to find where/how this URI is generated and why it is accessed. The incorrect MIME type means a backend has set an incorrect Content-Type header, or in its absence nginx has been configured to serve this URI as application/octet-stream (cf. http://nginx.org/en/docs/http/ngx_http_core_module.html#default_type). For its content, as stated before, nginx does not generate it by itself: you will need to find its source. Happy debugging, --- *B. R.* On Wed, Dec 30, 2015 at 5:23 AM, Laurent Dumont wrote: > Hi guys, > > We are using Nginx 1.4.6 as a Load Balancer for our website. Sometimes, > when a user loads the website, the browser will start downloading a file > named "download" which contains the HTTP Response headers. So far, what we > know is that it only happens when the browser is Chrome and it's maybe once > every 60-70 page loads. I've narrowed it down to the LB since loading the > page directly on the app server never triggers a download.
> > You also see the following error : "Resource interpreted as Document but > transferred with MIME type application/octet-stream:" in the Chrome console. > > Any ideas? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Dec 30 13:20:27 2015 From: nginx-forum at forum.nginx.org (Cugar15) Date: Wed, 30 Dec 2015 08:20:27 -0500 (EST) Subject: smtp proxy with postfix In-Reply-To: References: Message-ID: <3f121cd7ccfdd48ecdbed21ac69b80b1.NginxMailingListEnglish@forum.nginx.org> Well, here we go again... somehow, I'm not getting this smtp proxy to work with nginx. I moved to haproxy, and this combination works ok. Creating a tcp connection passes over to postfix and the postfix prompt is seen using a telnet connection - and all works just fine. However, I'd like to stick with nginx if possible....actually if possible at all! Here are my findings - and maybe somebody can help to confirm or disagree: 1) Xclient = on will basically bypass SASL authorization in postfix. Postfix/SASL will assume that the message is already authenticated. All the auth login commands are basically executed 2) Xclient = off will not trigger any SASL authentication in postfix. Somehow, it seems that the credentials are not forwarded to postfix Is this really the expected behaviour? IMAP behaviour is completely different. Here the authentication works just fine... Comments appreciated, Cugar15 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263559,263783#msg-263783 From nginx-forum at forum.nginx.org Wed Dec 30 14:36:28 2015 From: nginx-forum at forum.nginx.org (Parzip) Date: Wed, 30 Dec 2015 09:36:28 -0500 (EST) Subject: https redirection not working correctly Message-ID: Hello!
I am trying to set up nginx to - switch from http traffic to https - send all https traffic to my odoo backend on port 8069 This is already working for different subdomains, but not for the domain itself. http://(www.)subdomain.domain.ch => https://(www.)subdomain.domain.ch http://(www.)domain.ch => http://(www.)domain.ch, backend is being loaded but not secured 1) Why is domain.ch not being redirected to https://domain.ch? 2) I would like to set up the let's encrypt ssl renewal script described here: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-14-04 For this I need to put a file into the webroot folder, but I don't know how to define this folder... Thank you for your help. This is my "odoo" file in sites-available: ## odoo backend ## upstream odoo { server 127.0.0.1:8069; } ## https site## server { listen 443 default; server_name *.xxxxx.ch xxxxx.ch www.xxxxx.ch; # root /usr/share/nginx/html; # index index.html index.htm; # log files access_log /var/log/nginx/odoo-access.log; error_log /var/log/nginx/odoo-error.log; # ssl files ssl on; ssl_certificate /etc/letsencrypt/live/xxxxx.ch/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/xxxxx.ch/privkey.pem; keepalive_timeout 60; # limit ciphers ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL; # proxy buffers proxy_buffers 16 64k; proxy_buffer_size 128k; ## default location ## location / { proxy_pass http://odoo; # force timeouts if the backend dies proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; # set headers proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; } # cache some static data in memory for 60mins location ~* /web/static/ { proxy_cache_valid 200 60m; proxy_buffering on; expires 864000; proxy_pass http://odoo;
} } ## http redirects to https ## server { listen 80; server_name *.xxxxx.ch www.xxxxx.ch xxxxx.ch; # Strict Transport Security add_header Strict-Transport-Security max-age=2592000; rewrite ^/.*$ https://$host$request_uri? permanent; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263786,263786#msg-263786 From ahutchings at nginx.com Wed Dec 30 14:43:27 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Wed, 30 Dec 2015 14:43:27 +0000 Subject: mistserver features in nginx In-Reply-To: References: <4536183.qiJKzNUTAD@vbart-laptop> Message-ID: <5683ED8F.9040501@nginx.com> Hi Zenny, On 20/12/15 17:08, Zenny wrote: > On 12/20/15, Valentin V. Bartenev wrote: >> On Sunday 20 December 2015 10:44:03 Zenny wrote: >>> Hi, >>> >>> Just reading the following two documents and the first link refers to >>> some of the nginx features. >>> >>> https://mistserver.org/comparison >>> https://mistserver.org/guides/ProFeatures_2.4.pdf >>> >>> But there are claims which I think are incorrect, like the recording option >>> which the nginx_rtmp module already has >>> (https://github.com/arut/nginx-rtmp-module/wiki/Directives#record) but >>> the first document states it does not have! >>> >>> Can you add more to this thread about what the first document claims nginx "does >>> not have", but otherwise? >>> >> [..] >> >> It's worth adding nginx plus to the comparison, since it has enhancements >> in this area. >> >> See: >> >> http://nginx.org/en/docs/http/ngx_http_f4f_module.html >> http://nginx.org/en/docs/http/ngx_http_hls_module.html >> http://nginx.org/en/docs/http/ngx_http_mp4_module.html#mp4_limit_rate > > Thanks Valentin. It would be nicer if you could specify which features of > the mistserver (in the mistserver comparison list) are covered by the > above. > > Expecting more inputs flowing in!
I'm not an expert on our RTMP module but here are a few things I quickly found: * Live transcoding is possible using ffmpeg (as in our documented examples) * I'm not quite sure what is meant by multi-protocol DVR here. But through ffmpeg multiple protocols are supported. * As mentioned, live recording is supported * Stdin is a bad comparison. Web servers shouldn't take data via stdin. * MPEG-DASH is supported * "Progressive FLV Streaming" is an oxymoron. You are either doing progressive downloading or streaming. It looks like the comparison is rather old and open to interpretation. It is very much an apples to oranges comparison. Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From kworthington at gmail.com Wed Dec 30 14:51:28 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 30 Dec 2015 09:51:28 -0500 Subject: https redirection not working correctly In-Reply-To: References: Message-ID: Hello! After your upstream block, but before your server (https) block put something like this: server { listen 80; server_name xxxxxxxxx.ch www.xxxxxxxxx.ch; return 301 https://$server_name$request_uri; } ...and remove the ## http redirects to https ## at the bottom. Best regards, Kevin -- Kevin Worthington kworthington att gmail dat com http://kevinworthington.com/ http://twitter.com/kworthington On Wed, Dec 30, 2015 at 9:36 AM, Parzip wrote: > Hello! > > I am trying to set up nginx to > > - switch from http traffic to https > - send alls https traffic to my odoo backend on port 8069 > > This is already working for different subdomains, but not for the domain > itself. > > http://(www.)subdomain.domain.ch => https://(www.)subdomain.domain.ch > http://(www.)domain.ch => http://(www.)domain.ch, backend ist beeing > loaded > but not secured > > 1) Why is domain.ch not beeing redirected to https://domain.ch? 
> > 2) I would like to set up the let's encrypt ssl renewal script described > here: > > https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-14-04 > For this I need to put a file into the webroot folder, but I don't know how > to define this folder... > > Thank you for your help. > > > This is my "odoo" file in sites-available: > > > ## odoo backend ## > upstream odoo { > server 127.0.0.1:8069; > } > > ## https site## > server { > listen 443 default; > server_name *.xxxxx.ch xxxxx.ch www.xxxxx.ch; > # root /usr/share/nginx/html; > # index index.html index.htm; > > # log files > access_log /var/log/nginx/odoo-access.log; > error_log /var/log/nginx/odoo-error.log; > > # ssl files > ssl on; > ssl_certificate /etc/letsencrypt/live/xxxxx.ch/fullchain.pem; > ssl_certificate_key /etc/letsencrypt/live/xxxxx.ch/privkey.pem; > keepalive_timeout 60; > > # limit ciphers > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_prefer_server_ciphers on; > ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL; > > > # proxy buffers > proxy_buffers 16 64k; > proxy_buffer_size 128k; > > ## default location ## > location / { > proxy_pass http://odoo; > # force timeouts if the backend dies > proxy_next_upstream error timeout invalid_header http_500 http_502 > http_503 http_504; > proxy_redirect off; > > # set headers > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto https; > } > > # cache some static data in memory for 60mins > location ~* /web/static/ { > proxy_cache_valid 200 60m; > proxy_buffering on; > expires 864000; > proxy_pass http://odoo; > } > } > > ## http redirects to https ## > server { > listen 80; > server_name *.xxxxx.ch www.xxxxx.ch xxxxx.ch; > > # Strict Transport Security > add_header Strict-Transport-Security max-age=2592000; > rewrite ^/.*$ https://$host$request_uri? 
permanent; > } > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263786,263786#msg-263786 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Dec 30 15:04:27 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Dec 2015 18:04:27 +0300 Subject: smtp proxy with postfix In-Reply-To: <3f121cd7ccfdd48ecdbed21ac69b80b1.NginxMailingListEnglish@forum.nginx.org> References: <3f121cd7ccfdd48ecdbed21ac69b80b1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151230150427.GE74233@mdounin.ru> Hello! On Wed, Dec 30, 2015 at 08:20:27AM -0500, Cugar15 wrote: > Well, here we go again... somehow, I'm not getting this smtp proxy to work > with nginx. > I moved to haproxy, and this combination works ok. Creating a tcp connection > passes over to postfix > and the postfix prompt is seen using a telnet connection - and all works > just fine. > > However, I'd like to stick with nginx if possible....actually if possible at > all! If TCP proxying is enough in your case - you can consider using stream proxy module instead, see here: http://nginx.org/en/docs/stream/ngx_stream_core_module.html > Here are my findings - and maybe somebody can help to confirm or disagree: > > 1) Xclient = on will basically bypass sals authorithation in postfix. > Postfix/Sasl will assume that the message is already authenticated. > All the auth login commands are basically exectuted Yes. All information obtained by nginx is passed via the XCLIENT command. > 2) Xclient = off will not trigger any sals authentication in postfix. > Somehow, it seems, that the credentials are not forwarded to postfix Yes. Authentication is checked by auth_http script, and there is no need to do additional authentication to SMTP backend. 
As long as appropriate checks are done by auth_http, it's enough to allow your nginx IP to submit mail. If it's not enough in your particular setup (e.g., you want correct "Received" headers to be added), enable XCLIENT. > Is this really the expected behaviour? > IMAP behaviour is completely different. Here the authentication works just > fine... Yes, that's expected. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Dec 30 15:31:22 2015 From: nginx-forum at forum.nginx.org (Parzip) Date: Wed, 30 Dec 2015 10:31:22 -0500 (EST) Subject: https redirection not working correctly In-Reply-To: References: Message-ID: <33fb0360870d85228562b2cec450f64d.NginxMailingListEnglish@forum.nginx.org> Hello Kevin! Thank you very much, but it's still not working.... ## odoo backend ## upstream odoo { server 127.0.0.1:8069; } ## http redirects to https ## server { listen 80; server_name *.XXXXX.ch www.XXXXX.ch XXXXX.ch; return 301 https://$server_name$request_uri; } ## https site## server { listen 443 default; server_name *.XXXXX.ch XXXXX.ch www.XXXXX.ch; # root /usr/share/nginx/html; # index index.html index.htm; # log files access_log /var/log/nginx/odoo-access.log; error_log /var/log/nginx/odoo-error.log; # ssl files ssl on; ssl_certificate /etc/letsencrypt/live/XXXXX.ch/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/XXXXX.ch/privkey.pem; keepalive_timeout 60; # limit ciphers ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL; # proxy buffers proxy_buffers 16 64k; proxy_buffer_size 128k; ## default location ## location / { proxy_pass http://odoo; # force timeouts if the backend dies proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; # set headers proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; } # cache some 
static data in memory for 60mins location ~* /web/static/ { proxy_cache_valid 200 60m; proxy_buffering on; expires 864000; proxy_pass http://odoo; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263786,263792#msg-263792 From nginx-forum at forum.nginx.org Wed Dec 30 15:37:02 2015 From: nginx-forum at forum.nginx.org (Parzip) Date: Wed, 30 Dec 2015 10:37:02 -0500 (EST) Subject: https redirection not working correctly In-Reply-To: <33fb0360870d85228562b2cec450f64d.NginxMailingListEnglish@forum.nginx.org> References: <33fb0360870d85228562b2cec450f64d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2667cb1ceca9abb25bbe351d5cb2c014.NginxMailingListEnglish@forum.nginx.org> could this be related to the forwarding of the address? This is my domain registrar setting: XXXXX.ch => web alias to subdomain.XXXXX.ch subdomain.XXXXX.ch => A record to IP address of my server I need this setup because my odoo-erp selects the database according to my subdomains. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263786,263793#msg-263793 From nginx-forum at forum.nginx.org Wed Dec 30 15:49:06 2015 From: nginx-forum at forum.nginx.org (Cugar15) Date: Wed, 30 Dec 2015 10:49:06 -0500 (EST) Subject: smtp proxy with postfix In-Reply-To: <20151230150427.GE74233@mdounin.ru> References: <20151230150427.GE74233@mdounin.ru> Message-ID: <01c7af691716abd563f0e4688ba7d572.NginxMailingListEnglish@forum.nginx.org> HI Maxim, thanks for reply! 1) Interesting, I will look into the ngx_stream_core_module 2) I still have one question for Xclient = on - since I've been banging my head against it for days now: You state: All information obtained by nginx is passed via the XCLIENT command. Is this true for all credentials?? Like username and password as obtained with an 'auth login' sequence: Somehow, I can find: sasl_method=XCLIENT, sasl_username=username at mydomain.de in the postfix logfile. However, I cannot find the password information...
Thanks again, Cugar15 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263559,263794#msg-263794 From mdounin at mdounin.ru Wed Dec 30 15:55:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Dec 2015 18:55:58 +0300 Subject: smtp proxy with postfix In-Reply-To: <01c7af691716abd563f0e4688ba7d572.NginxMailingListEnglish@forum.nginx.org> References: <20151230150427.GE74233@mdounin.ru> <01c7af691716abd563f0e4688ba7d572.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151230155558.GG74233@mdounin.ru> Hello! On Wed, Dec 30, 2015 at 10:49:06AM -0500, Cugar15 wrote: > HI Maxim, thanks for reply! > > 1) Interesting, I will look into the ngx_stream_core_module > > 2) I still have one question for Xclient = on - since I'm banged my head > against it for days now: > > You state: All information obtained by nginx is passed via the XCLIENT > command. > > Is this true for all credentials?? Like username and password as optained > with a 'auth login' sequence: > Somehow, I can find: sasl_method=XCLIENT, sasl_username=username at mydomain.de > in the postfix logfile. > However, I cannot find the password information... Passwords are not present in XCLIENT and aren't expected to be. Authentication is done by nginx, and it is nginx's responsibility to check passwords; it does so using the auth_http service. Note well that in many authentication methods passwords aren't sent at all; appropriate hashes are used instead.
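The split Maxim describes (nginx verifies credentials via auth_http, while the backend only ever sees XCLIENT) can be sketched as a minimal auth server. The Auth-* header names follow nginx's documented mail auth_http protocol; the user table and backend address below are invented for illustration, not part of the thread.

```python
# Hedged sketch of an auth_http endpoint for nginx's mail proxy.
# nginx sends credentials as Auth-User / Auth-Pass request headers and
# expects Auth-Status / Auth-Server / Auth-Port response headers back
# (see ngx_mail_auth_http_module). The user store and backend address
# are made-up assumptions for this example.
USERS = {"username@mydomain.de": "secret"}  # assumed credential store

def auth_http_response(headers):
    """Return the response headers nginx expects from the auth server."""
    user = headers.get("Auth-User", "")
    password = headers.get("Auth-Pass", "")
    if USERS.get(user) == password:
        # On success, tell nginx which SMTP backend to hand the session to.
        return {"Auth-Status": "OK",
                "Auth-Server": "127.0.0.1",  # assumed Postfix address
                "Auth-Port": "25"}
    # On failure, nginx reports this message to the client;
    # Auth-Wait delays the next attempt.
    return {"Auth-Status": "Invalid login or password",
            "Auth-Wait": "3"}
```

This is why the password never appears in the Postfix log: by the time XCLIENT runs, the check has already happened on the nginx side.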
-- 
Maxim Dounin
http://nginx.org/

From francis at daoine.org  Wed Dec 30 17:05:02 2015
From: francis at daoine.org (Francis Daly)
Date: Wed, 30 Dec 2015 17:05:02 +0000
Subject: Nginx redirect/rewrite Rule
In-Reply-To: <4dd4d3ce38c2c229c1fcf625c6ff1360.NginxMailingListEnglish@forum.nginx.org>
References: <4dd4d3ce38c2c229c1fcf625c6ff1360.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20151230170502.GC19381@daoine.org>

On Sat, Dec 26, 2015 at 06:42:44AM -0500, Nginx Forum wrote:

Hi there,

> Old Url
> http://www.mydomain.com.br/forum/elsword-downloads-de-cheats-utilitarios/2369461-26-04-revolution-trainer-elsword.html
>
> 1. /forum/ - the folder where vbulletin is installed
> 2. /elsword-downloads-de-cheats-utilitarios/ - the forum name; it
>    will not appear in the xenforo url
> 3. 2369461 - the ID that has to appear in the xenforo url
> 4. -26-04-revolution-trainer-elsword.html - the topic name; whatever
>    name is requested, xenforo corrects it in the URL as long as the
>    ID is right
>
> New Url
> http://www.mydomain.com.br/threads/26-04-revolution-trainer-elsword.2369461/
>
> 1. /threads/ - xenforo automatically adds this to the address when a
>    topic is accessed
> 2. 26-04-revolution-trainer-elsword - the topic name; the xenforo
>    system corrects it even if it is wrong
> 3. 2369461 - most importantly, the topic ID

Every request below /forum/ will be redirected (if it matches this
pattern) or return 404 (if it does not). You may prefer to change the
404 to return a redirect to /threads/, for example.
===
location ^~ /forum/ {
    location ~ ^/forum/[^/]*/([0-9]*)-(.*)\.html$ {
        return 301 /threads/$2.$1/;
    }
    return 404;
}
===

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org  Wed Dec 30 17:24:31 2015
From: francis at daoine.org (Francis Daly)
Date: Wed, 30 Dec 2015 17:24:31 +0000
Subject: Reverse proxy to QNAP does not work
In-Reply-To: <81e6f6672d4c1029321ab3aeed97e890.NginxMailingListEnglish@forum.nginx.org>
References: <20151129105117.GU3351@daoine.org>
	<81e6f6672d4c1029321ab3aeed97e890.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20151230172431.GD19381@daoine.org>

On Wed, Dec 16, 2015 at 03:38:51PM -0500, no.1 wrote:

Hi there,

> thanks for the details. I guess the trick is first to bypass a QNAP
> internal redirect to the NAS GUI (if possible, or to integrate it in
> the first request), and second to adapt the login request the right
> way. (btw: I try to avoid changing the qnap service, because of the
> regular QNAP firmware updates.)

You have a good reason not to change the qnap service itself to be
below /nas/; that's fine. You're reluctant to use nas.home.example.com
to access the qnap service while using owncloud.home.example.com to
access the owncloud service -- that's fine too. You may find that
things do not work smoothly in that case, but that's what happens.

> For the first topic, the internal network traffic analysis from
> Firefox shows a lot of GET requests and some POSTs regarding the
> login:

I'm unsure what precisely you are doing to see these responses. It
will probably be easier for someone else to replicate what you are
seeing if you can write down exactly what you do and exactly what
you see.
For example, starting from a shell on the nginx server (the RasPi
machine), what responses do you get when you type each of

curl -v http://qnap:8080/
curl -v http://qnap:8080/redirect.html?count=0.xxxx
curl -v http://qnap:8080/cgi-bin/QTS.cgi?count=yyyyyy
curl -v http://qnap:8080/cgi-bin/
curl -v http://qnap:8080/photo

(using the urls that you mention)?

I suspect that some will be HTTP 301 or 302 redirects, and some will
be HTTP 200 responses with content which includes links to other
things.

nginx's proxy_redirect can modify the http headers, but not the body
content -- so whether the application works easily through a reverse
proxy system depends on the application. The results from the above
might hint at whether the application works easily through a reverse
proxy.

> So I tried it with http://qnap:8080/cgi-bin/login.html locally, which
> leads successfully to the login page http://qnap:8080/cgi-bin/.
> Changing the location part as suggested into:
>
>     location ^~ /nas/ {
>         proxy_pass http://qnap:8080/cgi-bin/;
>         proxy_set_header X-Real-IP $remote_addr;
>     }
>
> and trying the address https://example.com/nas/login.html results in
> a broken login page with a lot of 404 errors.

It is possible that

    location ^~ /nas/ {
        proxy_pass http://qnap:8080/;
    }

will be enough, where you would request /nas/cgi-bin/login.html; but
it all depends on your qnap application.

> Can anybody tell why it stays on external requests on content type
> "html"? Does it have to do something with Cache-Control?

One request gets one response; that response content may lead to more
requests for css and jpg and similar content. If the initial response
is 404, there is unlikely to be a follow-up request.
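To illustrate the headers-but-not-body point, the /nas/ location could
be sketched like this (a sketch only, reusing the qnap host and port
from this thread; whether rewriting the Location header is enough
depends entirely on the application):

```nginx
# Reverse-proxy /nas/ to the qnap box and rewrite Location: headers in
# 301/302 responses so redirects stay under the /nas/ prefix.
# proxy_redirect only touches response headers; links inside the HTML
# body are NOT rewritten by it.
location ^~ /nas/ {
    proxy_pass       http://qnap:8080/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect   http://qnap:8080/ /nas/;
}
```

If the 404s come from absolute links inside the returned html, no
header rewriting will help, and the application is one of the
difficult ones.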
Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org  Wed Dec 30 17:30:16 2015
From: francis at daoine.org (Francis Daly)
Date: Wed, 30 Dec 2015 17:30:16 +0000
Subject: proxy
In-Reply-To: 
References: 
Message-ID: <20151230173016.GE19381@daoine.org>

On Wed, Dec 30, 2015 at 02:50:24PM +0800, ??? - ??? wrote:

Hi there,

> In my local machine browser, I want to open http://localhost/abc to
> proxy the real http://www.abc.com/, and likewise to proxy all child
> resources, such as:
>
> http://localhost/abc/bcd/a.html to proxy http://www.abc.com/bcd/a.html
> http://localhost/abc/bc.html to proxy http://www.abc.com/bc.html
> ...
>
> How do I do this?

proxy_pass (http://nginx.org/r/proxy_pass).

You should reverse-proxy to a server that you control. Making things
work where the url hierarchies differ -- /abc/ in nginx being the
same as / in the upstream server -- can be difficult. But sometimes
it works.

==
location ^~ /abc/ {
    proxy_pass http://www.abc.com/;
}
==

and then try accessing http://localhost/abc/bc.html and see what you
get back.

f
-- 
Francis Daly        francis at daoine.org
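One follow-up sketch on the "can be difficult" case: when the
upstream's html contains absolute links that break behind the /abc/
prefix, the sub module can rewrite them in the response body (unlike
proxy_redirect, which only touches headers). This assumes nginx was
built with ngx_http_sub_module, and reuses the example hostname from
the thread.

```nginx
# Proxy /abc/ to the upstream and rewrite absolute links in html
# bodies so they point back through the /abc/ prefix.
location ^~ /abc/ {
    proxy_pass http://www.abc.com/;
    # ask the upstream for uncompressed bodies; sub_filter cannot
    # rewrite gzipped content
    proxy_set_header Accept-Encoding "";
    sub_filter      'http://www.abc.com/' '/abc/';
    sub_filter_once off;   # replace every occurrence, not just the first
}
```

This only catches links spelled exactly as the filtered string;
relative links and javascript-built urls are a separate problem.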