From radvenka at cisco.com Mon Feb 1 01:00:04 2016 From: radvenka at cisco.com (Radha Venkatesh (radvenka)) Date: Mon, 1 Feb 2016 01:00:04 +0000 Subject: "Connection Refused" with nginx as reverse proxy Message-ID: I have set up nginx as a reverse proxy with this configuration worker_processes 1; pid /run/nginx.pid; events { worker_connections 4096; } http { include /etc/nginx/default.d/proxy.conf; default_type application/octet-stream; sendfile on; tcp_nopush on; server_names_hash_bucket_size 128; server { listen 127.107.138.162:8080; server_name _; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; access_log on; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; location / { proxy_pass http://127.0.0.1:8080; } location /rhesos-server { proxy_pass http://10.154.181.43:8080/rhesos-server/rhesos/api/v1/ping; } location /rhesos { proxy_pass http://10.154.181.43:8080/rhesos-server/rhesos/api/v1/ping; } } } However, when I use the curl command to send a request using the proxy like this, I see a "Connection Refused" curl -v -x 'http://:@128.107.138.162:8080' http://10.154.181.43:8080/rhesos-server/rhesos/api/v1/ping * About to connect() to proxy 128.107.138.162 port 8080 (#0) * Trying 128.107.138.162... * Adding handle: conn: 0x7f99a980aa00 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x7f99a980aa00) send_pipe: 1, recv_pipe: 0 * Failed connect to 10.154.181.43:8080; Connection refused * Closing connection 0 curl: (7) Failed connect to 10.154.181.43:8080; Connection refused Whereas when I try this without the proxy, it succeeds .. curl -v http://10.154.181.43:8080/rhesos-server/rhesos/api/v1/ping * About to connect() to 10.154.181.43 port 8080 (#0) * Trying 10.154.181.43... * Adding handle: conn: 0x7fa3da003a00 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x7fa3da003a00) send_pipe: 1, recv_pipe: 0 * Connected to 10.154.181.43 (10.154.181.43) port 8080 (#0) > GET /rhesos-server/rhesos/api/v1/ping HTTP/1.1 > User-Agent: curl/7.30.0 > Host: 10.154.181.43:8080 > Accept: */* > < HTTP/1.1 200 OK * Server Apache-Coyote/1.1 is not blacklisted < Server: Apache-Coyote/1.1 < Cache-Control: no-cache < Content-Type: application/json;charset=UTF-8 < Transfer-Encoding: chunked < Date: Sat, 30 Jan 2016 00:59:28 GMT < * Connection #0 to host 10.154.181.43 left intact {"serviceName":"Rhesos","serviceType":"REQUIRED","serviceState":"online","message":"Healthy","lastUpdated":"2016-01-30T00:59:28.571Z","upstreamServices":[{"serviceName":"CommonIdentityScim","serviceType":"REQUIRED","serviceState":"online","message":"CommonIdentityScim is healthy","lastUpdated":"2016-01-30T00:59:20.641Z","upstreamServices":[],"baseUrl":"https://identity.webex.com"} Can someone tell what is wrong in my configuration? Thanks, Radha. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Feb 1 05:24:22 2016 From: nginx-forum at forum.nginx.org (rgrraj) Date: Mon, 01 Feb 2016 00:24:22 -0500 Subject: Absolute rather than relative times in expires directives In-Reply-To: <20160125150425.GD9449@mdounin.ru> References: <20160125150425.GD9449@mdounin.ru> Message-ID: Hi Thank you all for the suggestions and thoughts. I did double check the nginx version I am having and it is 1.9.2. 
root at ser2:~# nginx -V nginx version: nginx/1.9.2 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) configure arguments: --add-module=/tmp/nginx/ngx_http_substitutions_filter_module-master --add-module=/tmp/nginx/headers-more-nginx-module-0.26 --add-module=/tmp/nginx/ngx_pagespeed-release-1.9.32.3-beta --sbin-path=/usr/local/sbin/nginx --conf-path=/etc/nginx/nginx.conf root at ser2:~# but when I use the expires directive it shows invalid error same as before. Seems like to get the feature I need to update the nginx version. Thanks Govind Posted at Nginx Forum: https://forum.nginx.org/read.php?2,115406,264224#msg-264224 From francis at daoine.org Mon Feb 1 08:32:08 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 1 Feb 2016 08:32:08 +0000 Subject: "Connection Refused" with nginx as reverse proxy In-Reply-To: References: Message-ID: <20160201083208.GI19381@daoine.org> On Mon, Feb 01, 2016 at 01:00:04AM +0000, Radha Venkatesh (radvenka) wrote: Hi there, > server { > listen 127.107.138.162:8080; > However, when I use the curl command to send a request using the proxy like this, I see a "Connection Refused" > > > curl -v -x 'http://:@128.107.138.162:8080' http://10.154.181.43:8080/rhesos-server/rhesos/api/v1/ping Two things there: * the "listen" address and the "curl -x" address are not the same. * nginx is not a proxy Just make your curl request as normal to the nginx server. > Can someone tell what is wrong in my configuration? Possibly nothing. f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Feb 1 09:14:20 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 1 Feb 2016 09:14:20 +0000 Subject: Absolute rather than relative times in expires directives In-Reply-To: References: <20160125150425.GD9449@mdounin.ru> Message-ID: <20160201091420.GJ19381@daoine.org> On Mon, Feb 01, 2016 at 12:24:22AM -0500, rgrraj wrote: Hi there, > Thank you all for the suggestions and thoughts. > > I did double check the nginx version I am having and it is 1.9.2. > but when I use the expires directive it shows invalid error same as before. > > Seems like to get the feature I need to update the nginx version. All I can say is: it works for me. "http" section of nginx.conf: === http { map $time_iso8601 $expiresc { default "3h"; ~T22 "@00h00"; ~T23 "@00h00"; } server { listen 8080; location /a/ { expires $expiresc; } } } === Selected output from $ curl -i http://127.0.0.1:8080/a/b HTTP/1.1 200 OK Server: nginx/1.9.2 Date: Mon, 01 Feb 2016 09:09:06 GMT Expires: Mon, 01 Feb 2016 12:09:06 GMT Cache-Control: max-age=10800 If you get different output, it may be worth investigating properly what is happening. But it may be simpler to just deploy your tested-working version. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Feb 1 12:44:39 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Feb 2016 15:44:39 +0300 Subject: Absolute rather than relative times in expires directives In-Reply-To: References: <20160125150425.GD9449@mdounin.ru> Message-ID: <20160201124439.GM98618@mdounin.ru> Hello! On Mon, Feb 01, 2016 at 12:24:22AM -0500, rgrraj wrote: > Hi > > Thank you all for the suggestions and thoughts. > > I did double check the nginx version I am having and it is 1.9.2. 
> root at ser2:~# nginx -V > nginx version: nginx/1.9.2 > built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) > configure arguments: > --add-module=/tmp/nginx/ngx_http_substitutions_filter_module-master > --add-module=/tmp/nginx/headers-more-nginx-module-0.26 > --add-module=/tmp/nginx/ngx_pagespeed-release-1.9.32.3-beta > --sbin-path=/usr/local/sbin/nginx --conf-path=/etc/nginx/nginx.conf > root at ser2:~# > > but when I use the expires directive it shows invalid error same as before. This likely means that nginx you are checking for "nginx -V" is not the same as one that really works. This may happen if you have more than one nginx binary (e.g., one in /usr/, and another one in /usr/local), or you haven't restarted/upgraded running nginx binary after upgrading nginx on disk, see here: http://nginx.org/en/docs/control.html#upgrade -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Feb 1 13:59:35 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Feb 2016 16:59:35 +0300 Subject: upstream max_fails/fail_timeout logic? In-Reply-To: <56ACE552.7060208@gmail.com> References: <56ACE552.7060208@gmail.com> Message-ID: <20160201135935.GO98618@mdounin.ru> Hello! On Sat, Jan 30, 2016 at 05:31:14PM +0100, Thomas Nyberg wrote: > Hello I've set up an http proxy to a couple of other servers and am using > max_fails and fail_time in addition to having a proxy_read_timeout to force > failover in case of a read timeout. It seems to work fine, but I have two > questions. > > 1) I'm not totally understanding the logic. I can tell that if the timeout > hits the max number of times, it must sit out for the rest of the > fail_timeout time and then it seems to start working again at the end of the > time. But it also seems like it only needs to fail once (i.e. not a full set > of max_fails) to be removed from consideration again. But then it seems like > it doesn't fail again for a long time, it needs to fail max_fails times > again. How does this logic work exactly? After fail_timeout, one request will be passed to the server in question. The server is considered again alive if the request succeeds. If the request fails, nginx will wait for fail_timeout again. Note that this is actually consistent with max_fails counting logic as well, as failures are actually counted not with fail_timeout sliding window, but within a session with fail_timeout timeout. That is, fail_timeout defines minimal interval between failures for nginx to forget about previous failures. E.g., with max_fails=5 fail_timeout=10s, if a server fails 1 request each 5 seconds, it will be considered down after 5 failures happened during previous 20 seconds. > 2) Is the fact that an upstream server is taken down (in this temporary > fashion) logged somewhere? I.e. some file where it just says "server hit max > fails" or something? In recent versions (1.9.1+) the "upstream server temporarily disabled" warning will be logged. > 3) Extending 2), is there any way to "hook" into that server failure? I.e. > if the server fails, is there a way with nginx to execute some sort of a > program (either internal or external)? No (except by monitoring logs). Note well that "down" state is per worker (unless you are using upstream zone to share state between worker processes), and this also complicates things. consider all In general it's a good idea to monitor backends separately, and don't expect nginx to do anything if a backend fails. 
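For illustration, a minimal sketch of where these directives go (the addresses, ports and values here are hypothetical, they only demonstrate the semantics described above):

===
upstream backend {
    # after max_fails failures within one fail_timeout "session" the peer
    # is skipped; a single probe request is sent to it once fail_timeout
    # has passed
    server 10.0.0.1:8080 max_fails=5 fail_timeout=10s;
    server 10.0.0.2:8080 max_fails=5 fail_timeout=10s;

    # optional: share peer state between worker processes; without it the
    # failure counters are tracked per worker, as noted above
    #zone backend_zone 64k;
}

server {
    listen 8080;

    location / {
        proxy_pass http://backend;

        # with the default proxy_next_upstream (error timeout), a read
        # timeout is counted as a failed attempt
        proxy_read_timeout 30s;
    }
}
===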
-- Maxim Dounin http://nginx.org/ From radvenka at cisco.com Mon Feb 1 16:24:20 2016 From: radvenka at cisco.com (Radha Venkatesh (radvenka)) Date: Mon, 1 Feb 2016 16:24:20 +0000 Subject: "Connection Refused" with nginx as reverse proxy In-Reply-To: <20160201083208.GI19381@daoine.org> References: <20160201083208.GI19381@daoine.org> Message-ID: Francis, Thank you for pointing out the discrepency in the ip address. I have fixed that and the netstat out looks like this netstat -anp | grep 8080 tcp 0 0 128.107.138.162:8080 0.0.0.0:* LISTEN And I issued the curl command like this, but I still see this error curl -v http://128.107.138.162:8080/rhesos-server/rhesos/api/v1/ping * About to connect() to 128.107.138.162 port 8080 (#0) * Trying 128.107.138.162... * Adding handle: conn: 0x7ffb9c003a00 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x7ffb9c003a00) send_pipe: 1, recv_pipe: 0 * Failed connect to 128.107.138.162:8080; Connection refused * Closing connection 0 curl: (7) Failed connect to 128.107.138.162:8080; Connection refused Also, my curl command with the proxy set in the command is trying to simulate how the real request is going to come in to the server. Thanks, Radha. On 2/1/16, 12:32 AM, "Francis Daly" wrote: >On Mon, Feb 01, 2016 at 01:00:04AM +0000, Radha Venkatesh (radvenka) >wrote: > >Hi there, > >> server { >> listen 127.107.138.162:8080; > >> However, when I use the curl command to send a request using the proxy >>like this, I see a "Connection Refused" >> >> >> curl -v -x 'http://:@128.107.138.162:8080' >>http://10.154.181.43:8080/rhesos-server/rhesos/api/v1/ping > >Two things there: > >* the "listen" address and the "curl -x" address are not the same. >* nginx is not a proxy > >Just make your curl request as normal to the nginx server. > >> Can someone tell what is wrong in my configuration? > >Possibly nothing. > > f >-- >Francis Daly francis at daoine.org > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Feb 1 17:49:30 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 1 Feb 2016 17:49:30 +0000 Subject: "Connection Refused" with nginx as reverse proxy In-Reply-To: References: <20160201083208.GI19381@daoine.org> Message-ID: <20160201174930.GL19381@daoine.org> On Mon, Feb 01, 2016 at 04:24:20PM +0000, Radha Venkatesh (radvenka) wrote: Hi there, > And I issued the curl command like this, but I still see this error > > curl -v http://128.107.138.162:8080/rhesos-server/rhesos/api/v1/ping > > * About to connect() to 128.107.138.162 port 8080 (#0) > * Trying 128.107.138.162... > * Adding handle: conn: 0x7ffb9c003a00 > * Adding handle: send: 0 > * Adding handle: recv: 0 > * Curl_addHandleToPipeline: length: 1 > * - Conn 0 (0x7ffb9c003a00) send_pipe: 1, recv_pipe: 0 > * Failed connect to 128.107.138.162:8080; Connection refused > * Closing connection 0 > curl: (7) Failed connect to 128.107.138.162:8080; Connection refused If the connection attempt gets to nginx, the nginx logs can show something about it. But it looks to me like your security system is preventing the request getting to nginx in the first place. Check your firewall or other network control logs to see what is happening. > Also, my curl command with the proxy set in the command is trying to > simulate how the real request is going to come in to the server. nginx is not a proxy. 
So no request that comes to the server will involve "curl -x" with the nginx host:port. As far as any client is concerned, nginx is a web server. Good luck with it, f -- Francis Daly francis at daoine.org From kworthington at gmail.com Mon Feb 1 22:08:19 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Mon, 1 Feb 2016 17:08:19 -0500 Subject: How to configure a one page site In-Reply-To: <136ec9bdfcf528544c8435269e8a6a07.NginxMailingListEnglish@forum.nginx.org> References: <136ec9bdfcf528544c8435269e8a6a07.NginxMailingListEnglish@forum.nginx.org> Message-ID: You need to define a location block that points to the content you wish to serve. This page may be of help to you: http://nginx.org/en/docs/beginners_guide.html Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Mon, Jan 25, 2016 at 3:10 AM, ex-para wrote: > I would like to know how to configure a one page site in which I delete the > welcome to nginx and add my site details. I know how to edit the site > etc... > as I have configured the site to work this way before but I have forgot > how. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,264093,264093#msg-264093 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Feb 1 22:53:02 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 1 Feb 2016 14:53:02 -0800 Subject: echo-nginx-module and HTTP2 In-Reply-To: References: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> <20160129081953.Horde.jBSLaoLM_b7TSuovLq5A04h@andreasschulze.de> Message-ID: Hello! On Fri, Jan 29, 2016 at 8:40 PM, Kurt Cancemi wrote: > I was doing some debugging and though I haven't found a fix. The problem is > in the ngx_http_echo_client_request_headers_variable() function c->buffer is > NULL when http v2 is used for some reason (internal to nginx). > This is expected since the HTTP/2 mode of NGINX reads the request header into a different place. We should branch the code accordingly. Regards, -agentzh From alex at samad.com.au Tue Feb 2 03:32:44 2016 From: alex at samad.com.au (Alex Samad) Date: Tue, 2 Feb 2016 14:32:44 +1100 Subject: question about client certs Message-ID: Hi Is it possible with nginx to do this https://www.abc.com / /noclientcert/ /clientcert/ so you can get to / with no client cert, but /clientcert/ you need a cert, but for /noclientcert/ you don't need a cert. Looks like from the config doco you can only set it for the whole tree ... A From sca at andreasschulze.de Tue Feb 2 06:05:52 2016 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 02 Feb 2016 07:05:52 +0100 Subject: question about client certs In-Reply-To: Message-ID: <20160202070552.Horde.U-MI3NotLVzf_TULB9-9kOx@andreasschulze.de> Alex Samad: > Is it possible with nginx to do this > > https://www.abc.com > / > /noclientcert/ > /clientcert/ > > > so you can get to / with no client cert, but /clientcert/ you need a > cert, but for /noclientcert/ you don't need a cert. as far as I learned it's not possible and the usual answer to such feature requests is: "use different virtual hosts" Andreas From reallfqq-nginx at yahoo.fr Tue Feb 2 07:51:07 2016 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Tue, 2 Feb 2016 08:51:07 +0100 Subject: question about client certs In-Reply-To: <20160202070552.Horde.U-MI3NotLVzf_TULB9-9kOx@andreasschulze.de> References: <20160202070552.Horde.U-MI3NotLVzf_TULB9-9kOx@andreasschulze.de> Message-ID: Your question shows you need to understand how HTTP over TLS works. TLS enciphers HTTP content, thus nothing is readable (either headers or body). How do you select the right certificate based on HTTP content? You can't. Wait, Host-HTTP-Header-based certificate delivery exists, how is that possible? With TLS it is basically impossible, but it works though a TLS extension called Server Name Indication (SNI). nginx docs talk about that: http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers Now what you ask requires access to enciphered HTTP content. Short answer: there is no way to do that, you will need to use different servers, either using SNI (as Andreas suggested) or separate IP addresses. --- *B. R.* On Tue, Feb 2, 2016 at 7:05 AM, A. Schulze wrote: > > Alex Samad: > > Is it possible with nginx to do this >> >> https://www.abc.com >> / >> /noclientcert/ >> /clientcert/ >> >> >> so you can get to / with no client cert, but /clientcert/ you need a >> cert, but for /noclientcert/ you don't need a cert. >> > > as far as I learned it's not possible and the usual answer > to such feature requests is: "use different virtual hosts" > > Andreas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Feb 2 08:19:09 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 02 Feb 2016 03:19:09 -0500 Subject: Nginx HTTP/2 server push support ? Message-ID: Curious if anyone has heard or knows if or when Nginx HTTP/2 support will add server push https://http2.github.io/faq/#how-can-i-use-http2-server-push ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264252,264252#msg-264252 From al-nginx at none.at Tue Feb 2 09:56:56 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 02 Feb 2016 10:56:56 +0100 Subject: question about client certs In-Reply-To: References: Message-ID: <450b6930110bf9454abcb77c77b8ed20@none.at> Dear Alex. Am 02-02-2016 04:32, schrieb Alex Samad: > Hi > > Is it possible with nginx to do this > > https://www.abc.com > / > /noclientcert/ > /clientcert/ > > > so you can get to / with no client cert, but /clientcert/ you need a > cert, but for /noclientcert/ you don't need a cert. > > Looks like from the config doco you can only set it for the whole tree > ... I would try to use this directives http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_verify_client http://nginx.org/en/docs/http/ngx_http_map_module.html and in a map make something like this. map $ssl_client_cert $clientcert { default ""; "~.*CLIENT_CERT_CHECK" clientcert; } and location $clientcert { } location no$clientcert { } is this possible ;-)? BR Aleks From vbart at nginx.com Tue Feb 2 14:02:27 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 02 Feb 2016 17:02:27 +0300 Subject: Error occurs when both padding and continuation are enabled In-Reply-To: <5749540.g02AWSFeFM@vbart-workstation> References: <5749540.g02AWSFeFM@vbart-workstation> Message-ID: <2472405.kVxzOqxaNp@vbart-workstation> On Tuesday 26 January 2016 14:54:56 Valentin V. 
Bartenev wrote: > On Tuesday 26 January 2016 16:00:08 Shengtuo Hu wrote: > > Hi, > > > > Another error I met recently. It occurred when both padding and > > continuation are enabled. > > > > A normal HEADERS frame was divided as follows: > > HEADERS ===> HEADERS(PADDED_FLAG) + CONTINUATION + CONTINUATION > > > > After sending these frames, I got an error. In the debug log file, I found > > "client sent inappropriate frame while CONTINUATION was expected while > > processing HTTP/2 connection". Then I read the source code (v 1.9.9), and > > located the function "ngx_http_v2_handle_continuation" (ngx_http_v2.c, line > > 1749). It seems NGINX does not skip the "padding part", but tries to read > > "type" field in the next CONTINUATION frame directly. > > > [..] > > Yes, you're right. It isn't able to skip padding between HEADERS and > CONTINUATION frames. > That has been fixed: http://hg.nginx.org/nginx/rev/0e0e2e522fa2 Thanks for the report. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Feb 2 16:00:13 2016 From: nginx-forum at forum.nginx.org (ex-para) Date: Tue, 02 Feb 2016 11:00:13 -0500 Subject: How to configure a one page site In-Reply-To: References: Message-ID: Thanks, I have read what you point out a few times but as you bring my attention to it I will study it more carefully and hope I find the answer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264093,264285#msg-264285 From alex at samad.com.au Tue Feb 2 22:22:14 2016 From: alex at samad.com.au (Alex Samad) Date: Wed, 3 Feb 2016 09:22:14 +1100 Subject: question about client certs In-Reply-To: <450b6930110bf9454abcb77c77b8ed20@none.at> References: <450b6930110bf9454abcb77c77b8ed20@none.at> Message-ID: Yep I think thats what i was asking. We have a home grown RP at work that does it and IIS used to do it, apply cert requirements on part of the tree. On 2 February 2016 at 20:56, Aleksandar Lazic wrote: > Dear Alex. > > Am 02-02-2016 04:32, schrieb Alex Samad: >> >> Hi >> >> Is it possible with nginx to do this >> >> https://www.abc.com >> / >> /noclientcert/ >> /clientcert/ >> >> >> so you can get to / with no client cert, but /clientcert/ you need a >> cert, but for /noclientcert/ you don't need a cert. >> >> Looks like from the config doco you can only set it for the whole tree ... > > > I would try to use this directives > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_verify_client > http://nginx.org/en/docs/http/ngx_http_map_module.html > > and in a map make something like this. > > map $ssl_client_cert $clientcert { > default ""; > "~.*CLIENT_CERT_CHECK" clientcert; > } > > and > > location $clientcert { > } > > location no$clientcert { > } > > is this possible ;-)? > > BR Aleks From nginx-forum at forum.nginx.org Wed Feb 3 07:22:21 2016 From: nginx-forum at forum.nginx.org (wellimp) Date: Wed, 03 Feb 2016 02:22:21 -0500 Subject: Configure complex dynamic urls Message-ID: <3c28b6befa86a2e62edc3e9f10b11529.NginxMailingListEnglish@forum.nginx.org> Hi I've been trying to configure an nginx to accept a complex dynamic url, so far I got this. (http://pastebin.com/aSmW5GPn), this what I got so far. Here is the deal, the url should be working like these: mydomain.com - will be the normal domain with information, user registration and user authentication. (http://mydomain.com/signin, http://mydomain.com/register) the user can create database from their account. 
mydomain.com/database/create, view the database information mydomain.com/database/view, and list all the database htey have created http://mydomain.com/database/list so far everything can be done and I can manage this. The issue start here. to the user manage their database, create tables, etc, I want the url format to be: http://{user}.mydomain.com/{database}, this should proxy to a system/program that handles the database management (that should be hidden and only accessed by the url mention before). If user go to http://{user}.mydomain.com/{database}/login, they are actually going to the another database manager, pre-filling the username and database to be used. Thanks for you help in advance PS. mydomain.com is using nginx+php and the database management software is using htaccess+php+nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264302,264302#msg-264302 From al-nginx at none.at Wed Feb 3 08:37:25 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 03 Feb 2016 09:37:25 +0100 Subject: question about client certs In-Reply-To: References: <450b6930110bf9454abcb77c77b8ed20@none.at> Message-ID: <8008c450d0398f73480c09cd7bb9e371@none.at> Am 02-02-2016 23:22, schrieb Alex Samad: > Yep I think thats what i was asking. Cool it would be nice if you can tell us if it's works and how was your solution ;-) BR Aleks > We have a home grown RP at work that does it and IIS used to do it, > apply cert requirements on part of the tree. > > On 2 February 2016 at 20:56, Aleksandar Lazic wrote: >> Dear Alex. >> >> Am 02-02-2016 04:32, schrieb Alex Samad: >>> >>> Hi >>> >>> Is it possible with nginx to do this >>> >>> https://www.abc.com >>> / >>> /noclientcert/ >>> /clientcert/ >>> >>> >>> so you can get to / with no client cert, but /clientcert/ you need a >>> cert, but for /noclientcert/ you don't need a cert. >>> >>> Looks like from the config doco you can only set it for the whole >>> tree ... >> >> >> I would try to use this directives >> >> http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_verify_client >> http://nginx.org/en/docs/http/ngx_http_map_module.html >> >> and in a map make something like this. >> >> map $ssl_client_cert $clientcert { >> default ""; >> "~.*CLIENT_CERT_CHECK" clientcert; >> } >> >> and >> >> location $clientcert { >> } >> >> location no$clientcert { >> } >> >> is this possible ;-)? >> >> BR Aleks From radecki.rafal at gmail.com Wed Feb 3 13:24:41 2016 From: radecki.rafal at gmail.com (=?UTF-8?Q?Rafa=C5=82_Radecki?=) Date: Wed, 3 Feb 2016 14:24:41 +0100 Subject: "upstream sent too big header while reading response header from upstream"? Message-ID: Hi All. I am currently trying to find the source of "upstream sent too big header while reading response header from upstream" in my logfiles because nginx as a consequence returns "502 Bad Gateway". Basically I used tcpdump to get the traffic and I compared two cases: 1) the communication is ok: 12167 07:03:51.466937 IP 10.10.3.7.80 > 10.10.2.121.43817: Flags [P.], seq 1779345520:1779348777, ack 1001934214, win 219, options [nop,nop,TS val 2527240108 ecr 619023542], length 3257 12168 E...7. at .@... 12169 12170 .. 12171 12172 .y.P.)j..p;.M............ 12173 ....$...HTTP/1.1 302 Found^M 12174 Content-Length: 58^M 12175 Content-Type: text/html; charset=utf-8^M 12176 Date: Wed, 03 Feb 2016 12:03:51 GMT^M 12177 Location: /^M 12178 Set-Cookie: rfid-mgt-console=... 
12179 Set-Cookie: rfid-mgt-console.sig=xxx; path=/; expires=Wed, 03 Feb 2016 13:03:51 GMT; httponly^M 12180 Vary: Accept, Accept-Encoding^M 12181 X-Cf-Requestid: yyy^M 12182 X-Powered-By: Express^M 12183 ^M 12184

Moved Temporarily. Redirecting to /
In this case everything up to "Moved T..." is in one response from upstream (10.10.3.7). 2) nginx throws mentioned error to error.log and returns 502 code 56874 06:31:45.307207 IP 10.10.3.7.80 > 10.10.2.121.58073: Flags [P.], seq 953075345:953079441, ack 3028520986, win 219, options [nop,nop,TS val 2526758567 ecr 617097381], length 4096 56875 E..4.. at .@... 56876 56877 .. 56878 56879 .y.P..8...........h...... 56880 ..F.$.(.HTTP/1.1 302 Found^M 56881 Content-Length: 58^M 56882 Content-Type: text/html; charset=utf-8^M 56883 Date: Wed, 03 Feb 2016 11:31:45 GMT^M 56884 Location: /^M 56885 Set-Cookie: rfid-mgt-console=... 56886 Set-Cookie: rfid-mgt-console.sig=xxx; path=/; expires=Wed, 03 Feb 2016 12:31:45 GMT; httponly^M 56887 Vary: Accept, Accept-Encoding^M 56888 X-Cf-Requestid: yyy 56889 06:31:45.307213 IP 10.10.2.121.58073 > 10.10.3.7.80: Flags [.], ack 953079441, win 280, options [nop,nop,TS val 617097490 ecr 2526758567], length 0 56890 E..4.. at .@.zw 56891 56892 .y 56893 56894 .....P....8.............. 56895 $.)...F. 56896 06:31:45.307218 IP 10.10.3.7.80 > 10.10.2.121.58073: Flags [P.], seq 953079441:953079542, ack 3028520986, win 219, options [nop,nop,TS val 2526758567 ecr 617097381], length 101 56897 E..... at .@... 56898 56899 .. 56900 56901 .y.P..8...........JU..... 56902 ..F.$.(.503-564da79aecb5^M 56903 X-Powered-By: Express^M 56904 ^M 56905

Moved Temporarily. Redirecting to /
In this case response from upstream (10.10.3.7) is much larger because of larger "Set-Cookie: rfid-mgt-console=..." returned and is divided into two parts. "upstream sent too big header while reading response header from upstream" is written to error.log. For clarity I only pasted the part of the traffic which differs. Initially in "http" section in nginx.conf: ... http { proxy_max_temp_file_size 0; proxy_buffering off; ... I tried to change it to: ... http { proxy_max_temp_file_size 0; proxy_buffering on; proxy_buffers 8 256k; ... And I also added: ... proxy_buffering on; proxy_buffers 8 256k; ... to my "server" sections. I performed a restart but the error did not change. Can someone help me with this one? BR, Rafal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Wed Feb 3 13:42:03 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 3 Feb 2016 14:42:03 +0100 Subject: "upstream sent too big header while reading response header from upstream"? In-Reply-To: References: Message-ID: You want proxy_buffer_size. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size On Wed, Feb 3, 2016 at 2:24 PM, Rafa? Radecki wrote: > Hi All. > > I am currently trying to find the source of "upstream sent too big header > while reading response header from upstream" in my logfiles because nginx > as a consequence returns "502 Bad Gateway". > > Basically I used tcpdump to get the traffic and I compared two cases: > > 1) the communication is ok: > > 12167 07:03:51.466937 IP 10.10.3.7.80 > 10.10.2.121.43817: Flags [P.], seq > 1779345520:1779348777, ack 1001934214, win 219, options [nop,nop,TS val > 2527240108 ecr 619023542], length 3257 > 12168 E...7. at .@... > 12169 > 12170 .. > 12171 > 12172 .y.P.)j..p;.M............ > 12173 ....$...HTTP/1.1 302 Found^M > 12174 Content-Length: 58^M > 12175 Content-Type: text/html; charset=utf-8^M > 12176 Date: Wed, 03 Feb 2016 12:03:51 GMT^M > 12177 Location: /^M > 12178 Set-Cookie: rfid-mgt-console=... > 12179 Set-Cookie: rfid-mgt-console.sig=xxx; path=/; expires=Wed, 03 Feb > 2016 13:03:51 GMT; httponly^M > 12180 Vary: Accept, Accept-Encoding^M > 12181 X-Cf-Requestid: yyy^M > 12182 X-Powered-By: Express^M > 12183 ^M > 12184

> Moved Temporarily. Redirecting to /
> > In this case everything up to "Moved T..." is in one response from > upstream (10.10.3.7). > > 2) nginx throws mentioned error to error.log and returns 502 code > 56874 06:31:45.307207 IP 10.10.3.7.80 > 10.10.2.121.58073: Flags [P.], seq > 953075345:953079441, ack 3028520986, win 219, options [nop,nop,TS val > 2526758567 ecr 617097381], length 4096 > 56875 E..4.. at .@... > 56876 > 56877 .. > 56878 > 56879 .y.P..8...........h...... > 56880 ..F.$.(.HTTP/1.1 302 Found^M > 56881 Content-Length: 58^M > 56882 Content-Type: text/html; charset=utf-8^M > 56883 Date: Wed, 03 Feb 2016 11:31:45 GMT^M > 56884 Location: /^M > 56885 Set-Cookie: rfid-mgt-console=... > 56886 Set-Cookie: rfid-mgt-console.sig=xxx; path=/; expires=Wed, 03 Feb > 2016 12:31:45 GMT; httponly^M > 56887 Vary: Accept, Accept-Encoding^M > 56888 X-Cf-Requestid: yyy > > 56889 06:31:45.307213 IP 10.10.2.121.58073 > 10.10.3.7.80: Flags [.], ack > 953079441, win 280, options [nop,nop,TS val 617097490 ecr 2526758567], > length 0 > 56890 E..4.. at .@.zw > 56891 > 56892 .y > 56893 > 56894 .....P....8.............. > 56895 $.)...F. > > 56896 06:31:45.307218 IP 10.10.3.7.80 > 10.10.2.121.58073: Flags [P.], seq > 953079441:953079542, ack 3028520986, win 219, options [nop,nop,TS val > 2526758567 ecr 617097381], length 101 > 56897 E..... at .@... > 56898 > 56899 .. > 56900 > 56901 .y.P..8...........JU..... > 56902 ..F.$.(.503-564da79aecb5^M > 56903 X-Powered-By: Express^M > 56904 ^M > 56905

> Moved Temporarily. Redirecting to /
> > In this case response from upstream (10.10.3.7) is much larger because of > larger "Set-Cookie: rfid-mgt-console=..." returned and is divided into two > parts. "upstream sent too big header while reading response header from > upstream" is written to error.log. > > For clarity I only pasted the part of the traffic which differs. > > Initially in "http" section in nginx.conf: > > ... > http { > proxy_max_temp_file_size 0; > > proxy_buffering off; > ... > > I tried to change it to: > > ... > http { > proxy_max_temp_file_size 0; > > proxy_buffering on; > > proxy_buffers 8 256k; > ... > > And I also added: > > ... > proxy_buffering on; > > proxy_buffers 8 256k; > ... > > to my "server" sections. I performed a restart but the error did not > change. > > Can someone help me with this one? > > BR, > Rafal. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radecki.rafal at gmail.com Wed Feb 3 14:10:24 2016 From: radecki.rafal at gmail.com (=?UTF-8?Q?Rafa=C5=82_Radecki?=) Date: Wed, 3 Feb 2016 15:10:24 +0100 Subject: "upstream sent too big header while reading response header from upstream"? In-Reply-To: References: Message-ID: Thanks, solved ;) BR, Rafal. 2016-02-03 14:42 GMT+01:00 Richard Stanway : > You want proxy_buffer_size. > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size > > On Wed, Feb 3, 2016 at 2:24 PM, Rafa? Radecki > wrote: > >> Hi All. >> >> I am currently trying to find the source of "upstream sent too big header >> while reading response header from upstream" in my logfiles because nginx >> as a consequence returns "502 Bad Gateway". >> >> Basically I used tcpdump to get the traffic and I compared two cases: >> >> 1) the communication is ok: >> >> 12167 07:03:51.466937 IP 10.10.3.7.80 > 10.10.2.121.43817: Flags [P.], >> seq 1779345520:1779348777, ack 1001934214, win 219, options [nop,nop,TS val >> 2527240108 ecr 619023542], length 3257 >> 12168 E...7. at .@... >> 12169 >> 12170 .. >> 12171 >> 12172 .y.P.)j..p;.M............ >> 12173 ....$...HTTP/1.1 302 Found^M >> 12174 Content-Length: 58^M >> 12175 Content-Type: text/html; charset=utf-8^M >> 12176 Date: Wed, 03 Feb 2016 12:03:51 GMT^M >> 12177 Location: /^M >> 12178 Set-Cookie: rfid-mgt-console=... >> 12179 Set-Cookie: rfid-mgt-console.sig=xxx; path=/; expires=Wed, 03 Feb >> 2016 13:03:51 GMT; httponly^M >> 12180 Vary: Accept, Accept-Encoding^M >> 12181 X-Cf-Requestid: yyy^M >> 12182 X-Powered-By: Express^M >> 12183 ^M >> 12184

>> Moved Temporarily. Redirecting to /
>> >> In this case everything up to "Moved T..." is in one response from >> upstream (10.10.3.7). >> >> 2) nginx throws mentioned error to error.log and returns 502 code >> 56874 06:31:45.307207 IP 10.10.3.7.80 > 10.10.2.121.58073: Flags [P.], >> seq 953075345:953079441, ack 3028520986, win 219, options [nop,nop,TS val >> 2526758567 ecr 617097381], length 4096 >> 56875 E..4.. at .@... >> 56876 >> 56877 .. >> 56878 >> 56879 .y.P..8...........h...... >> 56880 ..F.$.(.HTTP/1.1 302 Found^M >> 56881 Content-Length: 58^M >> 56882 Content-Type: text/html; charset=utf-8^M >> 56883 Date: Wed, 03 Feb 2016 11:31:45 GMT^M >> 56884 Location: /^M >> 56885 Set-Cookie: rfid-mgt-console=... >> 56886 Set-Cookie: rfid-mgt-console.sig=xxx; path=/; expires=Wed, 03 Feb >> 2016 12:31:45 GMT; httponly^M >> 56887 Vary: Accept, Accept-Encoding^M >> 56888 X-Cf-Requestid: yyy >> >> 56889 06:31:45.307213 IP 10.10.2.121.58073 > 10.10.3.7.80: Flags [.], ack >> 953079441, win 280, options [nop,nop,TS val 617097490 ecr 2526758567], >> length 0 >> 56890 E..4.. at .@.zw >> 56891 >> 56892 .y >> 56893 >> 56894 .....P....8.............. >> 56895 $.)...F. >> >> 56896 06:31:45.307218 IP 10.10.3.7.80 > 10.10.2.121.58073: Flags [P.], >> seq 953079441:953079542, ack 3028520986, win 219, options [nop,nop,TS val >> 2526758567 ecr 617097381], length 101 >> 56897 E..... at .@... >> 56898 >> 56899 .. >> 56900 >> 56901 .y.P..8...........JU..... >> 56902 ..F.$.(.503-564da79aecb5^M >> 56903 X-Powered-By: Express^M >> 56904 ^M >> 56905

>> Moved Temporarily. Redirecting to /
>> >> In this case response from upstream (10.10.3.7) is much larger because of >> larger "Set-Cookie: rfid-mgt-console=..." returned and is divided into two >> parts. "upstream sent too big header while reading response header from >> upstream" is written to error.log. >> >> For clarity I only pasted the part of the traffic which differs. >> >> Initially in "http" section in nginx.conf: >> >> ... >> http { >> proxy_max_temp_file_size 0; >> >> proxy_buffering off; >> ... >> >> I tried to change it to: >> >> ... >> http { >> proxy_max_temp_file_size 0; >> >> proxy_buffering on; >> >> proxy_buffers 8 256k; >> ... >> >> And I also added: >> >> ... >> proxy_buffering on; >> >> proxy_buffers 8 256k; >> ... >> >> to my "server" sections. I performed a restart but the error did not >> change. >> >> Can someone help me with this one? >> >> BR, >> Rafal. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Feb 3 21:21:59 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 3 Feb 2016 21:21:59 +0000 Subject: question about client certs In-Reply-To: <8008c450d0398f73480c09cd7bb9e371@none.at> References: <450b6930110bf9454abcb77c77b8ed20@none.at> <8008c450d0398f73480c09cd7bb9e371@none.at> Message-ID: <20160203212159.GB13590@daoine.org> On Wed, Feb 03, 2016 at 09:37:25AM +0100, Aleksandar Lazic wrote: > Am 02-02-2016 23:22, schrieb Alex Samad: Hi there, > Cool it would be nice if you can tell us if it's works and how was > your solution ;-) I think that "location" does not take variables, and so this will not work. More below. > >On 2 February 2016 at 20:56, Aleksandar Lazic wrote: > >>Am 02-02-2016 04:32, schrieb Alex Samad: > >>>Is it possible with nginx to do this > >>> > >>>https://www.abc.com > >>>/ > >>>/noclientcert/ > >>>/clientcert/ > >>> > >>>so you can get to / with no client cert, but /clientcert/ you need a > >>>cert, but for /noclientcert/ you don't need a cert. > >>> > >>>Looks like from the config doco you can only set it for the > >>>whole tree ... Untested by me, but if you set ssl_verify_client optional; and then within your location ^~ /clientcert/ {} you have something like if ($ssl_client_verify != SUCCESS) { return 403; } would that fit your needs? (If the content below /clientcert/ is all handled by an external process, then possibly it could do its own validation or verification using values provided by nginx.) http://nginx.org/r/$ssl_client_verify for some details. Good luck with it, f -- Francis Daly francis at daoine.org From agentzh at gmail.com Wed Feb 3 22:07:51 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 3 Feb 2016 14:07:51 -0800 Subject: [ANN] SF Bay Area OpenResty Meetup Message-ID: Hi folks, I've recently created the Bay Area OpenResty Meeup group on meetup.com: http://www.meetup.com/Bay-Area-OpenResty-Meetup/ You're welcome to join us in this group. We're currently planning a face-to-face meetup at 5:30pm ~ 6:30pm on 9 March 2016 in CloudFlare's office (101 Townsend St San Francisco). If you would like to come, please make a note on that group's web page. 
If you also would like to share anything (1min ~ 5min), please also drop me a line :) I hope that I would not be the only person speaking *grin* The meetup is free of charge and open to everyone interested in OpenResty, NGINX, and/or Lua/LuaJIT. We'll have a screen, a microphone, and some food and beverage as well. If it goes well, we can do it every couple of months :) Any updates about this meetup will be published on the group page given above. See you then! Best regards, -agentzh From nginx-forum at forum.nginx.org Wed Feb 3 22:32:05 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Wed, 03 Feb 2016 17:32:05 -0500 Subject: Restart Nginx than root user? In-Reply-To: <56A57D28.2080304@greengecko.co.nz> References: <56A57D28.2080304@greengecko.co.nz> Message-ID: <15454c91f3bce41d63b119d0b931829f.NginxMailingListEnglish@forum.nginx.org> Hi Steve, Thanks for your email. And, Is the same solution what I have described in my initiated email. No luck there. Any other suggestions please? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264087,264318#msg-264318 From nginx-forum at forum.nginx.org Wed Feb 3 23:03:21 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Wed, 03 Feb 2016 18:03:21 -0500 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) Message-ID: <5d95738419d0d9e5442f09299622dab0.NginxMailingListEnglish@forum.nginx.org> Requirement is to start NGinx other than root user 1) Added entry tp /etc/sudoers for specific user as below: gvp ALL=(ALL) NOPASSWD: ALL 2) Tried starting NGinx as gvp user, below error thrown: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) 3) Some blog says: https://www.ruby-forum.com/topic/201522 you have to start it as root, users don't have privs to open ports below 1024 What is the solution here pls.? Best regards, Maddy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264319#msg-264319 From l at ymx.ch Wed Feb 3 23:14:55 2016 From: l at ymx.ch (Lukas) Date: Thu, 4 Feb 2016 00:14:55 +0100 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: <5d95738419d0d9e5442f09299622dab0.NginxMailingListEnglish@forum.nginx.org> References: <5d95738419d0d9e5442f09299622dab0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160203231455.GA48509@lpr.ch> > smsmaddy1981 [2016-02-04 00:03]: > > Requirement is to start NGinx other than root user > > 1) Added entry tp /etc/sudoers for specific user as below: > gvp ALL=(ALL) NOPASSWD: ALL > > 2) Tried starting NGinx as gvp user, below error thrown: > nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) > As user gvp, do you run sudo /etc/init.d/nginx start or /etc/init.d/nginx start ? The former works, at least for me. -- Lukas Ruf | Ad Personam Consecom | Ad Laborem From mat999 at gmail.com Thu Feb 4 03:11:15 2016 From: mat999 at gmail.com (SplitIce) Date: Thu, 4 Feb 2016 14:11:15 +1100 Subject: [ANN] SF Bay Area OpenResty Meetup In-Reply-To: References: Message-ID: Oh I wish I could go, bit far to fly (from Aus) unfortunately. On Thu, Feb 4, 2016 at 9:07 AM, Yichun Zhang (agentzh) wrote: > Hi folks, > > I've recently created the Bay Area OpenResty Meeup group on meetup.com: > > http://www.meetup.com/Bay-Area-OpenResty-Meetup/ > > You're welcome to join us in this group. > > We're currently planning a face-to-face meetup at 5:30pm ~ 6:30pm on 9 > March 2016 in CloudFlare's office (101 Townsend St San Francisco). If > you would like to come, please make a note on that group's web page. 
> If you also would like to share anything (1min ~ 5min), please also > drop me a line :) I hope that I would not be the only person speaking > *grin* > > The meetup is free of charge and open to everyone interested in > OpenResty, NGINX, and/or Lua/LuaJIT. We'll have a screen, a > microphone, and some food and beverage as well. > > If it goes well, we can do it every couple of months :) > > Any updates about this meetup will be published on the group page given > above. > > See you then! > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Feb 4 12:31:20 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Thu, 04 Feb 2016 07:31:20 -0500 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: <20160203231455.GA48509@lpr.ch> References: <20160203231455.GA48509@lpr.ch> Message-ID: Hi Lukas, as below: [gvp at alt-cti03 (TEST1) /var/gvp/Nginx/nginx-1.8.0/sbin] ./nginx nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) Best regards, Maddy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264338#msg-264338 From ahutchings at nginx.com Thu Feb 4 13:01:02 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 4 Feb 2016 13:01:02 +0000 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: References: <20160203231455.GA48509@lpr.ch> Message-ID: <56B34B8E.5020505@nginx.com> Hi Maddy, In Linux (and most other Unix based systems) ports below 1024 need to be opened using the root user. So you need to start NGINX as root which will open the port and then drop down to an unprivileged user for the port. Kind Regards Andrew On 04/02/16 12:31, smsmaddy1981 wrote: > Hi Lukas, > as below: > > [gvp at alt-cti03 (TEST1) /var/gvp/Nginx/nginx-1.8.0/sbin] ./nginx > nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) > > > > Best regards, > Maddy > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264338#msg-264338 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From daniel at mostertman.org Thu Feb 4 13:05:53 2016 From: daniel at mostertman.org (=?UTF-8?Q?Dani=c3=abl_Mostertman?=) Date: Thu, 4 Feb 2016 14:05:53 +0100 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: References: <20160203231455.GA48509@lpr.ch> Message-ID: <56B34CB1.8020208@mostertman.org> Hi Maddy, Op 4-2-2016 om 13:31 schreef smsmaddy1981: > [gvp at alt-cti03 (TEST1) /var/gvp/Nginx/nginx-1.8.0/sbin] ./nginx > nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) You edited the sudoers file to allow gvp to run programs as root. In order to do so, you have to put "sudo" in front of the command, which stands for "do as super user". In this case, your line would've been instead: [gvp at alt-cti03 (TEST1) /var/gvp/Nginx/nginx-1.8.0/sbin] sudo ./nginx As a side-note, allowing every single program to be run as superuser without the necessity of a password can pose a big security threat if your account is ever compromised. 
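If such access is needed, the sudo rights can at least be narrowed down instead of NOPASSWD: ALL. A hypothetical /etc/sudoers entry limited to the nginx binary quoted above (adjust the path to your installation) would look like this:

gvp ALL=(root) NOPASSWD: /var/gvp/Nginx/nginx-1.8.0/sbin/nginx

With that, the account can only run nginx itself as root and nothing else.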
Kind regards, Dani?l From nginx-forum at forum.nginx.org Thu Feb 4 15:36:38 2016 From: nginx-forum at forum.nginx.org (orglee) Date: Thu, 04 Feb 2016 10:36:38 -0500 Subject: compiling nginx from sources and including xslt sources Message-ID: Hi, I'm trying to compile nginx from source but despite providing ./configure with --with-ld-opt I can't seem to see it. This is the complete command I'm trying to use. ./configure --prefix=/opt/nginx-1.9.9/conf --sbin-path=/opt/nginx-1.9.9/sbin/nginx --conf-path=/opt/nginx-1.9.9/conf/nginx.conf --pid-path=/opt/nginx-1.9.9/nginx.pid --loc k-path=/opt/nginx-1.9.9/nginx.lock --error-log-path=/opt/nginx-1.9.9/logs/error.log --http-log-path=/opt/nginx-1.9.9/logs/access.log --http-client-body-temp-path=/opt/nginx- 1.9.9/lib/body --http-fastcgi-temp-path=/opt/nginx-1.9.9/lib/fastcgi --http-proxy-temp-path=/opt/nginx-1.9.9/lib/proxy --http-scgi-temp-path=/opt/nginx-1.9.9/lib/scgi --http -uwsgi-temp-path=/opt/nginx-1.9.9/lib/uwsgi --user=www-data --group=www-data --with-debug --with-http_v2_module --with-http_gzip_static_module --with-http_addition_module -- with-http_dav_module --with-http_realip_module --with-http_stub_status_module --with-http_sub_module --with-ipv6 --with-mail --with-mail_ssl_module --with-http_ssl_module -- with-openssl=/opt/src/nginx/openssl-1_0_1g --with-sha1=/opt/src/nginx/openssl-1_0_1g --with-md5=/opt/src/nginx/openssl-1_0_1g --with-http_xslt_module --with-pcre=/opt/src/ng inx/pcre-8.32 --with-pcre-jit --with-zlib=/opt/src/nginx/zlib-1.2.8 --with-ld-opt="-I/opt/src/nginx/libxml-2.9.3 -I/opt/src/nginx/xslt-1.1.28" ~Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264346,264346#msg-264346 From nginx-forum at forum.nginx.org Thu Feb 4 15:39:10 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Thu, 04 Feb 2016 10:39:10 -0500 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: <56B34B8E.5020505@nginx.com> References: <56B34B8E.5020505@nginx.com> Message-ID: Thanks Andrew, Daniel... I am able to restart NGinx as gvp user now with the suggested command: sudo ./nginx sudo ./nginx -s stop sudo ./nginx -s reload Thanks for supporting here. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264347#msg-264347 From mdounin at mdounin.ru Thu Feb 4 15:50:54 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Feb 2016 18:50:54 +0300 Subject: compiling nginx from sources and including xslt sources In-Reply-To: References: Message-ID: <20160204155054.GD70403@mdounin.ru> Hello! On Thu, Feb 04, 2016 at 10:36:38AM -0500, orglee wrote: > Hi, > > I'm trying to compile nginx from source but despite providing ./configure > with --with-ld-opt I can't seem to see it. > > This is the complete command I'm trying to use. [...] > --with-ld-opt="-I/opt/src/nginx/libxml-2.9.3 > -I/opt/src/nginx/xslt-1.1.28" The --with-ld-opt option is used when calling linker. It makes no sense to use "-I" there. If you want to set include paths, consider using --with-cc-opt instead. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Feb 4 17:55:43 2016 From: nginx-forum at forum.nginx.org (orglee) Date: Thu, 04 Feb 2016 12:55:43 -0500 Subject: compiling nginx from sources and including xslt sources In-Reply-To: <20160204155054.GD70403@mdounin.ru> References: <20160204155054.GD70403@mdounin.ru> Message-ID: I'm still getting: (...) checking for libxslt ... not found checking for libxslt in /usr/local/ ... not found checking for libxslt in /usr/pkg/ ... 
not found checking for libxslt in /opt/local/ ... not found ./configure: error: the HTTP XSLT module requires the libxml2/libxslt libraries. You can either do not enable the module or install the libraries. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264346,264350#msg-264350 From mdounin at mdounin.ru Thu Feb 4 18:34:32 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Feb 2016 21:34:32 +0300 Subject: compiling nginx from sources and including xslt sources In-Reply-To: References: <20160204155054.GD70403@mdounin.ru> Message-ID: <20160204183432.GG70403@mdounin.ru> Hello! On Thu, Feb 04, 2016 at 12:55:43PM -0500, orglee wrote: > I'm still getting: > > (...) > checking for libxslt ... not found > checking for libxslt in /usr/local/ ... not found > checking for libxslt in /usr/pkg/ ... not found > checking for libxslt in /opt/local/ ... not found > > ./configure: error: the HTTP XSLT module requires the libxml2/libxslt > libraries. You can either do not enable the module or install the libraries. That's because there are no libxml2/libxslt libraries available via the paths you've specified. Note that you need libraries to be built and installed, just a source code won't work - nginx doesn't know how to compile libxml2/libxslt for you. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Feb 4 18:50:00 2016 From: nginx-forum at forum.nginx.org (orglee) Date: Thu, 04 Feb 2016 13:50:00 -0500 Subject: compiling nginx from sources and including xslt sources In-Reply-To: <20160204183432.GG70403@mdounin.ru> References: <20160204183432.GG70403@mdounin.ru> Message-ID: <27a75614a3f7746a94d97035b6b8930a.NginxMailingListEnglish@forum.nginx.org> Oh ok I'm dumb. :P Can I somehow specify those library paths to? I really would like to avoid compiling them into directory structure of my current system. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264346,264359#msg-264359 From nginx-forum at forum.nginx.org Thu Feb 4 19:28:07 2016 From: nginx-forum at forum.nginx.org (sirwiz) Date: Thu, 04 Feb 2016 14:28:07 -0500 Subject: eventfd() failed (38: Function not implemented) Message-ID: I try to run Nginx, but i have error: eventfd() failed (38: Function not implemented) I found this patch (https://forum.nginx.org/read.php?29,258676,258685#REPLY) to 1.9.0, i cannot do diff so i do it manualy. So i found ngx_epoll_init(ngx_cycle_t And now my file: static ngx_int_t ngx_epoll_init(ngx_cycle_t *cycle, ngx_msec_t timer); #if (NGX_HAVE_EVENTFD) if (ngx_epoll_notify_init(cycle->log) != NGX_OK) { - return NGX_ERROR; + ngx_epoll_module_ctx.actions.notify = NULL; } #endif But i have compiler error src/event/modules/ngx_epoll_module.c:105: error: expected identifier or '(' before 'if' make[1]: *** [objs/src/event/modules/ngx_epoll_module.o] Error 1 Any ideas how to make it working? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264360,264360#msg-264360 From nginx-forum at forum.nginx.org Fri Feb 5 01:38:15 2016 From: nginx-forum at forum.nginx.org (FSC) Date: Thu, 04 Feb 2016 20:38:15 -0500 Subject: Prevent reverse proxy from sending range headers to source server Message-ID: <9971426452dfe72b95511d6d8a910e23.NginxMailingListEnglish@forum.nginx.org> Hello everyone! I set up my reverse proxy to cache files stored at AWS S3. I want to minimize the amount of traffic generated at AWS. Many of the files are mp4s. Clients make GET requests with the range set. A video player may get multiple chunks of a file at once. 
How can I have the proxy server NOT send the client's Range header along to AWS S3? I want the cached version to be used. The file should only be revalidated after the set period of 30 days. So it seems like I would need something along proxy_set_header but something that unsets the header sent to the proxied server. E.g. proxy_set_header Range NULL The proxy_ignore_headers option won't take Range. I appreciate any help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264362,264362#msg-264362 From rpaprocki at fearnothingproductions.net Fri Feb 5 01:40:30 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Thu, 4 Feb 2016 17:40:30 -0800 Subject: Prevent reverse proxy from sending range headers to source server In-Reply-To: <9971426452dfe72b95511d6d8a910e23.NginxMailingListEnglish@forum.nginx.org> References: <9971426452dfe72b95511d6d8a910e23.NginxMailingListEnglish@forum.nginx.org> Message-ID: >From the docs ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header): If the value of a header field is an empty string then this field will not be passed to a proxied server: proxy_set_header Range ""; proxy_ignore_header is meant to handle response headers, not request headers On Thu, Feb 4, 2016 at 5:38 PM, FSC wrote: > Hello everyone! > > I set up my reverse proxy to cache files stored at AWS S3. I want to > minimize the amount of traffic generated at AWS. > > Many of the files are mp4s. Clients make GET requests with the range set. A > video player may get multiple chunks of a file at once. > > How can I have the proxy server NOT send the client's Range header along to > AWS S3? I want the cached version to be used. The file should only be > revalidated after the set period of 30 days. > > So it seems like I would need something along proxy_set_header but > something > that unsets the header sent to the proxied server. > > E.g. proxy_set_header Range NULL > > The proxy_ignore_headers option won't take Range. > > I appreciate any help. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,264362,264362#msg-264362 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Feb 5 09:51:20 2016 From: nginx-forum at forum.nginx.org (justpusher) Date: Fri, 05 Feb 2016 04:51:20 -0500 Subject: Possible to have a limit_req "nodelay burst" option? In-Reply-To: References: <75889ab591fdc9f01bb9d5f8c49ff467.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5435f97c7bb2df465bf8a58a33410e74.NginxMailingListEnglish@forum.nginx.org> I support this proposal. We need this functionality, too. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,238389,264364#msg-264364 From nginx-forum at forum.nginx.org Fri Feb 5 10:02:52 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Fri, 05 Feb 2016 05:02:52 -0500 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: <56B34CB1.8020208@mostertman.org> References: <56B34CB1.8020208@mostertman.org> Message-ID: @Daniel, You were true on the side-note, sudo prefixed command for NGinx actions will cause an serious threat. I missed the note... @Andrew also stated "In Linux (and most other Unix based systems) ports below 1024 need to be opened using the root user. 
So you need to start NGINX as root which will open the port and then drop down to an unprivileged user for the port." Is there a way to achieve this? I am not sure, if the below is relevant? http://stackoverflow.com/questions/413807/is-there-a-way-for-non-root-processes-to-bind-to-privileged-ports-1024-on-l Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264365#msg-264365 From luky-37 at hotmail.com Fri Feb 5 10:08:27 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 5 Feb 2016 11:08:27 +0100 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: References: <56B34CB1.8020208@mostertman.org>, Message-ID: > also stated "In Linux (and most other Unix based systems) ports below 1024 > need to be opened using the root user. So you need to start NGINX as root > which will open the port and then drop down to an unprivileged user for the > port." > > Is there a way to achieve this? Configure the user directive so that the workers run unprivileged: http://nginx.org/r/user From nginx-forum at forum.nginx.org Fri Feb 5 11:26:28 2016 From: nginx-forum at forum.nginx.org (abauer) Date: Fri, 05 Feb 2016 06:26:28 -0500 Subject: Strange upstream behaviour Message-ID: <40d3d72f302d7ea8985d747145af5bf7.NginxMailingListEnglish@forum.nginx.org> Hello, We are using nginx as a loadbalancer in front of docker containers. Most of the time this works without problems. But sometimes (~0.1% of the requests) the requests are sent to the server group name instead of one of the members of the servergroup. upstream gateway { server 127.0.0.1:6000 weight=1000000; server 1.2.3.4:6000; } server { listen :443; ssl on; server_name ; location / { proxy_pass http://gateway; } } Most of the time the requests are logged as expected: 03/Feb/2016:04:00:25 +0100 "/api/v1/login" 200 52 "Jersey/2.7 (HttpUrlConnection 1.8.0_51)" "time=0.192" "" "upstream=127.0.0.1:6000" or 03/Feb/2016:04:00:25 +0100 "/api/v1/login" 200 52 "Jersey/2.7 (HttpUrlConnection 1.8.0_51)" "time=0.192" "" "upstream=1.2.3.4:6000" But randomly this happens: 03/Feb/2016:04:00:25 +0100 "/api/v1/login" 502 52 "Jersey/2.7 (HttpUrlConnection 1.8.0_51)" "time=0.192" "" "upstream=gateway" As you can see, it uses the server group name as the upstream target (which fails of course since this is not a valid host). What could be the cause of this behaviour? All the best, Armin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264368,264368#msg-264368 From nginx-forum at forum.nginx.org Fri Feb 5 11:33:55 2016 From: nginx-forum at forum.nginx.org (FSC) Date: Fri, 05 Feb 2016 06:33:55 -0500 Subject: Prevent reverse proxy from sending range headers to source server In-Reply-To: References: Message-ID: <3be89eccd0bbb8b41b787fddc313d819.NginxMailingListEnglish@forum.nginx.org> No idea how I missed that. Thank you very much! Also for people that might find this thread later: Adding the parameter "updating" to the directive "proxy_cache_use_stale" is important as well if you want to minimize the traffic the origin server generates. proxy_cache_use_stale updating; proxy_set_header Range ""; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264362,264370#msg-264370 From nginx-forum at forum.nginx.org Fri Feb 5 11:36:30 2016 From: nginx-forum at forum.nginx.org (kmg) Date: Fri, 05 Feb 2016 06:36:30 -0500 Subject: Option for geoIP blocking in Stream server section on nginx 1.9.x Version ! 
Message-ID: <5b5480a954e3051b446c4e5f329deac9.NginxMailingListEnglish@forum.nginx.org> Hi, I'm so happy that Stream feature is enabled from Nginx-1.9.x. Also I started to use this feature in my Testing environment which is running smoothly. But One thing I got struggled , is to block IP based on GeoIP country as like Server block. Does GeoIP blocking feature available in Stream block ?. if it so, kindly please share some example syntax to me. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264371,264371#msg-264371 From mdounin at mdounin.ru Fri Feb 5 12:55:31 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Feb 2016 15:55:31 +0300 Subject: compiling nginx from sources and including xslt sources In-Reply-To: <27a75614a3f7746a94d97035b6b8930a.NginxMailingListEnglish@forum.nginx.org> References: <20160204183432.GG70403@mdounin.ru> <27a75614a3f7746a94d97035b6b8930a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160205125531.GI70403@mdounin.ru> Hello! On Thu, Feb 04, 2016 at 01:50:00PM -0500, orglee wrote: > Oh ok I'm dumb. :P > > Can I somehow specify those library paths to? I really would like to avoid > compiling them into directory structure of my current system. Refer to the library docs to find out how to compile them and install to some private location. Usually the relevant configure option is "--prefix". Then use appropriate compiler and linker options to provide paths to include files and libraries during nginx compilation. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Feb 5 13:05:31 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Feb 2016 16:05:31 +0300 Subject: eventfd() failed (38: Function not implemented) In-Reply-To: References: Message-ID: <20160205130531.GJ70403@mdounin.ru> Hello! On Thu, Feb 04, 2016 at 02:28:07PM -0500, sirwiz wrote: > I try to run Nginx, but i have error: eventfd() failed (38: Function not > implemented) > > I found this patch (https://forum.nginx.org/read.php?29,258676,258685#REPLY) > to 1.9.0, i cannot do diff so i do it manualy. > > So i found ngx_epoll_init(ngx_cycle_t > > And now my file: > > static ngx_int_t ngx_epoll_init(ngx_cycle_t *cycle, ngx_msec_t timer); > #if (NGX_HAVE_EVENTFD) > if (ngx_epoll_notify_init(cycle->log) != NGX_OK) { > - return NGX_ERROR; > + ngx_epoll_module_ctx.actions.notify = NULL; > } > #endif > > But i have compiler error > > src/event/modules/ngx_epoll_module.c:105: error: expected identifier or '(' > before 'if' > make[1]: *** [objs/src/event/modules/ngx_epoll_module.o] Error 1 > > Any ideas how to make it working? Try obtaining latest nginx version instead, 1.9.10. It can be downloaded here: http://nginx.org/en/download.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Feb 5 13:24:05 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Feb 2016 16:24:05 +0300 Subject: Prevent reverse proxy from sending range headers to source server In-Reply-To: <9971426452dfe72b95511d6d8a910e23.NginxMailingListEnglish@forum.nginx.org> References: <9971426452dfe72b95511d6d8a910e23.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160205132404.GK70403@mdounin.ru> Hello! On Thu, Feb 04, 2016 at 08:38:15PM -0500, FSC wrote: > Hello everyone! > > I set up my reverse proxy to cache files stored at AWS S3. I want to > minimize the amount of traffic generated at AWS. > > Many of the files are mp4s. Clients make GET requests with the range set. A > video player may get multiple chunks of a file at once. 
> > How can I have the proxy server NOT send the client's Range header along to > AWS S3? I want the cached version to be used. The file should only be > revalidated after the set period of 30 days. > > So it seems like I would need something along proxy_set_header but something > that unsets the header sent to the proxied server. As already mentioned by Robert, proxy_set_header with an empty value will prevent a header from being sent to upstream servers. Note well that when caching is enabled, nginx will not send the Range header to upstream servers. It is removed automatically along with several other headers, see http://nginx.org/r/proxy_set_header: : If caching is enabled, the header fields ?If-Modified-Since?, : ?If-Unmodified-Since?, ?If-None-Match?, ?If-Match?, ?Range?, and : ?If-Range? from the original request are not passed to the proxied : server. That is, there is no need to do anything special with the Range header. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Feb 5 13:33:32 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Feb 2016 16:33:32 +0300 Subject: Strange upstream behaviour In-Reply-To: <40d3d72f302d7ea8985d747145af5bf7.NginxMailingListEnglish@forum.nginx.org> References: <40d3d72f302d7ea8985d747145af5bf7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160205133332.GL70403@mdounin.ru> Hello! On Fri, Feb 05, 2016 at 06:26:28AM -0500, abauer wrote: > We are using nginx as a loadbalancer in front of docker containers. Most of > the time this works without problems. But sometimes (~0.1% of the requests) > the requests are sent to the server group name instead of one of the members > of the servergroup. [...] > But randomly this happens: > > 03/Feb/2016:04:00:25 +0100 "/api/v1/login" 502 52 "Jersey/2.7 > (HttpUrlConnection 1.8.0_51)" "time=0.192" "" > "upstream=gateway" > > As you can see, it uses the server group name as the upstream target (which > fails of course since this is not a valid host). > > What could be the cause of this behaviour? Upstream name can be seen in the $upstream_addr variable if nginx wasn't able to select an upstream server to connect to because all servers were down as per max_fails/fail_timeout. At the same time the "no live upstreams" error is logged to the error log. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Feb 5 13:34:11 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Feb 2016 16:34:11 +0300 Subject: Option for geoIP blocking in Stream server section on nginx 1.9.x Version ! In-Reply-To: <5b5480a954e3051b446c4e5f329deac9.NginxMailingListEnglish@forum.nginx.org> References: <5b5480a954e3051b446c4e5f329deac9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160205133411.GM70403@mdounin.ru> Hello! On Fri, Feb 05, 2016 at 06:36:30AM -0500, kmg wrote: > Hi, > > I'm so happy that Stream feature is enabled from Nginx-1.9.x. Also I started > to use this feature in my Testing environment which is running smoothly. > > But One thing I got struggled , is to block IP based on GeoIP country as > like Server block. Does GeoIP blocking feature available in Stream block ?. > if it so, kindly please share some example syntax to me. No, it's not something current supported in stream proxy. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Feb 5 15:00:56 2016 From: nginx-forum at forum.nginx.org (abauer) Date: Fri, 05 Feb 2016 10:00:56 -0500 Subject: Strange upstream behaviour In-Reply-To: <20160205133332.GL70403@mdounin.ru> References: <20160205133332.GL70403@mdounin.ru> Message-ID: <49b8e399705d4429535e6d228bcb6ed0.NginxMailingListEnglish@forum.nginx.org> Thank you, that would explain the message. Ill check why the upstream servers might not have been reachable. All the best, Armin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264368,264381#msg-264381 From nginx-forum at forum.nginx.org Fri Feb 5 16:02:31 2016 From: nginx-forum at forum.nginx.org (jeeeff) Date: Fri, 05 Feb 2016 11:02:31 -0500 Subject: proxy_cache_lock allow multiple requests to remote server in some cases In-Reply-To: <6129e33610b874bc698e0d6ad22ce105.NginxMailingListEnglish@forum.nginx.org> References: <6129e33610b874bc698e0d6ad22ce105.NginxMailingListEnglish@forum.nginx.org> Message-ID: <858dcd8c398117b6505c16c8796b5e2e.NginxMailingListEnglish@forum.nginx.org> Anyone else noticed the same behavior? I wasn't sure if that kind of behavior was correct, but as I said the lock works properly and only one request get forwarded to the backend server when there is no cached item in the cache (that is: the cache is empty), so to me the behavior should be the same when the cache is being refreshed (I don't see why it would be different?), but that is not the case... so it looks like a defect to me. I also tried to confirm if other cache software do the same thing, such as varnish, and in theory they have this "waiting queue" which should guarantee that only one request gets forwarded, but there is a bug right now which actually makes the thing even worst than Nginx (at least Nginx sends a If-Modified-Since, but varnish just do a full request for all concurrent requests, their bug tracking ticket is https://www.varnish-cache.org/trac/ticket/1799). The workaround mentioned does no even provide the same behavior (it hits the old cache instead of waiting the queued request), so does not work for me. Sorry for mentioning about a third party product such as varnish on this forum, but since I wasn't receiving any response, I wanted to see if other solutions had this functionality (and confirm my post made sense...), but then I am now back with Nginx as it works better (and is easier to use) anyway, at least until the bug is resolved on varnish. If this issue is resolved on nginx, that would be even better for me... nginx is more flexible and easier to use. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264203,264383#msg-264383 From mdounin at mdounin.ru Fri Feb 5 16:47:09 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Feb 2016 19:47:09 +0300 Subject: proxy_cache_lock allow multiple requests to remote server in some cases In-Reply-To: <858dcd8c398117b6505c16c8796b5e2e.NginxMailingListEnglish@forum.nginx.org> References: <6129e33610b874bc698e0d6ad22ce105.NginxMailingListEnglish@forum.nginx.org> <858dcd8c398117b6505c16c8796b5e2e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160205164709.GQ70403@mdounin.ru> Hello! On Fri, Feb 05, 2016 at 11:02:31AM -0500, jeeeff wrote: > Anyone else noticed the same behavior? 
> > I wasn't sure if that kind of behavior was correct, but as I said the lock > works properly and only one request get forwarded to the backend server when > there is no cached item in the cache (that is: the cache is empty), so to me > the behavior should be the same when the cache is being refreshed (I don't > see why it would be different?), but that is not the case... so it looks > like a defect to me. [...] You can find the detailed response to your original message here: http://mailman.nginx.org/pipermail/nginx/2016-January/049734.html Unfortunately, forum interface is broken and is unable to show it in the correct thread. Consider using the mailing list directly instead, see http://nginx.org/en/support.html. Additionally, it's also unlikely that you'll see this message as well, due to the same forum bug. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Feb 5 17:33:17 2016 From: nginx-forum at forum.nginx.org (jeeeff) Date: Fri, 05 Feb 2016 12:33:17 -0500 Subject: proxy_cache_lock allow multiple requests to remote server in some cases In-Reply-To: <20160205164709.GQ70403@mdounin.ru> References: <20160205164709.GQ70403@mdounin.ru> Message-ID: <1e960cadfaa9d27ec96defc3ebccfbac.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > You can find the detailed response to your original message here: > > http://mailman.nginx.org/pipermail/nginx/2016-January/049734.html > > Unfortunately, forum interface is broken and is unable to show it > in the correct thread. Consider using the mailing list directly > instead, see http://nginx.org/en/support.html. > > Additionally, it's also unlikely that you'll see this message as > well, due to the same forum bug. > > -- > Maxim Dounin > http://nginx.org/ > Thanks a lot for the info! Yes, after I re-read the documentation, I kind of understood that "populate a new element" could have been interpreted the wrong way. Anyway, now I know for sure what the implemented behavior is, and that I am not making a mistake or missing some configuration. And as you suggested, I also thought of looking at the code to see if that could be fixed/implemented, that is, if I get some time to get into the project source code... Jeeeff Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264203,264386#msg-264386 From jemord at dimagi.com Fri Feb 5 20:17:57 2016 From: jemord at dimagi.com (Jon Emord) Date: Fri, 05 Feb 2016 20:17:57 +0000 Subject: Dual Certificate (RSA and ECC) support Message-ID: Hello, I see that "Dual Certificate (RSA and ECC) support" is currently on the roadmap for nginx 1.9, but is not listed as in progress. Is this feature guaranteed to be in 1.9? Will this feature allow us to serve a SHA-256 and SHA-1 cert at the same time? Our platform is often used in developing countries that use J2ME phones and most only support SHA-1 certificates. Thank you, Jon Emord -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Sat Feb 6 17:13:52 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Sat, 6 Feb 2016 12:13:52 -0500 Subject: Possible bug http2 module Message-ID: <56B629D0.7020501@ohlste.in> Hello, I am running a WordPress multisite install and recently turned off http2 on the domain in order to use a third party module which evidently doesn't play nicely with http2 (echo module). In testing I noticed that the site was still being served with http2 enabled according to both Chrome and Firefox. 
I confirmed with curl. I recompiled nginx without any third party modules: # nginx -V nginx version: nginx/1.9.10 built with OpenSSL 1.0.2f 28 Jan 2016 TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --with-ipv6 --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_stub_status_module --with-pcre --with-http_v2_module --with-http_ssl_module I then adjusted the config files so as not to reference any third party modules and performed a binary "upgrade". I still see that http2 is in use on both Chrome and Firefox, and also via curl. # curl -I -k https://my.ip4.add.ress/ HTTP/2.0 302 server:nginx/1.9.10 date:Sat, 06 Feb 2016 16:49:44 GMT content-type:text/html; charset=UTF-8 # curl -I https:/mydomain.net/ HTTP/2.0 200 server:nginx/1.9.10 date:Sat, 06 Feb 2016 16:50:41 GMT content-type:text/html; charset=UTF-8 There are other domains on that IPv4 which use http2. Disabling http2 on all of them resulted in the expected behavior in the browsers and in curl: # curl -I -k https:///my.ip4.add.ress/ HTTP/1.1 302 Moved Temporarily Server: nginx/1.9.10 Date: Sat, 06 Feb 2016 17:03:39 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive # curl -I https://mydomain.net/ HTTP/1.1 200 OK Server: nginx/1.9.10 Date: Sat, 06 Feb 2016 17:05:05 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive I don't see any reference to this at http://nginx.org/en/docs/http/ngx_http_v2_module.html so I am guessing this is unintended. Obvious workaround is to place this domain on an IPv4 in which there are no http2 sites. More permanent solution should be considered. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From vbart at nginx.com Sat Feb 6 17:22:39 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Sat, 06 Feb 2016 20:22:39 +0300 Subject: Possible bug http2 module In-Reply-To: <56B629D0.7020501@ohlste.in> References: <56B629D0.7020501@ohlste.in> Message-ID: <3665951.8rUKGJE8IT@vbart-laptop> On Saturday 06 February 2016 12:13:52 Jim Ohlstein wrote: > Hello, > > I am running a WordPress multisite install and recently turned off http2 > on the domain in order to use a third party module which evidently > doesn't play nicely with http2 (echo module). In testing I noticed that > the site was still being served with http2 enabled according to both > Chrome and Firefox. I confirmed with curl. 
> > I recompiled nginx without any third party modules: > > # nginx -V > nginx version: nginx/1.9.10 > built with OpenSSL 1.0.2f 28 Jan 2016 > TLS SNI support enabled > configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I > /usr/local/include' --with-ld-opt='-L /usr/local/lib' > --conf-path=/usr/local/etc/nginx/nginx.conf > --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid > --error-log-path=/var/log/nginx-error.log --user=www --group=www > --with-file-aio --with-ipv6 > --http-client-body-temp-path=/var/tmp/nginx/client_body_temp > --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp > --http-proxy-temp-path=/var/tmp/nginx/proxy_temp > --http-scgi-temp-path=/var/tmp/nginx/scgi_temp > --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp > --http-log-path=/var/log/nginx-access.log --with-http_stub_status_module > --with-pcre --with-http_v2_module --with-http_ssl_module > > > I then adjusted the config files so as not to reference any third party > modules and performed a binary "upgrade". > > I still see that http2 is in use on both Chrome and Firefox, and also > via curl. > > # curl -I -k https://my.ip4.add.ress/ > HTTP/2.0 302 > server:nginx/1.9.10 > date:Sat, 06 Feb 2016 16:49:44 GMT > content-type:text/html; charset=UTF-8 > > # curl -I https:/mydomain.net/ > HTTP/2.0 200 > server:nginx/1.9.10 > date:Sat, 06 Feb 2016 16:50:41 GMT > content-type:text/html; charset=UTF-8 > > There are other domains on that IPv4 which use http2. Disabling http2 on > all of them resulted in the expected behavior in the browsers and in curl: > > # curl -I -k https:///my.ip4.add.ress/ > HTTP/1.1 302 Moved Temporarily > Server: nginx/1.9.10 > Date: Sat, 06 Feb 2016 17:03:39 GMT > Content-Type: text/html; charset=UTF-8 > Connection: keep-alive > > # curl -I https://mydomain.net/ > HTTP/1.1 200 OK > Server: nginx/1.9.10 > Date: Sat, 06 Feb 2016 17:05:05 GMT > Content-Type: text/html; charset=UTF-8 > Connection: keep-alive > > I don't see any reference to this at > http://nginx.org/en/docs/http/ngx_http_v2_module.html so I am guessing > this is unintended. > [..] A quote from the documentation: | The http2 parameter (1.9.5) configures the port to accept HTTP/2 | connections. http://nginx.org/en/docs/http/ngx_http_core_module.html#listen wbr, Valentin V. Bartenev From jim at ohlste.in Sat Feb 6 17:44:08 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Sat, 6 Feb 2016 12:44:08 -0500 Subject: Possible bug http2 module In-Reply-To: <3665951.8rUKGJE8IT@vbart-laptop> References: <56B629D0.7020501@ohlste.in> <3665951.8rUKGJE8IT@vbart-laptop> Message-ID: <56B630E8.3070204@ohlste.in> On 2/6/16 12:22 PM, ???????? ???????? wrote: > On Saturday 06 February 2016 12:13:52 Jim Ohlstein wrote: >> Hello, >> >> I am running a WordPress multisite install and recently turned off http2 >> on the domain in order to use a third party module which evidently >> doesn't play nicely with http2 (echo module). In testing I noticed that >> the site was still being served with http2 enabled according to both >> Chrome and Firefox. I confirmed with curl. 
>> >> I recompiled nginx without any third party modules: >> >> # nginx -V >> nginx version: nginx/1.9.10 >> built with OpenSSL 1.0.2f 28 Jan 2016 >> TLS SNI support enabled >> configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I >> /usr/local/include' --with-ld-opt='-L /usr/local/lib' >> --conf-path=/usr/local/etc/nginx/nginx.conf >> --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid >> --error-log-path=/var/log/nginx-error.log --user=www --group=www >> --with-file-aio --with-ipv6 >> --http-client-body-temp-path=/var/tmp/nginx/client_body_temp >> --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp >> --http-proxy-temp-path=/var/tmp/nginx/proxy_temp >> --http-scgi-temp-path=/var/tmp/nginx/scgi_temp >> --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp >> --http-log-path=/var/log/nginx-access.log --with-http_stub_status_module >> --with-pcre --with-http_v2_module --with-http_ssl_module >> >> >> I then adjusted the config files so as not to reference any third party >> modules and performed a binary "upgrade". >> >> I still see that http2 is in use on both Chrome and Firefox, and also >> via curl. >> >> # curl -I -k https://my.ip4.add.ress/ >> HTTP/2.0 302 >> server:nginx/1.9.10 >> date:Sat, 06 Feb 2016 16:49:44 GMT >> content-type:text/html; charset=UTF-8 >> >> # curl -I https:/mydomain.net/ >> HTTP/2.0 200 >> server:nginx/1.9.10 >> date:Sat, 06 Feb 2016 16:50:41 GMT >> content-type:text/html; charset=UTF-8 >> >> There are other domains on that IPv4 which use http2. Disabling http2 on >> all of them resulted in the expected behavior in the browsers and in curl: >> >> # curl -I -k https:///my.ip4.add.ress/ >> HTTP/1.1 302 Moved Temporarily >> Server: nginx/1.9.10 >> Date: Sat, 06 Feb 2016 17:03:39 GMT >> Content-Type: text/html; charset=UTF-8 >> Connection: keep-alive >> >> # curl -I https://mydomain.net/ >> HTTP/1.1 200 OK >> Server: nginx/1.9.10 >> Date: Sat, 06 Feb 2016 17:05:05 GMT >> Content-Type: text/html; charset=UTF-8 >> Connection: keep-alive >> >> I don't see any reference to this at >> http://nginx.org/en/docs/http/ngx_http_v2_module.html so I am guessing >> this is unintended. >> > [..] > > A quote from the documentation: > > | The http2 parameter (1.9.5) configures the port to accept HTTP/2 > | connections. > > http://nginx.org/en/docs/http/ngx_http_core_module.html#listen Ahh. That's not in the http2 module documentation, where I looked and where it should perhaps also be mentioned, and it's not clear that the above applies to every server. So if I write: listen 443 ssl http2; in a server directive anywhere as dosumneted in http://nginx.org/en/docs/http/ngx_http_v2_module.html#example, then http2 is enabled in all servers on all IP's even if it's not specifically enabled in a listen directive in a particular server? That seems wrong, intuitively. There are (more and more) times when shared IPv4's are necessary, and dictating this behavior for all servers on a given IPv4 is probably less than optimal. If it's technically a necessity it could perhaps be more explicitly documented. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." 
- Mark Twain From vbart at nginx.com Sat Feb 6 18:00:20 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Sat, 06 Feb 2016 21:00:20 +0300 Subject: Possible bug http2 module In-Reply-To: <56B630E8.3070204@ohlste.in> References: <56B629D0.7020501@ohlste.in> <3665951.8rUKGJE8IT@vbart-laptop> <56B630E8.3070204@ohlste.in> Message-ID: <2150385.xejF8gPqhH@vbart-laptop> On Saturday 06 February 2016 12:44:08 Jim Ohlstein wrote: > On 2/6/16 12:22 PM, ???????? ???????? wrote: > > On Saturday 06 February 2016 12:13:52 Jim Ohlstein wrote: > >> Hello, > >> > >> I am running a WordPress multisite install and recently turned off http2 > >> on the domain in order to use a third party module which evidently > >> doesn't play nicely with http2 (echo module). In testing I noticed that > >> the site was still being served with http2 enabled according to both > >> Chrome and Firefox. I confirmed with curl. > >> > >> I recompiled nginx without any third party modules: > >> > >> # nginx -V > >> nginx version: nginx/1.9.10 > >> built with OpenSSL 1.0.2f 28 Jan 2016 > >> TLS SNI support enabled > >> configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I > >> /usr/local/include' --with-ld-opt='-L /usr/local/lib' > >> --conf-path=/usr/local/etc/nginx/nginx.conf > >> --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid > >> --error-log-path=/var/log/nginx-error.log --user=www --group=www > >> --with-file-aio --with-ipv6 > >> --http-client-body-temp-path=/var/tmp/nginx/client_body_temp > >> --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp > >> --http-proxy-temp-path=/var/tmp/nginx/proxy_temp > >> --http-scgi-temp-path=/var/tmp/nginx/scgi_temp > >> --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp > >> --http-log-path=/var/log/nginx-access.log --with-http_stub_status_module > >> --with-pcre --with-http_v2_module --with-http_ssl_module > >> > >> > >> I then adjusted the config files so as not to reference any third party > >> modules and performed a binary "upgrade". > >> > >> I still see that http2 is in use on both Chrome and Firefox, and also > >> via curl. > >> > >> # curl -I -k https://my.ip4.add.ress/ > >> HTTP/2.0 302 > >> server:nginx/1.9.10 > >> date:Sat, 06 Feb 2016 16:49:44 GMT > >> content-type:text/html; charset=UTF-8 > >> > >> # curl -I https:/mydomain.net/ > >> HTTP/2.0 200 > >> server:nginx/1.9.10 > >> date:Sat, 06 Feb 2016 16:50:41 GMT > >> content-type:text/html; charset=UTF-8 > >> > >> There are other domains on that IPv4 which use http2. Disabling http2 on > >> all of them resulted in the expected behavior in the browsers and in > >> curl: > >> > >> # curl -I -k https:///my.ip4.add.ress/ > >> HTTP/1.1 302 Moved Temporarily > >> Server: nginx/1.9.10 > >> Date: Sat, 06 Feb 2016 17:03:39 GMT > >> Content-Type: text/html; charset=UTF-8 > >> Connection: keep-alive > >> > >> # curl -I https://mydomain.net/ > >> HTTP/1.1 200 OK > >> Server: nginx/1.9.10 > >> Date: Sat, 06 Feb 2016 17:05:05 GMT > >> Content-Type: text/html; charset=UTF-8 > >> Connection: keep-alive > >> > >> I don't see any reference to this at > >> http://nginx.org/en/docs/http/ngx_http_v2_module.html so I am guessing > >> this is unintended. > > > > [..] > > > > A quote from the documentation: > > | The http2 parameter (1.9.5) configures the port to accept HTTP/2 > > | connections. > > > > http://nginx.org/en/docs/http/ngx_http_core_module.html#listen > > Ahh. 
That's not in the http2 module documentation, where I looked and > where it should perhaps also be mentioned, and it's not clear that the > above applies to every server. > > So if I write: > > listen 443 ssl http2; > > in a server directive anywhere as dosumneted in > http://nginx.org/en/docs/http/ngx_http_v2_module.html#example, then > http2 is enabled in all servers on all IP's even if it's not > specifically enabled in a listen directive in a particular server? That > seems wrong, intuitively. There are (more and more) times when shared > IPv4's are necessary, and dictating this behavior for all servers on a > given IPv4 is probably less than optimal. If it's technically a > necessity it could perhaps be more explicitly documented. It's pretty much the same as "ssl" parameter works. Ones connected an HTTP/2 client is able to request any host over the HTTP/2 connection, and in fact browsers do. So there's no proper way to disable HTTP/2 for one virtual server while keeping enable for others. Could you suggest a good phrase to improve the docs? wbr, Valentin V. Bartenev From kyprizel at gmail.com Sat Feb 6 19:27:31 2016 From: kyprizel at gmail.com (kyprizel) Date: Sat, 6 Feb 2016 22:27:31 +0300 Subject: Dual Certificate (RSA and ECC) support In-Reply-To: References: Message-ID: This patches are pretty stable (except you can't use different OCSP responders for SHA1 and SHA256 certs and use different ssl_stapling_files). https://github.com/wikimedia/operations-software-nginx/tree/wmf-1.9.3-1/debian/patches On Fri, Feb 5, 2016 at 11:17 PM, Jon Emord wrote: > Hello, > > I see that "Dual Certificate (RSA and ECC) support" is currently on the > roadmap for nginx 1.9, but is not listed as in progress. > > Is this feature guaranteed to be in 1.9? Will this feature allow us to > serve a SHA-256 and SHA-1 cert at the same time? > > Our platform is often used in developing countries that use J2ME phones > and most only support SHA-1 certificates. > > Thank you, > Jon Emord > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Sat Feb 6 19:51:30 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Sat, 6 Feb 2016 14:51:30 -0500 Subject: Possible bug http2 module In-Reply-To: <2150385.xejF8gPqhH@vbart-laptop> References: <56B629D0.7020501@ohlste.in> <3665951.8rUKGJE8IT@vbart-laptop> <56B630E8.3070204@ohlste.in> <2150385.xejF8gPqhH@vbart-laptop> Message-ID: <56B64EC2.3080609@ohlste.in> On 2/6/16 1:00 PM, ???????? ???????? wrote: [snip] So if I write: >> >> listen 443 ssl http2; >> >> in a server directive anywhere as dosumneted in >> http://nginx.org/en/docs/http/ngx_http_v2_module.html#example, then >> http2 is enabled in all servers on all IP's even if it's not >> specifically enabled in a listen directive in a particular server? That >> seems wrong, intuitively. There are (more and more) times when shared >> IPv4's are necessary, and dictating this behavior for all servers on a >> given IPv4 is probably less than optimal. If it's technically a >> necessity it could perhaps be more explicitly documented. > > It's pretty much the same as "ssl" parameter works. I thought of that but there's a fundamental difference from the point of view of the server admin. 
If I enable SSL on one or more vhosts sharing an IP address and choose not to use SSL in another vhost sharing that same IP, I simply do not write an ssl server configuration for that vhost. There's no certificate for it so pretty much any client will throw an error if a request is made for that vhost on 443. The user then can use http instead of https or s/he can choose to use an incorrect certificate. Most choose the former. With the advent of free certificate services like Letsencrypt, StartSSL and Wotan, I, like many server admins, am shifting more and more to https (and http2). But as we've seen, there are times when http2 cannot be used (or is undesirable) but we still prefer SSL, or we've long ago enabled HSTS and pretty much can no longer back out, at least not without significant disruption. In my use case, the server in question is a FreeBSD KVM in a large Linux machine that hosts many other KVM's and LXC's. Each of my customers needs at least one IPv4 and most want more than one. The host will only provide me with a finite number of IPv4 addresses per physical server. If IPv6 were ubiquitous I would not have a problem as I have a /64 on each machine. I try to limit my own use of IPv4's so as to have them available. With SNI that really isn't a problem. I no longer worry whether my sites support ancient clients, so I host many SSL domains on the same IP. If I want to use http2 on all but one, and I _cannot_ use http2 on that one, then I cannot use http2 on any, as I have come to find out. > > Ones connected an HTTP/2 client is able to request any host over the HTTP/2 > connection, and in fact browsers do. So there's no proper way to disable > HTTP/2 for one virtual server while keeping enable for others. The most elegant solution (if it's possible - my programming skills start and end with shell scripts) would be a programmatic one where a directive is optionally turned on in the http context of the nginx configuation file. Something like: check_http2 (on|off); with a default of "off". Such a directive could check for explicit http2 enabling in a vhost where SSL is also enabled, and downgrade connections to those where it is not enabled. Of course this would probably violate half a dozen RFC's, but it would be helpful in the use case I just described, which is in fact a real world situation. A server admin who does not want to spend the CPU cycles on such a check simply does not enable it, but one who needs it can use it. > > Could you suggest a good phrase to improve the docs? Something like the following in the http2 and core module documentation: NOTE: Enabling http2 globally (via "listen 443 ssl http2;" in ANY server block), or on any single IP (via "listen 1.2.3.4:443 ssl http2;"), enables http2 on ALL vhosts using SSL, either globally or on that specific IP. If this is not the desired behavior use one IP for vhosts using http2 and a different IP for vhosts using SSL but for which you do not want to enable http2. Enable http2 only for the IP's where it is desired. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From nginx-forum at forum.nginx.org Tue Feb 9 10:06:39 2016 From: nginx-forum at forum.nginx.org (mongia.ramandeep) Date: Tue, 09 Feb 2016 05:06:39 -0500 Subject: SPDY + HTTP/2 In-Reply-To: References: Message-ID: <3bc316545bc691679b5c824fbe79e587.NginxMailingListEnglish@forum.nginx.org> I was able to patch nginx-1.9.10 which supports both SPDY + HTTP2. 
I don't intend to use both of them simultaneously on the same interface. My question: Is there a reason why this is not done? It gives me an option to choose between the two per interface. server { listen x.x.x.x:443 ssl http2; server_name abc.com; ... } server { listen y.y.y.y:443 ssl spdy; server_name def.com; ... } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263245,264414#msg-264414 From nginx-forum at forum.nginx.org Tue Feb 9 12:12:24 2016 From: nginx-forum at forum.nginx.org (maziar) Date: Tue, 09 Feb 2016 07:12:24 -0500 Subject: Nginx for media streaming Message-ID: I want to setup nginx for media streaming web site like youtube I have some movie on my server with HD quality and I want to serve video like YouTube its mean that nginx should change video's quality by user internet connection quality, I found that this feature name is adaptive streaming in nginx, And also , like YouTube you can change video timeline to minute 2 without buffering last 2 minute Note that I buy jwplayer for this web site Please help me what should I do for these requirements And please tell me which plugging or addons or stuff I should add to my nginx for compiling Thank you Maziar Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264416,264416#msg-264416 From rainer at ultra-secure.de Tue Feb 9 13:02:09 2016 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Tue, 09 Feb 2016 14:02:09 +0100 Subject: Nginx for media streaming In-Reply-To: References: Message-ID: <22236878de2c794341194b343fbf5c72@ultra-secure.de> Am 2016-02-09 13:12, schrieb maziar: > I want to setup nginx for media streaming web site like youtube > I have some movie on my server with HD quality and I want to serve > video > like YouTube its mean that nginx should change video's quality by user > internet connection quality, I found that this feature name is adaptive > streaming in nginx, > And also , like YouTube you can change video timeline to minute 2 > without > buffering last 2 minute > Note that I buy jwplayer for this web site > > Please help me what should I do for these requirements > And please tell me which plugging or addons or stuff I should add to my > nginx for compiling > Thank you > Maziar AFAIK, you need the commercial version of NGINX: https://www.nginx.com/products/streaming-media-delivery/ Plugins for Simple Flash and MP4-streaming are available in the open source version - though I don't know how useful they are in the real world. http://nginx.org/en/docs/http/ngx_http_mp4_module.html It's years since a customer needed any of these - most everybody hosts his stuff on youtube these days. From kontas at ceid.upatras.gr Tue Feb 9 13:37:35 2016 From: kontas at ceid.upatras.gr (Christos Kontas) Date: Tue, 9 Feb 2016 15:37:35 +0200 Subject: Set 'Content-Length' Header Field in Body Handler Message-ID: <20160209133732.GA14849@diogenis.ceid.upatras.gr> Hello! I am currently working on a filter module, in which I would like to rewrite the content received from an upstream. Similar to `gzip` module, I don't know the exact value to set to the 'Content-Length' header field during filtering the headers of the request, not only before processing part of the request's body. Is there any way I can postpone the transmission of the response headers until the time this value can be calculated during the filtering of the request body? Thank you in advance! 
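For what it's worth, the stock body filters (gzip, sub_filter and friends) do not postpone the response headers; their header filters simply drop the Content-Length so the response is sent with chunked transfer encoding (or, for HTTP/1.0 clients, terminated by closing the connection). A minimal sketch of that approach follows; the filter name is made up, while the two clearing macros and the next-filter pointer are the usual nginx filter-module pattern:

    static ngx_http_output_header_filter_pt  ngx_http_next_header_filter;

    static ngx_int_t
    ngx_http_myfilter_header_filter(ngx_http_request_t *r)
    {
        /* the rewritten body length is unknown until the body filter runs,
           so clear Content-Length and let nginx fall back to chunked encoding */
        ngx_http_clear_content_length(r);

        /* byte ranges computed against the original body no longer apply */
        ngx_http_clear_accept_ranges(r);

        return ngx_http_next_header_filter(r);
    }

If the exact length really must be sent, the alternative (used, for example, by the image filter module) is to hold the headers back until the whole response has been buffered and transformed, which is considerably more involved.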
Christos Kontas, kontas at ceid.upatras.gr From mdounin at mdounin.ru Tue Feb 9 14:29:32 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Feb 2016 17:29:32 +0300 Subject: nginx-1.9.11 Message-ID: <20160209142931.GF70672@mdounin.ru> Changes with nginx 1.9.11 09 Feb 2016 *) Feature: TCP support in resolver. *) Feature: dynamic modules. *) Bugfix: the $request_length variable did not include size of request headers when using HTTP/2. *) Bugfix: in the ngx_http_v2_module. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Feb 9 15:10:24 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 9 Feb 2016 10:10:24 -0500 Subject: [nginx-announce] nginx-1.9.11 In-Reply-To: <20160209142949.GG70672@mdounin.ru> References: <20160209142949.GG70672@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.11 for Windows https://kevinworthington.com/nginxwin1911 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Feb 9, 2016 at 9:29 AM, Maxim Dounin wrote: > Changes with nginx 1.9.11 09 Feb > 2016 > > *) Feature: TCP support in resolver. > > *) Feature: dynamic modules. > > *) Bugfix: the $request_length variable did not include size of request > headers when using HTTP/2. > > *) Bugfix: in the ngx_http_v2_module. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimssupp at rushpost.com Tue Feb 9 15:17:42 2016 From: jimssupp at rushpost.com (jimssupp at rushpost.com) Date: Tue, 09 Feb 2016 07:17:42 -0800 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: <20160209142931.GF70672@mdounin.ru> References: <20160209142931.GF70672@mdounin.ru> Message-ID: <1455031062.715719.516265722.79891081@webmail.messagingengine.com> Hi starting wit a running nginx/1.9.10 on linux/64 simple build/upgrade to nginx/1.9.11; same config as before no errors in build nginx -v nginx version: nginx/1.9.11 but, systemctl start nginx.serviceJob for nginx.service failed. See "systemctl status nginx.service" and "journalctl -xn" for details. systemctl status nginx.service -l nginx.service - The nginx HTTP and reverse proxy server Loaded: loaded (/etc/systemd/system/nginx.service; enabled) Active: failed (Result: signal) since Tue 2016-02-09 07:09:47 PST; 24s ago Process: 4509 ExecStartPre=/usr/local/sbin/nginx -t -c /usr/local/etc/nginx/nginx.conf (code=killed, signal=FPE) Main PID: 7589 (code=exited, status=0/SUCCESS) nginx -t -c /usr/local/etc/nginx/nginx.conf Floating point exception in syslog Feb 9 07:11:07 jimsweb kernel: [393917.665398] traps: nginx[4679] trap divide error ip:41a95e sp:7ffef4fcbd00 error:0 in nginx[400000+ed000] dropback to 1.9.10 works fine if there's other info that'll help, let me know. 
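For anyone else hitting a trap like this after an upgrade, the two generic ways to narrow it down are to rebuild without third-party modules and re-add them one at a time, or to grab a backtrace from a core dump. The commands below are only illustrative (the core file name and location vary per system; the binary and config paths are the ones from the systemd unit above):

    ulimit -c unlimited
    /usr/local/sbin/nginx -t -c /usr/local/etc/nginx/nginx.conf
    gdb /usr/local/sbin/nginx core      # then type "bt" at the gdb prompt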
Jim From jimssupp at rushpost.com Tue Feb 9 15:41:07 2016 From: jimssupp at rushpost.com (jimssupp at rushpost.com) Date: Tue, 09 Feb 2016 07:41:07 -0800 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: <1455031062.715719.516265722.79891081@webmail.messagingengine.com> References: <20160209142931.GF70672@mdounin.ru> <1455031062.715719.516265722.79891081@webmail.messagingengine.com> Message-ID: <1455032467.720101.516293218.47F753B8@webmail.messagingengine.com> It's brotli. Removing --add-module=/usr/local/src/ngx_brotli from 1.9.11 config, rebuilding, it's fixed. Runs ok. brotli's OK atm with 1.9.10, but not with 1.9.11 From nginx-forum at forum.nginx.org Tue Feb 9 16:53:15 2016 From: nginx-forum at forum.nginx.org (locojohn) Date: Tue, 09 Feb 2016 11:53:15 -0500 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: <1455032467.720101.516293218.47F753B8@webmail.messagingengine.com> References: <1455032467.720101.516293218.47F753B8@webmail.messagingengine.com> Message-ID: <0af69267af30fcac17aa9c1be41483be.NginxMailingListEnglish@forum.nginx.org> I experience the same issue, but I do not have the module reported by anonymous user installed: system:~#nginx -t Floating point exception dmesg: [659221.307369] traps: nginx[20028] trap divide error ip:422fba sp:7ffd8c72a8a0 error:0 in nginx[400000+95d000] Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264422,264434#msg-264434 From vbart at nginx.com Tue Feb 9 16:56:12 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 09 Feb 2016 19:56:12 +0300 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: <0af69267af30fcac17aa9c1be41483be.NginxMailingListEnglish@forum.nginx.org> References: <1455032467.720101.516293218.47F753B8@webmail.messagingengine.com> <0af69267af30fcac17aa9c1be41483be.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3250915.Mj8n3KtA6P@vbart-workstation> On Tuesday 09 February 2016 11:53:15 locojohn wrote: > I experience the same issue, but I do not have the module reported by > anonymous user installed: > > system:~#nginx -t > Floating point exception > > dmesg: > [659221.307369] traps: nginx[20028] trap divide error ip:422fba > sp:7ffd8c72a8a0 error:0 in nginx[400000+95d000] > What 3rd-party modules do you have instead? Please, try without them. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Feb 9 17:01:00 2016 From: nginx-forum at forum.nginx.org (locojohn) Date: Tue, 09 Feb 2016 12:01:00 -0500 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: <3250915.Mj8n3KtA6P@vbart-workstation> References: <3250915.Mj8n3KtA6P@vbart-workstation> Message-ID: It crashes with pagespeed module now. 1.9.10 was working fine. Andrejs. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264422,264436#msg-264436 From nginx-forum at forum.nginx.org Tue Feb 9 18:06:21 2016 From: nginx-forum at forum.nginx.org (gglater62) Date: Tue, 09 Feb 2016 13:06:21 -0500 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: <20151224182404.GZ74233@mdounin.ru> References: <20151224182404.GZ74233@mdounin.ru> Message-ID: <4e007288c6773673d6c372210c178d8b.NginxMailingListEnglish@forum.nginx.org> Hi, Are POST disallowed for X-Accel-Redirect? There is setup: nginx sends requests to Apache, which replies with X-Accel-Redirect and X-Path-Info. 
In X-Accel-Redirect there is path to php script, then nginx did do POST or GET to that script, but it stopped to work since 1.9.10. Is there a possibility to preserve original $request_method? How to make it work without patching nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263661,264437#msg-264437 From mdounin at mdounin.ru Tue Feb 9 18:15:18 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Feb 2016 21:15:18 +0300 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: <4e007288c6773673d6c372210c178d8b.NginxMailingListEnglish@forum.nginx.org> References: <20151224182404.GZ74233@mdounin.ru> <4e007288c6773673d6c372210c178d8b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160209181518.GL70672@mdounin.ru> Hello! On Tue, Feb 09, 2016 at 01:06:21PM -0500, gglater62 wrote: > Hi, > > Are POST disallowed for X-Accel-Redirect? > There is setup: > nginx sends requests to Apache, which replies with X-Accel-Redirect and > X-Path-Info. > In X-Accel-Redirect there is path to php script, then nginx did do POST or > GET to that script, but it stopped to work since 1.9.10. > Is there a possibility to preserve original $request_method? How to make it > work without patching nginx? Original request method is preserved when using X-Accel-Redirect to a named location. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Feb 9 20:00:08 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Tue, 09 Feb 2016 23:00:08 +0300 Subject: SPDY + HTTP/2 In-Reply-To: <3bc316545bc691679b5c824fbe79e587.NginxMailingListEnglish@forum.nginx.org> References: <3bc316545bc691679b5c824fbe79e587.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2055491.DsUa469nvt@vbart-laptop> On Tuesday 09 February 2016 05:06:39 mongia.ramandeep wrote: > I was able to patch nginx-1.9.10 which supports both SPDY + HTTP2. I don't > intend to use both of them simultaneously on the same interface. > > My question: Is there a reason why this is not done? It gives me an option > to choose between the two per interface. > > server { > listen x.x.x.x:443 ssl http2; > server_name abc.com; > ... > } > server { > listen y.y.y.y:443 ssl spdy; > server_name def.com; > ... > } > The answer is that there's a huge difference between patching something to get some feature for your own build and actually to provide support, to keep maintaining the code. We aren't going to spend more time on maintaining already deprecated experimental protocol. A lot of work is being done for every release to find and fix bugs, to implement new features, to test on various platforms. All these procedures are essentially to provide high quality product for millions of users around the world with various use cases and configurations. Any additional line of code adds additional cost to this work. wbr, Valentin V. Bartenev From sca at andreasschulze.de Tue Feb 9 19:59:42 2016 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 09 Feb 2016 20:59:42 +0100 Subject: nginx-1.9.11 In-Reply-To: <20160209142931.GF70672@mdounin.ru> Message-ID: <20160209205942.Horde.8LShQT14m5Or6VbXNIjUjZg@andreasschulze.de> Maxim Dounin: > Changes with nginx 1.9.11 09 Feb 2016 > > *) Feature: TCP support in resolver. 
the rDNS module (https://www.nginx.com/resources/wiki/modules/rdns/) don't compile anymore cc -c -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules -I src/http/v2 -I src/mail \ -o objs/addon/nginx-http-rdns-20140411/ngx_http_rdns_module.o \ ./nginx-http-rdns-20140411//ngx_http_rdns_module.c ./nginx-http-rdns-20140411//ngx_http_rdns_module.c: In function 'merge_loc_conf': ./nginx-http-rdns-20140411//ngx_http_rdns_module.c:217:89: error: 'ngx_resolver_t' has no member named 'udp_connections' if (conf->conf.enabled && ((core_loc_cf->resolver == NULL) || (core_loc_cf->resolver->udp_connections.nelts == 0))) { ^ make[2]: *** [objs/addon/nginx-http-rdns-20140411/ngx_http_rdns_module.o] Error 1 unrelated to above: attached a patch to correct a manpage warning (not new in this version) Andreas -------------- next part -------------- A non-text attachment was scrubbed... Name: hyphen-used-as-minus-sign.patch Type: text/x-diff Size: 659 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Tue Feb 9 20:05:20 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 09 Feb 2016 15:05:20 -0500 Subject: nginx-1.9.11 In-Reply-To: <20160209205942.Horde.8LShQT14m5Or6VbXNIjUjZg@andreasschulze.de> References: <20160209205942.Horde.8LShQT14m5Or6VbXNIjUjZg@andreasschulze.de> Message-ID: <66e6b770e3d9e8f6236601c7429e8d7a.NginxMailingListEnglish@forum.nginx.org> A. Schulze Wrote: ------------------------------------------------------- > Maxim Dounin: > > > Changes with nginx 1.9.11 09 > Feb 2016 > > > > *) Feature: TCP support in resolver. > > the rDNS module (https://www.nginx.com/resources/wiki/modules/rdns/) > don't compile anymore It breaks udp in (openresty)Lua as well. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264422,264444#msg-264444 From arut at nginx.com Tue Feb 9 20:42:05 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 9 Feb 2016 23:42:05 +0300 Subject: nginx-1.9.11 In-Reply-To: <20160209205942.Horde.8LShQT14m5Or6VbXNIjUjZg@andreasschulze.de> References: <20160209142931.GF70672@mdounin.ru> <20160209205942.Horde.8LShQT14m5Or6VbXNIjUjZg@andreasschulze.de> Message-ID: <20160209204205.GC87317@Romans-MacBook-Air.local> On Tue, Feb 09, 2016 at 08:59:42PM +0100, A. Schulze wrote: > > Maxim Dounin: > > >Changes with nginx 1.9.11 09 Feb 2016 > > > > *) Feature: TCP support in resolver. > > the rDNS module (https://www.nginx.com/resources/wiki/modules/rdns/) don't > compile anymore > > cc -c -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall > -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I > src/http -I src/http/modules -I src/http/v2 -I src/mail \ > -o objs/addon/nginx-http-rdns-20140411/ngx_http_rdns_module.o \ > ./nginx-http-rdns-20140411//ngx_http_rdns_module.c > ./nginx-http-rdns-20140411//ngx_http_rdns_module.c: In function > 'merge_loc_conf': > ./nginx-http-rdns-20140411//ngx_http_rdns_module.c:217:89: error: > 'ngx_resolver_t' has no member named 'udp_connections' > if (conf->conf.enabled && ((core_loc_cf->resolver == NULL) || > (core_loc_cf->resolver->udp_connections.nelts == 0))) { > ^ > make[2]: *** [objs/addon/nginx-http-rdns-20140411/ngx_http_rdns_module.o] > Error 1 "connections" should be used instead of "udp_connections". [..] 
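Applied to the line the compiler points at, the rename would look roughly like this (untested, shown only to spell out the suggestion; the field was renamed together with the TCP resolver support in 1.9.11):

    /* before 1.9.11: */
    if (conf->conf.enabled && ((core_loc_cf->resolver == NULL)
        || (core_loc_cf->resolver->udp_connections.nelts == 0)))
    {
        ...
    }

    /* 1.9.11 and later: */
    if (conf->conf.enabled && ((core_loc_cf->resolver == NULL)
        || (core_loc_cf->resolver->connections.nelts == 0)))
    {
        ...
    }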
-- Roman Arutyunyan From nginx-forum at forum.nginx.org Tue Feb 9 20:53:45 2016 From: nginx-forum at forum.nginx.org (gglater62) Date: Tue, 09 Feb 2016 15:53:45 -0500 Subject: x-accel-redirect enables caching for POST requests In-Reply-To: <20160209181518.GL70672@mdounin.ru> References: <20160209181518.GL70672@mdounin.ru> Message-ID: <9dd34aed7a04bb03278d1fae37a76d44.NginxMailingListEnglish@forum.nginx.org> I found a workaround: set $method $request_method; if ($request ~ ^POST) { set $method POST; } fastcgi_param REQUEST_METHOD $method; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263661,264447#msg-264447 From sca at andreasschulze.de Tue Feb 9 21:31:44 2016 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 09 Feb 2016 22:31:44 +0100 Subject: nginx-1.9.11 (Patch to build rDNS module) In-Reply-To: <20160209204205.GC87317@Romans-MacBook-Air.local> References: <20160209142931.GF70672@mdounin.ru> <20160209205942.Horde.8LShQT14m5Or6VbXNIjUjZg@andreasschulze.de> <20160209204205.GC87317@Romans-MacBook-Air.local> Message-ID: <20160209223144.Horde.Z6cU1O1rYGuXiIEFMNcamaw@andreasschulze.de> Roman Arutyunyan: > On Tue, Feb 09, 2016 at 08:59:42PM +0100, A. Schulze wrote: >> >> Maxim Dounin: >> >> >Changes with nginx 1.9.11 >> 09 Feb 2016 >> > >> > *) Feature: TCP support in resolver. >> >> the rDNS module (https://www.nginx.com/resources/wiki/modules/rdns/) don't >> compile anymore >> >> cc -c -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall >> -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I >> src/http -I src/http/modules -I src/http/v2 -I src/mail \ >> -o objs/addon/nginx-http-rdns-20140411/ngx_http_rdns_module.o \ >> ./nginx-http-rdns-20140411//ngx_http_rdns_module.c >> ./nginx-http-rdns-20140411//ngx_http_rdns_module.c: In function >> 'merge_loc_conf': >> ./nginx-http-rdns-20140411//ngx_http_rdns_module.c:217:89: error: >> 'ngx_resolver_t' has no member named 'udp_connections' >> if (conf->conf.enabled && ((core_loc_cf->resolver == NULL) || >> (core_loc_cf->resolver->udp_connections.nelts == 0))) { >> ^ >> make[2]: *** [objs/addon/nginx-http-rdns-20140411/ngx_http_rdns_module.o] >> Error 1 > > "connections" should be used instead of "udp_connectionss" Roman, thanks for the hint. The attached patch solve at least the compile error. (still untested if the module still work) Andreas -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-http-rdns-20140411.patch Type: text/x-diff Size: 717 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Tue Feb 9 23:50:15 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Tue, 09 Feb 2016 18:50:15 -0500 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: References: Message-ID: Hi Lukas, Any other reference on the user directive usage please? Best Regards, Maddy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264450#msg-264450 From nginx-forum at forum.nginx.org Wed Feb 10 08:44:13 2016 From: nginx-forum at forum.nginx.org (Ortal) Date: Wed, 10 Feb 2016 03:44:13 -0500 Subject: Set last modified out header Message-ID: Hello, I am building my own NGINX module, when I try to set the last modified out header I have a struct timespec which I need to convert to last modified. When I set the last modified with the tv_sec the last modified get invalid date. How can I set the last modified correctly? 
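Assuming the struct timespec was obtained with clock_gettime(CLOCK_REALTIME), so that tv_sec really is seconds since the Epoch, setting the header is just an assignment; nginx's header filter renders the Last-Modified date from this field by itself. A minimal sketch, where r is the ngx_http_request_t * passed to the handler:

    struct timespec  ts;

    if (clock_gettime(CLOCK_REALTIME, &ts) == 0) {
        /* last_modified_time is a time_t holding seconds since the Epoch */
        r->headers_out.last_modified_time = ts.tv_sec;
    }

A timespec taken from a monotonic clock counts from an unspecified point in the past, and assigning its tv_sec here produces exactly the kind of invalid date described.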
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264458,264458#msg-264458 From nginx-forum at forum.nginx.org Wed Feb 10 10:01:41 2016 From: nginx-forum at forum.nginx.org (jshare) Date: Wed, 10 Feb 2016 05:01:41 -0500 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: References: <3250915.Mj8n3KtA6P@vbart-workstation> Message-ID: <3f022ad603f2ea4e5e94b932eb71ecda.NginxMailingListEnglish@forum.nginx.org> I also get a compile error related to Pagespeed: -o objs/addon/src/ngx_message_handler.o \ /root/ngx_pagespeed-release-1.10.33.4-beta/src/ngx_message_handler.cc cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -march=i686 -D_GLIBCXX_USE_CXX11_ABI=0 -I src/core -I src/event -I src/event/modules -I src/os/unix -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/chromium/src -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/google-sparsehash/src -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/google-sparsehash/gen/arch/linux/ia32/include -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/protobuf/src -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/re2/src -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/out/Debug/obj/gen -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/out/Debug/obj/gen/protoc_out/instaweb -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/apr/src/include -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/aprutil/src/include -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/apr/gen/arch/linux/ia32/include -I /root/ngx_pagespeed-release-1.10.33.4-beta/psol/include/third_party/aprutil/gen/arch/linux/ia32/include -I /usr/include/libxml2 -I objs -I src/http -I src/http/modules -I src/http/v2 -I src/mail \ -o objs/addon/src/ngx_pagespeed.o \ /root/ngx_pagespeed-release-1.10.33.4-beta/src/ngx_pagespeed.cc /root/ngx_pagespeed-release-1.10.33.4-beta/src/ngx_pagespeed.cc:3133:1: error: deprecated conversion from string constant to ?char*? [-Werror=write-strings] }; ^ /root/ngx_pagespeed-release-1.10.33.4-beta/src/ngx_pagespeed.cc:3148:1: error: deprecated conversion from string constant to ?char*? [-Werror=write-strings] }; ^ cc1plus: all warnings being treated as errors make[1]: *** [objs/addon/src/ngx_pagespeed.o] Error 1 make[1]: Leaving directory `/root/temp/nginx-1.9.11' make: *** [build] Error 2 ------------ 1.9.10 compiles without an issue with pagespeed 1.10.33.2 or .4 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264422,264460#msg-264460 From nginx-forum at forum.nginx.org Wed Feb 10 10:35:55 2016 From: nginx-forum at forum.nginx.org (maziar) Date: Wed, 10 Feb 2016 05:35:55 -0500 Subject: Nginx for media streaming In-Reply-To: <22236878de2c794341194b343fbf5c72@ultra-secure.de> References: <22236878de2c794341194b343fbf5c72@ultra-secure.de> Message-ID: <6e77b32e59f73af1e81902b17b9232f8.NginxMailingListEnglish@forum.nginx.org> is there any better help for open source version ? 
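On the open-source side, most media-streaming setups start from the third-party nginx-rtmp-module. A minimal live-streaming sketch, assuming that module is compiled in (the application name is a placeholder):

rtmp {
    server {
        listen 1935;

        application live {
            live on;
        }
    }
}

An encoder then publishes to rtmp://host/live/streamname and players pull the same URL; HLS output can be layered on top with the module's hls_* directives.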
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264416,264461#msg-264461 From luky-37 at hotmail.com Wed Feb 10 11:15:53 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 10 Feb 2016 12:15:53 +0100 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: <3f022ad603f2ea4e5e94b932eb71ecda.NginxMailingListEnglish@forum.nginx.org> References: <3250915.Mj8n3KtA6P@vbart-workstation>, , <3f022ad603f2ea4e5e94b932eb71ecda.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I also get a compile error related to Pagespeed: > [...] > /root/ngx_pagespeed-release-1.10.33.4-beta/src/ngx_pagespeed.cc:3148:1: > error: deprecated conversion from string constant to ?char*? > [-Werror=write-strings] > }; > ^ > cc1plus: all warnings being treated as errors Apply this patch: http://hg.nginx.org/nginx/rev/ff1e625ae55b Or pull latest source code from the repository. Regards, Lukas From mdounin at mdounin.ru Wed Feb 10 13:00:46 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Feb 2016 16:00:46 +0300 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: References: <3250915.Mj8n3KtA6P@vbart-workstation> <3f022ad603f2ea4e5e94b932eb71ecda.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160210130046.GP70672@mdounin.ru> Hello! On Wed, Feb 10, 2016 at 12:15:53PM +0100, Lukas Tribus wrote: > > > I also get a compile error related to Pagespeed: > > [...] > > /root/ngx_pagespeed-release-1.10.33.4-beta/src/ngx_pagespeed.cc:3148:1: > > error: deprecated conversion from string constant to ?char*? > > [-Werror=write-strings] > > }; > > ^ > > cc1plus: all warnings being treated as errors > > Apply this patch: > http://hg.nginx.org/nginx/rev/ff1e625ae55b > > Or pull latest source code from the repository. Trivial workaround for build failures due to warnings is to use -Wno-error, e.g.: ./configure --with-cc-opt="-Wno-error" ... In this particular case, -Wno-write-strings can be used instead, e.g.: ./configure --with-cc-opt="-Wno-write-strings" ... -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Feb 10 13:15:43 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Feb 2016 16:15:43 +0300 Subject: Set last modified out header In-Reply-To: References: Message-ID: <20160210131543.GR70672@mdounin.ru> Hello! On Wed, Feb 10, 2016 at 03:44:13AM -0500, Ortal wrote: > Hello, > I am building my own NGINX module, when I try to set the last modified out > header I have a struct timespec which I need to convert to last modified. > When I set the last modified with the tv_sec the last modified get invalid > date. > > How can I set the last modified correctly? The "struct timespec" is not guaranteed to contain time since any specific point in the past. So the question is: how did you got the struct timespec? E.g., when using clock_gettime(CLOCK_REALTIME), tv_sec is expected to contain seconds since the Epoch, and it should work fine. But when using clock_gettime(CLOCK_MONOTONIC) tv_sec is "since an unspecified point in the past" (quoting POSIX, see [1]), and it's not possible to convert it to time since the Epoch as required by r->headers_out.last_modified_time. [1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_gettime.html -- Maxim Dounin http://nginx.org/ From richard at kearsley.me Wed Feb 10 16:25:06 2016 From: richard at kearsley.me (Richard Kearsley) Date: Wed, 10 Feb 2016 16:25:06 +0000 Subject: does proxy_ssl_verify verify server name? 
Message-ID: <56BB6462.4040502@kearsley.me> Hello I'm trying to enable this option on a proxy_pass location: proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 9 /etc/ssl/certs/ca-certificates.crt is compiled by update-ca-certificates (http://manpages.ubuntu.com/manpages/trusty/man8/update-ca-certificates.8.html) My understanding is that this option will prevent, for example, self-signed certificates or certificates where the server name requested is different than in the certificate, is that correct? I have tried it and while it works for self-signed (returns 502) it still lets a non matching server name through the proxy (properly signed certificate, but wrong name) Thanks Richard From steeeeeveee at gmx.net Wed Feb 10 16:34:10 2016 From: steeeeeveee at gmx.net (Steve) Date: Wed, 10 Feb 2016 17:34:10 +0100 Subject: 1.9.11 does not build Message-ID: An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 10 17:27:37 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Feb 2016 20:27:37 +0300 Subject: does proxy_ssl_verify verify server name? In-Reply-To: <56BB6462.4040502@kearsley.me> References: <56BB6462.4040502@kearsley.me> Message-ID: <20160210172737.GV70672@mdounin.ru> Hello! On Wed, Feb 10, 2016 at 04:25:06PM +0000, Richard Kearsley wrote: > Hello > I'm trying to enable this option on a proxy_pass location: > > proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt; > proxy_ssl_verify on; > proxy_ssl_verify_depth 9 > > /etc/ssl/certs/ca-certificates.crt is compiled by update-ca-certificates (http://manpages.ubuntu.com/manpages/trusty/man8/update-ca-certificates.8.html) > > My understanding is that this option will prevent, for example, self-signed > certificates or certificates where the server name requested is different > than in the certificate, is that correct? Yes. > I have tried it and while it works for self-signed (returns 502) it still > lets a non matching server name through the proxy (properly signed > certificate, but wrong name) Please provide an example. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Feb 11 02:30:49 2016 From: nginx-forum at forum.nginx.org (qbitx) Date: Wed, 10 Feb 2016 21:30:49 -0500 Subject: nginx-1.9.11 In-Reply-To: <20160209142931.GF70672@mdounin.ru> References: <20160209142931.GF70672@mdounin.ru> Message-ID: <9fe6c80af109c2b6b0ef85b001d2e98c.NginxMailingListEnglish@forum.nginx.org> I'm having issues with 1.9.11. With the exact same build configuration as I used for 1.9.10, and the exact same config files for nginx - I am now getting this when I run nginx -t at the command-line: nginx: [emerg] unknown directive "charset" in /etc/nginx/nginx.conf:62 nginx: configuration file /etc/nginx/nginx.conf test failed When I comment out that particular line in the nginx.conf file, I am then presented with another error: nginx: [emerg] unknown directive "add_header" in /etc/nginx/ssl.conf:28 nginx: configuration file /etc/nginx/nginx.conf test failed Not sure why I'm observing this behavior, when the exact same build configuration for 1.9.10 worked just fine. It seems like some of the core modules (e.g.: ngx_http_charset_module, ngx_http_headers_module) are somehow missing from my build? Am I missing something? 
These are the module-related flags I provide to "./configure": --with-pcre-jit \ --with-ipv6 \ --with-file-aio \ --with-http_ssl_module \ --with-http_v2_module \ --with-http_realip_module \ --without-http_map_module \ --without-http_memcached_module \ --without-http_autoindex_module \ --without-http_split_clients_module \ --without-http_upstream_ip_hash_module \ --without-http_upstream_least_conn_module \ --without-http_upstream_keepalive_module \ --without-http_userid_module \ --without-http_empty_gif_module \ --without-http_auth_basic_module \ --without-mail_pop3_module \ --without-mail_smtp_module \ --without-mail_imap_module \ --without-http_uwsgi_module \ --without-http_scgi_module \ --without-http_browser_module \ --add-module=$HOME/naxsi-${NAXSI_VER}/naxsi_src \ --add-module=$HOME/ngx_pagespeed-${PAGESPEED_VER}-beta Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264422,264481#msg-264481 From mdounin at mdounin.ru Thu Feb 11 03:07:47 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Feb 2016 06:07:47 +0300 Subject: nginx-1.9.11 In-Reply-To: <9fe6c80af109c2b6b0ef85b001d2e98c.NginxMailingListEnglish@forum.nginx.org> References: <20160209142931.GF70672@mdounin.ru> <9fe6c80af109c2b6b0ef85b001d2e98c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160211030747.GW70672@mdounin.ru> Hello! On Wed, Feb 10, 2016 at 09:30:49PM -0500, qbitx wrote: > I'm having issues with 1.9.11. With the exact same build configuration as I > used for 1.9.10, and the exact same config files for nginx - I am now > getting this when I run nginx -t at the command-line: > > nginx: [emerg] unknown directive "charset" in /etc/nginx/nginx.conf:62 > nginx: configuration file /etc/nginx/nginx.conf test failed > > When I comment out that particular line in the nginx.conf file, I am then > presented with another error: > > nginx: [emerg] unknown directive "add_header" in /etc/nginx/ssl.conf:28 > nginx: configuration file /etc/nginx/nginx.conf test failed > > Not sure why I'm observing this behavior, when the exact same build > configuration for 1.9.10 worked just fine. It seems like some of the core > modules (e.g.: ngx_http_charset_module, ngx_http_headers_module) are somehow > missing from my build? Am I missing something? These are the > module-related flags I provide to "./configure": [...] > --add-module=$HOME/naxsi-${NAXSI_VER}/naxsi_src \ > --add-module=$HOME/ngx_pagespeed-${PAGESPEED_VER}-beta Most likely this is ngx_pagespeed 3rd party module which causes the problem you are seeing. It tries to do unsupported modifications of order of modules, and was beaten by internal changes in configure scripts in 1.9.11. Quick look suggests the problem is expected to be fixed by this changeset: https://github.com/pagespeed/ngx_pagespeed/commit/daa6031294403a25a4fd2f4c7b6a16a349df8af2 That is, either update ngx_pagespeed to the latest git checkout, or compile nginx without it. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Feb 11 08:40:07 2016 From: nginx-forum at forum.nginx.org (jshare) Date: Thu, 11 Feb 2016 03:40:07 -0500 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: <20160210130046.GP70672@mdounin.ru> References: <20160210130046.GP70672@mdounin.ru> Message-ID: >./configure --with-cc-opt="-Wno-write-strings" ... 
When I tried this, I get: Starting nginx: nginx: [emerg] unknown directive "charset" in /etc/nginx/nginx.conf:15 nginx: configuration file /etc/nginx/nginx.conf test failed ---- But I just saw your other reply re the ngx_pagespeed changeset, so that's what I'll look into Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264422,264484#msg-264484 From nginx-forum at forum.nginx.org Thu Feb 11 10:43:30 2016 From: nginx-forum at forum.nginx.org (Deeptha) Date: Thu, 11 Feb 2016 05:43:30 -0500 Subject: i'm getting following error when compiling nginx 1.8.1 Message-ID: following is the error message /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c: In function 'ngx_tcp_send': /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:351: error: 'NGX_LOG_DEBUG_TCP' undeclared (first use in this function) /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:351: error: (Each undeclared identifier is reported only once /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:351: error: for each function it appears in.) /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c: In function 'ngx_tcp_finalize_session': /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:410: error: 'NGX_LOG_DEBUG_TCP' undeclared (first use in this function) /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c: In function 'ngx_tcp_close_connection': /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:431: error: 'NGX_LOG_DEBUG_TCP' undeclared (first use in this function) /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c: In function 'ngx_tcp_cleanup_add': /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:519: error: 'NGX_LOG_DEBUG_TCP' undeclared (first use in this function) make[1]: *** [objs/addon/nginx_tcp_proxy_module-master/ngx_tcp_session.o] Error 1 make[1]: Leaving directory `/root/rpmbuild/BUILD/nginx-1.8.1' make: *** [build] Error 2 error: Bad exit status from /var/tmp/rpm-tmp.y2Vbr0 (%build) RPM build errors: Bad exit status from /var/tmp/rpm-tmp.y2Vbr0 (%build) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ following are the modules i compiles nginx with nginx-sticky-module-master nginx_upstream_check_module-master nginx_tcp_proxy_module-maste Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264485,264485#msg-264485 From nginx-forum at forum.nginx.org Thu Feb 11 15:14:36 2016 From: nginx-forum at forum.nginx.org (dontknowwhoiam) Date: Thu, 11 Feb 2016 10:14:36 -0500 Subject: Nginx and conditional proxy_next_upstream directive Message-ID: <9134c14da4f9ea11d0caf53699756b1f.NginxMailingListEnglish@forum.nginx.org> I want nginx to prevent trying the next upstream if the request is a POST request and the request just timed out. POSTs should only be repeated on error. I tried this config to implement it: if ($request_method = POST) { proxy_next_upstream error; } But this fails with: nginx: [emerg] "proxy_next_upstream" directive is not allowed here... 
I tried it within location context as well as server context, but got always the same error while configurations like these are running fine: if ($request_method = POST) { return 403; } Or just (without if): proxy_next_upstream error; Is this a bug in nginx or is this related to "if is evil"? (https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/) What is the best way to implement the above functionality with ningx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264486,264486#msg-264486 From nginx-forum at forum.nginx.org Thu Feb 11 18:26:48 2016 From: nginx-forum at forum.nginx.org (piyushmalhotra) Date: Thu, 11 Feb 2016 13:26:48 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: <9cb5a30f43761d8343e263458878c2da.NginxMailingListEnglish@forum.nginx.org> References: <9cb5a30f43761d8343e263458878c2da.NginxMailingListEnglish@forum.nginx.org> Message-ID: I am facing the same problem. I had one ssl certificate setup for the following domains: firstdomain.com *.firstdomain.com a.firstdomain.com b.firstdomain.com a.seconddomain.com b.seconddomain.com When i read that it could be due to multiple different domains using the same ssl certificate, i removed the seconddomain.com server blocks from my nginx. So now i am left with: firstdomain.com *.firstdomain.com a.firstdomain.com b.firstdomain.com The error frequency seems to have gone down but i still see some errors. How do you suggest i solve it? I can try and merge a.firstdomain.com and b.subdomain.com so that they can be accessed from the same server block *.firstdomain.com Then only 2 server blocks will be left: firstdomain.com *.firstdomain.com Do you think i'll still face the same issue after this? If yes, should i move firstdomain.com to a different server? If no, how do you suggest i try to solve this? It would be great if you can help me solve this. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,256373,264488#msg-264488 From agentzh at gmail.com Thu Feb 11 21:57:44 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 11 Feb 2016 13:57:44 -0800 Subject: i'm getting following error when compiling nginx 1.8.1 In-Reply-To: References: Message-ID: Hello! On Thu, Feb 11, 2016 at 2:43 AM, Deeptha wrote: > /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c > /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c: > In function 'ngx_tcp_send': > /root/rpmbuild/SOURCES/modules/nginx_tcp_proxy_module-master/ngx_tcp_session.c:351: > error: 'NGX_LOG_DEBUG_TCP' undeclared (first use in this function) Maybe you should just use the official ngx_stream_proxy_module in NGINX 1.9.x? http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html Best regards, -agentzh From steve at greengecko.co.nz Thu Feb 11 23:04:24 2016 From: steve at greengecko.co.nz (steve) Date: Fri, 12 Feb 2016 12:04:24 +1300 Subject: problems with building nginx 1.9.11 ( with pagespeed ) Message-ID: <56BD1378.1080005@greengecko.co.nz> Server debian jessie, and I'm using the same build script ( which does include 3rd party stuff ) a 1.9.10 which built flawlessly.... To get it to build at all, I had to add the option ' -Wno-write-strings' to the CFLAGS in objs/Makefile, and the resultant executable resulted in the current config showing up [emerg] 27021#0: unknown directive "gzip" in /etc/nginx/nginx.conf reverting to 1.9.10 runs fine. 
config is as follows: ./configure --prefix=/etc/nginx \ --sbin-path=/usr/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --pid-path=/var/run/nginx.pid \ --lock-path=/var/run/nginx.lock \ --http-client-body-temp-path=/var/cache/nginx/client_temp \ --http-proxy-temp-path=/var/cache/nginx/proxy_temp \ --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \ --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \ --http-scgi-temp-path=/var/cache/nginx/scgi_temp \ --user=nginx \ --group=nginx \ --with-ipv6 \ --with-http_ssl_module \ --with-http_v2_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_sub_module \ --with-http_gunzip_module \ --with-http_gzip_static_module \ --with-http_random_index_module \ --with-http_secure_link_module \ --with-http_stub_status_module \ --with-http_geoip_module \ --with-mail \ --with-mail_ssl_module \ --with-file-aio \ --with-openssl=/usr/local/src/openssl-1.0.2f \ --add-module=../echo-nginx-module \ --add-module=../ngx_pagespeed-release-1.10.33.4-beta \ '--with-cc-opt=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic ' -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From steve at greengecko.co.nz Thu Feb 11 23:05:59 2016 From: steve at greengecko.co.nz (steve) Date: Fri, 12 Feb 2016 12:05:59 +1300 Subject: problems with building nginx 1.9.11 ( with pagespeed ) In-Reply-To: <56BD1378.1080005@greengecko.co.nz> References: <56BD1378.1080005@greengecko.co.nz> Message-ID: <56BD13D7.6030804@greengecko.co.nz> Sorry! On 02/12/2016 12:04 PM, steve wrote: > Server debian jessie, and I'm using the same build script ( which does > include 3rd party stuff ) a 1.9.10 which built flawlessly.... > > To get it to build at all, I had to add the option ' > -Wno-write-strings' to the CFLAGS in objs/Makefile, and the resultant > executable resulted in the current config showing up > > [emerg] 27021#0: unknown directive "gzip" in /etc/nginx/nginx.conf > > Just read the rest of the list. -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From vigs.prof at gmail.com Fri Feb 12 17:34:19 2016 From: vigs.prof at gmail.com (Vignesh VR) Date: Fri, 12 Feb 2016 23:04:19 +0530 Subject: Remove path prefix during proxy_pass Message-ID: Hi, I am trying to use Nginx as a reverse proxy and want to redirect to different upstream servers based on the URL. Looking to use the path prefix to distinguish the upstream servers to be used. But the issue is the path prefix gets sent to the upstream server which I don't want to. For eg., this is what I am trying to do: Nginx Server: nginx (External IP) upstream 1: upstream1 (Internal IP) upstream 2: upstream2 (Internal IP) Desired outcome: http://nginx/server1 --------------> http://upstream1 http://nginx/server2 ----------------> http://upstream2 The problem I am having is that http://nginx/server1 gets translated to http://upstream1/server1. I am not able to remove the path prefix when the traffic gets directed to the upstream servers. 
I have tried most of the suggestions on the net: location /server1/ { proxy_pass http://upstream1/; } (tried without slash) location /server1/ { proxy_pass http://upstream1; } (tried using regex) location /server1/ { rewrite ^/server1/(.*) /$1 break; proxy_pass http://upstream1/; } All of the above results in varied behaviors but none gives the desired outcome. I am currently using a non-commercial version of nginx. And please note that I am using IP addresses everywhere and not hostnames. So name resolution issues anywhere here. Any guidance / help / pointers will be highly appreciated. Thanks Vigs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Feb 12 18:22:50 2016 From: nginx-forum at forum.nginx.org (smsmaddy1981) Date: Fri, 12 Feb 2016 13:22:50 -0500 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: References: Message-ID: <38430e3b22826dbfb8883edbc1615c59.NginxMailingListEnglish@forum.nginx.org> Any other feasibility to achieve restart of Nginx from user other than root? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264498#msg-264498 From nginx-forum at forum.nginx.org Fri Feb 12 19:22:20 2016 From: nginx-forum at forum.nginx.org (nitin) Date: Fri, 12 Feb 2016 14:22:20 -0500 Subject: Proxy domain rewrite using proxy_cookie_domain Message-ID: <4500a01440bb7a5e5dbab8bd9945adee.NginxMailingListEnglish@forum.nginx.org> I want to rewrite domains in the cookie when NGIX is acting as reverse proxy. I see that NGIX support this using proxy_cookie_domain module. But I am unable to find out where does it keep the original domain which is being replaced? In my opinion NGIX would need the original domain to find out where to send the cookie when it comes back to NIGX in next request. Let's says NGIX domain is: external.com backend server #1 sets cookie domain as: server1.com backend server #2 sets cookie domain as: server2.com Both these domains are replaced by NGIX to NGIX's domain so both cookies' domain is now external.com when the request comes to NGIX, both these cookies will be sent by browser to NGIX, now how does NGIX decide which cookies to be sent to the backend server? it needs to keep the original domain mapping to find this how, does it keep somewhere? please help in answering this question. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264499,264499#msg-264499 From nginx-forum at forum.nginx.org Fri Feb 12 19:35:03 2016 From: nginx-forum at forum.nginx.org (zharvey) Date: Fri, 12 Feb 2016 14:35:03 -0500 Subject: Does nginx have a Routing API? Message-ID: I am trying to see if nginx (OSS, not commercial) can be used not only as a load balancer, but as a network router/switch in the event that I need to shut my app down and redirect traffic to a CDN/static page, etc. I was hoping to find a REST API that might allow me to configure routing rules on the fly but alas, I see nothing. Does nginx provide this functionality out of the box? Or can I pair it with something that does? This would be a Java app it is balancing, and I see there is an nginx-clojure (https://github.com/nginx-clojure/nginx-clojure) module. So perhaps I could expose a REST endpoint through Java (running on the nginx server) somehow...thoughts? 
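Stock open-source nginx has no REST API for changing routing at runtime; changes are made in the configuration files and applied with a reload. For the specific "shut the app down and send traffic to a CDN/static page" case, a common low-tech substitute is a file-based switch, roughly like this (all names below are placeholders):

upstream java_backend {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    # Flip the switch with "touch /etc/nginx/maintenance.on" (checked per
    # request, no reload needed); remove the file to route normally again.
    if (-f /etc/nginx/maintenance.on) {
        return 302 https://static.example.com$request_uri;
    }

    location / {
        proxy_pass http://java_backend;
    }
}

Anything more dynamic (per-route toggles, draining individual backends) is usually done by regenerating an included config snippet and running "nginx -s reload", or inside an embedded-language module such as nginx-clojure or lua-nginx-module.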
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264500,264500#msg-264500 From nginx-forum at forum.nginx.org Fri Feb 12 22:32:39 2016 From: nginx-forum at forum.nginx.org (intercollector) Date: Fri, 12 Feb 2016 17:32:39 -0500 Subject: Error Log - 403 access logs Message-ID: We use our nginx error logs to monitor our system closely. Recently, we've been hit with a lot of requests with malicious intent, and thus have blocked their IPs using a "deny " directive. This has worked as expected, but unfortunately, our error logs are still flooded with "error" with "access forbidden by rule...." messages. Why is a successful denial being logged as an error? I would expect that this is correct behaviour, and thus should not be logged as an error. In any event, is their any way to suppress 403 denied messages from the error log without bumping up the logging level? We don't want to change the log level, as the "upstream timed out" error is also at level "error" and is something we really want to keep an eye on. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264501,264501#msg-264501 From mdounin at mdounin.ru Sat Feb 13 01:46:27 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 13 Feb 2016 04:46:27 +0300 Subject: Remove path prefix during proxy_pass In-Reply-To: References: Message-ID: <20160213014627.GM70672@mdounin.ru> Hello! On Fri, Feb 12, 2016 at 11:04:19PM +0530, Vignesh VR wrote: > Hi, > > I am trying to use Nginx as a reverse proxy and want to redirect to > different upstream servers based on the URL. Looking to use the path prefix > to distinguish the upstream servers to be used. But the issue is the path > prefix gets sent to the upstream server which I don't want to. > > For eg., this is what I am trying to do: > > Nginx Server: nginx (External IP) > > upstream 1: upstream1 (Internal IP) > > upstream 2: upstream2 (Internal IP) > > Desired outcome: > http://nginx/server1 --------------> http://upstream1 > > http://nginx/server2 ----------------> http://upstream2 > > > The problem I am having is that http://nginx/server1 gets translated to > http://upstream1/server1. I am not able to remove the path prefix when the > traffic gets directed to the upstream servers. > > I have tried most of the suggestions on the net: > > > location /server1/ { > proxy_pass http://upstream1/; > } This is correct configuration for what you want, it will replace "/server1/" with "/" in requests to upstream. > (tried without slash) > location /server1/ { > proxy_pass http://upstream1; > } This is expected to preserve URI as is. > (tried using regex) > location /server1/ { > rewrite ^/server1/(.*) /$1 break; > proxy_pass http://upstream1/; > } This configuration is slightly more complicated than it should be, though will also cause "/server1/" to be replaced with "/". > All of the above results in varied behaviors but none gives the desired > outcome. I am currently using a non-commercial version of nginx. > > And please note that I am using IP addresses everywhere and not hostnames. > So name resolution issues anywhere here. > > Any guidance / help / pointers will be highly appreciated. At least two of the above configurations are expected to result in what you are asking for. So the problem is likely elsewhere. Some things to check: - If you are testing it right? Make sure to use low-level tools like telnet or curl. Browsers tend to do much more than they should and hide correct information even in developer tools. In particular, browser caches often confuse users. 
- Check if the configuration you change is actually used. Sometimes people forgot to reload a configuration, or it's not reloaded due to some syntax error, or they just try to change wrong part of the configuration. Something like "return 200 ok;" in the appropriate location is a good way to check things. - Check how your backends handle things. Note that if your backend expects to be reacheable on certain path (or even domain), it will use [wrong] full path in redirects, links to various resources, or may even hardcode in binary objects like flash. This will render the site unusable even if all proxying is done correctly. Some trivial things like redirects can be fixed by nginx itself (see http://nginx.org/r/proxy_redirect), but in general you have to configure upstream servers to properly handle this. Note well that debugging log is known to be helpful while debugging complex cases, see http://nginx.org/en/docs/debugging_log.html. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sat Feb 13 01:54:02 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 13 Feb 2016 04:54:02 +0300 Subject: Proxy domain rewrite using proxy_cookie_domain In-Reply-To: <4500a01440bb7a5e5dbab8bd9945adee.NginxMailingListEnglish@forum.nginx.org> References: <4500a01440bb7a5e5dbab8bd9945adee.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160213015402.GN70672@mdounin.ru> Hello! On Fri, Feb 12, 2016 at 02:22:20PM -0500, nitin wrote: > I want to rewrite domains in the cookie when NGIX is acting as reverse > proxy. I see that NGIX support this using proxy_cookie_domain module. > > But I am unable to find out where does it keep the original domain which is > being replaced? In my opinion NGIX would need the original domain to find > out where to send the cookie when it comes back to NIGX in next request. > > Let's says NGIX domain is: external.com > backend server #1 sets cookie domain as: server1.com > backend server #2 sets cookie domain as: server2.com > > Both these domains are replaced by NGIX to NGIX's domain so both cookies' > domain is now external.com > > when the request comes to NGIX, both these cookies will be sent by browser > to NGIX, now how does NGIX decide which cookies to be sent to the backend > server? it needs to keep the original domain mapping to find this how, does > it keep somewhere? The domain is only present in Set-Cookie response headers, but it is not available in HTTP requests. The client decides which cookies to send back to nginx in the Cookie request header, and nginx just passes the header with all cookies unmodified. 
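If the backends should not see each other's cookies, one common mitigation is to keep each backend on its own path prefix and rewrite both the cookie domain and the cookie path, so the browser only sends a cookie back for the prefix it came from. A rough sketch using the server1.com / external.com names from the question above (assumes the backend sets "path=/" on its cookies):

location /app1/ {
    proxy_pass           http://server1.com/;
    proxy_cookie_domain  server1.com  external.com;
    proxy_cookie_path    /  /app1/;
}

This only narrows which cookies the browser sends to which prefix; nginx itself still forwards the Cookie header unmodified, exactly as described above.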
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sat Feb 13 07:20:49 2016 From: nginx-forum at forum.nginx.org (maziar) Date: Sat, 13 Feb 2016 02:20:49 -0500 Subject: pseudo-streaming problem Message-ID: i have compile nginx like this : nginx version: nginx/1.8.1 built by gcc 4.9.2 (Debian 4.9.2-10) built with OpenSSL 1.0.1k 8 Jan 2015 TLS SNI support enabled configure arguments: --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-http_dav_module --with-http_flv_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_mp4_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --user=www-data --group=www-data --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_stub_status_module --with-http_spdy_module --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --with-http_secure_link_module --add-module=naxsi-master/naxsi_src/ --with-http_gunzip_module --with-file-aio --with-http_addition_module --with-http_random_index_module --add-module=ngx_cache_purge-2.3/ --with-http_degradation_module --with-http_auth_request_module --with-pcre --with-google_perftools_module --with-debug --http-client-body-temp-path=/var/lib/nginx/client --add-module=nginx-rtmp-module-master/ --add-module=headers-more-nginx-module-master --add-module=nginx-vod-module-master/ and this is my html page source and i buy jwplayer, but i can not do this : http://example.com/elephants_dream.mp4?start=238.88 please help me , what should i do ? JWplayer HLS test
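Building with --with-http_mp4_module is not enough by itself: the ?start= handling only applies to requests served from a location where the mp4 directive is enabled. A minimal sketch (the /videos/ path is a placeholder; the directives are from ngx_http_mp4_module):

location /videos/ {
    mp4;
    mp4_buffer_size     1m;
    mp4_max_buffer_size 5m;
}

With that in place, a request such as http://example.com/videos/elephants_dream.mp4?start=238.88 should be answered from the requested offset, which is what JW Player's pseudo-streaming mode expects.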
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264516,264516#msg-264516 From nginx-forum at forum.nginx.org Sat Feb 13 12:28:20 2016 From: nginx-forum at forum.nginx.org (jbostoen) Date: Sat, 13 Feb 2016 07:28:20 -0500 Subject: Nginx as proxy for Exchange 2013 : RPC? Message-ID: <3e3078b9be01907db38ac33acad4540c.NginxMailingListEnglish@forum.nginx.org> I've been searching the internet and tried a few approaches, but I was wondering: Is it possible to use Nginx (non-plus edition) as a proxy for Exchange (2013) - in particular: RPC. I did manage to get the other stuff working. Maybe by installing/enabling a (free) module? If so, would someone be so kind to share a working config? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264518,264518#msg-264518 From nginx-forum at forum.nginx.org Sat Feb 13 13:29:15 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sat, 13 Feb 2016 08:29:15 -0500 Subject: Nginx as proxy for Exchange 2013 : RPC? In-Reply-To: <3e3078b9be01907db38ac33acad4540c.NginxMailingListEnglish@forum.nginx.org> References: <3e3078b9be01907db38ac33acad4540c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4f3386371f01c2ca1c3fbca069242889.NginxMailingListEnglish@forum.nginx.org> https://forum.nginx.org/read.php?11,261682,261769 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264518,264519#msg-264519 From nginx-forum at forum.nginx.org Sat Feb 13 14:02:59 2016 From: nginx-forum at forum.nginx.org (jbostoen) Date: Sat, 13 Feb 2016 09:02:59 -0500 Subject: Nginx as proxy for Exchange 2013 : RPC? In-Reply-To: <4f3386371f01c2ca1c3fbca069242889.NginxMailingListEnglish@forum.nginx.org> References: <3e3078b9be01907db38ac33acad4540c.NginxMailingListEnglish@forum.nginx.org> <4f3386371f01c2ca1c3fbca069242889.NginxMailingListEnglish@forum.nginx.org> Message-ID: <45647994a24aa2acd42b36deb58656cf.NginxMailingListEnglish@forum.nginx.org> It doesn't seem to mention the RPC (not RDP)? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264518,264520#msg-264520 From nginx-forum at forum.nginx.org Sat Feb 13 14:11:48 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sat, 13 Feb 2016 09:11:48 -0500 Subject: Nginx as proxy for Exchange 2013 : RPC? In-Reply-To: <45647994a24aa2acd42b36deb58656cf.NginxMailingListEnglish@forum.nginx.org> References: <3e3078b9be01907db38ac33acad4540c.NginxMailingListEnglish@forum.nginx.org> <4f3386371f01c2ca1c3fbca069242889.NginxMailingListEnglish@forum.nginx.org> <45647994a24aa2acd42b36deb58656cf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3b2bef7327affd7d5213ca3fabd467a0.NginxMailingListEnglish@forum.nginx.org> If data is a tcp stream use stream, if not then a http {} block should work. https://forum.nginx.org/read.php?2,14903,35128#msg-35128 http://windowsitpro.com/exchange-server-2013/exchange-server-2013-transition-rpc-http Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264518,264522#msg-264522 From nginx-forum at forum.nginx.org Sat Feb 13 14:56:25 2016 From: nginx-forum at forum.nginx.org (jbostoen) Date: Sat, 13 Feb 2016 09:56:25 -0500 Subject: Nginx as proxy for Exchange 2013 : RPC? 
In-Reply-To: <3b2bef7327affd7d5213ca3fabd467a0.NginxMailingListEnglish@forum.nginx.org> References: <3e3078b9be01907db38ac33acad4540c.NginxMailingListEnglish@forum.nginx.org> <4f3386371f01c2ca1c3fbca069242889.NginxMailingListEnglish@forum.nginx.org> <45647994a24aa2acd42b36deb58656cf.NginxMailingListEnglish@forum.nginx.org> <3b2bef7327affd7d5213ca3fabd467a0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1f8669a59d5c1a437b5eeca0c1108c19.NginxMailingListEnglish@forum.nginx.org> So I just need a stream and proxy_pass it to our Exchange? (I'm new to Nginx, haven't seen stream config before; we're using SSL for everything). Also, is it possible to have a combination? The deal is: the current setup in our company is only 1 public IP address for a couple of services. Which used to be fine, since SSL was only used for Exchange (and our firewall was configured to only send the HTTP-traffic to the Nginx, and HTTPS to Exchange). But now, I need to able to split it out, based on the names ( apps.domain.ext and mail.domain.ext ). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264518,264523#msg-264523 From fahimehashrafy at gmail.com Sat Feb 13 15:57:33 2016 From: fahimehashrafy at gmail.com (Fahimeh Ashrafy) Date: Sat, 13 Feb 2016 19:27:33 +0330 Subject: hhvm and nginx Message-ID: Hello all I am using hhvm instead of php-fpm and using nginx. how can I tune hhvm to have better performance? Thanks a lot -------------- next part -------------- An HTML attachment was scrubbed... URL: From sca at andreasschulze.de Sat Feb 13 18:19:17 2016 From: sca at andreasschulze.de (A. Schulze) Date: Sat, 13 Feb 2016 19:19:17 +0100 Subject: nginx-1.9.11 (Patch to build rDNS module) In-Reply-To: <20160209223144.Horde.Z6cU1O1rYGuXiIEFMNcamaw@andreasschulze.de> References: <20160209142931.GF70672@mdounin.ru> <20160209205942.Horde.8LShQT14m5Or6VbXNIjUjZg@andreasschulze.de> <20160209204205.GC87317@Romans-MacBook-Air.local> <20160209223144.Horde.Z6cU1O1rYGuXiIEFMNcamaw@andreasschulze.de> Message-ID: <20160213191917.Horde.7xnyk0yyGrEW-uMKZF5qdFG@andreasschulze.de> A. Schulze: > The attached patch solve at least the compile error. now also verified the module work with nginx-1.9.11 Andreas From brauliobo at gmail.com Sat Feb 13 18:22:08 2016 From: brauliobo at gmail.com (=?UTF-8?Q?Br=C3=A1ulio_Bhavamitra?=) Date: Sat, 13 Feb 2016 18:22:08 +0000 Subject: hhvm and nginx In-Reply-To: References: Message-ID: In this list you should ask: how can I tune nginx? But first some research should help. Em s?b, 13 de fev de 2016 12:57, Fahimeh Ashrafy escreveu: > Hello all > > I am using hhvm instead of php-fpm and using nginx. how can I tune hhvm to > have better performance? > > Thanks a lot > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Feb 14 11:22:25 2016 From: nginx-forum at forum.nginx.org (jbostoen) Date: Sun, 14 Feb 2016 06:22:25 -0500 Subject: Nginx as proxy for Exchange 2013 : RPC? 
In-Reply-To: <1f8669a59d5c1a437b5eeca0c1108c19.NginxMailingListEnglish@forum.nginx.org> References: <3e3078b9be01907db38ac33acad4540c.NginxMailingListEnglish@forum.nginx.org> <4f3386371f01c2ca1c3fbca069242889.NginxMailingListEnglish@forum.nginx.org> <45647994a24aa2acd42b36deb58656cf.NginxMailingListEnglish@forum.nginx.org> <3b2bef7327affd7d5213ca3fabd467a0.NginxMailingListEnglish@forum.nginx.org> <1f8669a59d5c1a437b5eeca0c1108c19.NginxMailingListEnglish@forum.nginx.org> Message-ID: <42d8f57dcf8c75f2faa8d64ffa78cf00.NginxMailingListEnglish@forum.nginx.org> I've had most success so far with this approach (Tigunov's config - https://gist.github.com/taddev/7275873). ( btw, I'm using Basic Authentication rather than NTLM ). server { server_name mail.contoso.com; server_name autodiscover.contoso.com; keepalive_timeout 3h; proxy_read_timeout 3h; #reset_timedout_connection on; tcp_nodelay on; listen 443 ssl; client_max_body_size 3G; #proxy_pass_header Authorization; proxy_pass_header Date; proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Accept-Encoding ""; proxy_pass_request_headers on; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_http_version 1.1; proxy_request_buffering off; proxy_buffering off; proxy_set_header Connection "Keep-Alive"; location / { proxy_pass https://exchange.internal/; proxy_next_upstream error timeout invalid_header http_500 http_503; } } With Microsoft's Remote Connectivity Analyzer, I now get up to the point where ActiveSync is tested. "OPTIONS" works, FolderSync fails ( something about a request being aborted or canceled, no further details ). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264518,264529#msg-264529 From lucas at slcoding.com Sun Feb 14 19:14:20 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Sun, 14 Feb 2016 20:14:20 +0100 Subject: proxy_pass not seen as SNI-client according to Apache directive Message-ID: <56C0D20C.6090308@slcoding.com> Hi guys, I'm having a rather odd behavior - I use nginx as a reverse proxy (basically as a CDN) - where if the file isn't in cache, I do use proxy_pass to the origin server, to get the file and then cache it. This works perfectly in most cases, but if the origin is running apache and happen to use the Apache Directive "SSLStrictSNIVHostCheck" where it's set to On. Basically it decides whether a non-SNI client is allowed to access a name-based virtual host over SSL or not. But when using proxy_pass this seems to the apache server that it's a non-SNI client: [Sun Feb 14 19:32:50 2016] [error] No hostname was provided via SNI for a name based virtual host [Sun Feb 14 19:33:00 2016] [error] No hostname was provided via SNI for a name based virtual host I was able to replicate this issue on multiple nginx versions (both on 1.8.1, 1.9.9 and 1.9.10). It results in 403 forbidden for the client. If I set the directive SSLStrictSNIVHostCheck to off, I do not get a 403 forbidden - and the files I try to fetch gets fetched correctly. (Meaning proxy_pass do understand SNI). The nginx zone does a proxy_pass https://my_domain; and the my_domain is running on a server that runs SNI. 
Best Regards, Lucas Rolff From mdounin at mdounin.ru Sun Feb 14 20:58:22 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 14 Feb 2016 23:58:22 +0300 Subject: proxy_pass not seen as SNI-client according to Apache directive In-Reply-To: <56C0D20C.6090308@slcoding.com> References: <56C0D20C.6090308@slcoding.com> Message-ID: <20160214205822.GA34421@mdounin.ru> Hello! On Sun, Feb 14, 2016 at 08:14:20PM +0100, Lucas Rolff wrote: > I'm having a rather odd behavior - I use nginx as a reverse proxy (basically > as a CDN) - where if the file isn't in cache, I do use proxy_pass to the > origin server, to get the file and then cache it. > > This works perfectly in most cases, but if the origin is running apache and > happen to use the Apache Directive "SSLStrictSNIVHostCheck" where it's set > to On. http://nginx.org/r/proxy_ssl_server_name -- Maxim Dounin http://nginx.org/ From rpaprocki at fearnothingproductions.net Sun Feb 14 21:46:48 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sun, 14 Feb 2016 13:46:48 -0800 Subject: proxy_pass not seen as SNI-client according to Apache directive In-Reply-To: <20160214205822.GA34421@mdounin.ru> References: <56C0D20C.6090308@slcoding.com> <20160214205822.GA34421@mdounin.ru> Message-ID: <34F1666C-9867-4E77-8AF7-C09E00428860@fearnothingproductions.net> > On Feb 14, 2016, at 12:58, Maxim Dounin wrote: > > Hello! > >> On Sun, Feb 14, 2016 at 08:14:20PM +0100, Lucas Rolff wrote: >> >> I'm having a rather odd behavior - I use nginx as a reverse proxy (basically >> as a CDN) - where if the file isn't in cache, I do use proxy_pass to the >> origin server, to get the file and then cache it. >> >> This works perfectly in most cases, but if the origin is running apache and >> happen to use the Apache Directive "SSLStrictSNIVHostCheck" where it's set >> to On. > > http://nginx.org/r/proxy_ssl_server_name Out of curiosity, is there a philosophical/design reason this option is not enabled by default? From lucas at slcoding.com Sun Feb 14 21:52:11 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Sun, 14 Feb 2016 22:52:11 +0100 Subject: proxy_pass not seen as SNI-client according to Apache directive In-Reply-To: <34F1666C-9867-4E77-8AF7-C09E00428860@fearnothingproductions.net> References: <56C0D20C.6090308@slcoding.com> <20160214205822.GA34421@mdounin.ru> <34F1666C-9867-4E77-8AF7-C09E00428860@fearnothingproductions.net> Message-ID: <56C0F70B.2080105@slcoding.com> Hi Maxim, Thank you a lot for the quick reply, I'll give it a test tomorrow morning! And Robert has a valid point indeed, why is it actually disabled by default? > Robert Paprocki > 14 February 2016 at 22:46 > > > Out of curiosity, is there a philosophical/design reason this option > is not enabled by default? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Maxim Dounin > 14 February 2016 at 21:58 > Hello! > > > http://nginx.org/r/proxy_ssl_server_name > > Lucas Rolff > 14 February 2016 at 20:14 > Hi guys, > > I'm having a rather odd behavior - I use nginx as a reverse proxy > (basically as a CDN) - where if the file isn't in cache, I do use > proxy_pass to the origin server, to get the file and then cache it. > > This works perfectly in most cases, but if the origin is running > apache and happen to use the Apache Directive "SSLStrictSNIVHostCheck" > where it's set to On. 
> > Basically it decides whether a non-SNI client is allowed to access a > name-based virtual host over SSL or not. > But when using proxy_pass this seems to the apache server that it's a > non-SNI client: > [Sun Feb 14 19:32:50 2016] [error] No hostname was provided via SNI > for a name based virtual host > [Sun Feb 14 19:33:00 2016] [error] No hostname was provided via SNI > for a name based virtual host > > I was able to replicate this issue on multiple nginx versions (both on > 1.8.1, 1.9.9 and 1.9.10). > It results in 403 forbidden for the client. > > If I set the directive SSLStrictSNIVHostCheck to off, I do not get a > 403 forbidden - and the files I try to fetch gets fetched correctly. > (Meaning proxy_pass do understand SNI). > > The nginx zone does a proxy_pass https://my_domain; and the my_domain > is running on a server that runs SNI. > > Best Regards, > Lucas Rolff -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Feb 15 02:16:38 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Feb 2016 05:16:38 +0300 Subject: proxy_pass not seen as SNI-client according to Apache directive In-Reply-To: <34F1666C-9867-4E77-8AF7-C09E00428860@fearnothingproductions.net> References: <56C0D20C.6090308@slcoding.com> <20160214205822.GA34421@mdounin.ru> <34F1666C-9867-4E77-8AF7-C09E00428860@fearnothingproductions.net> Message-ID: <20160215021638.GB34421@mdounin.ru> Hello! On Sun, Feb 14, 2016 at 01:46:48PM -0800, Robert Paprocki wrote: > > On Feb 14, 2016, at 12:58, Maxim Dounin wrote: > > > > Hello! > > > >> On Sun, Feb 14, 2016 at 08:14:20PM +0100, Lucas Rolff wrote: > >> > >> I'm having a rather odd behavior - I use nginx as a reverse proxy (basically > >> as a CDN) - where if the file isn't in cache, I do use proxy_pass to the > >> origin server, to get the file and then cache it. > >> > >> This works perfectly in most cases, but if the origin is running apache and > >> happen to use the Apache Directive "SSLStrictSNIVHostCheck" where it's set > >> to On. > > > > http://nginx.org/r/proxy_ssl_server_name > > Out of curiosity, is there a philosophical/design reason this > option is not enabled by default? There was no support for client-side SNI till nginx 1.7.0, and when introduced it was set off by default to avoid breaking existing configurations. Additionally, client-side SNI discloses information about domain name used to connect to (which is bad from security point of view), and hardly make sense without peer certificate verification (http://nginx.org/r/proxy_ssl_verify), which is also off by default and can't be enabled without a list of trusted certificates. -- Maxim Dounin http://nginx.org/ From steve at greengecko.co.nz Mon Feb 15 04:30:19 2016 From: steve at greengecko.co.nz (steve) Date: Mon, 15 Feb 2016 17:30:19 +1300 Subject: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) In-Reply-To: <38430e3b22826dbfb8883edbc1615c59.NginxMailingListEnglish@forum.nginx.org> References: <38430e3b22826dbfb8883edbc1615c59.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56C1545B.8090009@greengecko.co.nz> As you're using a 'privileged' port ( ie one with a value lower than approx 1000 ), the process needs to be run by a superuser. You can set it up through sudo to reduce the risk for an ordinary user. On 02/13/2016 07:22 AM, smsmaddy1981 wrote: > Any other feasibility to achieve restart of Nginx from user other than root? 
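A sketch of what the sudo approach can look like, with "deploy" standing in for the unprivileged account and paths adjusted to the local install (edit with visudo):

# /etc/sudoers.d/nginx
deploy ALL=(root) NOPASSWD: /usr/sbin/nginx -t, /usr/sbin/nginx -s reload, /usr/sbin/nginx -s stop

That user can then run "sudo /usr/sbin/nginx -s reload" and so on, while the master process keeps running as root and only the workers drop to the account named by the "user" directive. On Linux a setcap-based alternative exists (setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx) so a fully non-root master can bind port 80, but that also requires making the pid and log paths writable by that user.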
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264319,264498#msg-264498 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at forum.nginx.org Mon Feb 15 06:29:01 2016 From: nginx-forum at forum.nginx.org (nitin) Date: Mon, 15 Feb 2016 01:29:01 -0500 Subject: Proxy domain rewrite using proxy_cookie_domain In-Reply-To: <20160213015402.GN70672@mdounin.ru> References: <20160213015402.GN70672@mdounin.ru> Message-ID: Thanks for reply. In case client is just a browser then it will send all the cookies with NGIX domain which means that NGIX will send all the cookies to backend server irrespective of who initially set it in set-cookie header.. This could be a security issue then. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264499,264537#msg-264537 From nginx-forum at forum.nginx.org Mon Feb 15 10:11:40 2016 From: nginx-forum at forum.nginx.org (vps4) Date: Mon, 15 Feb 2016 05:11:40 -0500 Subject: how can i set diffrent proxy cache time by diffrent uri Message-ID: <5ba1222474f43f84be195840bea1a7a6.NginxMailingListEnglish@forum.nginx.org> i tried to use like this server { set $cache_time 1d; if ($request_uri = "/") { set $cache_time 3d; } if ($request_uri ~ "^/other") { set $cache_time 30d; } location / { try_files $uri @fetch; } location @fetch { proxy_cache_valid 200 301 302 $cache_time; } } then i got error "invalid time value "$cache_time" in /etc/nginx/xxx.conf how can i fix this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264539,264539#msg-264539 From mdounin at mdounin.ru Mon Feb 15 13:06:17 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Feb 2016 16:06:17 +0300 Subject: Proxy domain rewrite using proxy_cookie_domain In-Reply-To: References: <20160213015402.GN70672@mdounin.ru> Message-ID: <20160215130617.GD34421@mdounin.ru> Hello! On Mon, Feb 15, 2016 at 01:29:01AM -0500, nitin wrote: > Thanks for reply. > In case client is just a browser then it will send all the cookies with NGIX > domain which means that NGIX will send all the cookies to backend server > irrespective of who initially set it in set-cookie header.. This could be a > security issue then. For sure - if you are using untrusted backend servers in your domain this can be a security issue. Regardless of what nginx does, actually - just Set-Cookie may be enough to be an issue. Moreover, any javascript returned by a backend server will be able to read all cookies as well. Of course this should be considered when using multiple backend servers within a single domain. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Feb 15 13:10:23 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Feb 2016 16:10:23 +0300 Subject: how can i set diffrent proxy cache time by diffrent uri In-Reply-To: <5ba1222474f43f84be195840bea1a7a6.NginxMailingListEnglish@forum.nginx.org> References: <5ba1222474f43f84be195840bea1a7a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160215131023.GE34421@mdounin.ru> Hello! 
On Mon, Feb 15, 2016 at 05:11:40AM -0500, vps4 wrote: > i tried to use like this > > server { > > set $cache_time 1d; > > if ($request_uri = "/") { > set $cache_time 3d; > } > > if ($request_uri ~ "^/other") { > set $cache_time 30d; > } > > location / { > try_files $uri @fetch; > } > > location @fetch { > proxy_cache_valid 200 301 302 $cache_time; > } > } > > then i got error "invalid time value "$cache_time" in /etc/nginx/xxx.conf > > how can i fix this? Variables are not supported by the proxy_cache_valid directive. Use different locations instead, e.g.: location / { try_files $uri @fetch; } location /other { try_files $uri @fetch_other; } location @fetch { proxy_cache_valid 200 301 302 3d; ... } location @fetch_other { proxy_cache_valid 200 301 302 30d; ... } -- Maxim Dounin http://nginx.org/ From florian.hesse at rbb-online.de Mon Feb 15 13:30:52 2016 From: florian.hesse at rbb-online.de (florian.hesse at rbb-online.de) Date: Mon, 15 Feb 2016 14:30:52 +0100 Subject: PUT/POST files to akamai CDN Message-ID: Hello everyone, i want to use nginx as a simple and effective way to generate mpeg-dash streams and push them to the akamai CDN network. There for i need to push the generated files with the POST Request method to the akamai entrypoint. But the problem is i have continuously generated files that have to upload same time they have generated. With curl i would use the -d for POST Requests. Does anyone of you think this is possible with nginx? Thank you very much and best regards, Florian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Feb 15 19:01:43 2016 From: nginx-forum at forum.nginx.org (maziar) Date: Mon, 15 Feb 2016 14:01:43 -0500 Subject: hhvm and nginx In-Reply-To: References: Message-ID: <59258e5c65dbd33438e03a0715eba858.NginxMailingListEnglish@forum.nginx.org> this may help you https://www.digitalocean.com/community/tutorials/how-to-install-hhvm-with-nginx-on-ubuntu-14-04 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264524,264548#msg-264548 From nginx-forum at forum.nginx.org Tue Feb 16 06:16:09 2016 From: nginx-forum at forum.nginx.org (matrixmatrix) Date: Tue, 16 Feb 2016 01:16:09 -0500 Subject: http2 enable or disable per virtual host Message-ID: <8e7d595333f7541f004f8d78978a736b.NginxMailingListEnglish@forum.nginx.org> Hi the ngx_http_v2_module could not switch enable or disable per virtual host. The following code, http.domainname.com is enabled http2. I would like to enable only the http2 of http2.domainname.com. server { listen 443 ssl http2; server_name http2.domainname.com; ssl_prefer_server_ciphers on; ssl_ciphers AESGCM:HIGH:!aNULL:!MD5; ssl on; ssl_certificate cert.pem; ssl_certificate_key key.pem; } server { listen 443 ssl; server_name http.domainname.com; ssl_prefer_server_ciphers on; ssl_ciphers AESGCM:HIGH:!aNULL:!MD5; ssl on; ssl_certificate cert.pem; ssl_certificate_key key.pem; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264549,264549#msg-264549 From ingwie2000 at googlemail.com Tue Feb 16 06:47:58 2016 From: ingwie2000 at googlemail.com (Kevin "Ingwie Phoenix" Ingwersen) Date: Tue, 16 Feb 2016 07:47:58 +0100 Subject: Forwarding HTTPS to VM's HTTPS... In-Reply-To: <8e7d595333f7541f004f8d78978a736b.NginxMailingListEnglish@forum.nginx.org> References: <8e7d595333f7541f004f8d78978a736b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1E8A60DA-0191-40BB-B187-673749277574@googlemail.com> Hey guys! 
StackOverflow didn't do anything this time, so I decided to visit here and try asking my question here! :)

A bit of backstory: I have had a fatal server crash. 464 days of uptime with unapplied updates from an OS upgrade, kernel patches and more. When I did do a reboot - it all exploded right into my face :( So I reinstalled.

Now that I have learned this lesson, I decided to begin deploying things in containers - just raw Virtual Box VMs now, as I haven't gotten used to Docker or Vagrant. But using a VM with NAT allows me to forward ports.

One of the VMs is your typical web-server setup; MySQL, PHP5 (FPM) and Nginx (1.8.x). So I have my main server - the VM host - listening on 80 and 443 and forwarded the VM's ports as 11080 and 11443. Forwarding regular HTTP works flawlessly by just proxy_pass'ing to the other port. No problem here.

But how do I work out a reverse-proxy for HTTPS traffic? Mainly, I have another VM that runs OwnCloud. I want to forward my host's 443 port to the VM's exposed 12443 port so that OwnCloud stops complaining about being opened via raw HTTP.

Since I am re-using configuration a lot, I have created a basic_proxy file, and a regular sites-enabled/ file. You can see them here: https://gist.github.com/IngwiePhoenix/19631bd07af62d23b8f3

Would be cool if I could keep with this approach to simply forward traffic to my various VMs, but keeping my config reusable!

Kind regards, Ingwie.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Feb 16 08:37:01 2016
From: nginx-forum at forum.nginx.org (cehes)
Date: Tue, 16 Feb 2016 03:37:01 -0500
Subject: How to Windows auth working on nginx reverse proxy ???
In-Reply-To: <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org>
References: <20120109140615.GE67687@mdounin.ru> <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi, we are now 4 years later, is there a solution now ?
I read things like :

----------------------------------------
upstream http_backend {
    server 1.1.1.1:80;

    keepalive 16;
}

server {
    ...

    location / {
        proxy_pass http://http_backend/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
----------------------------------------

Is this the solution ?
Somebody tried it ?

thanks a lot

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264550#msg-264550

From nginx-forum at forum.nginx.org Tue Feb 16 08:47:50 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Tue, 16 Feb 2016 03:47:50 -0500
Subject: How to Windows auth working on nginx reverse proxy ???
In-Reply-To:
References: <20120109140615.GE67687@mdounin.ru> <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8300da8338e98ed57d24aa1228379942.NginxMailingListEnglish@forum.nginx.org>

LDAP works fine https://github.com/kvspb/nginx-auth-ldap

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264551#msg-264551

From nginx-forum at forum.nginx.org Tue Feb 16 10:04:46 2016
From: nginx-forum at forum.nginx.org (cehes)
Date: Tue, 16 Feb 2016 05:04:46 -0500
Subject: How to Windows auth working on nginx reverse proxy ???
In-Reply-To: <8300da8338e98ed57d24aa1228379942.NginxMailingListEnglish@forum.nginx.org> References: <20120109140615.GE67687@mdounin.ru> <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org> <8300da8338e98ed57d24aa1228379942.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6417f5edd9578ab1f5929bf04a77517a.NginxMailingListEnglish@forum.nginx.org> Thanks for this quick reply... This seems quite hard to implement for someone like me not used with that. :) I'll try if the way described in my previous post do not work thanks again ! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264552#msg-264552 From nginx-forum at forum.nginx.org Tue Feb 16 12:18:09 2016 From: nginx-forum at forum.nginx.org (hheiko) Date: Tue, 16 Feb 2016 07:18:09 -0500 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: References: <20120109140615.GE67687@mdounin.ru> <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4591f5f27dccbf0077c5b40428d1cdb4.NginxMailingListEnglish@forum.nginx.org> cehes Wrote: ------------------------------------------------------- > > server { > ... > > location / { > proxy_pass http://http_backend/; > proxy_http_version 1.1; > proxy_set_header Connection ""; > ... > } > } This is working fine for us, against IIS and Apache + addon_modules Heiko Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264553#msg-264553 From miguelmclara at gmail.com Tue Feb 16 13:14:06 2016 From: miguelmclara at gmail.com (Miguel C) Date: Tue, 16 Feb 2016 13:14:06 +0000 Subject: Forwarding HTTPS to VM's HTTPS... In-Reply-To: <1E8A60DA-0191-40BB-B187-673749277574@googlemail.com> References: <8e7d595333f7541f004f8d78978a736b.NginxMailingListEnglish@forum.nginx.org> <1E8A60DA-0191-40BB-B187-673749277574@googlemail.com> Message-ID: I have similar setups with freebsd jails... usually one the jails is a 'frontend proxy server' which I'm guessing is what you're aiming at but with linux containers.... Make sure the firewall allow traffic from the frontend to backends which could be other nginx servers or just php-fpm it self depending on the setup, but all you really need is to use proxy_pass. Since you want HTTPS you need to have the certificates config in the frontend, regardless if the connection to the backends is also encrypted or not. A simple example assuming one VM(LXC) as php-fpm running you could just setup the frontend as you would normally do just use: fastcgi_pass CONTAINER_IP:FPM_PORT Another scenario is ofc you have nginx running in the LXC container which is already "fastcgi_passing" to php, in this case you would use proxy_pass to the backend niginx, IE: server { listen IP:443; server_name expemple.org; ssl on; ssl_certificate /usr/local/etc/nginx/ssl/site.crt; ssl_certificate_key /usr/local/etc/nginx/ssl/site.key; location / { proxy_pass http://lxc_nginx; } } upstream lxc_nginx { server 10.221.186.23:80; <<<< --- Note that in this case the connection from frontend to the nginx container is not encrypted, but you can use 443 here as long as the backup as the proper ssl config (ssl_certificate and key) } Melhores Cumprimentos // Best Regards ----------------------------------------------- *Miguel Clara* *IT - Sys Admin & Developer* On Tue, Feb 16, 2016 at 6:47 AM, Kevin "Ingwie Phoenix" Ingwersen < ingwie2000 at googlemail.com> wrote: > Hey guys! > > StackOverflow didn?t do anything this time, so I decided to visit here and > try asking my question here! 
:) > > A bit of backstory: > I have had a fatal server crash. 464 days of uptime with unapplied updates > from an OS upgrade, kernel patches and more. When I did do a reboot?it all > exploded right into my face :( So I reinstalled. > > Now that I have learned this lesson, I decided to begin deploying things > in containers - just raw Virtual Box VMs now, as I haven?t gotten used to > Docker or Vagrant. But using a VM with NAT allows me to forward ports. > > One of the VMs is your typical web-server setup; MySQL, PHP5 (FPM) and > Nginx (1.8.x). So I have my main server - the VM host - listening on 80 and > 443 and forwarded the VM?s ports as 11080 and 11443. Forwarding regular > HTTP works flawlessly by just proxy_pass?ing to the other port. No problem > here. > > But how do I work out a reverse-proxy for HTTPS traffic? Mainly, I have > another VM that runs OwnCloud. I want to forward my host?s 443 port to the > VM?s exposed 12443 port so that OwnCloud stops complaining about being > opened via raw HTTP. > > Since I am re-using configuration a lot, I have created a basic_proxy > file, and a regular sites-enabled/ file. You can see them here: > https://gist.github.com/IngwiePhoenix/19631bd07af62d23b8f3 < > https://gist.github.com/IngwiePhoenix/19631bd07af62d23b8f3> > > Would be cool if I could keep with this approach to simply forward traffic > to my various VMs, but keeping my config reusable! > > Kind regards, > Ingwie. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelmclara at gmail.com Tue Feb 16 13:14:55 2016 From: miguelmclara at gmail.com (Miguel C) Date: Tue, 16 Feb 2016 13:14:55 +0000 Subject: Forwarding HTTPS to VM's HTTPS... In-Reply-To: References: <8e7d595333f7541f004f8d78978a736b.NginxMailingListEnglish@forum.nginx.org> <1E8A60DA-0191-40BB-B187-673749277574@googlemail.com> Message-ID: On Tue, Feb 16, 2016 at 1:14 PM, Miguel C wrote: > I have similar setups with freebsd jails... usually one the jails is a > 'frontend proxy server' which I'm guessing is what you're aiming at but > with linux containers.... > > Make sure the firewall allow traffic from the frontend to backends which > could be other nginx servers or just php-fpm it self depending on the > setup, but all you really need is to use proxy_pass. > > Since you want HTTPS you need to have the certificates config in the > frontend, regardless if the connection to the backends is also encrypted or > not. 
> > > A simple example assuming one VM(LXC) as php-fpm running you could just > setup the frontend as you would normally do just use: > > fastcgi_pass CONTAINER_IP:FPM_PORT > > > Another scenario is ofc you have nginx running in the LXC container which > is already "fastcgi_passing" to php, in this case you would use proxy_pass > to the backend niginx, IE: > > server { > listen IP:443; > server_name expemple.org; > > ssl on; > ssl_certificate /usr/local/etc/nginx/ssl/site.crt; > ssl_certificate_key /usr/local/etc/nginx/ssl/site.key; > > location / { > proxy_pass http://lxc_nginx; > } > } > > upstream lxc_nginx { > server 10.221.186.23:80; <<<< --- Note that in this case the > connection from frontend to the nginx container is not encrypted, but you > can use 443 here as long as the backup as the proper ssl config > (ssl_certificate and key) > } > > NOTE: 10.221.186.23:80 ; is ofc an example IP > (you're container IP) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Feb 16 13:19:29 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 16 Feb 2016 16:19:29 +0300 Subject: http2 enable or disable per virtual host In-Reply-To: <8e7d595333f7541f004f8d78978a736b.NginxMailingListEnglish@forum.nginx.org> References: <8e7d595333f7541f004f8d78978a736b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3303851.hy8X4UUY0o@vbart-workstation> On Tuesday 16 February 2016 01:16:09 matrixmatrix wrote: > Hi the ngx_http_v2_module could not switch enable or disable per virtual > host. > Yes, that's expected behavior. > The following code, http.domainname.com is enabled http2. > > I would like to enable only the http2 of http2.domainname.com. > > server { > listen 443 ssl http2; > server_name http2.domainname.com; > ssl_prefer_server_ciphers on; > ssl_ciphers AESGCM:HIGH:!aNULL:!MD5; > ssl on; > ssl_certificate cert.pem; > ssl_certificate_key key.pem; > } > > server { > listen 443 ssl; > server_name http.domainname.com; > ssl_prefer_server_ciphers on; > ssl_ciphers AESGCM:HIGH:!aNULL:!MD5; > ssl on; > ssl_certificate cert.pem; > ssl_certificate_key key.pem; > } > You need a separate IP in this case. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Feb 16 13:35:11 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Feb 2016 16:35:11 +0300 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: References: <20120109140615.GE67687@mdounin.ru> <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160216133511.GO34421@mdounin.ru> Hello! On Tue, Feb 16, 2016 at 03:37:01AM -0500, cehes wrote: > we are now 4 years later, is there a solution now ? > I read things like : > > ---------------------------------------- > upstream http_backend { > server 1.1.1.1:80; > > keepalive 16; > } > > server { > ... > > location / { > proxy_pass http://http_backend/; > proxy_http_version 1.1; > proxy_set_header Connection ""; > ... > } > } > > ---------------------------------------- > > Is this the solution ? > Somebody tried it ? No, this is not expected to work - unless you are using the server with exactly one user. Proper support for Windows Authentication (aka NTLM) requires connections to backend servers to be bound to particular connections to clients, as NTLM authenticates connections, not requests. 
By using common keepalive pool as in the configuration above any authentication will basically authenticate arbitrary clients who happen to use the authenticated connection from the cache of keepalive connections to upstream servers. Proper support for proxying NTLM authentication was recently implemented in our commercial version, see http://nginx.org/r/ntlm. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Feb 16 16:29:32 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Tue, 16 Feb 2016 11:29:32 -0500 Subject: Using pcre_jit Message-ID: Hello, To use pcre_jit ( http://nginx.org/r/pcre_jit ), is it mandatory to compile nginx with "--with-pcre-jit"? On FreeBSD, nginx isn't compiled with "--with-pcre-jit", but I can still use "pcre_jit on;" without nginx throwing errors. So, does nginx really use PCRE JIT in this case? This bug report: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200793 mention "--with-pcre-jit" isn't needed. Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264554,264554#msg-264554 From vbart at nginx.com Tue Feb 16 16:44:33 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 16 Feb 2016 19:44:33 +0300 Subject: Using pcre_jit In-Reply-To: References: Message-ID: <4284403.ZPUHS5gltO@vbart-workstation> On Tuesday 16 February 2016 11:29:32 Alt wrote: > Hello, > > To use pcre_jit ( http://nginx.org/r/pcre_jit ), is it mandatory to compile > nginx with "--with-pcre-jit"? > On FreeBSD, nginx isn't compiled with "--with-pcre-jit", but I can still use > "pcre_jit on;" without nginx throwing errors. So, does nginx really use PCRE > JIT in this case? If there's no warning in error log about JIT, then it is used. > > This bug report: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200793 > mention "--with-pcre-jit" isn't needed. > [..] As it's written in the documentation, the flag is needed only if the PCRE library is built with nginx (i.e. the --with-pcre=... parameter is used). wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Feb 16 16:59:27 2016 From: nginx-forum at forum.nginx.org (cehes) Date: Tue, 16 Feb 2016 11:59:27 -0500 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: <4591f5f27dccbf0077c5b40428d1cdb4.NginxMailingListEnglish@forum.nginx.org> References: <20120109140615.GE67687@mdounin.ru> <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org> <4591f5f27dccbf0077c5b40428d1cdb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi hheiko, thanks for this update. i'm trying to access exchange using Outlook anywhere. Normaly, i would do a "proxy_pass https://ip-of-my-exchange-server" Here, if i understand well, i only have to replace "server 1.1.1.1:80" in the sample i gave with "server ip-of-my-exchange-server:443" and do a "proxy_pass https://http_backend" that's all and that will support Windows auth ?? great ! no problem with https ? Many thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264556#msg-264556 From mdounin at mdounin.ru Tue Feb 16 17:00:42 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Feb 2016 20:00:42 +0300 Subject: Using pcre_jit In-Reply-To: References: Message-ID: <20160216170042.GR34421@mdounin.ru> Hello! On Tue, Feb 16, 2016 at 11:29:32AM -0500, Alt wrote: > Hello, > > To use pcre_jit ( http://nginx.org/r/pcre_jit ), is it mandatory to compile > nginx with "--with-pcre-jit"? 
> On FreeBSD, nginx isn't compiled with "--with-pcre-jit", but I can still use > "pcre_jit on;" without nginx throwing errors. So, does nginx really use PCRE > JIT in this case? > > This bug report: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200793 > mention "--with-pcre-jit" isn't needed. Sergey Osokin's response is correct. The "--with-pcre-jit" is only needed when you compile PCRE library using nginx configure (./configure --with-pcre= ...). When using a system PCRE library whether or not JIT is supported depends on how the library was compiled. On FreeBSD PCRE is compiled with JIT enabled except on few archs where it's not available. If you'll try to use "pcre_jit on" without JIT available, nginx will warn you during configuration parsing: nginx: [warn] nginx was built without PCRE JIT support in ... Or, if nginx was compiled with JIT available, but currently loaded PCRE library does not support JIT: nginx: [warn] PCRE library does not support JIT in ... If you don't see these messages - this means that JIT is used. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Feb 16 17:07:33 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Feb 2016 20:07:33 +0300 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: References: <20120109140615.GE67687@mdounin.ru> <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org> <4591f5f27dccbf0077c5b40428d1cdb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160216170733.GS34421@mdounin.ru> Hello! On Tue, Feb 16, 2016 at 11:59:27AM -0500, cehes wrote: > Hi hheiko, > > thanks for this update. > > i'm trying to access exchange using Outlook anywhere. > > Normaly, i would do a "proxy_pass https://ip-of-my-exchange-server" > > Here, if i understand well, i only have to replace "server 1.1.1.1:80" in > the sample i gave with "server ip-of-my-exchange-server:443" > and do a "proxy_pass https://http_backend" that's all and that will support > Windows auth ?? great ! > > no problem with https ? This won't work. See my response here: http://mailman.nginx.org/pipermail/nginx/2016-February/049889.html -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Feb 16 17:09:55 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Tue, 16 Feb 2016 12:09:55 -0500 Subject: Using pcre_jit In-Reply-To: <4284403.ZPUHS5gltO@vbart-workstation> References: <4284403.ZPUHS5gltO@vbart-workstation> Message-ID: <3c1f20981d4cbdc1e7ea28916fe5cd10.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks Valentin for your answer! Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264554,264558#msg-264558 From nginx-forum at forum.nginx.org Tue Feb 16 17:11:38 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Tue, 16 Feb 2016 12:11:38 -0500 Subject: Using pcre_jit In-Reply-To: <20160216170042.GR34421@mdounin.ru> References: <20160216170042.GR34421@mdounin.ru> Message-ID: Hello, Thanks also Maxim for your answer! Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264554,264559#msg-264559 From nginx-forum at forum.nginx.org Wed Feb 17 10:34:43 2016 From: nginx-forum at forum.nginx.org (cehes) Date: Wed, 17 Feb 2016 05:34:43 -0500 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: <20160216170733.GS34421@mdounin.ru> References: <20160216170733.GS34421@mdounin.ru> Message-ID: Hi Maxim an thanks for this reply. I read your link and i can see that you added the keyword ntlm. 
You mean that i won't have that in the free version and that i have to purchase a commercial version, that's the only way, correct ? I did not even know there were a commercial version :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264569#msg-264569 From al-nginx at none.at Wed Feb 17 15:26:01 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 17 Feb 2016 16:26:01 +0100 Subject: Feature Request for access_log stdout; Message-ID: Hi. how difficult is it to be able to add "access_log stdout;" to nginx, similar like "error_log stderr;"? I ask because in some PaaS environment is it difficult to setup a dedicated user yust for nginx. It fits also a little bit better to http://12factor.net/logs BR Aleks From vbart at nginx.com Wed Feb 17 15:47:39 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 17 Feb 2016 18:47:39 +0300 Subject: Feature Request for access_log stdout; In-Reply-To: References: Message-ID: <1496160.3FZLdlZy0q@vbart-workstation> On Wednesday 17 February 2016 16:26:01 Aleksandar Lazic wrote: > Hi. > > how difficult is it to be able to add "access_log stdout;" to nginx, > similar like "error_log stderr;"? > > I ask because in some PaaS environment is it difficult to setup a > dedicated user yust for nginx. > > It fits also a little bit better to http://12factor.net/logs > [..] What's the problem with "access_log /dev/stdout"? Please note that writing logs to stdout can be a bottleneck, or cause nginx to stuck. The "error_log stderr;" exists mostly for development purposes. wbr, Valentin V. Bartenev From al-nginx at none.at Wed Feb 17 19:04:45 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 17 Feb 2016 20:04:45 +0100 Subject: Feature Request for access_log stdout; In-Reply-To: <1496160.3FZLdlZy0q@vbart-workstation> References: <1496160.3FZLdlZy0q@vbart-workstation> Message-ID: Hi. Am 17-02-2016 16:47, schrieb Valentin V. Bartenev: > On Wednesday 17 February 2016 16:26:01 Aleksandar Lazic wrote: >> Hi. >> >> how difficult is it to be able to add "access_log stdout;" to nginx, >> similar like "error_log stderr;"? >> >> I ask because in some PaaS environment is it difficult to setup a >> dedicated user yust for nginx. >> >> It fits also a little bit better to http://12factor.net/logs >> > [..] > > What's the problem with "access_log /dev/stdout"? Well I have the following problem on openshift v3. ####### nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied) 2016/02/17 18:34:32 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2 2016/02/17 18:34:32 [emerg] 1#1: open() "/dev/stdout" failed (13: Permission denied) I have no name!@nginx-test-8-emwut:/$ I have no name!@nginx-test-8-emwut:/$ ls -la /dev/stdout lrwxrwxrwx. 
1 root root 15 Feb 17 18:24 /dev/stdout -> /proc/self/fd/1 I have no name!@nginx-test-8-emwut:/$ id uid=1000550000 gid=0(root) groups=0(root) ####### The config file is this ###### user nginx; worker_processes 1; error_log stderr warn; pid /tmp/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /dev/stdout; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } ########## The difficulty is that the build and run setup differs in that case that you are root at build time but any arbitrary user at runtime. Here some more details. https://docs.openshift.com/enterprise/3.1/creating_images/guidelines.html#use-uid In case you have a openshift running you can use this repo for testing. https://github.com/git001/nginx-osev3 > Please note that writing logs to stdout can be a bottleneck, or cause > nginx > to stuck. The "error_log stderr;" exists mostly for development > purposes. Thanks for tip. I remember that 'daemon off' have the same background ;-) br Aleks From mdounin at mdounin.ru Wed Feb 17 19:28:26 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Feb 2016 22:28:26 +0300 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: References: <20160216170733.GS34421@mdounin.ru> Message-ID: <20160217192826.GJ34421@mdounin.ru> Hello! On Wed, Feb 17, 2016 at 05:34:43AM -0500, cehes wrote: > I read your link and i can see that you added the keyword ntlm. You mean > that i won't have that in the free version and that i have to purchase a > commercial version, that's the only way, correct ? Yes. If you want to keep things free, consider switching from proprietary and non-standard NTLM to standard Basic authentication. Alternatively, you can try using stream module to proxy connections instead of HTTP requests, see http://nginx.org/en/docs/stream/ngx_stream_core_module.html. This approach has obvious downsides though. -- Maxim Dounin http://nginx.org/ From muhammadrehan69 at gmail.com Wed Feb 17 19:33:32 2016 From: muhammadrehan69 at gmail.com (Muhammad Rehan) Date: Thu, 18 Feb 2016 00:33:32 +0500 Subject: Need help with Nginx Logging Message-ID: Hey folks, I would like to ask what does the 10.200 right after 'excess:' indicate in the following log? Is this storage size in MBs for zone? I am really confused about that; I have gone through documentation pretty well but not able to find these attributes used in error logs. *2014/11/20 17:28:47 [error] 30347#0: *55 limiting requests, excess: 10.200 > by zone ?search?, client: 10.170.2.23, server: http://www.example.com > , request: ?GET /search/results/?keyword= > HTTP/1.1?, host: ?* I will be waiting for your response. Any help regarding this would be appreciated! Thanks, Rehan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Thu Feb 18 08:31:51 2016 From: nginx-forum at forum.nginx.org (jshare) Date: Thu, 18 Feb 2016 03:31:51 -0500 Subject: nginx-1.9.11 -- "Floating point exception" on exec after upgrading 1.9.10 -> 1.9.11 In-Reply-To: References: <20160210130046.GP70672@mdounin.ru> Message-ID: With the release of ngx_pagespeed 1.10.33.5, I'm no longer having any make issues with 1.9.11 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264422,264581#msg-264581 From ahutchings at nginx.com Thu Feb 18 08:59:30 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 18 Feb 2016 08:59:30 +0000 Subject: Feature Request for access_log stdout; In-Reply-To: References: <1496160.3FZLdlZy0q@vbart-workstation> Message-ID: <56C587F2.9030407@nginx.com> Hi Aleksandar, On 17/02/16 19:04, Aleksandar Lazic wrote: > Hi. > > Am 17-02-2016 16:47, schrieb Valentin V. Bartenev: >> On Wednesday 17 February 2016 16:26:01 Aleksandar Lazic wrote: >>> Hi. >>> >>> how difficult is it to be able to add "access_log stdout;" to nginx, >>> similar like "error_log stderr;"? >>> >>> I ask because in some PaaS environment is it difficult to setup a >>> dedicated user yust for nginx. >>> >>> It fits also a little bit better to http://12factor.net/logs >>> >> [..] >> >> What's the problem with "access_log /dev/stdout"? > > Well I have the following problem on openshift v3. > > ####### > nginx: [alert] could not open error log file: open() > "/var/log/nginx/error.log" failed (13: Permission denied) > 2016/02/17 18:34:32 [warn] 1#1: the "user" directive makes sense only if > the master process runs with super-user privileges, ignored in > /etc/nginx/nginx.conf:2 > 2016/02/17 18:34:32 [emerg] 1#1: open() "/dev/stdout" failed (13: > Permission denied) > > I have no name!@nginx-test-8-emwut:/$ > I have no name!@nginx-test-8-emwut:/$ ls -la /dev/stdout > lrwxrwxrwx. 1 root root 15 Feb 17 18:24 /dev/stdout -> /proc/self/fd/1 > I have no name!@nginx-test-8-emwut:/$ id > uid=1000550000 gid=0(root) groups=0(root) > ####### What version of Docker are you running? If it is prior to 1.9 you are likely to hit his bug: https://github.com/docker/docker/issues/6880 Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From ahutchings at nginx.com Thu Feb 18 09:16:45 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 18 Feb 2016 09:16:45 +0000 Subject: Feature Request for access_log stdout; In-Reply-To: <56C587F2.9030407@nginx.com> References: <1496160.3FZLdlZy0q@vbart-workstation> <56C587F2.9030407@nginx.com> Message-ID: <56C58BFD.5080905@nginx.com> On 18/02/16 08:59, Andrew Hutchings wrote: > Hi Aleksandar, > > On 17/02/16 19:04, Aleksandar Lazic wrote: >> Hi. >> >> Am 17-02-2016 16:47, schrieb Valentin V. Bartenev: >>> On Wednesday 17 February 2016 16:26:01 Aleksandar Lazic wrote: >>>> Hi. >>>> >>>> how difficult is it to be able to add "access_log stdout;" to nginx, >>>> similar like "error_log stderr;"? >>>> >>>> I ask because in some PaaS environment is it difficult to setup a >>>> dedicated user yust for nginx. >>>> >>>> It fits also a little bit better to http://12factor.net/logs >>>> >>> [..] >>> >>> What's the problem with "access_log /dev/stdout"? >> >> Well I have the following problem on openshift v3. 
>> >> ####### >> nginx: [alert] could not open error log file: open() >> "/var/log/nginx/error.log" failed (13: Permission denied) >> 2016/02/17 18:34:32 [warn] 1#1: the "user" directive makes sense only if >> the master process runs with super-user privileges, ignored in >> /etc/nginx/nginx.conf:2 >> 2016/02/17 18:34:32 [emerg] 1#1: open() "/dev/stdout" failed (13: >> Permission denied) >> >> I have no name!@nginx-test-8-emwut:/$ >> I have no name!@nginx-test-8-emwut:/$ ls -la /dev/stdout >> lrwxrwxrwx. 1 root root 15 Feb 17 18:24 /dev/stdout -> /proc/self/fd/1 >> I have no name!@nginx-test-8-emwut:/$ id >> uid=1000550000 gid=0(root) groups=0(root) >> ####### > > What version of Docker are you running? If it is prior to 1.9 you are > likely to hit his bug: https://github.com/docker/docker/issues/6880 Also, as Valentin mentioned. Performance of any application, not just NGINX, that logs a lot of data is going to be terrible when you actually manage to do this. There are many flaws in the The Factor-Factor App document and whilst it may work out for you backend services I wouldn't recommend following it too closely for any core infrastructure. Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Thu Feb 18 10:20:06 2016 From: nginx-forum at forum.nginx.org (cehes) Date: Thu, 18 Feb 2016 05:20:06 -0500 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: <20160217192826.GJ34421@mdounin.ru> References: <20160217192826.GJ34421@mdounin.ru> Message-ID: Ok, thanks for this reply. Where can i find the right version for that ? (the commercial one) I went on www.nginx.com and saw "nginx plus" is that what you're talking about ? I went on compare version but did not see NTML support. Will it be easy to upgrade from free version to the right one ? Do you have an idea of the price, i only see support prices. Thanks a lot Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264585#msg-264585 From jan.reges at siteone.cz Thu Feb 18 12:00:12 2016 From: jan.reges at siteone.cz (jan.reges at siteone.cz) Date: Thu, 18 Feb 2016 13:00:12 +0100 Subject: =?UTF-8?Q?Nep=C5=99=C3=ADtomnost_od_18=2E2=2E2016_do_22=2E2=2E2016?= Message-ID: <650c1f98e0d09fed78d950d99ad49f33-1455796812@office.siteone.cz> V??en? klienti, v dob? od 18.2.2016 do 22.2.2016 v?etn?, jsem mimo kancel?? a nebudu m?t pravideln? p??stup ke sv? e-mailov? schr?nce. V urgentn?ch p??padech, pros?m, kontaktujte m?ho kolegu Martina Stare?ka (martin.starecek at siteone.cz), kter? je obezn?men se v?emi projekty. Ostatn? p??pady vy?e??m po sv?m n?vratu. D?kuji za pochopen? a p?eji V?m hezk? den. S pozdravem J?n Rege? SiteOne, s.r.o. From al-nginx at none.at Thu Feb 18 14:02:53 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 18 Feb 2016 15:02:53 +0100 Subject: Feature Request for access_log stdout; In-Reply-To: <56C58BFD.5080905@nginx.com> References: <1496160.3FZLdlZy0q@vbart-workstation> <56C587F2.9030407@nginx.com> <56C58BFD.5080905@nginx.com> Message-ID: <1c1f76e5dd6fb0e018bbee2a23a79aab@none.at> Hi Andrew. Am 18-02-2016 10:16, schrieb Andrew Hutchings: > On 18/02/16 08:59, Andrew Hutchings wrote: >> Hi Aleksandar, >> >> On 17/02/16 19:04, Aleksandar Lazic wrote: >>> Hi. >>> >>> Am 17-02-2016 16:47, schrieb Valentin V. Bartenev: >>>> On Wednesday 17 February 2016 16:26:01 Aleksandar Lazic wrote: >>>>> Hi. >>>>> >>>>> how difficult is it to be able to add "access_log stdout;" to >>>>> nginx, >>>>> similar like "error_log stderr;"? 
>>>>> >>>>> I ask because in some PaaS environment is it difficult to setup a >>>>> dedicated user yust for nginx. >>>>> >>>>> It fits also a little bit better to http://12factor.net/logs [snipp] >> What version of Docker are you running? If it is prior to 1.9 you are >> likely to hit his bug: https://github.com/docker/docker/issues/6880 #### docker version Client: Version: 1.8.2-el7 API version: 1.20 Package Version: docker-1.8.2-10.el7.x86_64 Go version: go1.4.2 Git commit: a01dc02/1.8.2 Built: OS/Arch: linux/amd64 Server: Version: 1.8.2-el7 API version: 1.20 Package Version: Go version: go1.4.2 Git commit: a01dc02/1.8.2 Built: OS/Arch: linux/amd64 #### > Also, as Valentin mentioned. Performance of any application, not just > NGINX, that logs a lot of data is going to be terrible when you > actually manage to do this. I have received some suggestions on haproxy list, which I will try. > There are many flaws in the The Factor-Factor App document and whilst > it may work out for you backend services I wouldn't recommend > following it too closely for any core infrastructure. Well I assume that this document is written for apps not core infra ;-) BR Aleks From nginx-forum at forum.nginx.org Thu Feb 18 14:34:00 2016 From: nginx-forum at forum.nginx.org (raiblue) Date: Thu, 18 Feb 2016 09:34:00 -0500 Subject: Does stub_status itself cause performance issues? Message-ID: Hello, I searched everywhere but found nothing. Is there an overhead of setting stub_status page and activate it all the time? Using Apache's ExtendedStatus for example has an impact on performance and recommended to be used only for debugging purposes, not all the time. What about stub_status? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264592,264592#msg-264592 From mdounin at mdounin.ru Thu Feb 18 15:24:34 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Feb 2016 18:24:34 +0300 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: References: <20160217192826.GJ34421@mdounin.ru> Message-ID: <20160218152433.GG1489@mdounin.ru> Hello! On Thu, Feb 18, 2016 at 05:20:06AM -0500, cehes wrote: > Ok, thanks for this reply. > > Where can i find the right version for that ? (the commercial one) > I went on www.nginx.com and saw "nginx plus" is that what you're talking > about ? Yes, that's the only commercial version available. > I went on compare version but did not see NTML support. The NTLM support is documented here: http://nginx.org/r/ntlm > Will it be easy to upgrade from free version to the right one ? It should be, though depends on your particular setup. > Do you have an idea of the price, i only see support prices. Pricing for nginx plus is available here: https://www.nginx.com/products/pricing/ Please use the "contact sales" on nginx.com if you have any further questions. -- Maxim Dounin http://nginx.org/ From jimssupp at rushpost.com Thu Feb 18 15:30:04 2016 From: jimssupp at rushpost.com (JimS) Date: Thu, 18 Feb 2016 07:30:04 -0800 Subject: Tls 1.3 experimental support? Message-ID: <1455809404.3830361.525011442.59E4974F@webmail.messagingengine.com> What's the current feature-release plan for TLS 1.3 support in Nginx? Will it be added while still in draft status, or only after full release? Is it, in any form, on-schedule yet? Thanks, Jim From mdounin at mdounin.ru Thu Feb 18 16:04:22 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Feb 2016 19:04:22 +0300 Subject: Tls 1.3 experimental support? 
In-Reply-To: <1455809404.3830361.525011442.59E4974F@webmail.messagingengine.com> References: <1455809404.3830361.525011442.59E4974F@webmail.messagingengine.com> Message-ID: <20160218160422.GJ1489@mdounin.ru> Hello! On Thu, Feb 18, 2016 at 07:30:04AM -0800, JimS wrote: > What's the current feature-release plan for TLS 1.3 support in Nginx? > > Will it be added while still in draft status, or only after full release? > > Is it, in any form, on-schedule yet? Support for SSL and TLS in nginx relies on OpenSSL, so TLS 1.3 will be supported by nginx once added to OpenSSL. As for OpenSSL, it has TLS 1.3 in the roadmap, though without any dates for obvious reasons. And I don't think anything will be available at least till TLS 1.3 RFC is finalized. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Feb 18 16:16:53 2016 From: nginx-forum at forum.nginx.org (cehes) Date: Thu, 18 Feb 2016 11:16:53 -0500 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: <20160218152433.GG1489@mdounin.ru> References: <20160218152433.GG1489@mdounin.ru> Message-ID: <08db3c8d1dcfb760bf8a0484ebaed19e.NginxMailingListEnglish@forum.nginx.org> Thank you very much for all of that. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,72871,264598#msg-264598 From nginx-forum at forum.nginx.org Thu Feb 18 16:20:55 2016 From: nginx-forum at forum.nginx.org (vedranf) Date: Thu, 18 Feb 2016 11:20:55 -0500 Subject: Cache manager occasionally stops deleting cached files Message-ID: <3062c9927373ad6a88374a43a1335aba.NginxMailingListEnglish@forum.nginx.org> Hello, I'm having an issue where nginx (1.8) cache manager suddenly just stops deleting content thus the disk soon ends up being full until I restart it by hand. After it is restarted, it works normally for a couple of days, but then it happens again. Cache has some 30-40k files, nothing huge. Relevant config lines are: proxy_cache_path /home/cache/ levels=2:2 keys_zone=cache:25m inactive=7d max_size=2705g use_temp_path=on; proxy_temp_path /dev/shm/temp; # reduces parallel writes on the disk proxy_cache_lock on; proxy_cache_lock_age 10s; proxy_cache_lock_timeout 30s; proxy_ignore_client_abort on; Server gets roughly 100 rps and normally cache manager deletes a couple of files every few seconds, however when it gets stuck this is all it does for 20-30 minutes or more, i.e. there are 0 unlinks (until I restart it and it rereads the on-disk cache): ... epoll_wait(14, {}, 512, 1000) = 0 epoll_wait(14, {}, 512, 1000) = 0 epoll_wait(14, {}, 512, 1000) = 0 epoll_wait(14, {}, 512, 1000) = 0 gettid() = 11303 write(24, "2016/02/18 08:22:02 [alert] 11303#11303: ignore long locked inactive cache entry 380d3f178017bcd92877ee322b006bbb, count:1\n", 123) = 123 gettid() = 11303 write(24, "2016/02/18 08:22:02 [alert] 11303#11303: ignore long locked inactive cache entry 7b9239693906e791375a214c7e36af8e, count:24\n", 124) = 124 epoll_wait(14, {}, 512, 1000) = 0 ... I assume the mentioned error is due to relatively often nginx restarts and is benign. There's nothing else in the error log (except for occasional upstream timeouts). I'm aware this likely isn't enough info to debug the issue, but do you at least have some ideas on what might be causing this issue, where to look? I'm wild guessing cache manager waits for some lock to be released, but it never gets released so it just waits indefinitely. 
Thanks, Vedran Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264599,264599#msg-264599 From mdounin at mdounin.ru Thu Feb 18 17:11:18 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Feb 2016 20:11:18 +0300 Subject: Cache manager occasionally stops deleting cached files In-Reply-To: <3062c9927373ad6a88374a43a1335aba.NginxMailingListEnglish@forum.nginx.org> References: <3062c9927373ad6a88374a43a1335aba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160218171118.GK1489@mdounin.ru> Hello! On Thu, Feb 18, 2016 at 11:20:55AM -0500, vedranf wrote: > Hello, > > I'm having an issue where nginx (1.8) cache manager suddenly just stops > deleting content thus the disk soon ends up being full until I restart it by > hand. After it is restarted, it works normally for a couple of days, but > then it happens again. Cache has some 30-40k files, nothing huge. Relevant > config lines are: > > proxy_cache_path /home/cache/ levels=2:2 keys_zone=cache:25m > inactive=7d max_size=2705g use_temp_path=on; > proxy_temp_path /dev/shm/temp; # reduces parallel writes on the > disk > proxy_cache_lock on; > proxy_cache_lock_age 10s; > proxy_cache_lock_timeout 30s; > proxy_ignore_client_abort on; > > Server gets roughly 100 rps and normally cache manager deletes a couple of > files every few seconds, however when it gets stuck this is all it does for > 20-30 minutes or more, i.e. there are 0 unlinks (until I restart it and it > rereads the on-disk cache): > > ... > epoll_wait(14, {}, 512, 1000) = 0 > epoll_wait(14, {}, 512, 1000) = 0 > epoll_wait(14, {}, 512, 1000) = 0 > epoll_wait(14, {}, 512, 1000) = 0 > gettid() = 11303 > write(24, "2016/02/18 08:22:02 [alert] 11303#11303: ignore long locked > inactive cache entry 380d3f178017bcd92877ee322b006bbb, count:1\n", 123) = > 123 > gettid() = 11303 > write(24, "2016/02/18 08:22:02 [alert] 11303#11303: ignore long locked > inactive cache entry 7b9239693906e791375a214c7e36af8e, count:24\n", 124) = > 124 > epoll_wait(14, {}, 512, 1000) = 0 > ... > > I assume the mentioned error is due to relatively often nginx restarts and > is benign. There's nothing else in the error log (except for occasional > upstream timeouts). I'm aware this likely isn't enough info to debug the > issue, but do you at least have some ideas on what might be causing this > issue, where to look? I'm wild guessing cache manager waits for some lock to > be released, but it never gets released so it just waits indefinitely. The error logged is due to an entry nginx is going to remove an inactive cache entry but it is locked by some requests. Unless inactive time is very low (not your case) it indicate a problem somewhere else. Such locked entries can't be removed from cache. Addtitionally, once there are enough such locked entries, nginx won't be able to purge cache based on max_size. That is, it's expected that nginx will have problems with removing entries from cache if you see such messages. Most trivial reason for such messages is abnormally killed nginx processes. That is, if some processes die due to bugs, or killed by an unwary administrator or an incorrect script - the problem will appear sooner or later. To further debug things, try the following: - restart nginx and record pids of all nginx processes; - once the problem starts to appear again, check if there are the same processes running; - if some processes different from one recorded, dig further to find out why. Some trivial things like looking into logs for "worker process exited ..." 
messages and checking if the problem persists without 3rd party modules compiled in (see "nginx -V") may also help. -- Maxim Dounin http://nginx.org/ From jim at ohlste.in Thu Feb 18 21:19:20 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Thu, 18 Feb 2016 16:19:20 -0500 Subject: accept_filter being ignored Message-ID: <56C63558.6090101@ohlste.in> Hello, Not sure if I should be directing this to a FreeBSD list or here, but here goes. I have set accept_filter= in listen directives: server { listen 80 accept_filter=http_ready; listen [::]:80 accept_filter=http_ready; listen 443 ssl accept_filter=data_ready; listen [::]:443 ssl accept_filter=data_ready; ... } The correct modules are loaded at boot: # kldstat -v | grep accf 2 1 0xffffffff814bf000 1598 accf_data.ko (/boot/kernel/accf_data.ko) 1 accf_data 3 1 0xffffffff814c1000 26a0 accf_http.ko (/boot/kernel/accf_http.ko) 2 accf_http But I am seeing the following in the error log after a reboot (or on the console after an nginx restart): 2016/02/18 16:04:06 [alert] 823#100446: setsockopt(SO_ACCEPTFILTER, "http_ready") for 0.0.0.0:80 failed, ignored (2: No such file or directory) 2016/02/18 16:04:06 [alert] 823#100446: setsockopt(SO_ACCEPTFILTER, "http_ready") for [::]:80 failed, ignored (2: No such file or directory) 2016/02/18 16:04:06 [alert] 823#100446: setsockopt(SO_ACCEPTFILTER, "data_ready") for 0.0.0.0:443 failed, ignored (2: No such file or directory) 2016/02/18 16:04:06 [alert] 823#100446: setsockopt(SO_ACCEPTFILTER, "data_ready") for [::]:443 failed, ignored (2: No such file or directory) Box is running FreeBSD 10-STABLE. Any hints? -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From jjaques at gmail.com Thu Feb 18 23:07:54 2016 From: jjaques at gmail.com (Josh Jaques) Date: Thu, 18 Feb 2016 17:07:54 -0600 Subject: SSL handshake errors when configured as a reverse proxy Message-ID: Recently I tried setting up a basic nginx reverse proxy in production on Ubuntu 14.04 using their default supported version of nginx 1.4.6. 
Basic config as follows: server { listen 127.0.0.1:443; server_name myhost.ca; ssl on; ssl_certificate /etc/nginx/certs/cert.chained.with.intermediates.crt ssl_certificate_key /etc/nginx/certs/cert.key ssl_dhparam /etc/nginx/certs/dhparams.pem; ssl_session_timeout 5m; ssl_session_cache shared:test_cache:5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384: ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL: !EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; ssl_prefer_server_ciphers on; access_log /var/log/nginx/proxy.access.log; error_log /var/log/nginx/proxy.error.log; location / { proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass https://10.1.10.1; } } Config worked pretty good in testing, but when we put it in production, we quickly started seeing intermittent handshake failures. The handshakes were being rejected by the server with errors like this in the error log: 2016/02/16 13:30:18 [info] 6470#0: *6349 SSL_do_handshake() failed (SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac) while SSL handshaking, client: x.x.x.x, server: x.x.x.x:443 And sometimes from the client like this: 2016/02/16 13:27:51 [info] 6468#0: *6226 peer closed connection in SSL handshake while SSL handshaking, client: x.x.x.x, server: x.x.x.x:443 Upon further investigation we discovered that the same clients were more or less "randomly" effected by this handshake error on an intermittent basis, so one request might work but then the next would fail. Initially we didn't have any ssl_session_cache enabled, but adding that shared cache setting above had no effect on the random handshake errors. Thought it might be an openssl issue, so we tried updating from ubuntu's default version of 1.0.1f to 1.0.2f from source, but that had no impact on the clients receiving the handshake error. Subsequently, we switched the reverse proxy on the same system, with the same configuration (i.e. supported protocols and ciphers) from nginx to apache, and the intermittent handshake errors have gone away. I'd still like to know what was wrong with our nginx setup to be causing this in the first place, anyone have any ideas? -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Thu Feb 18 23:56:21 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 19 Feb 2016 00:56:21 +0100 Subject: Feature Request for access_log stdout; In-Reply-To: <1c1f76e5dd6fb0e018bbee2a23a79aab@none.at> References: <1496160.3FZLdlZy0q@vbart-workstation> <56C587F2.9030407@nginx.com> <56C58BFD.5080905@nginx.com> <1c1f76e5dd6fb0e018bbee2a23a79aab@none.at> Message-ID: <2543dd410b323c8ee8413760777265f7@none.at> Hi. Am 18-02-2016 15:02, schrieb Aleksandar Lazic: > Hi Andrew. 
> > Am 18-02-2016 10:16, schrieb Andrew Hutchings: [snipp] >>> What version of Docker are you running? If it is prior to 1.9 you are >>> likely to hit his bug: https://github.com/docker/docker/issues/6880 > > #### > docker version > Client: > Version: 1.8.2-el7 > API version: 1.20 > Package Version: docker-1.8.2-10.el7.x86_64 > Go version: go1.4.2 > Git commit: a01dc02/1.8.2 > Built: > OS/Arch: linux/amd64 > > Server: > Version: 1.8.2-el7 > API version: 1.20 > Package Version: > Go version: go1.4.2 > Git commit: a01dc02/1.8.2 > Built: > OS/Arch: linux/amd64 > #### > >> Also, as Valentin mentioned. Performance of any application, not just >> NGINX, that logs a lot of data is going to be terrible when you >> actually manage to do this. > > I have received some suggestions on haproxy list, which I will try. I have try to setup the syslog entry with variables, but it looks to me that this not implemented. sed -e's/access_log.*/access_log syslog:server=\$\{NGINX_TEST_PORT_8514_UDP_ADDR\}:\$\{NGINX_TEST_PORT_8514_UDP_PORT\};/' /etc/nginx/nginx.conf > /tmp/nginx.conf cat /tmp/nginx.conf ##### user nginx; worker_processes 1; error_log stderr warn; pid /tmp/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log syslog:server=${NGINX_TEST_PORT_8514_UDP_ADDR}:${NGINX_TEST_PORT_8514_UDP_PORT}; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } #### nginx -T -c /tmp/nginx.conf nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied) 2016/02/18 23:54:51 [warn] 12#12: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /tmp/nginx.conf:2 2016/02/18 23:54:51 [emerg] 12#12: invalid port in syslog server "${NGINX_TEST_PORT_8514_UDP_ADDR}:${NGINX_TEST_PORT_8514_UDP_PORT}" in /tmp/nginx.conf:22 nginx: configuration file /tmp/nginx.conf test failed Any plans to add this possibility or have I missed something? BR Aleks From gfrankliu at gmail.com Fri Feb 19 03:09:14 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 18 Feb 2016 19:09:14 -0800 Subject: proxy_bind pool Message-ID: Hi, Is it possible to use proxy_bind to a pool of IPs? Since each IP has a limited ephemeral ports that can be used to make outbound connections to upstream servers, it would be help if we can use a pool of IPs for proxy_bind, or is there another workaround to have more connections to upstream server farm? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Feb 19 07:22:35 2016 From: nginx-forum at forum.nginx.org (spyfox) Date: Fri, 19 Feb 2016 02:22:35 -0500 Subject: dynamic server name and aliases Message-ID: Hi everybody, I manage server, where each user has own subdomain: i1.domain -> user #1 i2.domain -> user #2 etc I have a nginx config to handle user's subdomains: server { listen *:80; server_name ~^i(?\d+)\.domain$; ... } It works fine. I have single config to handle all requests for all subdomains. I need to allow users specify custom domains as aliases, e.g.: i1.domain, super-user.com, another-domain.net -> user #1 i2.domain, custom.domain -> user #2 How can I add aliases without copy/paste config for each user? 
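One direction that keeps the single catch-all server block is to move the user lookup out of server_name and into a map on $host, so each new alias is just one extra map line. The snippet below is only an untested sketch: the $userid variable name is made up, and the alias domains and user numbers are the ones from the example above.

map $host $userid {
    hostnames;
    default                  "";
    ~^i(?<num>\d+)\.domain$  $num;
    super-user.com           1;
    another-domain.net       1;
    custom.domain            2;
}

server {
    listen *:80 default_server;
    server_name _;

    # reject hosts that are not mapped to any user
    if ($userid = "") { return 444; }

    # use $userid wherever the server_name capture was used before
    ...
}

With this layout, adding or removing an alias only means editing the map (or a file it includes) and reloading nginx, instead of copying a server block per user.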
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264612,264612#msg-264612 From thresh at nginx.com Fri Feb 19 08:42:33 2016 From: thresh at nginx.com (Konstantin Pavlov) Date: Fri, 19 Feb 2016 11:42:33 +0300 Subject: Feature Request for access_log stdout; In-Reply-To: <2543dd410b323c8ee8413760777265f7@none.at> References: <1496160.3FZLdlZy0q@vbart-workstation> <56C587F2.9030407@nginx.com> <56C58BFD.5080905@nginx.com> <1c1f76e5dd6fb0e018bbee2a23a79aab@none.at> <2543dd410b323c8ee8413760777265f7@none.at> Message-ID: <56C6D579.8090603@nginx.com> On 19/02/2016 02:56, Aleksandar Lazic wrote: > access_log > syslog:server=${NGINX_TEST_PORT_8514_UDP_ADDR}:${NGINX_TEST_PORT_8514_UDP_PORT}; > > nginx: [alert] could not open error log file: open() > "/var/log/nginx/error.log" failed (13: Permission denied) > 2016/02/18 23:54:51 [warn] 12#12: the "user" directive makes sense only > if the master process runs with super-user privileges, ignored in > /tmp/nginx.conf:2 > 2016/02/18 23:54:51 [emerg] 12#12: invalid port in syslog server > "${NGINX_TEST_PORT_8514_UDP_ADDR}:${NGINX_TEST_PORT_8514_UDP_PORT}" in > /tmp/nginx.conf:22 > nginx: configuration file /tmp/nginx.conf test failed > > Any plans to add this possibility or have I missed something? Please check the following: https://hub.docker.com/_/nginx/, section "using environment variables in nginx configuration". You will need nginx:1.9.10 docker tag or later. -- Konstantin Pavlov From maxim at nginx.com Fri Feb 19 09:50:50 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 19 Feb 2016 12:50:50 +0300 Subject: proxy_bind pool In-Reply-To: References: Message-ID: <56C6E57A.5010906@nginx.com> Hi Frank, On 2/19/16 6:09 AM, Frank Liu wrote: > Hi, > > Is it possible to use proxy_bind to a pool of IPs? Since each IP has > a limited ephemeral ports that can be used to make outbound > connections to upstream servers, it would be help if we can use a > pool of IPs for proxy_bind, or is there another workaround to have > more connections to upstream server farm? > Yes, it's possible -- proxy_bind supports variables (this support was added exactly for the use case above). -- Maxim Konovalov From maxim at nginx.com Fri Feb 19 10:18:07 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 19 Feb 2016 13:18:07 +0300 Subject: accept_filter being ignored In-Reply-To: <56C63558.6090101@ohlste.in> References: <56C63558.6090101@ohlste.in> Message-ID: <56C6EBDF.50805@nginx.com> Hi Jim, On 2/19/16 12:19 AM, Jim Ohlstein wrote: > Hello, > > Not sure if I should be directing this to a FreeBSD list or here, > but here goes. > > I have set accept_filter= in listen directives: > > server { > listen 80 accept_filter=http_ready; > listen [::]:80 accept_filter=http_ready; > listen 443 ssl accept_filter=data_ready; > listen [::]:443 ssl accept_filter=data_ready; > [...] They should be "httpready" and "dataready", see nginx.org/r/listen for more details. Also see man pages for accept_filter(9) and friends. -- Maxim Konovalov From jim at ohlste.in Fri Feb 19 11:44:17 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Fri, 19 Feb 2016 06:44:17 -0500 Subject: accept_filter being ignored In-Reply-To: <56C6EBDF.50805@nginx.com> References: <56C63558.6090101@ohlste.in> <56C6EBDF.50805@nginx.com> Message-ID: <56C70011.1070502@ohlste.in> Hello, On 2/19/16 5:18 AM, Maxim Konovalov wrote: > Hi Jim, > > On 2/19/16 12:19 AM, Jim Ohlstein wrote: >> Hello, >> >> Not sure if I should be directing this to a FreeBSD list or here, >> but here goes. 
>> >> I have set accept_filter= in listen directives: >> >> server { >> listen 80 accept_filter=http_ready; >> listen [::]:80 accept_filter=http_ready; >> listen 443 ssl accept_filter=data_ready; >> listen [::]:443 ssl accept_filter=data_ready; >> > [...] > > They should be "httpready" and "dataready", see nginx.org/r/listen > for more details. > > Also see man pages for accept_filter(9) and friends. > Haha. It sucks getting old. I read through the docs several times and missed that. Thanks for the help and sorry for the noise. On a side note, "nginx -t" did not pick up that error. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From ahutchings at nginx.com Fri Feb 19 12:52:51 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Fri, 19 Feb 2016 12:52:51 +0000 Subject: SSL handshake errors when configured as a reverse proxy In-Reply-To: References: Message-ID: <56C71023.2000406@nginx.com> Hi Josh, When you installed the newer OpenSSL did you recompile NGINX to use the newer version? If not then it may still have been using the older OpenSSL with this bug in it. It is likely to be pinned to a specific version. You can check by running "ldd" on your NGINX binary. Kind Regards Andrew On 18/02/16 23:07, Josh Jaques wrote: > Recently I tried setting up a basic nginx reverse proxy in production on > Ubuntu 14.04 using their default supported version of nginx 1.4.6. > > Basic config as follows: > > server { > > listen 127.0.0.1:443 ; > > server_name myhost.ca ; > > ssl on; > > ssl_certificate /etc/nginx/certs/cert.chained.with.intermediates.crt > > ssl_certificate_key /etc/nginx/certs/cert.key > > ssl_dhparam /etc/nginx/certs/dhparams.pem; > > ssl_session_timeout 5m; > > ssl_session_cache shared:test_cache:5m; > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > ssl_ciphers > 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384: > > > ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL: > > > !EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; > > ssl_prefer_server_ciphers on; > > access_log /var/log/nginx/proxy.access.log; > > error_log /var/log/nginx/proxy.error.log; > > location / { > > proxy_buffering off; > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto $scheme; > > proxy_pass https://10.1.10.1; > > } > > } > > Config worked pretty good in testing, but when we put it in production, > we quickly started seeing intermittent handshake failures. 
> > The handshakes were being rejected by the server with errors like this > in the error log: > > 2016/02/16 13:30:18 [info] 6470#0: *6349 SSL_do_handshake() failed (SSL: > error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad > record mac) while SSL handshaking, client: x.x.x.x, server: x.x.x.x:443 > > And sometimes from the client like this: > > 2016/02/16 13:27:51 [info] 6468#0: *6226 peer closed connection in SSL > handshake while SSL handshaking, client: x.x.x.x, server: x.x.x.x:443 > > Upon further investigation we discovered that the same clients were more > or less "randomly" effected by this handshake error on an intermittent > basis, so one request might work but then the next would fail. > > Initially we didn't have any ssl_session_cache enabled, but adding that > shared cache setting above had no effect on the random handshake errors. > > Thought it might be an openssl issue, so we tried updating from ubuntu's > default version of 1.0.1f to 1.0.2f from source, but that had no impact > on the clients receiving the handshake error. > > Subsequently, we switched the reverse proxy on the same system, with the > same configuration (i.e. supported protocols and ciphers) from nginx to > apache, and the intermittent handshake errors have gone away. > > I'd still like to know what was wrong with our nginx setup to be causing > this in the first place, anyone have any ideas? > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Fri Feb 19 13:00:54 2016 From: nginx-forum at forum.nginx.org (vedranf) Date: Fri, 19 Feb 2016 08:00:54 -0500 Subject: Cache manager occasionally stops deleting cached files In-Reply-To: <20160218171118.GK1489@mdounin.ru> References: <20160218171118.GK1489@mdounin.ru> Message-ID: <16a8f8bead53d9c6229db6b7c20164af.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! Hello and thanks for the reply! > > I assume the mentioned error is due to relatively often nginx > restarts and > > is benign. There's nothing else in the error log (except for > occasional > > upstream timeouts). I'm aware this likely isn't enough info to debug > the > > issue, but do you at least have some ideas on what might be causing > this > > issue, where to look? I'm wild guessing cache manager waits for some > lock to > > be released, but it never gets released so it just waits > indefinitely. > > The error logged is due to an entry nginx is going to remove an > inactive cache entry but it is locked by some requests. Unless > inactive time is very low (not your case) it indicate a problem > somewhere else. > > Such locked entries can't be removed from cache. Addtitionally, > once there are enough such locked entries, nginx won't be able to > purge cache based on max_size. That is, it's expected that nginx > will have problems with removing entries from cache if you see > such messages. > > Most trivial reason for such messages is abnormally killed nginx > processes. That is, if some processes die due to bugs, or killed > by an unwary administrator or an incorrect script - the problem > will appear sooner or later. I see. I do have 1000-2000 of such errors in log per day, definitely more than couple of months ago. I remember server got crashed in the past, but not recently. 
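For what it is worth, I will first simply count abnormal worker exits in the error log over time, with something along the lines of "grep -c 'exited on signal' /var/log/nginx/error.log" (the exact log path depends on the build), to see whether crashes are still happening now or whether the locked entries are all left over from that old crash.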
> To further debug things, try the following: > > - restart nginx and record pids of all nginx processes; > > - once the problem starts to appear again, check if there are the > same processes running; > > - if some processes different from one recorded, dig further to > find out why. > > Some trivial things like looking into logs for "worker process > exited ..." messages and checking if the problem persists without > 3rd party modules compiled in (see "nginx -V") may also help. Thanks, I'll dig deeper. I do have 3rd party modules and there are occasional messages such as "worker process exited on signal 11", but they are rare, i'll try to figure out what causes them, but it'll take time. However, now that this already happens, is it possible so somehow unlock all entries and start clean, but without removing all cached content? Or alternatively, can I delete the locked files manually as a workaround? Regards, Vedran Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264599,264626#msg-264626 From mdounin at mdounin.ru Fri Feb 19 13:28:50 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Feb 2016 16:28:50 +0300 Subject: Cache manager occasionally stops deleting cached files In-Reply-To: <16a8f8bead53d9c6229db6b7c20164af.NginxMailingListEnglish@forum.nginx.org> References: <20160218171118.GK1489@mdounin.ru> <16a8f8bead53d9c6229db6b7c20164af.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160219132849.GS1489@mdounin.ru> Hello! On Fri, Feb 19, 2016 at 08:00:54AM -0500, vedranf wrote: [...] > Thanks, I'll dig deeper. I do have 3rd party modules and there are > occasional messages such as "worker process exited on signal 11", but they > are rare, i'll try to figure out what causes them, but it'll take time. So the problem with cache is clear - one worker crash is usually enough to stop nginx from removing cache items based on max_size sooner or later. You have to further debug crashes to fix things. > However, now that this already happens, is it possible so somehow unlock all > entries and start clean, but without removing all cached content? Or > alternatively, can I delete the locked files manually as a workaround? You need nginx to reload its cache matadata (as stored in keys_zone). This can be done either with restart or binary upgrade to the same nginx binary, see: http://nginx.org/en/docs/control.html#upgrade There is no need to delete cache files. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Feb 19 17:34:12 2016 From: nginx-forum at forum.nginx.org (cubicdaiya) Date: Fri, 19 Feb 2016 12:34:12 -0500 Subject: access_log in stream context Message-ID: <0276eb1a12c9dc0c70fabe860c0befd3.NginxMailingListEnglish@forum.nginx.org> Hello. I have been trying the stream module. But the way to record an access log (e.g. IP address) from a client is not found. Is access_log in stream context support planed in the future? Regards, -- Tatsuhiko Kubo Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264641,264641#msg-264641 From jjaques at gmail.com Fri Feb 19 18:13:11 2016 From: jjaques at gmail.com (Josh Jaques) Date: Fri, 19 Feb 2016 12:13:11 -0600 Subject: SSL handshake errors when configured as a reverse proxy In-Reply-To: References: Message-ID: <56C75B37.2070405@gmail.com> Hi Andrew, To clarify the setup earlier, I continued to use the Ubuntu compiled version of NGINX from apt-get. 
The specific procedure I used to change the lib that NGINX would load was by replacing the libssl.so.1.0.0 and libcrypto.so.1.0.0 files in the path referenced by ldd for the NGINX binary with ones compiled from source. When I switched to Apache, I reverted the system to the package manger versions of libssl and openssl. It continues to be in production without producing any handshake errors. So from my end there doesn't seem to be any evidence to support OpenSSL version ever being an issue, because Apache works fine using the same version of OpenSSL that we initially experienced the problem with in NGINX. The Apache version I am running is also the default from apt-get. From vbart at nginx.com Fri Feb 19 20:29:15 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Fri, 19 Feb 2016 23:29:15 +0300 Subject: access_log in stream context In-Reply-To: <0276eb1a12c9dc0c70fabe860c0befd3.NginxMailingListEnglish@forum.nginx.org> References: <0276eb1a12c9dc0c70fabe860c0befd3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4555858.OsurRviKc2@vbart-laptop> On Friday 19 February 2016 12:34:12 cubicdaiya wrote: > Hello. > > I have been trying the stream module. > But the way to record an access log (e.g. IP address) from a client is not > found. > > > Is access_log in stream context support planed in the future? > [..] There's no need in access_log in the stream module. Connections are recorded on the "info" level in error_log. wbr, Valentin V. Bartenev From muhammadrehan69 at gmail.com Fri Feb 19 20:54:32 2016 From: muhammadrehan69 at gmail.com (Muhammad Rehan) Date: Sat, 20 Feb 2016 01:54:32 +0500 Subject: Need help with Nginx Logging In-Reply-To: References: Message-ID: Hi, I am still looking for reply. Does anyone know about this? On Feb 18, 2016 12:33 AM, "Muhammad Rehan" wrote: > Hey folks, > > I would like to ask what does the 10.200 right after 'excess:' indicate in > the following log? Is this storage size in MBs for zone? I am really > confused about that; I have gone through documentation pretty well but not > able to find these attributes used in error logs. > > *2014/11/20 17:28:47 [error] 30347#0: *55 limiting requests, excess: >> 10.200 by zone ?search?, client: 10.170.2.23, >> server: http://www.example.com , request: ?GET >> /search/results/?keyword= HTTP/1.1?, host: ?* > > > I will be waiting for your response. Any help regarding this would be > appreciated! > > Thanks, > Rehan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Feb 20 01:37:37 2016 From: nginx-forum at forum.nginx.org (cubicdaiya) Date: Fri, 19 Feb 2016 20:37:37 -0500 Subject: access_log in stream context In-Reply-To: <4555858.OsurRviKc2@vbart-laptop> References: <4555858.OsurRviKc2@vbart-laptop> Message-ID: <20735b0b84af0e4abb7915bfe7ec6291.NginxMailingListEnglish@forum.nginx.org> Hello. ???????? ???????? Wrote: ------------------------------------------------------- > There's no need in access_log in the stream module. Connections are > recorded > on the "info" level in error_log. Thank you. I didn't notice it. 
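For anyone else who misses it like I did: the per-connection records only show up once the error_log severity is raised to "info", e.g. something like

    error_log  /var/log/nginx/error.log  info;

at the main level (the path here is just an example; depending on the version the directive may also be accepted directly inside the stream block).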
Regards, -- Tatsuhiko Kubo Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264641,264653#msg-264653 From lenaigst at maelenn.org Sat Feb 20 05:42:27 2016 From: lenaigst at maelenn.org (Thierry) Date: Sat, 20 Feb 2016 07:42:27 +0200 Subject: Key pinning / Nginx reverse proxy Message-ID: <1192224646.20160220074227@maelenn.org> Dear all, I have installed few weeks ago the couple Nginx/Apache2. Nginx: front end - reverse proxy Apache2: Back end - web server Things are working smoothly ... But I am a bit lost concerning the config of the Key pinning. When testing my SSL config through the web site: https://www.ssllabs.com/ssltest/ I am A+ with HSTS on. But I am not able to validate the key pinning. I am moving around for 4 days now .... Where to put it ? I have tried: On Nginx: - nginx.conf - conf.d/*.conf On Apache2 - VirtualHost - apache2.conf Thx -- Cordialement, Thierry e-mail : lenaigst at maelenn.org PGP Key: 0xB7E3B9CD From atif.ali at gmail.com Sat Feb 20 06:37:11 2016 From: atif.ali at gmail.com (aT) Date: Sat, 20 Feb 2016 10:37:11 +0400 Subject: Need help with Nginx Logging In-Reply-To: References: Message-ID: excess: 10.200 in above log means that the requests are being limited as they are averaging at 10.2 requests / second . Storage size is defined with a unit , For example zone=one:10m . This means 10 MB . I hope it helped. On Sat, Feb 20, 2016 at 12:54 AM, Muhammad Rehan wrote: > Hi, I am still looking for reply. Does anyone know about this? > On Feb 18, 2016 12:33 AM, "Muhammad Rehan" > wrote: > >> Hey folks, >> >> I would like to ask what does the 10.200 right after 'excess:' indicate >> in the following log? Is this storage size in MBs for zone? I am really >> confused about that; I have gone through documentation pretty well but not >> able to find these attributes used in error logs. >> >> *2014/11/20 17:28:47 [error] 30347#0: *55 limiting requests, excess: >>> 10.200 by zone ?search?, client: 10.170.2.23, >>> server: http://www.example.com , request: ?GET >>> /search/results/?keyword= HTTP/1.1?, host: ?* >> >> >> I will be waiting for your response. Any help regarding this would be >> appreciated! >> >> Thanks, >> Rehan >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Syed Atif Ali Desk: 971 4 4493131 -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Feb 20 08:58:25 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 20 Feb 2016 08:58:25 +0000 Subject: dynamic server name and aliases In-Reply-To: References: Message-ID: <20160220085825.GE2835@daoine.org> On Fri, Feb 19, 2016 at 02:22:35AM -0500, spyfox wrote: Hi there, > server { > listen *:80; > server_name ~^i(?\d+)\.domain$; > ... Presumably later in this config, you do something with $instanceId, and you have a way to handle it not being set. > It works fine. I have single config to handle all requests for all > subdomains. > > I need to allow users specify custom domains as aliases, e.g.: > > i1.domain, super-user.com, another-domain.net -> user #1 > i2.domain, custom.domain -> user #2 > > How can I add aliases without copy/paste config for each user? Use "map" to turn your list of domains into the appropriate instanceId. Then change your server_name directive not to set that variable. 
Something like: map $host $instanceId { hostnames; default ""; ~^i(?P\d+)\.domain$ $a; super-user.com 1; another-domain.net 1; custom.domain 2; } server { listen *:80 default; server_name whatever; ... # use $instanceId if it is set } You could also set the default $instanceId to a particular value that means "not set", and have the rest of the system use that value for an "unknown" account. The alternative approach would be to have a separate server{} block for each account, with lots of duplication that is handled by your config-generator-from-template system. But f you want a single server{} block, the above is probably the simplest way. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Feb 20 09:07:33 2016 From: nginx-forum at forum.nginx.org (rsclmumbai) Date: Sat, 20 Feb 2016 04:07:33 -0500 Subject: Block direct access to .inc files not working Message-ID: <897437a05c903aa4b1d251428f9a3f41.NginxMailingListEnglish@forum.nginx.org> I've a nginx server on CentOS with PHP setup. I'm trying to block direct access to .inc files I've added the following to nginx.conf location ~ /\.inc { deny all; } & restarted nginx. But I still able to access .inc files like https://mydomain,com/includes/config.inc Pls suggest what I may be missing or how else to fix this issue. I Need to block access to all .inc files. Thanks Sans Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264663,264663#msg-264663 From muhammadrehan69 at gmail.com Sat Feb 20 09:11:38 2016 From: muhammadrehan69 at gmail.com (Muhammad Rehan) Date: Sat, 20 Feb 2016 14:11:38 +0500 Subject: Need help with Nginx Logging In-Reply-To: References: Message-ID: Thanks for the reply, I have setup the directives such that rate is 30r/s and burst is 10. So, my question is; why are these requests being limited when averaging at 10.2 while I have set the rate to 30r/s in req_limit_zone? On Feb 20, 2016 11:37 AM, "aT" wrote: > excess: 10.200 in above log means that the requests are being limited as > they are averaging at 10.2 requests / second . > > Storage size is defined with a unit , For example zone=one:10m . This > means 10 MB . > > I hope it helped. > > > > On Sat, Feb 20, 2016 at 12:54 AM, Muhammad Rehan < > muhammadrehan69 at gmail.com> wrote: > >> Hi, I am still looking for reply. Does anyone know about this? >> On Feb 18, 2016 12:33 AM, "Muhammad Rehan" >> wrote: >> >>> Hey folks, >>> >>> I would like to ask what does the 10.200 right after 'excess:' indicate >>> in the following log? Is this storage size in MBs for zone? I am really >>> confused about that; I have gone through documentation pretty well but not >>> able to find these attributes used in error logs. >>> >>> *2014/11/20 17:28:47 [error] 30347#0: *55 limiting requests, excess: >>>> 10.200 by zone ?search?, client: 10.170.2.23, >>>> server: http://www.example.com , request: ?GET >>>> /search/results/?keyword= HTTP/1.1?, host: ?* >>> >>> >>> I will be waiting for your response. Any help regarding this would be >>> appreciated! >>> >>> Thanks, >>> Rehan >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Syed Atif Ali > Desk: 971 4 4493131 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dewanggaba at xtremenitro.org Sat Feb 20 09:21:50 2016 From: dewanggaba at xtremenitro.org (Dewangga Alam) Date: Sat, 20 Feb 2016 16:21:50 +0700 Subject: Block direct access to .inc files not working In-Reply-To: <897437a05c903aa4b1d251428f9a3f41.NginxMailingListEnglish@forum.nginx.org> References: <897437a05c903aa4b1d251428f9a3f41.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56C8302E.5050009@xtremenitro.org> Hello! On 2/20/2016 4:07 PM, rsclmumbai wrote: > I've a nginx server on CentOS with PHP setup. > I'm trying to block direct access to .inc files > > I've added the following to nginx.conf > > location ~ /\.inc > { > deny all; > } > Have you try : location ~ \.inc { deny all; } > & restarted nginx. > > But I still able to access .inc files like > https://mydomain,com/includes/config.inc > > Pls suggest what I may be missing or how else to fix this issue. I Need to > block access to all .inc files. > > Thanks > Sans > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264663,264663#msg-264663 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Sat Feb 20 09:36:17 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 20 Feb 2016 09:36:17 +0000 Subject: Need help with Nginx Logging In-Reply-To: References: Message-ID: <20160220093617.GG2835@daoine.org> On Sat, Feb 20, 2016 at 02:11:38PM +0500, Muhammad Rehan wrote: Hi there, > Thanks for the reply, I have setup the directives such that rate is 30r/s > and burst is 10. So, my question is; why are these requests being limited > when averaging at 10.2 while I have set the rate to 30r/s in req_limit_zone? How many requests do your log files say happened in the previous few seconds? About 10 per second, or about 40 per second, or something else? "excess" means "above the limit". (Reading the log file is not perfect - slow responses will distort what it appears to show. But if everything is logged, and responses take roughly the same amount of time, you should be able to see whether your actual rate is close to "burst", or close to "rate+burst".) > On Feb 20, 2016 11:37 AM, "aT" wrote: > > > excess: 10.200 in above log means that the requests are being limited as > > they are averaging at 10.2 requests / second . ...more than the configured limit. f -- Francis Daly francis at daoine.org From sca at andreasschulze.de Sat Feb 20 11:10:16 2016 From: sca at andreasschulze.de (A. Schulze) Date: Sat, 20 Feb 2016 12:10:16 +0100 Subject: Key pinning / Nginx reverse proxy In-Reply-To: <1192224646.20160220074227@maelenn.org> Message-ID: <20160220121016.Horde.aO2LcMeFASysOICm80mPA3t@andreasschulze.de> Thierry: > Nginx: front end - reverse proxy > Apache2: Back end - web server hpkp is an header served to the client as response to an https request I would add the Public-Key-Pins on the instance terminating the HTTPS request. without rproxy I have this in /etc/nginx/sites-enabled/example.org server { listen *:443 ssl http2; server_name example.org; ssl_certificate /etc/ssl/example.org/cert+intermediate.pem; ssl_certificate_key /etc/ssl/example.org/key.pem; ssl_stapling_file /etc/ssl/example.org/ocsp.response; add_header Public-Key-Pins "max-age=42424242; pin-sha256=\"..pin1...\"; pin-sha256=\"..pin2...\";"; ... 
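        # (the "..pin1.." / "..pin2.." values above are placeholders; each pin is the
        # base64-encoded SHA-256 of a certificate's Subject Public Key Info, which can
        # be computed with something like:
        #   openssl x509 -pubkey -noout -in cert.pem \
        #     | openssl pkey -pubin -outform der \
        #     | openssl dgst -sha256 -binary \
        #     | openssl enc -base64
        # the header should also carry at least one backup pin for a key that is not
        # deployed yet)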
} Andreas From ahutchings at nginx.com Sat Feb 20 15:11:52 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Sat, 20 Feb 2016 15:11:52 +0000 Subject: SSL handshake errors when configured as a reverse proxy In-Reply-To: <56C75B37.2070405@gmail.com> References: <56C75B37.2070405@gmail.com> Message-ID: <56C88238.7090807@nginx.com> Hi Josh, There are bugs in OpenSSL 1.0.1e that could trigger this which is why I asked. The two other things I would suggest trying are: 1. Look again at your cipher list, missing important ones out can trigger this error, especially with ssl_prefer_server_ciphers set. Judging by the quick skim looking at them they don't look correct for the ssl_protocols you have chosen. Using the defaults should be fine for most cases. 2. Upgrading to a supported version of NGINX, there have been many SSL related bug fixes since then (although I don't think any match your specific case). Our own apt repositories found on nginx.org have more current versions in them. Kind Regards Andrew On 19/02/16 18:13, Josh Jaques wrote: > Hi Andrew, > > To clarify the setup earlier, > > I continued to use the Ubuntu compiled version of NGINX from apt-get. > > The specific procedure I used to change the lib that NGINX would load > was by replacing the libssl.so.1.0.0 and libcrypto.so.1.0.0 files in the > path referenced by ldd for the NGINX binary with ones compiled from source. > > When I switched to Apache, I reverted the system to the package manger > versions of libssl and openssl. It continues to be in production without > producing any handshake errors. > > So from my end there doesn't seem to be any evidence to support OpenSSL > version ever being an issue, because Apache works fine using the same > version of OpenSSL that we initially experienced the problem with in NGINX. > > The Apache version I am running is also the default from apt-get. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Sun Feb 21 06:20:08 2016 From: nginx-forum at forum.nginx.org (rsclmumbai) Date: Sun, 21 Feb 2016 01:20:08 -0500 Subject: Block direct access to .inc files not working In-Reply-To: <56C8302E.5050009@xtremenitro.org> References: <56C8302E.5050009@xtremenitro.org> Message-ID: @dewanggaba Perfect. That worked like a charm. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264663,264680#msg-264680 From al-nginx at none.at Sun Feb 21 07:07:06 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 21 Feb 2016 08:07:06 +0100 Subject: Feature Request for access_log stdout; In-Reply-To: <56C6D579.8090603@nginx.com> References: <1496160.3FZLdlZy0q@vbart-workstation> <56C587F2.9030407@nginx.com> <56C58BFD.5080905@nginx.com> <1c1f76e5dd6fb0e018bbee2a23a79aab@none.at> <2543dd410b323c8ee8413760777265f7@none.at> <56C6D579.8090603@nginx.com> Message-ID: <1b7a710beeb45c3a1f7f65baf4ad3ba2@none.at> Hi. Am 19-02-2016 09:42, schrieb Konstantin Pavlov: > On 19/02/2016 02:56, Aleksandar Lazic wrote: [snipp env errors] >> >> Any plans to add this possibility or have I missed something? > > Please check the following: > > https://hub.docker.com/_/nginx/, section "using environment variables > in > nginx configuration". You will need nginx:1.9.10 docker tag or later. Thanks for the input ;-) Now I was able to create a more production ready setup for nginx and Openshift. 
https://github.com/git001/nginx-osev3 BR Aleks From lenaigst at maelenn.org Sun Feb 21 08:22:31 2016 From: lenaigst at maelenn.org (Thierry) Date: Sun, 21 Feb 2016 10:22:31 +0200 Subject: Key pinning / Nginx reverse proxy In-Reply-To: <20160220121016.Horde.aO2LcMeFASysOICm80mPA3t@andreasschulze.de> References: <1192224646.20160220074227@maelenn.org> <20160220121016.Horde.aO2LcMeFASysOICm80mPA3t@andreasschulze.de> Message-ID: <1328587712.20160221102231@maelenn.org> Dear Andreas, Thx for your help, but I still do have the same problem. Public Key Pinning (HPKP) No I don't know what to do anymore ... Thierry Le samedi 20 f?vrier 2016 ? 13:10:16, vous ?criviez : > Thierry: >> Nginx: front end - reverse proxy >> Apache2: Back end - web server > hpkp is an header served to the client as response to an https request > I would add the Public-Key-Pins on the instance terminating the HTTPS request. > without rproxy I have this in /etc/nginx/sites-enabled/example.org > server { > listen *:443 ssl http2; > server_name example.org; > ssl_certificate > /etc/ssl/example.org/cert+intermediate.pem; > ssl_certificate_key /etc/ssl/example.org/key.pem; > ssl_stapling_file /etc/ssl/example.org/ocsp.response; > add_header Public-Key-Pins "max-age=42424242; > pin-sha256=\"..pin1...\"; pin-sha256=\"..pin2...\";"; > ... > } > Andreas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Cordialement, Thierry e-mail : lenaigst at maelenn.org From francis at daoine.org Sun Feb 21 08:37:33 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Feb 2016 08:37:33 +0000 Subject: Key pinning / Nginx reverse proxy In-Reply-To: <1328587712.20160221102231@maelenn.org> References: <1192224646.20160220074227@maelenn.org> <20160220121016.Horde.aO2LcMeFASysOICm80mPA3t@andreasschulze.de> <1328587712.20160221102231@maelenn.org> Message-ID: <20160221083733.GH2835@daoine.org> On Sun, Feb 21, 2016 at 10:22:31AM +0200, Thierry wrote: Hi there, > Thx for your help, but I still do have the same problem. > > Public Key Pinning (HPKP) No > > I don't know what to do anymore ... curl -I https://your-server/your-test-url Every line in that response comes from your nginx config (possibly including defaults) or your back-end config (passed through). Do you see a "Public-Key-Pins:" line? If so, does it have the content that you expect? If not, what part of your nginx config processed the request; and does that part have the add_header directive that you want? If this is a public web server without any special authentications, then the curl response contains no secrets. f -- Francis Daly francis at daoine.org From lenaigst at maelenn.org Sun Feb 21 09:23:02 2016 From: lenaigst at maelenn.org (Thierry) Date: Sun, 21 Feb 2016 11:23:02 +0200 Subject: Key pinning / Nginx reverse proxy In-Reply-To: <20160221083733.GH2835@daoine.org> References: <1192224646.20160220074227@maelenn.org> <20160220121016.Horde.aO2LcMeFASysOICm80mPA3t@andreasschulze.de> <1328587712.20160221102231@maelenn.org> <20160221083733.GH2835@daoine.org> Message-ID: <1897618762.20160221112302@maelenn.org> Dear sir, After I have executed the curl command, it seems that I have an answer from my Apache2 back end server (apache2.conf) Yes I do see the "Public-Key-Pins:" line... And yes I do have the content that I expect. 
Public-Key-Pins: pin-sha256="DZNsRcNIolhfdouihfazormhrfozef=";pin-sha256="633ltusrlsqhoagfdgfo79xMD9r9Q="; max-age=2592000; includeSubDomains But, is it really the output of Apache2 ? There is a syntax difference between Nginx and Apache2: Nginx: pin-sha256="DZNsRcNIoiVdK8Img794j8/XGf4+6sDLFjADPWWOddw="; Apache2: pin-sha256=\"DZNsRcNIoirupeqrhfjpzehfrhfaefhpazf=\"; When the curl command return me the result, I can see that there is no "\" ... Is it normal ? If yes, why is "ssllabs.com/ssltest" doesn't see anything concerning the HPKP ? Thx Le dimanche 21 f?vrier 2016 ? 10:37:33, vous ?criviez : > On Sun, Feb 21, 2016 at 10:22:31AM +0200, Thierry wrote: > Hi there, >> Thx for your help, but I still do have the same problem. >> >> Public Key Pinning (HPKP) No >> >> I don't know what to do anymore ... > curl -I https://your-server/your-test-url > Every line in that response comes from your nginx config (possibly > including defaults) or your back-end config (passed through). > Do you see a "Public-Key-Pins:" line? > If so, does it have the content that you expect? > If not, what part of your nginx config processed the request; and does > that part have the add_header directive that you want? > If this is a public web server without any special authentications, > then the curl response contains no secrets. > f -- Cordialement, Thierry e-mail : lenaigst at maelenn.org From francis at daoine.org Sun Feb 21 09:49:50 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Feb 2016 09:49:50 +0000 Subject: Key pinning / Nginx reverse proxy In-Reply-To: <1897618762.20160221112302@maelenn.org> References: <1192224646.20160220074227@maelenn.org> <20160220121016.Horde.aO2LcMeFASysOICm80mPA3t@andreasschulze.de> <1328587712.20160221102231@maelenn.org> <20160221083733.GH2835@daoine.org> <1897618762.20160221112302@maelenn.org> Message-ID: <20160221094950.GI2835@daoine.org> On Sun, Feb 21, 2016 at 11:23:02AM +0200, Thierry wrote: Hi there, > After I have executed the curl command, it seems that I have an answer > from my Apache2 back end server (apache2.conf) > Yes I do see the "Public-Key-Pins:" line... And yes I do have the > content that I expect. That's good. How do you know what content to expect? > Public-Key-Pins: pin-sha256="DZNsRcNIolhfdouihfazormhrfozef=";pin-sha256="633ltusrlsqhoagfdgfo79xMD9r9Q="; max-age=2592000; includeSubDomains What is the actual sha256 of the certificate that the browser receives? Is it one of the two above? The details are in RFC7469. https://tools.ietf.org/html/rfc7469#appendix-A gives an example of how you mind find it. > But, is it really the output of Apache2 ? There is a syntax difference > between Nginx and Apache2: Should it be the output of Apache2? Your browser is speaking https to nginx. It should only see the pinning information from nginx. The browser never sees the Apache certificate, and so should not see anything related to it. > Nginx: pin-sha256="DZNsRcNIoiVdK8Img794j8/XGf4+6sDLFjADPWWOddw="; > Apache2: pin-sha256=\"DZNsRcNIoirupeqrhfjpzehfrhfaefhpazf=\"; I suspect that only one of those is valid in the response header. https://tools.ietf.org/html/rfc7469#section-2.1.5 suggests that the backslashes are unnecessary. (Note that neither of those sha256 values match the ones in the response header. What is actually written in your nginx.conf, and what is the actual response you get from curl? If they are different, you have more investigating to do.) > When the curl command return me the result, I can see that there is > no "\" ... Is it normal ? 
I think "yes". > If yes, why is "ssllabs.com/ssltest" doesn't see anything concerning > the HPKP ? Is there any documentation on the ssllabs.com site about what they are testing? Can you see, does "HPKP: No" distinguish between: * no Public-Key-Pins header returned * Public-Key-Pins header found, but with invalid formatting * valid Public-Key-Pins header found, but without the sha256 of the current certificate Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Feb 21 16:40:36 2016 From: nginx-forum at forum.nginx.org (maziar) Date: Sun, 21 Feb 2016 11:40:36 -0500 Subject: Nginx and pseudo-streaming Message-ID: <1113e1e8c4e7707485b7008613cf05c2.NginxMailingListEnglish@forum.nginx.org> I use this modules for compiling nginx for streaming videos on my web site by these options : nginx version: nginx/1.8.1 built by gcc 4.9.2 (Debian 4.9.2-10) built with OpenSSL 1.0.1k 8 Jan 2015 TLS SNI support enabled configure arguments: --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-http_dav_module --with-http_flv_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_mp4_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --user=www-data --group=www-data --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_stub_status_module --with-http_spdy_module --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --with-http_secure_link_module --add-module=naxsi-master/naxsi_src/ --with-http_gunzip_module --with-file-aio --with-http_addition_module --with-http_random_index_module --add-module=ngx_cache_purge-2.3/ --with-http_degradation_module --with-http_auth_request_module --with-pcre --with-google_perftools_module --with-debug --http-client-body-temp-path=/var/lib/nginx/client --add-module=nginx-rtmp-module-master/ --add-module=headers-more-nginx-module-master --add-module=nginx-vod-module-master/ --add-module=/home/user/M/ngx_pagespeed-release-1.9.32.2-beta/ and also i buy jwplayer for serving video on my site, and this is my configuration on Nginx for pseudo streaming on nginx.conf : location videos/ { flv; mp4; mp4_buffer_size 4M; mp4_max_buffer_size 10M; limit_rate 260k; limit_rate_after 3m; #mp4_limit_rate_after 30s;} but when i do this : http://172.16.1.2/videos/a.mp4?start=33 video started from beginning , whats wrong in my configuration? what should i do ? 
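For reference, the documented form of such a block is roughly

    location /video/ {
        mp4;
        mp4_buffer_size      1m;
        mp4_max_buffer_size  5m;
    }

(note the leading "/" on the location prefix; I am not sure whether that difference from my "location videos/" above matters).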
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264691,264691#msg-264691 From nginx-forum at forum.nginx.org Mon Feb 22 01:52:33 2016 From: nginx-forum at forum.nginx.org (dust2) Date: Sun, 21 Feb 2016 20:52:33 -0500 Subject: Block direct access to .inc files not working In-Reply-To: <897437a05c903aa4b1d251428f9a3f41.NginxMailingListEnglish@forum.nginx.org> References: <897437a05c903aa4b1d251428f9a3f41.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6171fcc19cf12ddf2f46bb141a645dfe.NginxMailingListEnglish@forum.nginx.org> You should block access in php file. For example: You define using defined ( 'ACCESS'... in any file you want direct access and put the line below to every php files: defined ( 'ACCESS' ) or die ( 'Restricted Access' ); Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264663,264694#msg-264694 From nginx-forum at forum.nginx.org Mon Feb 22 08:52:42 2016 From: nginx-forum at forum.nginx.org (dr01) Date: Mon, 22 Feb 2016 03:52:42 -0500 Subject: Opposite behavior of keepalive (nginx reverse proxy on ElasticSearch) Message-ID: <1cef0cd3a9b6cf8cb07fc109129b289c.NginxMailingListEnglish@forum.nginx.org> I am setting up a nginx reverse proxy for ElasticSearch (with HTTP Basic Auth) as described in this article: https://www.elastic.co/blog/playing-http-tricks-nginx This is my nginx config file: events { worker_connections 1024; } http { upstream elasticsearch { server elasticsearch.example.org:9200; keepalive 64; } server { listen 8080; location / { auth_basic "ElasticSearch"; auth_basic_user_file /var/www/.htpasswd; proxy_pass http://elasticsearch.example.org:9200; proxy_http_version 1.1; proxy_set_header Connection "Keep-Alive"; proxy_set_header Proxy-Connection "Keep-Alive"; } } } The proxy correctly forwards port 8080 to 9200, and is supposed to keep persistent connections (keepalive) to Elasticsearch. This is the result of visiting either the URL http://elasticsearch.example.org:9200/_nodes/stats/http?pretty or http://elasticsearch.example.org:8080/_nodes/stats/http?pretty (HTTP authentication has already been done) in a browser: { "cluster_name" : "elasticsearch", "nodes" : { "rIFmzNwsRvGp8kipbcwajw" : { "timestamp" : 1455899085319, "name" : "Kid Colt", "transport_address" : "elasticsearch.example.org/10.3.3.3:9300", "host" : "10.3.3.3", "ip" : [ "elasticsearch.example.org/10.3.3.3:9300", "NONE" ], "http" : { "current_open" : 3, "total_opened" : 28 } } } } When visiting the page on port 9200 (direct connection to Elasticsearch) and reloading, the field total_opened is supposed to increase, while when visiting on port 8080 (through the nginx proxy) and reloading, the field should not change. In fact, it happens the opposite. What is the reason of this strange behavior? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264702,264702#msg-264702 From nginx-forum at forum.nginx.org Mon Feb 22 11:46:30 2016 From: nginx-forum at forum.nginx.org (txt3rob) Date: Mon, 22 Feb 2016 06:46:30 -0500 Subject: .XML running PHP getting 403 Message-ID: <416ebaa8653e06e71cd7b958585f1095.NginxMailingListEnglish@forum.nginx.org> Hi, I want a PHP file to have the extension of .xml but when i do the below when accessing the php file via web browser i get a access denied. location ~ ^(.+\.xml)(/.*)?$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi.conf; } do i need to add anything like allow or something to this block? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264704,264704#msg-264704 From emeulie at gmail.com Mon Feb 22 11:56:01 2016 From: emeulie at gmail.com (Evert Meulie) Date: Mon, 22 Feb 2016 12:56:01 +0100 Subject: Building from source - Where is ./configure ? Message-ID: Hi all, I have obtained the sources via git from https://github.com/nginx/nginx , but seem to be missing a 'configure' script. Where/how do I get that? -- Greetings, Evert -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon Feb 22 11:58:37 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 22 Feb 2016 14:58:37 +0300 Subject: Building from source - Where is ./configure ? In-Reply-To: References: Message-ID: <56CAF7ED.4080406@nginx.com> On 2/22/16 2:56 PM, Evert Meulie wrote: > Hi all, > > I have obtained the sources via git > from https://github.com/nginx/nginx , but seem to be missing a > 'configure' script. > > Where/how do I get that? > http://hg.nginx.org/nginx/file/tip/auto/configure -- Maxim Konovalov From jim at ohlste.in Mon Feb 22 13:09:23 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Mon, 22 Feb 2016 08:09:23 -0500 Subject: .XML running PHP getting 403 In-Reply-To: <416ebaa8653e06e71cd7b958585f1095.NginxMailingListEnglish@forum.nginx.org> References: <416ebaa8653e06e71cd7b958585f1095.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56CB0883.8040909@ohlste.in> Hello, On 2/22/16 6:46 AM, txt3rob wrote: > Hi, > > I want a PHP file to have the extension of .xml but when i do the below when > accessing the php file via web browser i get a access denied. > > location ~ ^(.+\.xml)(/.*)?$ { > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi.conf; > } > > do i need to add anything like allow or something to this block? > You can uncomment "security.limit_extensions" in php-fpm.conf (or whatever your OS uses to configure php-fpm and its pools) and set it there, like: security.limit_extensions = .php .xml -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From nginx-forum at forum.nginx.org Mon Feb 22 15:59:03 2016 From: nginx-forum at forum.nginx.org (reaman) Date: Mon, 22 Feb 2016 10:59:03 -0500 Subject: PRB custom error page Message-ID: <0c52258a29776e6bcc25264dfdd5eb2b.NginxMailingListEnglish@forum.nginx.org> Hi, I want to custom personnal page and I make the page error.html in (/usr/share/nginx/html/error.html) and in the file rutorrent.conf in (/etc/nginx/sites-enabled/rutorrent.conf) I ajout this lines First test: server { ## redirection url ## listen 80; ............................... error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 500 501 502 503 504 505 506 507 /error.html; location = /error.html { root /usr/share/nginx/html; # internal; } and second test server { ## redirection url ## listen 80; .................................................. error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 500 501 502 503 504 505 506 507 /error.html; location = /error.html { try_files /error.html @error; internal; } ## Fallback Directory location @error { root /usr/share/nginx/html; } No fonction, no possible to see my personal page error.html when have the error, why ??? 
I no know :( Thx for you help Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264712,264712#msg-264712 From ahutchings at nginx.com Mon Feb 22 16:50:58 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Mon, 22 Feb 2016 16:50:58 +0000 Subject: Nginx and pseudo-streaming In-Reply-To: <1113e1e8c4e7707485b7008613cf05c2.NginxMailingListEnglish@forum.nginx.org> References: <1113e1e8c4e7707485b7008613cf05c2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56CB3C72.10902@nginx.com> Hi Maziar, On 21/02/16 16:40, maziar wrote: > but when i do this : http://172.16.1.2/videos/a.mp4?start=33 > > video started from beginning , whats wrong in my configuration? what should > i do ? Have you done "?start=0" first to read the metadata? From the manual: http://nginx.org/en/docs/http/ngx_http_mp4_module.html "To start playback, the player first needs to read metadata. This is done by sending a special request with the start=0 argument." Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From fahimehashrafy at gmail.com Mon Feb 22 17:28:36 2016 From: fahimehashrafy at gmail.com (Fahimeh Ashrafy) Date: Mon, 22 Feb 2016 20:58:36 +0330 Subject: config nginx cache Message-ID: Hello all how to config nginx to ac as both web server and cache server? I used these links but did not work https://serversforhackers.com/nginx-caching/ https://www.nginx.com/blog/nginx-caching-guide/ Thanks all -------------- next part -------------- An HTML attachment was scrubbed... URL: From fahimehashrafy at gmail.com Tue Feb 23 04:40:38 2016 From: fahimehashrafy at gmail.com (Fahimeh Ashrafy) Date: Tue, 23 Feb 2016 08:10:38 +0330 Subject: config nginx cache In-Reply-To: References: Message-ID: no help? On Mon, Feb 22, 2016 at 8:58 PM, Fahimeh Ashrafy wrote: > Hello all > how to config nginx to ac as both web server and cache server? > I used these links but did not work > https://serversforhackers.com/nginx-caching/ > https://www.nginx.com/blog/nginx-caching-guide/ > > Thanks all > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Feb 23 07:53:47 2016 From: r at roze.lv (Reinis Rozitis) Date: Tue, 23 Feb 2016 09:53:47 +0200 Subject: config nginx cache In-Reply-To: References: Message-ID: <516807DCDE26441BB55277DE04440DEB@NeiRoze> > no help? You have to be more specific. Saying "but did not work" doesn't tell what's the problem/issue - what/how did you test and what was the expected result? Also providing the relevant configuration parts speeds up the process. rr From nginx-forum at forum.nginx.org Tue Feb 23 15:09:11 2016 From: nginx-forum at forum.nginx.org (dontknowwhoiam) Date: Tue, 23 Feb 2016 10:09:11 -0500 Subject: Nginx and conditional proxy_next_upstream directive In-Reply-To: <9134c14da4f9ea11d0caf53699756b1f.NginxMailingListEnglish@forum.nginx.org> References: <9134c14da4f9ea11d0caf53699756b1f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <46f08a22a254ec06b31313de86eb4b38.NginxMailingListEnglish@forum.nginx.org> Is this question to stupid or has nobody an answer on it? ;) I just think a POSTs are not idempotent and should never be repeated for a technical reason. Is there any change to configure nginx in a way to try the next upstream only if the first one really failed when using POST requests? Timed out requests on one upstream could potentially still be processed by an application. 
Handing this request over to the next upstream must not cause the same result as the first upstream since POSTs are not idempotent. Any help would be appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264486,264734#msg-264734 From tolga.ceylan at gmail.com Wed Feb 24 01:41:10 2016 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Tue, 23 Feb 2016 17:41:10 -0800 Subject: Nginx and conditional proxy_next_upstream directive In-Reply-To: <46f08a22a254ec06b31313de86eb4b38.NginxMailingListEnglish@forum.nginx.org> References: <9134c14da4f9ea11d0caf53699756b1f.NginxMailingListEnglish@forum.nginx.org> <46f08a22a254ec06b31313de86eb4b38.NginxMailingListEnglish@forum.nginx.org> Message-ID: FYI, possible related issue on nginx-dev mail list: https://forum.nginx.org/read.php?29,264637,264637#msg-264637 From nginx-forum at forum.nginx.org Wed Feb 24 05:26:29 2016 From: nginx-forum at forum.nginx.org (keeyong) Date: Wed, 24 Feb 2016 00:26:29 -0500 Subject: nginx routing based on ip address Message-ID: <01476bff08fc114219086a9c0cba983e.NginxMailingListEnglish@forum.nginx.org> I am wondering if it is possible to compute some kind of hash value from ip address or do modulo operation on the last numeric value from ip address (for example on 24 given 172.16.4.24)? Based on this, I want to send the request to different endpoints (not to different servers). If this can be done without installing any new module, that would be the best. As far as I can see, I feel like I should do something similar using the regex? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264742,264742#msg-264742 From nginx-forum at forum.nginx.org Wed Feb 24 14:17:01 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Wed, 24 Feb 2016 09:17:01 -0500 Subject: Workers CPU leak [epoll_wait,epoll_ctl] Message-ID: Hello, I have strange issuses with nginx workers. For some time after start Nginx I notice that some process of workers cause high load to CPU ( principally sys CPU). At first I've got syscall traces from one of such process: futex(0x157d914, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x157d910, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 epoll_wait(38, {{EPOLLIN, {u32=7156096, u64=7156096}}}, 512, -1) = 1 epoll_ctl(38, EPOLL_CTL_ADD, 178, {EPOLLOUT|EPOLLET, {u32=3888102096, u64=140028411886288}}) = 0 epoll_wait(38, {{EPOLLOUT, {u32=3888102096, u64=140028411886288}}}, 512, -1) = 1 epoll_ctl(38, EPOLL_CTL_DEL, 178, 7ffda2bc7f30) = 0 futex(0x157d914, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x157d910, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 epoll_wait(38, {{EPOLLIN, {u32=7156096, u64=7156096}}}, 512, -1) = 1 epoll_ctl(38, EPOLL_CTL_ADD, 178, {EPOLLOUT|EPOLLET, {u32=3888102096, u64=140028411886288}}) = 0 epoll_wait(38, {{EPOLLOUT, {u32=3888102096, u64=140028411886288}}}, 512, -1) = 1 epoll_ctl(38, EPOLL_CTL_DEL, 178, 7ffda2bc7f30) = 0 futex(0x157d914, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x157d910, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0x157d8d0, FUTEX_WAKE_PRIVATE, 1) = 1 epoll_wait, epoll_ctl, futex are repeated circularly. 
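(If it helps, I can also attach to one of the busy workers for a few seconds with something like "timeout 10 strace -c -p <worker pid>" to get per-syscall counts.)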
Then I've got lsof of process and see who owns of 38 file descriptor: nginx 18862 www 38u a_inode 0,9 0 6 [eventpoll] also I see several CLOSE_WAIT sockets nginx 18862 www 101u IPv4 85643376 0t0 TCP 154.59.82.194:http->105.107.179.210:24519 (CLOSE_WAIT) nginx 18862 www 133r REG 8,3 0 4743 /mnt/ssd1/wwwroot/71/7/27394667.mp4 (deleted) nginx 18862 www 178u IPv4 86054929 0t0 TCP 154.59.82.194:http->5adc98ed.bb.sky.com:45665 (CLOSE_WAIT) nginx 18862 www 179r REG 8,3 0 5098 /mnt/ssd1/wwwroot/21/9/29603499.mp4 (deleted) Nginx has such version and modules: nginx version: nginx/1.9.11 built with OpenSSL 1.0.2f 28 Jan 2016 TLS SNI support enabled configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include --with-ld-opt=-L/usr/lib64 --http-log-path=/var/log/nginx/access_log --http-client-body-temp-path=/var/lib/nginx/tmp/client --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --with-file-aio --with-ipv6 --with-pcre --with-threads --without-http_autoindex_module --without-http_fastcgi_module --without-http_geo_module --without-http_limit_req_module --without-http_limit_conn_module --without-http_memcached_module --without-http_uwsgi_module --with-http_flv_module --with-http_gzip_static_module --with-http_mp4_module --with-http_perl_module --add-module=external_module/headers-more-nginx-module-0.261 --add-module=external_module/ngx_estreaming_module-0.01 --add-module=external_module/ngx_slice_module-0.01 --with-http_ssl_module --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --user='www --group=www' and using for video streaming. Has anyone encountered such behavior ? Help please. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,264764#msg-264764 From nginx-forum at forum.nginx.org Wed Feb 24 14:39:04 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Wed, 24 Feb 2016 09:39:04 -0500 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: References: Message-ID: P.S: we are using Gentoo with 4.4.1 kernel and CPU X3330 @ 2.66GHz GenuineIntel GNU/Linux Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,264766#msg-264766 From mdounin at mdounin.ru Wed Feb 24 15:10:56 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Feb 2016 18:10:56 +0300 Subject: nginx-1.9.12 Message-ID: <20160224151056.GJ31796@mdounin.ru> Changes with nginx 1.9.12 24 Feb 2016 *) Feature: Huffman encoding of response headers in HTTP/2. Thanks to Vlad Krasnov. *) Feature: the "worker_cpu_affinity" directive now supports more than 64 CPUs. *) Bugfix: compatibility with 3rd party C++ modules; the bug had appeared in 1.9.11. Thanks to Piotr Sikora. *) Bugfix: nginx could not be built statically with OpenSSL on Linux; the bug had appeared in 1.9.11. *) Bugfix: the "add_header ... always" directive with an empty value did not delete "Last-Modified" and "ETag" header lines from error responses. *) Workaround: "called a function you should not call" and "shutdown while in init" messages might appear in logs when using OpenSSL 1.0.2f. *) Bugfix: invalid headers might be logged incorrectly. *) Bugfix: socket leak when using HTTP/2. *) Bugfix: in the ngx_http_v2_module. 
-- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Feb 24 15:25:35 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 24 Feb 2016 18:25:35 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: References: Message-ID: <3276415.0yY8t7h8rS@vbart-workstation> On Wednesday 24 February 2016 09:17:01 vizl wrote: > Hello, I have strange issuses with nginx workers. For some time after start > Nginx I notice that some process of workers cause high load to CPU ( > principally sys CPU). > > At first I've got syscall traces from one of such process: > > futex(0x157d914, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x157d910, {FUTEX_OP_SET, 0, > FUTEX_OP_CMP_GT, 1}) = 1 > epoll_wait(38, {{EPOLLIN, {u32=7156096, u64=7156096}}}, 512, -1) = 1 > epoll_ctl(38, EPOLL_CTL_ADD, 178, {EPOLLOUT|EPOLLET, {u32=3888102096, > u64=140028411886288}}) = 0 > epoll_wait(38, {{EPOLLOUT, {u32=3888102096, u64=140028411886288}}}, 512, -1) > = 1 > epoll_ctl(38, EPOLL_CTL_DEL, 178, 7ffda2bc7f30) = 0 > futex(0x157d914, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x157d910, {FUTEX_OP_SET, 0, > FUTEX_OP_CMP_GT, 1}) = 1 > epoll_wait(38, {{EPOLLIN, {u32=7156096, u64=7156096}}}, 512, -1) = 1 > epoll_ctl(38, EPOLL_CTL_ADD, 178, {EPOLLOUT|EPOLLET, {u32=3888102096, > u64=140028411886288}}) = 0 > epoll_wait(38, {{EPOLLOUT, {u32=3888102096, u64=140028411886288}}}, 512, -1) > = 1 > epoll_ctl(38, EPOLL_CTL_DEL, 178, 7ffda2bc7f30) = 0 > futex(0x157d914, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x157d910, {FUTEX_OP_SET, 0, > FUTEX_OP_CMP_GT, 1}) = 1 > futex(0x157d8d0, FUTEX_WAKE_PRIVATE, 1) = 1 > > epoll_wait, epoll_ctl, futex are repeated circularly. > > Then I've got lsof of process and see who owns of 38 file descriptor: > > nginx 18862 www 38u a_inode 0,9 0 6 > [eventpoll] > > also I see several CLOSE_WAIT sockets > > nginx 18862 www 101u IPv4 85643376 0t0 TCP > 154.59.82.194:http->105.107.179.210:24519 (CLOSE_WAIT) > nginx 18862 www 133r REG 8,3 0 4743 > /mnt/ssd1/wwwroot/71/7/27394667.mp4 (deleted) > nginx 18862 www 178u IPv4 86054929 0t0 TCP > 154.59.82.194:http->5adc98ed.bb.sky.com:45665 (CLOSE_WAIT) > nginx 18862 www 179r REG 8,3 0 5098 > /mnt/ssd1/wwwroot/21/9/29603499.mp4 (deleted) > > > Nginx has such version and modules: > > nginx version: nginx/1.9.11 > built with OpenSSL 1.0.2f 28 Jan 2016 > TLS SNI support enabled > configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid > --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include > --with-ld-opt=-L/usr/lib64 --http-log-path=/var/log/nginx/access_log > --http-client-body-temp-path=/var/lib/nginx/tmp/client > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --with-file-aio --with-ipv6 > --with-pcre --with-threads --without-http_autoindex_module > --without-http_fastcgi_module --without-http_geo_module > --without-http_limit_req_module --without-http_limit_conn_module > --without-http_memcached_module --without-http_uwsgi_module > --with-http_flv_module --with-http_gzip_static_module --with-http_mp4_module > --with-http_perl_module > --add-module=external_module/headers-more-nginx-module-0.261 > --add-module=external_module/ngx_estreaming_module-0.01 > --add-module=external_module/ngx_slice_module-0.01 --with-http_ssl_module > --without-mail_imap_module --without-mail_pop3_module > --without-mail_smtp_module --user='www --group=www' > > and using for 
video streaming. > > Has anyone encountered such behavior ? Help please. > [..] Could you provide a minimal configuration that is causing problems with debug log? See: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Wed Feb 24 15:41:32 2016 From: nginx-forum at forum.nginx.org (rodrigda) Date: Wed, 24 Feb 2016 10:41:32 -0500 Subject: Nginx Tuning. Message-ID: <03b41cbc41c8fe732d5cee6f55911ac5.NginxMailingListEnglish@forum.nginx.org> Alright here is the situtation. I have nginx and with passenger running. I can send load to the server and after a certain point I just start getting 500's back. I have not been able to see what is causing it. I have made tweaks to the config based on some blog posts but I can't get past a certain point. I get to about 1400 requests per minute and than start seeing 500s. The configs are below. It almost seems like I might be hitting an OS or server limit that I can't seem to find. The server is a 8CPU, 8GB cloud server. nginx.conf user www-data; worker_processes 8; worker_rlimit_nofile 30000; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 3500; use epoll; } http { log_format main '$status:$request_time:$upstream_response_time:$pipe:$body_bytes_sent $connection $ remote_addr $host $remote_user [$time_local] "$request" "$http_referer" "$http_user_agent" "$http_x_f orwarded_for" $upstream_addr $upstream_cache_status "in: $http_cookie"' include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_requests 100; keepalive_timeout 65; gzip on; gzip_http_version 1.0; gzip_comp_level 2; gzip_proxied any; gzip_vary off; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/rss+xm l application/atom+xml text/javascript application/javascript application/json text/mathml; gzip_min_length 1000; gzip_disable "MSIE [1-6]\."; variables_hash_max_size 1024; variables_hash_bucket_size 64; server_names_hash_bucket_size 64; types_hash_max_size 2048; types_hash_bucket_size 64; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } site.conf server { listen 0.0.0.0:80; location ~ ^/api/.* { include /etc/nginx/conf.d/include/restrict.include; } ## redirect http to https ## location ~ .* { return 301 https://$host$request_uri; } } server { listen 0.0.0.0:443; server_name somesite.net; ## set client body size to 15M, to address file upload limitations ## client_max_body_size 15M; include /etc/nginx/conf.d/actual_config/*.include; ssl on; ssl_certificate /etc/nginx/self_signed; ssl_certificate_key /etc/nginx/cert.key; # Display the maintenance page if it exists if (-f $document_root/system/maintenance.html){ rewrite ^(.*)$ /system/maintenance.html last; break; } # If the file exists as a static file serve it directly without # running all the other rewite tests on it if (-f $request_filename) { break; } # this is the meat of the rails page caching config # it adds .html to the end of the url and then checks # the filesystem for that file. If it exists, then we # rewite the url to have explicit .html on the end # and then send it on its way to the next config rule. 
# if there is no file on the fs then it sets all the # necessary headers and proxies to our upstream mongrels if (-f $request_filename.html) { rewrite (.*) $1.html break; } # Use any statically compressed javascript or stylesheet files location ~* ^/(javascripts|stylesheets)/.*\.(js|css) { #gzip_static on; } } passenger.conf passenger_root /opt/ruby/lib/ruby/gems/2.2.0/gems/passenger-enterprise-server-5.0.24; passenger_ruby /opt/ruby/bin/ruby; passenger_max_pool_size 31; passenger_min_instances 31; passenger_pre_start https://llocalhost; passenger_log_level 2; site.conf location / { passenger_enabled on; rails_env lt; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; passenger_base_uri /; alias /home/sites/site/current/public/$1; passenger_app_root /home/sites/site/current; index index.html index.htm; } #Asset displaying. location ~ ^/(assets)/ { gzip_static on; expires max; add_header Cache-Control public; root /home/sites/site/current/public; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264775,264775#msg-264775 From kworthington at gmail.com Wed Feb 24 16:16:03 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 24 Feb 2016 11:16:03 -0500 Subject: [nginx-announce] nginx-1.9.12 In-Reply-To: <20160224151101.GK31796@mdounin.ru> References: <20160224151101.GK31796@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.12 for Windows https://kevinworthington.com/nginxwin1912 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Wed, Feb 24, 2016 at 10:11 AM, Maxim Dounin wrote: > Changes with nginx 1.9.12 24 Feb > 2016 > > *) Feature: Huffman encoding of response headers in HTTP/2. > Thanks to Vlad Krasnov. > > *) Feature: the "worker_cpu_affinity" directive now supports more than > 64 CPUs. > > *) Bugfix: compatibility with 3rd party C++ modules; the bug had > appeared in 1.9.11. > Thanks to Piotr Sikora. > > *) Bugfix: nginx could not be built statically with OpenSSL on Linux; > the bug had appeared in 1.9.11. > > *) Bugfix: the "add_header ... always" directive with an empty value > did > not delete "Last-Modified" and "ETag" header lines from error > responses. > > *) Workaround: "called a function you should not call" and "shutdown > while in init" messages might appear in logs when using OpenSSL > 1.0.2f. > > *) Bugfix: invalid headers might be logged incorrectly. > > *) Bugfix: socket leak when using HTTP/2. > > *) Bugfix: in the ngx_http_v2_module. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Feb 24 16:57:37 2016 From: nginx-forum at forum.nginx.org (George) Date: Wed, 24 Feb 2016 11:57:37 -0500 Subject: nginx-1.9.12 In-Reply-To: <20160224151056.GJ31796@mdounin.ru> References: <20160224151056.GJ31796@mdounin.ru> Message-ID: <289037d21c70f7c0ca47c6643c638213.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim and Nginx ! But no love for LibreSSL users as Nginx 1.9.12 seems to broken compilation against LibreSSL 2.2.6 for me https://trac.nginx.org/nginx/ticket/908#ticket ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264770,264780#msg-264780 From anoopalias01 at gmail.com Wed Feb 24 17:02:19 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 24 Feb 2016 22:32:19 +0530 Subject: Nginx Tuning. In-Reply-To: <03b41cbc41c8fe732d5cee6f55911ac5.NginxMailingListEnglish@forum.nginx.org> References: <03b41cbc41c8fe732d5cee6f55911ac5.NginxMailingListEnglish@forum.nginx.org> Message-ID: You should check the nginx error log as it may have vital clues to resolve this. -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Feb 24 17:05:35 2016 From: nginx-forum at forum.nginx.org (rodrigda) Date: Wed, 24 Feb 2016 12:05:35 -0500 Subject: Nginx Tuning. In-Reply-To: References: Message-ID: <7e0693096f41d5a83c0721c54a629120.NginxMailingListEnglish@forum.nginx.org> I just keep seeing things like this. 2016/02/24 17:04:56 [info] 5989#0: *108120 client 127.0.0.1 closed keepalive connection When I get the 500 that is all I get. The location of the call that was getting made and a 500. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264775,264782#msg-264782 From mdounin at mdounin.ru Wed Feb 24 17:11:52 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Feb 2016 20:11:52 +0300 Subject: nginx-1.9.12 In-Reply-To: <289037d21c70f7c0ca47c6643c638213.NginxMailingListEnglish@forum.nginx.org> References: <20160224151056.GJ31796@mdounin.ru> <289037d21c70f7c0ca47c6643c638213.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160224171152.GP31796@mdounin.ru> Hello! On Wed, Feb 24, 2016 at 11:57:37AM -0500, George wrote: > Thanks Maxim and Nginx ! > > But no love for LibreSSL users as Nginx 1.9.12 seems to broken compilation > against LibreSSL 2.2.6 for me https://trac.nginx.org/nginx/ticket/908#ticket > ? It's not expected to work at all in the first place. LibreSSL is a different library with different compilation procedure, and nginx doesn't know how to compile it, so using it in --with-openssl= is incorrect. Compile it yourself instead. -- Maxim Dounin http://nginx.org/ From nriamir60 at gmail.com Wed Feb 24 17:35:05 2016 From: nriamir60 at gmail.com (Amir Alam) Date: Wed, 24 Feb 2016 09:35:05 -0800 Subject: nginx-1.9.12 In-Reply-To: <20160224171152.GP31796@mdounin.ru> References: <20160224151056.GJ31796@mdounin.ru> <289037d21c70f7c0ca47c6643c638213.NginxMailingListEnglish@forum.nginx.org> <20160224171152.GP31796@mdounin.ru> Message-ID: But sir I'm wine chips. On 24-Feb-2016 10:41 pm, "Maxim Dounin" wrote: > Hello! > > On Wed, Feb 24, 2016 at 11:57:37AM -0500, George wrote: > > > Thanks Maxim and Nginx ! > > > > But no love for LibreSSL users as Nginx 1.9.12 seems to broken > compilation > > against LibreSSL 2.2.6 for me > https://trac.nginx.org/nginx/ticket/908#ticket > > ? > > It's not expected to work at all in the first place. 
LibreSSL is > a different library with different compilation procedure, and > nginx doesn't know how to compile it, so using it in > --with-openssl= is incorrect. Compile it yourself instead. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Feb 24 17:49:32 2016 From: nginx-forum at forum.nginx.org (digitalkapitaen) Date: Wed, 24 Feb 2016 12:49:32 -0500 Subject: nginx routing based on ip address In-Reply-To: <01476bff08fc114219086a9c0cba983e.NginxMailingListEnglish@forum.nginx.org> References: <01476bff08fc114219086a9c0cba983e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <468db5c5d540f9ca4a2f8aa00676251e.NginxMailingListEnglish@forum.nginx.org> keeyong Wrote: ------------------------------------------------------- > I am wondering if it is possible to compute some kind of hash value > from ip address or do modulo operation on the last numeric value from > ip address (for example on 24 given 172.16.4.24)? Based on this, I > want to send the request to different endpoints (not to different > servers). If this can be done without installing any new module, that > would be the best. > > As far as I can see, I feel like I should do something similar using > the regex? Yes, regex and map should help. If you want to derive four distinct values, you can do something like this: map $remote_addr $key { ~\.[0-9]$ 1; # 0..9 ~\.[0-5][0-9]$ 1; # 10..59 ~\.6[0-4]$ 1; # 60..64 ~\.6[5-9]$ 2; # 65..69 ~\.[7-9][0-9]$ 2; # 70..99 ~\.1[0-1][0-9]$ 2; # 100..119 ~\.12[0-8]$ 2; # 120..128 ~\.129$ 3; # 129 ~\.1[3-8][0-9]$ 3; # 130..189 ~\.19[0-2]$ 3; # 190..192 default 4; } Caveats: This does not work for IPv6. I have no idea if the fourth IP byte is normally distributed. If not, then may generate a full mapping (~\.1$, ~\.2$, ... ~\.10$, ..., ~\.99$, .., ~\.255$) If you want to do something even more fancy you can go totally binary with the $binary_remote_addr variable and \xYY in the regexp (never tried this as it is more difficult to debug). Oliver Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264742,264786#msg-264786 From sb at nginx.com Wed Feb 24 19:08:05 2016 From: sb at nginx.com (Sergey Budnevitch) Date: Wed, 24 Feb 2016 22:08:05 +0300 Subject: packages for the dynamic modules. testing required. Message-ID: Hello. Previously we built nginx with all modules, except those that required extra libraries. With dynamic modules it is possible to build them as the separate packages and nginx main package will not have extra dependences. For nginx 1.9.12 we build additional packages with xslt, image-filter and geoip modules. It is possible to install, for example, image filter module on RHEL/CentOS with command: % yum install nginx-module-image-filter or on Ubuntu/Debian with command: % apt-get install nginx-module-image-filter then to enable module it is necessary add load_module directive: load_module modules/ngx_http_image_filter_module.so; to the main section of the nginx.conf Please test these modules, any feedback will be helpful. 
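As an end-to-end illustration of the above: after installing, say, nginx-module-image-filter with the yum/apt-get command shown, a minimal nginx.conf along the lines of the sketch below loads and uses the module. Only the load_module line comes from the announcement; the listen port, root path and resize dimensions are made-up examples.

===
load_module modules/ngx_http_image_filter_module.so;

events { }

http {
    server {
        listen 8080;

        location /thumbs/ {
            root /var/www;
            # scale any image served under /thumbs/ to fit within 150x150
            image_filter resize 150 150;
        }
    }
}
===

Running "nginx -t" afterwards is a quick way to confirm the module actually loads before doing a reload.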
From nginx-forum at forum.nginx.org Wed Feb 24 19:01:22 2016 From: nginx-forum at forum.nginx.org (George) Date: Wed, 24 Feb 2016 14:01:22 -0500 Subject: nginx-1.9.12 In-Reply-To: <20160224171152.GP31796@mdounin.ru> References: <20160224171152.GP31796@mdounin.ru> Message-ID: thanks I switched back to OpenSSL for now :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264770,264794#msg-264794 From nginx-forum at forum.nginx.org Wed Feb 24 22:51:29 2016 From: nginx-forum at forum.nginx.org (dshe) Date: Wed, 24 Feb 2016 17:51:29 -0500 Subject: nginx plus: sticky session with dynamic upstreams ? Message-ID: <35667f09465b2eaecf9eaf4907704b62.NginxMailingListEnglish@forum.nginx.org> Wonder what my options are in nginx plus to have sticky requests when upstream are dynamic. As I understand there are 2 ways to have dynamic upstreams in nginx plus: dns lookup with resolve/resolver or using ngx_http_upstream_conf_module. Will any/both work ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264804,264804#msg-264804 From nginx-forum at forum.nginx.org Wed Feb 24 23:28:08 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Wed, 24 Feb 2016 18:28:08 -0500 Subject: nginx-1.9.12 In-Reply-To: <20160224171152.GP31796@mdounin.ru> References: <20160224171152.GP31796@mdounin.ru> Message-ID: Hello, Thanks for this new nginx release! It would be great to officially support LibreSSL in nginx. Until now, nginx had no problem compiling LibreSSL using "--with-openssl=". Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264770,264805#msg-264805 From lkn2993 at gmail.com Thu Feb 25 08:34:21 2016 From: lkn2993 at gmail.com (Ali Kiaian) Date: Thu, 25 Feb 2016 12:04:21 +0330 Subject: PKCS#11 key not properly supported? Message-ID: I'll try to keep this message simple. E.G this command will work using the new version of engine_pkcs11: sudo openssl req -engine pkcs11 -new -key "pkcs11:model=SoftHSM;manufacturer=SoftHSM;serial=1;token=;id=%d4%b1%6d%62%5f%8c%f4%ec%19%05%0e%bc%2e%a0%9e%0f%d3%f1%2f%87;object=cakey;object-type=private;pin-value=1111" -keyform engine -out req.pem -text -x509 -subj "/CN=Test" But this will result in error: ssl_certificate_key "engine:pkcs11:pkcs11:model=SoftHSM;manufacturer=SoftHSM;serial=1;token=;id=%d4%b1%6d%62%5f%8c%f4%ec%19%05%0e%bc%2e%a0%9e%0f%d3%f1%2f%87;object=cakey;object-type=private;pin-value=1111"; The error message is: nginx: [emerg] ENGINE_load_private_key("pkcs11:model=SoftHSM;manufacturer=SoftHSM;serial=1;token=;id=%d4%b1%6d%62%5f%8c%f4%ec%19%05%0e%bc%2e%a0%9e%0f%d3%f1%2f%87;object=cakey;object-type=private;pin-value=1111") failed (SSL: error:26096075:engine routines:ENGINE_load_private_key:not initialised) Any help regarding this is appreciated, Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Feb 25 10:46:32 2016 From: nginx-forum at forum.nginx.org (vergil) Date: Thu, 25 Feb 2016 05:46:32 -0500 Subject: Cache manager occasionally stops deleting cached files In-Reply-To: <3062c9927373ad6a88374a43a1335aba.NginxMailingListEnglish@forum.nginx.org> References: <3062c9927373ad6a88374a43a1335aba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0e5c2339c4b5a9b5047bc8c5e9b67f14.NginxMailingListEnglish@forum.nginx.org> vedranf Wrote: ------------------------------------------------------- > Hello, > > I'm having an issue where nginx (1.8) cache manager suddenly just > stops deleting content thus the disk soon ends up being full until I > restart it by hand. 
After it is restarted, it works normally for a > couple of days, but then it happens again. Cache has some 30-40k > files, nothing huge. Relevant config lines are: > > proxy_cache_path /home/cache/ levels=2:2 keys_zone=cache:25m > inactive=7d max_size=2705g use_temp_path=on; > proxy_temp_path /dev/shm/temp; # reduces parallel writes on > the disk > proxy_cache_lock on; > proxy_cache_lock_age 10s; > proxy_cache_lock_timeout 30s; > proxy_ignore_client_abort on; > > Server gets roughly 100 rps and normally cache manager deletes a > couple of files every few seconds, however when it gets stuck this is > all it does for 20-30 minutes or more, i.e. there are 0 unlinks (until > I restart it and it rereads the on-disk cache): > > ... > epoll_wait(14, {}, 512, 1000) = 0 > epoll_wait(14, {}, 512, 1000) = 0 > epoll_wait(14, {}, 512, 1000) = 0 > epoll_wait(14, {}, 512, 1000) = 0 > gettid() = 11303 > write(24, "2016/02/18 08:22:02 [alert] 11303#11303: ignore long locked > inactive cache entry 380d3f178017bcd92877ee322b006bbb, count:1\n", > 123) = 123 > gettid() = 11303 > write(24, "2016/02/18 08:22:02 [alert] 11303#11303: ignore long locked > inactive cache entry 7b9239693906e791375a214c7e36af8e, count:24\n", > 124) = 124 > epoll_wait(14, {}, 512, 1000) = 0 > ... > > I assume the mentioned error is due to relatively often nginx restarts > and is benign. There's nothing else in the error log (except for > occasional upstream timeouts). I'm aware this likely isn't enough info > to debug the issue, but do you at least have some ideas on what might > be causing this issue, where to look? I'm wild guessing cache manager > waits for some lock to be released, but it never gets released so it > just waits indefinitely. > > Thanks, > Vedran We have the same problem, but i'm not sure, that this is caused by often nginx restarts. As far as i know problem exist since version 1.6 (maybe even earlier, 1.4.6 from ubuntu repo is not affected) till now (1.9.9) I've collected related forum posts (should help analyze the problem): https://forum.nginx.org/read.php?21,258292,258292#msg-258292 https://forum.nginx.org/read.php?21,260990,260990#msg-260990 https://forum.nginx.org/read.php?2,263625,263625#msg-263625 Also, i think it's somehow related to write connection leak. (see image link) https://s3.eu-central-1.amazonaws.com/drive-public-eu/nginx/betelgeuse_nginx_connections.PNG Here we have our standard nginx configuration (before january, 28) with 7 days inactive time: proxy_cache_path /mnt/cache1/nginx levels=2:2 keys_zone=a.d-1_cache:2143M inactive=7d max_size=643G loader_sleep=1ms; Every ~8 days (when writing connections reaches ~10k mark) cache starts growing and fills the disk. Write connections falls on graph are nginx restarts. On january, 28 i changed inactive time to 8h. After write connections hits ~10k mark, nginx starts filling logs with "ignore long locked inactive cache entry" message (1-2 messages per minute on average). As you see write connections continuously grows. (When we had to power off the machine it's reached ~60k). For counting nginx connections we use standard http_stub_status_module. I think that nginx "reference counter" could be broken, because total established TCP connection remains the same all the time. 
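For reference, the stub_status counters mentioned above are typically exposed with a small location such as the sketch below (the listen address and URI are arbitrary examples):

===
# goes inside the http { } block
server {
    listen 127.0.0.1:8081;

    location = /basic_status {
        # shows Active connections, accepts/handled/requests and
        # the Reading / Writing / Waiting counters
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}
===

If the growth of the Writing counter really is caused by crashing workers, the error log will usually also contain alert lines such as "worker process <pid> exited on signal 11".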
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264599,264819#msg-264819 From youjie.zeng at gmail.com Thu Feb 25 13:28:59 2016 From: youjie.zeng at gmail.com (youjie zeng) Date: Thu, 25 Feb 2016 21:28:59 +0800 Subject: lose value of $uid_set Message-ID: Hello, master of nginx, I have a question when using ngx_http_userid_module. here is the detail description: nginx version:1.7 conf: log_format main '$remote_addr - $remote_user [$time_local] "$request" $uid_set' ... server { userid on; userid_name user_id; set $uid_reset myuid; ... } because i set $uid_reset not empty, $uid_set would get a new uuid every time nginx process a request. but i found some strange things, I got value "-" of $uid_set, that means $uid_set did not get a value. even though over 92% request have correct value of $uid_set, but the 2% do not. And i did not found abnormal of those request. Do you have any idea about this? looking forward your reply! have a nice day! -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at slcoding.com Thu Feb 25 13:44:19 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Thu, 25 Feb 2016 14:44:19 +0100 Subject: lose value of $uid_set In-Reply-To: References: Message-ID: <56CF0533.9030808@slcoding.com> What is the http status code of those failed requests? > youjie zeng > 25 February 2016 at 14:28 > Hello, master of nginx, I have a question when using > ngx_http_userid_module. here is the detail description: > > nginx version:1.7 > conf: > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > $uid_set' > > ... > > server { > > userid on; > > userid_name user_id; > > set $uid_reset myuid; > > ... > > } > > > because i set $uid_reset not empty, $uid_set would get a new uuid > every time nginx process a request. > > but i found some strange things, I got value "-" of $uid_set, that > means $uid_set did not get a value. even though over 92% request have > correct value of $uid_set, but the 2% do not. And i did not found > abnormal of those request. > > Do you have any idea about this? > > looking forward your reply! > > have a nice day! > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Feb 25 14:21:29 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Feb 2016 17:21:29 +0300 Subject: Cache manager occasionally stops deleting cached files In-Reply-To: <0e5c2339c4b5a9b5047bc8c5e9b67f14.NginxMailingListEnglish@forum.nginx.org> References: <3062c9927373ad6a88374a43a1335aba.NginxMailingListEnglish@forum.nginx.org> <0e5c2339c4b5a9b5047bc8c5e9b67f14.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160225142129.GX31796@mdounin.ru> Hello! On Thu, Feb 25, 2016 at 05:46:32AM -0500, vergil wrote: > vedranf Wrote: > ------------------------------------------------------- > > Hello, > > > > I'm having an issue where nginx (1.8) cache manager suddenly just > > stops deleting content thus the disk soon ends up being full until I > > restart it by hand. After it is restarted, it works normally for a > > couple of days, but then it happens again. Cache has some 30-40k > > files, nothing huge. Relevant config lines are: [...] > We have the same problem, but i'm not sure, that this is caused by often > nginx restarts. This particular case was traced to segmentation faults, likely caused by 3rd party modules. [...] 
> Also, i think it's somehow related to write connection leak. (see image > link) > > https://s3.eu-central-1.amazonaws.com/drive-public-eu/nginx/betelgeuse_nginx_connections.PNG [...] > As you see write connections continuously grows. (When we had to power off > the machine it's reached ~60k). > > For counting nginx connections we use standard http_stub_status_module. > > I think that nginx "reference counter" could be broken, because total > established TCP connection remains the same all the time. Writing connections will grow due to segmentation faults as well, so you are likely have the same problem. See basic recommendations in my initial answer in this threads. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Thu Feb 25 17:57:12 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 25 Feb 2016 18:57:12 +0100 Subject: packages for the dynamic modules. testing required. In-Reply-To: References: Message-ID: Hello Sergey, Great news! :o) Would it possible to add a file in some conf.d containing the load_module directive so a package management system could automatically configure nginx to automatically load the module on the next reload? That is standard practice for some other technologies. --- *B. R.* On Wed, Feb 24, 2016 at 8:08 PM, Sergey Budnevitch wrote: > Hello. > > Previously we built nginx with all modules, except those that required > extra libraries. With dynamic modules it is possible to build them as > the separate packages and nginx main package will not have extra > dependences. > > For nginx 1.9.12 we build additional packages with xslt, image-filter > and geoip modules. It is possible to install, for example, image filter > module on RHEL/CentOS with command: > > % yum install nginx-module-image-filter > > or on Ubuntu/Debian with command: > > % apt-get install nginx-module-image-filter > > then to enable module it is necessary add load_module directive: > > load_module modules/ngx_http_image_filter_module.so; > > to the main section of the nginx.conf > > Please test these modules, any feedback will be helpful. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Thu Feb 25 21:50:09 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 25 Feb 2016 16:50:09 -0500 Subject: NGINX logging tab delimited format to syslog Message-ID: I would really like to output my nginx access log to syslog in a tab delimited format. I'm using the latest nginx and rsyslogd 7.2.5 I haven't found an example of doing this, I'm wondering if/how to add tabs to the format in the log_format directive And also if there is anything I need to do to syslogd to pass through the tab characters. Any help appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Feb 25 22:54:27 2016 From: nginx-forum at forum.nginx.org (dshe) Date: Thu, 25 Feb 2016 17:54:27 -0500 Subject: nginx plus dashboard for clusters Message-ID: I think dashboard is a great feature in nginx plus but I was wondering if it can aggregate metrics from a cluster of nginx servers or each server has its own dashboard. 
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264857,264857#msg-264857 From dewanggaba at xtremenitro.org Thu Feb 25 22:55:10 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Fri, 26 Feb 2016 05:55:10 +0700 Subject: nginx plus dashboard for clusters In-Reply-To: References: Message-ID: <56CF864E.8070604@xtremenitro.org> Hello! On 02/26/2016 05:54 AM, dshe wrote: > I think dashboard is a great feature in nginx plus but I was wondering if it > can aggregate metrics from a cluster of nginx servers or each server has its > own dashboard. You can try amplify.nginx.com :) > Thanks > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264857,264857#msg-264857 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From ek at nginx.com Thu Feb 25 23:25:14 2016 From: ek at nginx.com (Ekaterina Kukushkina) Date: Fri, 26 Feb 2016 02:25:14 +0300 Subject: NGINX logging tab delimited format to syslog In-Reply-To: References: Message-ID: <179B2A79-B96D-43E2-80C7-5B0A406325E5@nginx.com> Hello CJ, > On 26 Feb 2016, at 00:50, CJ Ess wrote: > > I would really like to output my nginx access log to syslog in a tab delimited format. > > I'm using the latest nginx and rsyslogd 7.2.5 > > I haven't found an example of doing this, I'm wondering if/how to add tabs to the format in the log_format directive Just use '\t' instead of ' ' in your log_format. For example: log_format tabbed '$remote_addr\t-\t$remote_user\t[$time_local]\t"$request"'; > > And also if there is anything I need to do to syslogd to pass through the tab characters. By default, rsyslog convert control characters to their ASCII values (#011 in case of Tab). You may prevent this behavior by setting $EscapeControlCharactersOnReceive to off. > > Any help appreciated! > -- Ekaterina Kukushkina From youjie.zeng at gmail.com Fri Feb 26 00:21:22 2016 From: youjie.zeng at gmail.com (youjie zeng) Date: Fri, 26 Feb 2016 08:21:22 +0800 Subject: lose value of $uid_set In-Reply-To: <56CF0533.9030808@slcoding.com> References: <56CF0533.9030808@slcoding.com> Message-ID: most of them is 301, status 200 is about 10% of status 301, and there are some other status. i can not find obvious regulation for them. On 25 February 2016 at 21:44, Lucas Rolff wrote: > What is the http status code of those failed requests? > > youjie zeng > 25 February 2016 at 14:28 > Hello, master of nginx, I have a question when using > ngx_http_userid_module. here is the detail description: > > nginx version:1.7 > conf: > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > $uid_set' > ... > > server { > > userid on; > > userid_name user_id; > > set $uid_reset myuid; > > ... > > } > > > because i set $uid_reset not empty, $uid_set would get a new uuid every > time nginx process a request. > > but i found some strange things, I got value "-" of $uid_set, that means > $uid_set did not get a value. even though over 92% request have correct > value of $uid_set, but the 2% do not. And i did not found abnormal of those > request. > > Do you have any idea about this? > > looking forward your reply! > > have a nice day! 
> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Fri Feb 26 11:05:28 2016 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 26 Feb 2016 16:05:28 +0500 Subject: Nginx Zero Size Buffer Alerts !! Message-ID: Hi, We've many zero size buff alerts related to file_uploader.php file in nginx logs : 2016/02/26 12:56:02 [alert] 71067#0: *12457068 zero size buf in output t:0 r:0 f:1 0000000000000000 0000000000000000-0000000000000000 0000000803E29F08 0-0 while sending request to upstream, client: 182.178.58.123, server: domain.com request: "OPTIONS /actions/file_uploader.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/www.socket:", host: "domain.com" --------------------------------------------- Here is our Nginx Version : [root at cw001 /usr/local/etc]# nginx -V nginx version: nginx/1.8.0 built with OpenSSL 1.0.1j-freebsd 15 Oct 2014 (running with OpenSSL 1.0.1l-freebsd 15 Jan 2015) TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --with-ipv6 --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_flv_module --with-http_geoip_module --with-http_mp4_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-pcre --with-http_spdy_module --with-http_ssl_module --------------------------------------------------- We don't have 3rd party module installed with nginx. Please let me know if i am missing something important here which could be cause of these errors ? Thanks in advance !! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Feb 26 12:12:46 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 26 Feb 2016 15:12:46 +0300 Subject: Nginx Zero Size Buffer Alerts !! 
In-Reply-To: References: Message-ID: <1710573.kFRKtSGeg0@vbart-workstation> On Friday 26 February 2016 16:05:28 shahzaib shahzaib wrote: > Hi, > > We've many zero size buff alerts related to file_uploader.php file in nginx > logs : > > > 2016/02/26 12:56:02 [alert] 71067#0: *12457068 zero size buf in output t:0 > r:0 f:1 0000000000000000 0000000000000000-0000000000000000 0000000803E29F08 > 0-0 while sending request to upstream, client: 182.178.58.123, server: > domain.com request: "OPTIONS /actions/file_uploader.php HTTP/1.1", > upstream: "fastcgi://unix:/var/run/www.socket:", host: "domain.com" > > --------------------------------------------- > > Here is our Nginx Version : > > [root at cw001 /usr/local/etc]# nginx -V > nginx version: nginx/1.8.0 > built with OpenSSL 1.0.1j-freebsd 15 Oct 2014 (running with OpenSSL > 1.0.1l-freebsd 15 Jan 2015) > TLS SNI support enabled > configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I > /usr/local/include' --with-ld-opt='-L /usr/local/lib' > --conf-path=/usr/local/etc/nginx/nginx.conf > --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid > --error-log-path=/var/log/nginx-error.log --user=www --group=www > --with-file-aio --with-ipv6 > --http-client-body-temp-path=/var/tmp/nginx/client_body_temp > --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp > --http-proxy-temp-path=/var/tmp/nginx/proxy_temp > --http-scgi-temp-path=/var/tmp/nginx/scgi_temp > --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp > --http-log-path=/var/log/nginx-access.log --with-http_flv_module > --with-http_geoip_module --with-http_mp4_module --with-http_realip_module > --with-http_secure_link_module --with-http_stub_status_module --with-pcre > --with-http_spdy_module --with-http_ssl_module > > --------------------------------------------------- > > We don't have 3rd party module installed with nginx. Please let me know if > i am missing something important here which could be cause of these errors ? > > Thanks in advance !! > [..] If you're using spdy, then that is known issue. You can ignore this alert. wbr, Valentin V. Bartenev From shahzaib.cb at gmail.com Fri Feb 26 13:18:49 2016 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 26 Feb 2016 18:18:49 +0500 Subject: Nginx Zero Size Buffer Alerts !! In-Reply-To: <1710573.kFRKtSGeg0@vbart-workstation> References: <1710573.kFRKtSGeg0@vbart-workstation> Message-ID: Hi, Alright then, thanks for clarification. Regards. Shahzaib On Fri, Feb 26, 2016 at 5:12 PM, Valentin V. 
Bartenev wrote: > On Friday 26 February 2016 16:05:28 shahzaib shahzaib wrote: > > Hi, > > > > We've many zero size buff alerts related to file_uploader.php file in > nginx > > logs : > > > > > > 2016/02/26 12:56:02 [alert] 71067#0: *12457068 zero size buf in output > t:0 > > r:0 f:1 0000000000000000 0000000000000000-0000000000000000 > 0000000803E29F08 > > 0-0 while sending request to upstream, client: 182.178.58.123, server: > > domain.com request: "OPTIONS /actions/file_uploader.php HTTP/1.1", > > upstream: "fastcgi://unix:/var/run/www.socket:", host: "domain.com" > > > > --------------------------------------------- > > > > Here is our Nginx Version : > > > > [root at cw001 /usr/local/etc]# nginx -V > > nginx version: nginx/1.8.0 > > built with OpenSSL 1.0.1j-freebsd 15 Oct 2014 (running with OpenSSL > > 1.0.1l-freebsd 15 Jan 2015) > > TLS SNI support enabled > > configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I > > /usr/local/include' --with-ld-opt='-L /usr/local/lib' > > --conf-path=/usr/local/etc/nginx/nginx.conf > > --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid > > --error-log-path=/var/log/nginx-error.log --user=www --group=www > > --with-file-aio --with-ipv6 > > --http-client-body-temp-path=/var/tmp/nginx/client_body_temp > > --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp > > --http-proxy-temp-path=/var/tmp/nginx/proxy_temp > > --http-scgi-temp-path=/var/tmp/nginx/scgi_temp > > --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp > > --http-log-path=/var/log/nginx-access.log --with-http_flv_module > > --with-http_geoip_module --with-http_mp4_module --with-http_realip_module > > --with-http_secure_link_module --with-http_stub_status_module --with-pcre > > --with-http_spdy_module --with-http_ssl_module > > > > --------------------------------------------------- > > > > We don't have 3rd party module installed with nginx. Please let me know > if > > i am missing something important here which could be cause of these > errors ? > > > > Thanks in advance !! > > > [..] > > If you're using spdy, then that is known issue. You can ignore this alert. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at nginx.com Fri Feb 26 14:30:54 2016 From: sb at nginx.com (Sergey Budnevitch) Date: Fri, 26 Feb 2016 17:30:54 +0300 Subject: packages for the dynamic modules. testing required. In-Reply-To: References: Message-ID: > On 25 Feb 2016, at 20:57, B.R. wrote: > > Hello Sergey, > > Great news! :o) > Would it possible to add a file in some conf.d containing the load_module directive so a package management system could automatically configure nginx to automatically load the module on the next reload? It has no sense as nginx has no conditional configuration like if(some module) module_directive1 module_directive2 etc. Module specific directives will result in error on nginx start if there will be no loaded module. > That is standard practice for some other technologies. > --- > B. R. > > On Wed, Feb 24, 2016 at 8:08 PM, Sergey Budnevitch > wrote: > Hello. > > Previously we built nginx with all modules, except those that required > extra libraries. With dynamic modules it is possible to build them as > the separate packages and nginx main package will not have extra > dependences. 
> > For nginx 1.9.12 we build additional packages with xslt, image-filter > and geoip modules. It is possible to install, for example, image filter > module on RHEL/CentOS with command: > > % yum install nginx-module-image-filter > > or on Ubuntu/Debian with command: > > % apt-get install nginx-module-image-filter > > then to enable module it is necessary add load_module directive: > > load_module modules/ngx_http_image_filter_module.so; > > to the main section of the nginx.conf > > Please test these modules, any feedback will be helpful. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at nginx.com Fri Feb 26 15:04:36 2016 From: sb at nginx.com (Sergey Budnevitch) Date: Fri, 26 Feb 2016 18:04:36 +0300 Subject: nginx-debug package Message-ID: Hello, There was one noticeable change in the nginx packages recently. Before nginx 1.9.8 we shipped nginx-debug binary (that is nginx configured and compiled with ?with-debug option, needed for debugging: http://nginx.org/en/docs/debugging_log.html) in the separate package "nginx-debug". Since 1.9.8 this binary was moved to the main package. Now it is possible to start nginx-debug binary with the init-script: % service nginx stop % service nginx-debug start nginx-debug will use same config files. From mw-nginx at barfooze.de Fri Feb 26 19:27:28 2016 From: mw-nginx at barfooze.de (Moritz Wilhelmy) Date: Fri, 26 Feb 2016 20:27:28 +0100 Subject: XSLT and autoindex XML output: conditionally transform XML only when it's autoindex output? Message-ID: <20160226192728.GI1402@barfooze.de> Hi, The documentation said it's possible to transform XML dirlistings into XHTML in order to customize what they look like, so I did one that makes them look like lighttpd's: https://gist.github.com/wilhelmy/5a59b8eea26974a468c9 This works fine, but does anybody know how I apply the XSLT only to transform directory indexes but not other XML files that might be around? Currently, the way this config works is that all XML files are being transformed by this stylesheet. The config looks like this: https://gist.github.com/wilhelmy/5a59b8eea26974a468c9#file-nginx-snippet-conf Thanks and best regards, Moritz From sb at nginx.com Fri Feb 26 20:22:54 2016 From: sb at nginx.com (Sergey Budnevitch) Date: Fri, 26 Feb 2016 23:22:54 +0300 Subject: XSLT and autoindex XML output: conditionally transform XML only when it's autoindex output? In-Reply-To: <20160226192728.GI1402@barfooze.de> References: <20160226192728.GI1402@barfooze.de> Message-ID: > On 26 Feb 2016, at 22:27, Moritz Wilhelmy wrote: > > Hi, > > The documentation said it's possible to transform XML dirlistings into > XHTML in order to customize what they look like, so I did one that makes > them look like lighttpd's: > https://gist.github.com/wilhelmy/5a59b8eea26974a468c9 > > This works fine, but does anybody know how I apply the XSLT only > to transform directory indexes but not other XML files that might be > around? Currently, the way this config works is that all XML files are > being transformed by this stylesheet. 
> > The config looks like this: > https://gist.github.com/wilhelmy/5a59b8eea26974a468c9#file-nginx-snippet-conf Cannot check with xslt right now, but autoindex should works with this setup: root /opt/www; location / { try_files $uri @autoindex; } location @autoindex { autoindex on; } Try to add xslt and xml directives in @autoindex location. From mw-nginx at barfooze.de Fri Feb 26 20:46:32 2016 From: mw-nginx at barfooze.de (Moritz Wilhelmy) Date: Fri, 26 Feb 2016 21:46:32 +0100 Subject: XSLT and autoindex XML output: conditionally transform XML only when it's autoindex output? In-Reply-To: References: <20160226192728.GI1402@barfooze.de> Message-ID: <20160226204434.GA26392@barfooze.de> On Fri, Feb 26, 2016 at 23:22:54 +0300, Sergey Budnevitch wrote: > > > On 26 Feb 2016, at 22:27, Moritz Wilhelmy wrote: > > > > Hi, > > > > The documentation said it's possible to transform XML dirlistings into > > XHTML in order to customize what they look like, so I did one that makes > > them look like lighttpd's: > > https://gist.github.com/wilhelmy/5a59b8eea26974a468c9 > > > > This works fine, but does anybody know how I apply the XSLT only > > to transform directory indexes but not other XML files that might be > > around? Currently, the way this config works is that all XML files are > > being transformed by this stylesheet. > > > > The config looks like this: > > https://gist.github.com/wilhelmy/5a59b8eea26974a468c9#file-nginx-snippet-conf > > > Cannot check with xslt right now, but autoindex should works with this setup: > > root /opt/www; > > location / { > try_files $uri @autoindex; > } > > location @autoindex { > autoindex on; > } > > Try to add xslt and xml directives in @autoindex location. Thanks, that seems to have done the trick. I've updated the gist to reflect my current working config, for anyone who's curious. Best regards, Moritz From youjie.zeng at gmail.com Sat Feb 27 02:37:48 2016 From: youjie.zeng at gmail.com (youjie zeng) Date: Sat, 27 Feb 2016 10:37:48 +0800 Subject: lose value of $uid_set In-Reply-To: References: <56CF0533.9030808@slcoding.com> Message-ID: Is there some mistake ignored by me? I do not found useful information for this question, I am hoping you can give me some advises. maybe i should try third module if this issue can not resolve? -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Sat Feb 27 19:46:51 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Sat, 27 Feb 2016 14:46:51 -0500 Subject: NGINX logging tab delimited format to syslog In-Reply-To: <179B2A79-B96D-43E2-80C7-5B0A406325E5@nginx.com> References: <179B2A79-B96D-43E2-80C7-5B0A406325E5@nginx.com> Message-ID: Thank you! I've got it all set up now, thanks for the pointer to $ EscapeControlCharactersOnReceive On Thu, Feb 25, 2016 at 6:25 PM, Ekaterina Kukushkina wrote: > Hello CJ, > > > > On 26 Feb 2016, at 00:50, CJ Ess wrote: > > > > I would really like to output my nginx access log to syslog in a tab > delimited format. > > > > I'm using the latest nginx and rsyslogd 7.2.5 > > > > I haven't found an example of doing this, I'm wondering if/how to add > tabs to the format in the log_format directive > > Just use '\t' instead of ' ' in your log_format. > For example: > log_format tabbed > '$remote_addr\t-\t$remote_user\t[$time_local]\t"$request"'; > > > > > And also if there is anything I need to do to syslogd to pass through > the tab characters. > > By default, rsyslog convert control characters to their ASCII values (#011 > in case > of Tab). 
You may prevent this behavior by setting > $EscapeControlCharactersOnReceive > to off. > > > > > Any help appreciated! > > > > -- > Ekaterina Kukushkina > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Feb 28 01:45:01 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Sat, 27 Feb 2016 20:45:01 -0500 Subject: http/2 with big pcitures (possible feature request?) Message-ID: <6d3d29540de1616b944aab028da469a7.NginxMailingListEnglish@forum.nginx.org> Hi all, I just enabled http2 for my website. Due to my website has lots of big pictures, it seems that the http2 new feature " single, multiplexed connection" will cause browser to download lots of data at the same time and use up all bandwidth. And on some mobile devices, it will freeze the web browser for several seconds. So, is there possible to add a limit for "single, multiplexed connection"? For example, if the file size is over 200K, then use old http/1 method or create a new connection for it? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264910,264910#msg-264910 From nginx-forum at forum.nginx.org Sun Feb 28 13:52:12 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Sun, 28 Feb 2016 08:52:12 -0500 Subject: enable reuseport then only one worker is working? Message-ID: Hi All, I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. After I enabled reuseport on my server, it seems now there is one worker always takes up 100% CPU and all the rest workers are use less than 1% CPU. In the day time it's OK because my website doesn't have lots of users. But at night it's very slow. Active connections: 5716 server accepts handled requests 175477 175477 1805645 Reading: 1 Writing: 698 Waiting: 5541 After I disable reuseport, it seems my website speed is back to normal. Each worker (18 workers in total) takes up about 10%-15% CPU. I understand that enable reuseport will disable accept_mutex. But My understanding is that Nginx will assign work load to different multiple socket listeners then to different workers. BUt it seems in my environment all workload was assign to the same worker, even though it's very busy and the rest are idle. Did I miss anything in the configuration? or for a busy server, it's better to use accept_mutex instead of reuseport? 
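For comparison, the two balancing modes being weighed here differ only in a couple of lines; a stripped-down sketch (port and response are placeholders):

===
worker_processes auto;

events {
    # classic behaviour: workers take turns accepting new connections
    accept_mutex on;
}

http {
    server {
        # kernel-level balancing would instead be:  listen 80 reuseport;
        # (as the later replies in this thread point out, that option only
        #  balances connections on Linux 3.9+ / DragonFly BSD, not on FreeBSD)
        listen 80;
        return 200 "ok";
    }
}
===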
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,264913#msg-264913 From nginx-forum at forum.nginx.org Sun Feb 28 18:52:40 2016 From: nginx-forum at forum.nginx.org (vampeta) Date: Sun, 28 Feb 2016 13:52:40 -0500 Subject: help to understand proxy_pass and proxy_cache Message-ID: <74863bbcda1b8476323a3a66f7c4a135.NginxMailingListEnglish@forum.nginx.org> Hi guys, i would like to understand if nginx can be used for my request, i try to explain: i have a http mpegts server we are 5 client normally someone think to connect all together to the server like: client1-------------------> \ client2-------------------> \ client3-------------------> http server client4-------------------> / client5-------------------> / but i can't do for reson of bandwitch ( 5 stream take around 20/22 mb ) so i would like to use nginx in the middle, like: client1-------------------> \ client2-------------------> \ client3-------------------> (nginx proxy with cache stream) <-----------> http server client4-------------------> / client5-------------------> / 5 client request the same source to my local nginx server nginx get the stream from the main server ( in this case take only 4mb and sharing the cache with 5 client ) can be possible?? thx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264915,264915#msg-264915 From pchychi at gmail.com Sun Feb 28 21:13:43 2016 From: pchychi at gmail.com (Payam Chychi) Date: Sun, 28 Feb 2016 13:13:43 -0800 Subject: help to understand proxy_pass and proxy_cache In-Reply-To: <74863bbcda1b8476323a3a66f7c4a135.NginxMailingListEnglish@forum.nginx.org> References: <74863bbcda1b8476323a3a66f7c4a135.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi If your http server is on linux, why not simply use iptables to limit the speed per request? Anyways, yes, you can use nginx to limit transfer speeds. Nginx has several examples on the wiki Cherrs, Payam chychi On Feb 28, 2016, 10:52 AM -0800, vampeta, wrote: > Hi guys, > i would like to understand if nginx can be used for my request, i try to > explain: > > i have a http mpegts server > we are 5 client > > normally someone think to connect all together to the server like: > > client1------------------->\ > client2------------------->\ > client3------------------->http server > client4------------------->/ > client5------------------->/ > > but i can't do for reson of bandwitch ( 5 stream take around 20/22 mb ) > > so i would like to use nginx in the middle, like: > > client1------------------->\ > client2------------------->\ > client3------------------->(nginx proxy with cache stream)<----------- > http server > client4------------------->/ > client5------------------->/ > > 5 client request the same source to my local nginx server > > nginx get the stream from the main server ( in this case take only 4mb and > sharing the cache with 5 client ) > > can be possible?? > > thx > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264915,264915#msg-264915 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Feb 29 04:22:47 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Mon, 29 Feb 2016 07:22:47 +0300 Subject: enable reuseport then only one worker is working? 
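A caching reverse proxy of that shape is normally configured roughly as in the sketch below; the hostname, ports, sizes and timings are placeholders, not taken from the message above.

===
# inside the http { } block of the middle (nginx) machine
proxy_cache_path /var/cache/nginx/stream keys_zone=stream:10m
                 max_size=1g inactive=10m;

server {
    listen 8080;

    location / {
        proxy_pass        http://upstream.example.com:8000;
        proxy_cache       stream;
        proxy_cache_valid 200 10s;
        # let one request fetch from the origin while identical
        # concurrent requests wait for the cached copy
        proxy_cache_lock  on;
    }
}
===

One caveat: proxy_cache only stores a response once it has been received in full, so this helps with segmented content (HLS/DASH chunks, ordinary files) much more than with a single never-ending MPEG-TS stream.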
In-Reply-To: References: Message-ID: <1579942.OlGCbpSOfM@vbart-laptop> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: > Hi All, > > I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. [..] > Did I miss anything in the configuration? or for a busy server, it's better > to use accept_mutex instead of reuseport? > [..] In FreeBSD the SO_REUSEPORT option has completely different behavior and shouldn't be enabled in nginx. wbr, Valentin V. Bartenev From black.fledermaus at arcor.de Mon Feb 29 10:41:37 2016 From: black.fledermaus at arcor.de (basti) Date: Mon, 29 Feb 2016 11:41:37 +0100 Subject: SNI host with non SSL wrong cert Message-ID: <56D42061.7080503@arcor.de> Hello, I have nginx installed with multiple domainnames and multiple ssl-hosts use SNI. Now I add an other vhost with non-ssl server entry like example.com. when I try to use https://example.com/ I get a cert from an other vhost. I found this "solution" to "catch all" | | |server { listen 443 ssl; server_name _; ssl on; ssl_certificate /path/to/certificate.crt; ssl_certificate_key /path/to/certificate.key; return 444; }| |But I need a valid cert to get no error in browsser. Also when I try to redirect it to non-ssl area. Is there a solution without need a cert? somethink like ||server { listen 443; server_name ssl.example.com; break; }| From rsifon at inpres.gov.ar Mon Feb 29 12:47:30 2016 From: rsifon at inpres.gov.ar (Ing. Ricardo SIFON) Date: Mon, 29 Feb 2016 09:47:30 -0300 Subject: Problem with error file Message-ID: <000301d172ef$5a0767c0$0e163740$@gov.ar> Hi all! I have installed NGINX v1.8.0 with PHP v 5.2.6 in Windows 7 (64 bits). In the root directory of nginx (c:\nginx\), a file called "error" is generated. The size of this file grows too, causing the hard drive runs out of free space. Attachment my config file of nginx (nginx.conf). Regards, Ricardo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 2763 bytes Desc: not available URL: From florian.hesse at rbb-online.de Mon Feb 29 12:50:37 2016 From: florian.hesse at rbb-online.de (florian.hesse at rbb-online.de) Date: Mon, 29 Feb 2016 13:50:37 +0100 Subject: Antwort: PUT/POST files to akamai CDN In-Reply-To: References: Message-ID: Hello, i will start a second attmept for my question. I want to upload all files continuously generated by the rtmp mpeg dash module in a specific folder to another server. The upload needs to be processed with the HTTP POST function. If i would do it manualy with curl, i would use something like: curl -v -d "123456789" http://example.com/path/to/file.mpd Does anyone know if its possible with nginx and maybe how to do it? There is now way to start any process on the remote server. Big thanks, Florian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Feb 29 13:43:24 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Mon, 29 Feb 2016 08:43:24 -0500 Subject: Problem with error file In-Reply-To: <000301d172ef$5a0767c0$0e163740$@gov.ar> References: <000301d172ef$5a0767c0$0e163740$@gov.ar> Message-ID: It would be more helpful to see whats in this error logfile. Don't attach files here, use pastebin or the sorts. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264925,264927#msg-264927 From mdounin at mdounin.ru Mon Feb 29 13:54:33 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Feb 2016 16:54:33 +0300 Subject: Problem with error file In-Reply-To: <000301d172ef$5a0767c0$0e163740$@gov.ar> References: <000301d172ef$5a0767c0$0e163740$@gov.ar> Message-ID: <20160229135433.GL31796@mdounin.ru> Hello! On Mon, Feb 29, 2016 at 09:47:30AM -0300, Ing. Ricardo SIFON wrote: > In the root directory of nginx (c:\nginx\), a file called "error" is > generated. The size of this file grows too, causing the hard drive runs out > of free space. > > Attachment my config file of nginx (nginx.conf). Are you sure it's called "error"? According to your configuration, it should be called "off": ... error_log off; ... Note that "off" isn't a special value for the error_log directive. To reduce logging of errors consider tuning the logging level instead, see http://nginx.org/r/error_log. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Feb 29 14:06:29 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Feb 2016 17:06:29 +0300 Subject: enable reuseport then only one worker is working? In-Reply-To: References: Message-ID: <20160229140629.GM31796@mdounin.ru> Hello! On Sun, Feb 28, 2016 at 08:52:12AM -0500, meteor8488 wrote: > I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. > After I enabled reuseport on my server, it seems now there is one worker > always takes up 100% CPU and all the rest workers are use less than 1% CPU. > In the day time it's OK because my website doesn't have lots of users. But > at night it's very slow. While SO_REUSEPORT socket option is available on FreeBSD, its behaviour is different from one nginx relies on: instead of balancing between all sockets as Linux and DragonFly do, it preserves historic behaviour for TCP and delivers all connections to one socket instead. That is, there is no surprise that you see all work done by a single worker in your setup when you use "listen ... reuseport" on FreeBSD. Note that documentation explicitly says it's only expected to work on Linux and DragonFly: : This currently works only on Linux 3.9+ and DragonFly BSD. See http//nginx.org/r/listen for details. -- Maxim Dounin http://nginx.org/ From ayman_shorman at hotmail.com Mon Feb 29 14:09:00 2016 From: ayman_shorman at hotmail.com (Ayman Al-Shorman) Date: Mon, 29 Feb 2016 16:09:00 +0200 Subject: Image filter module Message-ID: Hello, I just installed image filter module for resizing images. It worked as expected but we faced an issue that this module doesn't respect EXIF so some images are being rotated to the original state then nginx resize it. Any idea how to fix this? Thanks Ayman Sent from my iPhone From rsifon at inpres.gov.ar Mon Feb 29 14:39:04 2016 From: rsifon at inpres.gov.ar (Ing. Ricardo SIFON) Date: Mon, 29 Feb 2016 11:39:04 -0300 Subject: Problem with error file In-Reply-To: <20160229135433.GL31796@mdounin.ru> References: <000301d172ef$5a0767c0$0e163740$@gov.ar> <20160229135433.GL31796@mdounin.ru> Message-ID: <001501d172fe$eff2c720$cfd85560$@gov.ar> Maxim, I'm Sorry, I have made a mistake. As you say, the line called: error_log is in "off" in config file. The file that I have problems is called: off (without extensi?n) in C:\nginx. Its content is: ... [error] 6072#0: *27 open() "/cygdrive/c/nginx/html/robots.txt" failed (2: No such file or directory), client: 10.0.0.1, server: localhost, request: "GET /robots.txt HTTP/1.1", host: "10.0.0.50" ... 
The same message is repeated constantly. Regards, Ricardo -----Mensaje original----- De: nginx [mailto:nginx-bounces at nginx.org] En nombre de Maxim Dounin Enviado el: lunes, 29 de febrero de 2016 10:55 Para: nginx at nginx.org Asunto: Re: Problem with error file Hello! On Mon, Feb 29, 2016 at 09:47:30AM -0300, Ing. Ricardo SIFON wrote: > In the root directory of nginx (c:\nginx\), a file called "error" is > generated. The size of this file grows too, causing the hard drive runs out > of free space. > > Attachment my config file of nginx (nginx.conf). Are you sure it's called "error"? According to your configuration, it should be called "off": ... error_log off; ... Note that "off" isn't a special value for the error_log directive. To reduce logging of errors consider tuning the logging level instead, see http://nginx.org/r/error_log. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Feb 29 14:49:21 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Feb 2016 17:49:21 +0300 Subject: Problem with error file In-Reply-To: <001501d172fe$eff2c720$cfd85560$@gov.ar> References: <000301d172ef$5a0767c0$0e163740$@gov.ar> <20160229135433.GL31796@mdounin.ru> <001501d172fe$eff2c720$cfd85560$@gov.ar> Message-ID: <20160229144921.GP31796@mdounin.ru> Hello! On Mon, Feb 29, 2016 at 11:39:04AM -0300, Ing. Ricardo SIFON wrote: > Maxim, > > I'm Sorry, I have made a mistake. > > As you say, the line called: error_log is in "off" in config file. > > The file that I have problems is called: off (without extensi?n) in > C:\nginx. So no problem here, nginx works exactly as you've configured it, writes errors to the file named "off". If you want it to work differently - configure it differently. -- Maxim Dounin http://nginx.org/ From ahutchings at nginx.com Mon Feb 29 15:03:36 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Mon, 29 Feb 2016 15:03:36 +0000 Subject: Image filter module In-Reply-To: References: Message-ID: <56D45DC8.4010502@nginx.com> Hi Ayman, The module uses the GD library underneath which doesn't support EXIF rotation (or really any of EXIF). In addition the module will strip off the EXIF data if it is more than 5% of the image size. If you wish to preserve the EXIF you may need to make a microservice to do the resizing whilst either rotating using the EXIF or preserving the EXIF. There are several examples on the internet on how to do this with a small PHP script using the EXIF and GD module or ImageMagick. Kind Regards Andrew On 29/02/16 14:09, Ayman Al-Shorman wrote: > Hello, > > I just installed image filter module for resizing images. > It worked as expected but we faced an issue that this module doesn't respect EXIF so some images are being rotated to the original state then nginx resize it. > Any idea how to fix this? > > Thanks > > Ayman > > Sent from my iPhone > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From r at roze.lv Mon Feb 29 15:20:24 2016 From: r at roze.lv (Reinis Rozitis) Date: Mon, 29 Feb 2016 17:20:24 +0200 Subject: Antwort: PUT/POST files to akamai CDN In-Reply-To: References: Message-ID: > I want to upload all files continuously generated by the rtmp mpeg dash > module in a specific folder to another server. 
> The upload needs to be processed with the HTTP POST function.
> If I were doing it manually with curl, I would use something like:
> curl -v -d "123456789" http://example.com/path/to/file.mpd
> Does anyone know if it's possible with nginx and maybe how to do it?
> There is no way to start any process on the remote server.

It's more of a topic for the nginx-rtmp list. Maybe this thread helps: https://groups.google.com/forum/#!topic/nginx-rtmp/FEyDi2VWblU

In short, you can use push or exec_push.

rr

From rsifon at inpres.gov.ar Mon Feb 29 15:24:02 2016
From: rsifon at inpres.gov.ar (Ing. Ricardo SIFON)
Date: Mon, 29 Feb 2016 12:24:02 -0300
Subject: Problem with error file
In-Reply-To: <20160229144921.GP31796@mdounin.ru>
References: <000301d172ef$5a0767c0$0e163740$@gov.ar> <20160229135433.GL31796@mdounin.ru> <001501d172fe$eff2c720$cfd85560$@gov.ar> <20160229144921.GP31796@mdounin.ru>
Message-ID: <001601d17305$38345700$a89d0500$@gov.ar>

Maxim,

How should I set nginx to avoid writing to the "off" file?

Regards,
Ricardo

-----Mensaje original-----
De: nginx [mailto:nginx-bounces at nginx.org] En nombre de Maxim Dounin
Enviado el: lunes, 29 de febrero de 2016 11:49
Para: nginx at nginx.org
Asunto: Re: Problem with error file

Hello!

On Mon, Feb 29, 2016 at 11:39:04AM -0300, Ing. Ricardo SIFON wrote:

> Maxim,
>
> I'm sorry, I made a mistake.
>
> As you say, the error_log line is set to "off" in the config file.
>
> The file I have problems with is called "off" (without extension) in
> C:\nginx.

So no problem here, nginx works exactly as you've configured it and writes errors to the file named "off". If you want it to work differently - configure it differently.

--
Maxim Dounin
http://nginx.org/

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Mon Feb 29 15:32:10 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 29 Feb 2016 18:32:10 +0300
Subject: Problem with error file
In-Reply-To: <001601d17305$38345700$a89d0500$@gov.ar>
References: <000301d172ef$5a0767c0$0e163740$@gov.ar> <20160229135433.GL31796@mdounin.ru> <001501d172fe$eff2c720$cfd85560$@gov.ar> <20160229144921.GP31796@mdounin.ru> <001601d17305$38345700$a89d0500$@gov.ar>
Message-ID: <20160229153210.GQ31796@mdounin.ru>

Hello!

On Mon, Feb 29, 2016 at 12:24:02PM -0300, Ing. Ricardo SIFON wrote:

> How should I set nginx to avoid writing to the "off" file?

Specify a different filename using the "error_log" directive. See details in the documentation here: http://nginx.org/r/error_log

--
Maxim Dounin
http://nginx.org/

From jim at ohlste.in Mon Feb 29 15:34:48 2016
From: jim at ohlste.in (Jim Ohlstein)
Date: Mon, 29 Feb 2016 10:34:48 -0500
Subject: Problem with error file
In-Reply-To: <001601d17305$38345700$a89d0500$@gov.ar>
References: <000301d172ef$5a0767c0$0e163740$@gov.ar> <20160229135433.GL31796@mdounin.ru> <001501d172fe$eff2c720$cfd85560$@gov.ar> <20160229144921.GP31796@mdounin.ru> <001601d17305$38345700$a89d0500$@gov.ar>
Message-ID: <56D46518.4080102@ohlste.in>

Hello,

On 2/29/16 10:24 AM, Ing. Ricardo SIFON wrote:
> Maxim,
>
> How should I set nginx to avoid writing to the "off" file?
>

You can set error logging to a different level. See http://nginx.org/en/docs/ngx_core_module.html#error_log. Also, since it seems to be one file, you can use the "log_not_found" directive. See http://nginx.org/en/docs/http/ngx_http_core_module.html#log_not_found.
Something like:

location = /robots.txt {
    log_not_found off;
    access_log off;
}

>
> -----Mensaje original-----
> De: nginx [mailto:nginx-bounces at nginx.org] En nombre de Maxim Dounin
> Enviado el: lunes, 29 de febrero de 2016 11:49
> Para: nginx at nginx.org
> Asunto: Re: Problem with error file
>
> Hello!
>
> On Mon, Feb 29, 2016 at 11:39:04AM -0300, Ing. Ricardo SIFON wrote:
>
>> Maxim,
>>
>> I'm sorry, I made a mistake.
>>
>> As you say, the error_log line is set to "off" in the config file.
>>
>> The file I have problems with is called "off" (without extension) in
>> C:\nginx.
>
> So no problem here, nginx works exactly as you've configured it,
> writes errors to the file named "off". If you want it to work
> differently - configure it differently.
>

--
Jim Ohlstein

"Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain

From guillaume at databerries.com Mon Feb 29 16:05:16 2016
From: guillaume at databerries.com (Guillaume Charhon)
Date: Mon, 29 Feb 2016 17:05:16 +0100
Subject: Request processing rate and reverse proxy
Message-ID:

Hello,

I have set up nginx 1.9.3 as a reverse proxy [1] with a rate limitation per server [2]. The rate limitation does not work in this scenario. The request rate limitation works well if I use nginx as a normal web server (for example to serve the default welcome page). I have attached my configuration files (listen on 80 and redirect to another web server running lighttpd).

[1] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
[2] http://nginx.org/en/docs/http/ngx_http_limit_req_module.html

Best Regards,
poiuytrez

PS: You can run it in Docker using the following commands if you need to:

docker run --rm --name nginx -p 8082:80 -v /yourdir/nginx.conf:/etc/nginx/nginx.conf:ro -v /yourdir/default.conf:/etc/nginx/conf.d/default.conf --link backend:backend nginx
docker run --rm --name backend -p 8081:80 jprjr/lighttpd

Go to http://localhost:8082/ in your web browser.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx.conf Type: application/octet-stream Size: 705 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: default.conf Type: application/octet-stream Size: 411 bytes Desc: not available URL:

From nginx-forum at forum.nginx.org Mon Feb 29 18:06:43 2016
From: nginx-forum at forum.nginx.org (jonkeane)
Date: Mon, 29 Feb 2016 13:06:43 -0500
Subject: Problems with nginx accepting tls connections
Message-ID: <7c13e91d030b6e781651fed275d9cb22.NginxMailingListEnglish@forum.nginx.org>

Apologies if this is not solely connected to nginx, but I think I've narrowed it down to the connection with nginx, and how it is handling TLS connections.

I'm attempting to set up nginx to receive connections from an Amazon Dash button (using information from http://blog.nemik.net/2015/08/dash-button-corral/). Using Ubuntu 14.04 and nginx 1.4.6 this setup works correctly: the dash connects to my server, they exchange keys (although the key my server sends is not the one that the dash is expecting, it doesn't actually check this), then the dash connects to the page 2/b on my server, and everything is great.

I recently upgraded to Ubuntu 15.10 with nginx 1.9.3 and something is going wrong with the TLS/SSL connection. With the same setup, my server responds appropriately to the page 2/b if I GET or PUT there manually (from a browser, etc.)
but the dash is never able to connect. I've run ssldump on both setups, and it looks like on nginx 1.9.3 the connection never gets further than ServerHelloDone before the TCP FIN are sent from client to server and server to client, no client key is exchanged, and no data is exchanged (I've added output from ssldump with each below). Is this an nginx configuration issue? Is there anyway I can configure nginx/openssl so that these connections can go through like they did with previous setups? I'm happy to provide more detailed configuration, log files, or other information if needed. Thank you in advance for your help. with nginx 1.9.3 192.168.1.140 is the dash button, 192.168.1.21 is my server with nginx 1.9.3 on it: New TCP connection #50: 192.168.1.140(30004) <-> 192.168.1.21(443) 50 1 0.0090 (0.0090) C>SV3.1(49) Handshake ClientHello Version 3.1 random[32]= 00 00 37 5d 36 36 15 9d 59 8d da 1e ad f7 90 d7 a0 32 bd b9 c0 6f 58 6b cd 3f a0 5a a0 76 91 ca cipher suites TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_RC4_128_MD5 compression methods NULL 50 2 0.0094 (0.0004) S>CV3.1(74) Handshake ServerHello Version 3.1 random[32]= 87 08 53 95 a3 9e 1b 7b f0 a8 56 cd f8 2b cc 03 94 27 3e 0e 8f 84 63 3c f5 03 e9 94 d2 1d f2 a4 session_id[32]= d1 2b 21 f6 f6 e0 16 7b a2 a1 69 ef 18 df 3f d5 e5 50 2e bb c4 c7 b2 5d f1 b7 9c 12 5b 4b ca d1 cipherSuite TLS_RSA_WITH_AES_256_CBC_SHA compressionMethod NULL 50 3 0.0094 (0.0000) S>CV3.1(704) Handshake Certificate certificate[694]= [removed for brevity] 50 4 0.0094 (0.0000) S>CV3.1(4) Handshake ServerHelloDone 50 0.0271 (0.0176) C>S TCP FIN 50 0.0274 (0.0002) S>C TCP FIN with nginx 1.4.6 192.168.1.140 is the dash button, 192.168.1.20 is my server with nginx 1.4.6 on it: New TCP connection #4: 192.168.1.140(30003) <-> 192.168.1.20(443) 4 1 0.0081 (0.0081) C>SV3.1(49) Handshake ClientHello Version 3.1 random[32]= 00 00 34 dc c4 e3 62 d2 26 84 1e 82 be 3a 75 f3 2a c9 cf 82 f9 3d ad d8 1e 6b 5f 63 40 9f 0e 9c cipher suites TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_RC4_128_MD5 compression methods NULL 4 2 0.0084 (0.0003) S>CV3.1(74) Handshake ServerHello Version 3.1 random[32]= 20 fd 68 07 d1 e3 63 0a cf 39 b4 f8 65 e9 25 ed 09 9f c4 d9 c4 0d f2 b6 f0 82 2b f8 d9 ea 1a 3f session_id[32]= ea 25 8c fd 61 66 92 25 44 fb f0 74 7c 2a 4b bc d6 76 8b 05 16 ed 4a ee 84 0b 0c 74 7f 23 b9 de cipherSuite TLS_RSA_WITH_AES_256_CBC_SHA compressionMethod NULL 4 3 0.0084 (0.0000) S>CV3.1(704) Handshake Certificate certificate[694]= [removed for brevity] 4 4 0.0084 (0.0000) S>CV3.1(4) Handshake ServerHelloDone 4 5 0.0548 (0.0463) C>SV3.1(262) Handshake ClientKeyExchange EncryptedPreMasterSecret[256]= [removed for brevity] 4 6 0.0561 (0.0013) C>SV3.1(1) ChangeCipherSpec 4 7 0.0561 (0.0000) C>SV3.1(48) Handshake 4 8 0.0617 (0.0056) S>CV3.1(1) ChangeCipherSpec 4 9 0.0617 (0.0000) S>CV3.1(48) Handshake 4 10 0.0645 (0.0027) C>SV3.1(96) application_data 4 11 0.0647 (0.0001) C>SV3.1(64) application_data 4 12 0.0648 (0.0001) S>CV3.1(240) application_data 4 13 0.0653 (0.0004) C>SV3.1(112) application_data 4 14 0.0656 (0.0003) C>SV3.1(48) application_data 4 0.0658 (0.0001) S>C TCP FIN 4 0.0745 (0.0087) C>S TCP FIN Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264941,264941#msg-264941 From nginx-forum at forum.nginx.org Mon Feb 29 18:18:56 2016 From: nginx-forum at forum.nginx.org (hatlam) Date: Mon, 29 Feb 2016 13:18:56 -0500 Subject: nginx client authentication with 2 intermediate CAs Message-ID: 
<3ffb4f0acc95e14769a793a93682ab08.NginxMailingListEnglish@forum.nginx.org> I'm trying to get nginx to verify client certificate issued through the following chain, with self-signed root: Root CA => Signing CA => Subordinate CA => Client cert. I installed root_CA.crt on the server, and on the client side, the certs are concatenated with cat client.crt subordinate_CA.crt signing_CA.crt > cert-chain.pem. My nginx setting looks like this: ssl_client_certificate /path/to/root_CA.crt; ssl_verify_client on; ssl_verify_depth 3; I tried to connect with curl -k server.url:443 --cert cert-chain.pem but it gives me error curl: (35) error reading X.509 key or certificate file. If I try that with --key client.key then it gives me 400 Bad Request. I also tried to test with openssl s_client and the result is similar. I've verified that the nginx setting works if I have no intermediate CA, i.e., Root CA => Client cert. It also works if my intermediate CA certs are installed on the server and only the leaf cert is on the client side. However, in our case, the Signing CA and Subordinate CA certs cannot be installed on server ahead of time. Any idea what to try next? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264942,264942#msg-264942 From zxcvbn4038 at gmail.com Mon Feb 29 19:03:02 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Mon, 29 Feb 2016 14:03:02 -0500 Subject: Problem with proxy cache misses Message-ID: Hello! I'm testing out a new configuration and there are two issues with the proxy cacheing feature I'm getting stuck on. 1) Everything is a cache miss, and I'm not sure why: My cache config (anonymized): ... proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m inactive=365d max_size=16g loader_files=256; ... upstream haproxy { server 127.0.0.1:8080; keepalive 256; } ... location ~ "^/[W][A-Za-z0-9_-]{7,13}$" { limit_except GET { deny all; } proxy_http_version 1.1; proxy_set_header Connection "Close"; # Disable Keepalives proxy_set_header Host "www.testhost.com"; # Upstream requires this value proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_cache TEST; proxy_cache_key $uri; proxy_cache_valid 301 365d; proxy_cache_valid 302 1d; proxy_cache_lock on; proxy_buffering off; proxy_pass http://haproxy; } ... A sample response coming back from the upstream (captured with wireshark) HTTP/1.1 301 Moved Permanently Date: Mon, 29 Feb 2016 18:16:02 GMT Content-Type: text/html Transfer-Encoding: chunked Connection: close P3P: CP="some text" X-Frame-Options: deny Location: http://some.test.url/ 0 The cache directory is owned by the nginx user, perms 0700. I'm expecting the 301 in the example above to be cached for a year, but nothing is created under /var/www/test_cache, and subsequent requests for the same resources are also cache misses. 
2) For each URL which doesn't match any of the location blocks, I am seeing an error in the log file:

2016/02/29 13:55:20 [error] 19524#0: *1509121054 open() "/var/www/html/W4ud7y1k4jjbj" failed (2: No such file or directory), client: a.b.c.d, server:test.com, request: "HEAD /W4ud7y1k4jjbj HTTP/1.1", host: "test.com"

There is a "root /var/www/html" defined in the http block, although there is only one specific location which uses it:

location = /apple-app-site-association {
default_type application/pkcs7-mime;
break;
}

And the final location block in my server config is:

location = / {
rewrite ^ https://www.someother.test.com/ permanent;
break;
}

So my expectation is that since the request matches none of the location blocks, nginx will just issue a 404 response. However from the error log, it looks like it is trying the root directory first before issuing the 404. Is there some way to prevent that?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pchychi at gmail.com Mon Feb 29 19:15:20 2016
From: pchychi at gmail.com (Payam Chychi)
Date: Mon, 29 Feb 2016 11:15:20 -0800
Subject: Problem with proxy cache misses
In-Reply-To: References: Message-ID:

Look at your proxy cache path... (proxy_cache_path /var/www/test_cache) Are you sure the path exists and had proper perms/ownership?

Payam

On Feb 29, 2016, 11:03 AM -0800, CJ Ess, wrote:
> Hello! I'm testing out a new configuration and there are two issues with the proxy cacheing feature I'm getting stuck on.
>
> 1) Everything is a cache miss, and I'm not sure why:
>
> My cache config (anonymized):
>
> ...
> proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m inactive=365d max_size=16g loader_files=256;
> ...
> upstream haproxy {
> server 127.0.0.1:8080;
> keepalive 256;
> }
> ...
> location ~ "^/[W][A-Za-z0-9_-]{7,13}$" {
> limit_except GET {
> deny all;
> }
> proxy_http_version 1.1;
> proxy_set_header Connection "Close"; # Disable Keepalives
> proxy_set_header Host "www.testhost.com"; # Upstream requires this value
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header X-Forwarded-Proto https;
> proxy_cache TEST;
> proxy_cache_key $uri;
> proxy_cache_valid 301 365d;
> proxy_cache_valid 302 1d;
> proxy_cache_lock on;
> proxy_buffering off;
> proxy_pass http://haproxy;
> }
> ...
>
> A sample response coming back from the upstream (captured with wireshark)
>
> HTTP/1.1 301 Moved Permanently
> Date: Mon, 29 Feb 2016 18:16:02 GMT
> Content-Type: text/html
> Transfer-Encoding: chunked
> Connection: close
> P3P: CP="some text"
> X-Frame-Options: deny
> Location: http://some.test.url/
>
> 0
>
> The cache directory is owned by the nginx user, perms 0700.
>
> I'm expecting the 301 in the example above to be cached for a year, but nothing is created under /var/www/test_cache, and subsequent requests for the same resources are also cache misses.
> 2) For each URL which doesn't match any of the location blocks, I am
> seeing an error in the log file:
>
> 2016/02/29 13:55:20 [error] 19524#0: *1509121054 open() "/var/www/html/W4ud7y1k4jjbj" failed (2: No such file or directory), client: a.b.c.d, server:test.com, request: "HEAD /W4ud7y1k4jjbj HTTP/1.1", host: "test.com"
>
> There is a "root /var/www/html" defined in the http block, although there
> is only one specific location which uses it:
>
> location = /apple-app-site-association {
> default_type application/pkcs7-mime;
> break;
> }
>
> And the final location block in my server config is:
>
> location = / {
> rewrite ^ https://www.someother.test.com/ permanent;
> break;
> }
>
> So my expectation is that since the request matches none of the location
> blocks, nginx will just issue a 404 response. However from the error log,
> it looks like it is trying the root directory first before issuing the 404.
> Is there some way to prevent that?
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zxcvbn4038 at gmail.com Mon Feb 29 20:20:22 2016
From: zxcvbn4038 at gmail.com (CJ Ess)
Date: Mon, 29 Feb 2016 15:20:22 -0500
Subject: Problem with proxy cache misses
In-Reply-To: References: Message-ID:

Yes, it does exist:

stat /var/www/test_cache
File: `/var/www/test_cache'
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 802h/2050d Inode: 22195081 Links: 2
Access: (0700/drwx------) Uid: ( 497/ nginx) Gid: ( 0/ root)
Access: 2016-02-29 15:14:19.271701729 -0500
Modify: 2016-02-29 15:14:15.894687178 -0500
Change: 2016-02-29 15:14:15.894687178 -0500

I did an su to nginx and touched a file in the directory and removed it without issue. I did an nginx reload and I didn't see any warnings or errors regarding permissions in the error log.

On Mon, Feb 29, 2016 at 2:15 PM, Payam Chychi wrote:
> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
> Are you sure the path exists and had proper perms/ownership?
>
> Payam
>
> On Feb 29, 2016, 11:03 AM -0800, CJ Ess , wrote:
>
> Hello! I'm testing out a new configuration and there are two issues with
> the proxy cacheing feature I'm getting stuck on.
>
> 1) Everything is a cache miss, and I'm not sure why:
>
> My cache config (anonymized):
>
> ...
> proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m
> inactive=365d max_size=16g loader_files=256;
> ...
> upstream haproxy {
> server 127.0.0.1:8080;
> keepalive 256;
> }
> ...
> > A sample response coming back from the upstream (captured with wireshark) > > HTTP/1.1 301 Moved Permanently > Date: Mon, 29 Feb 2016 18:16:02 GMT > Content-Type: text/html > Transfer-Encoding: chunked > Connection: close > P3P: CP="some text" > X-Frame-Options: deny > Location: http://some.test.url/ > > 0 > > The cache directory is owned by the nginx user, perms 0700. > > I'm expecting the 301 in the example above to be cached for a year, but > nothing is created under /var/www/test_cache, and subsequent requests for > the same resources are also cache misses. > > 2) For each URL which doesn't match any of the location blocks, I am > seeing an error in the log file: > > 2016/02/29 13:55:20 [error] 19524#0: *1509121054 open() > "/var/www/html/W4ud7y1k4jjbj" failed (2: No such file or directory), > client: a.b.c.d, server:test.com, request: "HEAD /W4ud7y1k4jjbj > HTTP/1.1", host: "test.com" > > There is a "root /var/www/html" defined in the http block, although there > is only one specific location which uses it: > > location = /apple-app-site-association { > default_type application/pkcs7-mime; > break; > } > > And the final location block in my server config is: > > location = / { > rewrite ^ https://www.someother.test.com/ permanent; > break; > } > > So my expectation is that since the request matches none of the location > blocks, nginx will just issue a 404 response. However from the error log, > it looks like it is trying the root directory first before issuing the 404. > Is there some way to prevent that? > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wandenberg at gmail.com Mon Feb 29 20:32:24 2016 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Mon, 29 Feb 2016 17:32:24 -0300 Subject: Problem with proxy cache misses In-Reply-To: References: Message-ID: The location = / is a exactly match. To execute a "catch all" returning a 404 you can do a location / { return 404; } On Feb 29, 2016 16:15, "Payam Chychi" wrote: > Look at your proxy cache path... (proxy_cache_path /var/www/test_cache) > Are you sure the path exists and had proper perms/ownership? > > Payam > > > On Feb 29, 2016, 11:03 AM -0800, CJ Ess , wrote: > > Hello! I'm testing out a new configuration and there are two issues with > the proxy cacheing feature I'm getting stuck on. > > 1) Everything is a cache miss, and I'm not sure why: > > My cache config (anonymized): > > ... > proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m > inactive=365d max_size=16g loader_files=256; > ... > upstream haproxy { > server 127.0.0.1:8080; > keepalive 256; > } > ... > location ~ "^/[W][A-Za-z0-9_-]{7,13}$" { > limit_except GET { > deny all; > } > proxy_http_version 1.1; > proxy_set_header Connection "Close"; # Disable Keepalives > proxy_set_header Host "www.testhost.com"; # Upstream requires this > value > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto https; > proxy_cache TEST; > proxy_cache_key $uri; > proxy_cache_valid 301 365d; > proxy_cache_valid 302 1d; > proxy_cache_lock on; > proxy_buffering off; > proxy_pass http://haproxy; > } > ... 
> > A sample response coming back from the upstream (captured with wireshark) > > HTTP/1.1 301 Moved Permanently > Date: Mon, 29 Feb 2016 18:16:02 GMT > Content-Type: text/html > Transfer-Encoding: chunked > Connection: close > P3P: CP="some text" > X-Frame-Options: deny > Location: http://some.test.url/ > > 0 > > The cache directory is owned by the nginx user, perms 0700. > > I'm expecting the 301 in the example above to be cached for a year, but > nothing is created under /var/www/test_cache, and subsequent requests for > the same resources are also cache misses. > > 2) For each URL which doesn't match any of the location blocks, I am > seeing an error in the log file: > > 2016/02/29 13:55:20 [error] 19524#0: *1509121054 open() > "/var/www/html/W4ud7y1k4jjbj" failed (2: No such file or directory), > client: a.b.c.d, server:test.com, request: "HEAD /W4ud7y1k4jjbj > HTTP/1.1", host: "test.com" > > There is a "root /var/www/html" defined in the http block, although there > is only one specific location which uses it: > > location = /apple-app-site-association { > default_type application/pkcs7-mime; > break; > } > > And the final location block in my server config is: > > location = / { > rewrite ^ https://www.someother.test.com/ permanent; > break; > } > > So my expectation is that since the request matches none of the location > blocks, nginx will just issue a 404 response. However from the error log, > it looks like it is trying the root directory first before issuing the 404. > Is there some way to prevent that? > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Mon Feb 29 21:49:22 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Mon, 29 Feb 2016 16:49:22 -0500 Subject: Problem with proxy cache misses In-Reply-To: References: Message-ID: The catch-all entry works and don't conflict with the = /. That solves my second problem, thank you! On Mon, Feb 29, 2016 at 3:32 PM, Wandenberg Peixoto wrote: > The location = / is a exactly match. > To execute a "catch all" returning a 404 you can do a > > location / { > return 404; > } > On Feb 29, 2016 16:15, "Payam Chychi" wrote: > >> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache) >> Are you sure the path exists and had proper perms/ownership? >> >> Payam >> >> >> On Feb 29, 2016, 11:03 AM -0800, CJ Ess , wrote: >> >> Hello! I'm testing out a new configuration and there are two issues with >> the proxy cacheing feature I'm getting stuck on. >> >> 1) Everything is a cache miss, and I'm not sure why: >> >> My cache config (anonymized): >> >> ... >> proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m >> inactive=365d max_size=16g loader_files=256; >> ... >> upstream haproxy { >> server 127.0.0.1:8080; >> keepalive 256; >> } >> ... 
>> location ~ "^/[W][A-Za-z0-9_-]{7,13}$" { >> limit_except GET { >> deny all; >> } >> proxy_http_version 1.1; >> proxy_set_header Connection "Close"; # Disable Keepalives >> proxy_set_header Host "www.testhost.com"; # Upstream requires this >> value >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_set_header X-Forwarded-Proto https; >> proxy_cache TEST; >> proxy_cache_key $uri; >> proxy_cache_valid 301 365d; >> proxy_cache_valid 302 1d; >> proxy_cache_lock on; >> proxy_buffering off; >> proxy_pass http://haproxy; >> } >> ... >> >> A sample response coming back from the upstream (captured with wireshark) >> >> HTTP/1.1 301 Moved Permanently >> Date: Mon, 29 Feb 2016 18:16:02 GMT >> Content-Type: text/html >> Transfer-Encoding: chunked >> Connection: close >> P3P: CP="some text" >> X-Frame-Options: deny >> Location: http://some.test.url/ >> >> 0 >> >> The cache directory is owned by the nginx user, perms 0700. >> >> I'm expecting the 301 in the example above to be cached for a year, but >> nothing is created under /var/www/test_cache, and subsequent requests for >> the same resources are also cache misses. >> >> 2) For each URL which doesn't match any of the location blocks, I am >> seeing an error in the log file: >> >> 2016/02/29 13:55:20 [error] 19524#0: *1509121054 open() >> "/var/www/html/W4ud7y1k4jjbj" failed (2: No such file or directory), >> client: a.b.c.d, server:test.com, request: "HEAD /W4ud7y1k4jjbj >> HTTP/1.1", host: "test.com" >> >> There is a "root /var/www/html" defined in the http block, although there >> is only one specific location which uses it: >> >> location = /apple-app-site-association { >> default_type application/pkcs7-mime; >> break; >> } >> >> And the final location block in my server config is: >> >> location = / { >> rewrite ^ https://www.someother.test.com/ permanent; >> break; >> } >> >> So my expectation is that since the request matches none of the location >> blocks, nginx will just issue a 404 response. However from the error log, >> it looks like it is trying the root directory first before issuing the 404. >> Is there some way to prevent that? >> >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Mon Feb 29 23:30:21 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Mon, 29 Feb 2016 18:30:21 -0500 Subject: Solution - Re: Problem with proxy cache misses Message-ID: Found it! The solution was in a a mailing list item from 2012. You have to turn proxy buffering on in order for the proxy cache to work. I'm caching like a champ now. There probably ought to be a warning about that someplace, or it should be in the docs someplace. On Mon, Feb 29, 2016 at 2:03 PM, CJ Ess wrote: > Hello! I'm testing out a new configuration and there are two issues with > the proxy cacheing feature I'm getting stuck on. > > 1) Everything is a cache miss, and I'm not sure why: > > My cache config (anonymized): > > ... 
> proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m > inactive=365d max_size=16g loader_files=256; > ... > upstream haproxy { > server 127.0.0.1:8080; > keepalive 256; > } > ... > location ~ "^/[W][A-Za-z0-9_-]{7,13}$" { > limit_except GET { > deny all; > } > proxy_http_version 1.1; > proxy_set_header Connection "Close"; # Disable Keepalives > proxy_set_header Host "www.testhost.com"; # Upstream requires this > value > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto https; > proxy_cache TEST; > proxy_cache_key $uri; > proxy_cache_valid 301 365d; > proxy_cache_valid 302 1d; > proxy_cache_lock on; > proxy_buffering off; > proxy_pass http://haproxy; > } > ... > > A sample response coming back from the upstream (captured with wireshark) > > HTTP/1.1 301 Moved Permanently > Date: Mon, 29 Feb 2016 18:16:02 GMT > Content-Type: text/html > Transfer-Encoding: chunked > Connection: close > P3P: CP="some text" > X-Frame-Options: deny > Location: http://some.test.url/ > > 0 > > The cache directory is owned by the nginx user, perms 0700. > > I'm expecting the 301 in the example above to be cached for a year, but > nothing is created under /var/www/test_cache, and subsequent requests for > the same resources are also cache misses. > > 2) For each URL which doesn't match any of the location blocks, I am > seeing an error in the log file: > > 2016/02/29 13:55:20 [error] 19524#0: *1509121054 open() > "/var/www/html/W4ud7y1k4jjbj" failed (2: No such file or directory), > client: a.b.c.d, server:test.com, request: "HEAD /W4ud7y1k4jjbj > HTTP/1.1", host: "test.com" > > There is a "root /var/www/html" defined in the http block, although there > is only one specific location which uses it: > > location = /apple-app-site-association { > default_type application/pkcs7-mime; > break; > } > > And the final location block in my server config is: > > location = / { > rewrite ^ https://www.someother.test.com/ permanent; > break; > } > > So my expectation is that since the request matches none of the location > blocks, nginx will just issue a 404 response. However from the error log, > it looks like it is trying the root directory first before issuing the 404. > Is there some way to prevent that? > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
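
For readers who land on this thread with the same cache-miss symptom, the sketch below is a minimal, hypothetical rendering of where the thread ends up: the same anonymized directives quoted above, but with proxy_buffering left at its default of "on", since disabling buffering keeps nginx from writing proxied responses into the cache. The cache path, zone name, expiry times and upstream name are placeholders taken from the thread, not recommendations.

http {
    proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m
                     inactive=365d max_size=16g;

    upstream haproxy {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location / {
            proxy_cache       TEST;
            proxy_cache_key   $uri;
            proxy_cache_valid 301 365d;
            proxy_cache_valid 302 1d;
            proxy_cache_lock  on;
            # proxy_buffering defaults to "on"; setting it to "off", as the
            # original config did, prevents responses from being cached.
            proxy_buffering   on;
            proxy_pass        http://haproxy;
        }
    }
}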